19th December 2023
Thank you, Minister Sharaf. I am very pleased to join you for this meeting on such an important and timely subject.
Apologies for my delay; I had to sneak out of a lunch of the SC with the SG, and I suspect everyone there is now having my share of dessert.
Users of cyberspace – meaning any of us – can confirm that online hate speech is a widespread problem. With our daily lives becoming increasingly virtual, and perpetrators emboldened by the anonymity of the digital space as well as the use of chatbots, online hate and harassment have risen to unprecedented heights.
Efforts to address online hate speech have faced significant challenges, however. Human-based moderation has proved both faulty and costly, which has led major platforms to move gradually towards automated content moderation. More and more tech companies and online platforms are developing automated tools to detect and moderate harmful content – a task that is far from easy, as it requires drawing a fine line between protecting free speech and safeguarding internet users from harm.
Artificial Intelligence has quickly become an integral part of our everyday lives. Announced as the fourth revolution, it has the potential to revolutionize almost all aspects of life as we know it: from education and politics to art and healthcare, from security to defense and warfare. It may well play a crucial part in how information is used: either to inform, teach and educate, or to misinform, disinform and fuel hatred.
As we have pointed out in the past, there are actors, both state and non-state, that continually attempt to deliberately mislead, distort facts, spread lies and conspiracy theories, interfere in the democratic processes of others, spread hatred, promote discrimination, and incite violence or conflict by misusing digital technologies.
To all these actors, cyberspace and its ever more powerful tools, such as AI, offer near-infinite possibilities for malicious activity. Such activity can directly erode social cohesion within countries, or even foment violence, with consequences for peace and security.
On the other hand, with its immense potential, AI has the power to revolutionize the way we tackle hate speech, disinformation, and misinformation.
By analyzing language patterns and sentiment, AI algorithms can be taught to identify and flag hate speech in real time; they can also be used to combat the spread of disinformation and misinformation, including terrorist propaganda and extremist content, with unprecedented effectiveness.
However, we must never forget that, despite the misleading name, there is hardly anything truly intelligent in Artificial Intelligence: AI is a mixture of increased computing power, vast amounts of data, and powerful, evolving algorithms. We must therefore keep in mind that the use of AI to combat hate speech, disinformation and misinformation may raise ethical and privacy concerns. Without ethical guardrails, it risks reproducing real-world biases and discrimination and undermining fundamental human rights and freedoms, as it may inadvertently suppress legitimate speech or infringe on individual privacy.
It is therefore imperative that careful consideration is given to ensuring that AI-powered solutions are implemented in ways that respect fundamental rights and democratic principles.
In this respect, we welcome the UNESCO Guidelines for the Governance of Digital Platforms and UNESCO's Recommendation on the Ethics of Artificial Intelligence, which help enhance information integrity online and address the risks of information manipulation and interference, while safeguarding human rights and fundamental freedoms.
We must make sure that we harness these fast-evolving technological advances to address pressing societal challenges. AI can help us create a more inclusive and informed online environment, where individuals are protected from the harmful effects of hate speech and misinformation. It is crucial, however, that AI is deployed responsibly and ethically, to safeguard fundamental rights and ensure a balanced and fair online discourse.
Thank you!
Closing Remarks:
Let me thank all our briefers and all of you, colleagues, for your participation and contributions, and especially for respecting the time limits, knowing how much there is to say about these complex issues.
The Internet is in our veins; communication is our daily food.
But, as many speakers highlighted, unprecedented online connectivity also carries the dark side of human behavior, and the implications are enormous.
We know the limitations of the current measures to mitigate hate speech and curb the spread of harmful content online.
We heard insightful perspectives from experts in the field, and I am confident that the ideas shared today will contribute to ongoing efforts to successfully deal with these challenging issues.
To overcome these limitations, a shift towards proactive measures and innovative solutions is imperative.
As we heard, generative AI can cut both ways: it can spread hate or do good; it is both frightening and truly inspiring.
It is therefore up to us to find the right ways to adopt AI-powered measures that fortify our digital communities and help ensure a cohesive online world.
But this has to be done without sacrificing our achievements, by ensuring the protection and promotion of human rights – which means that, however much we love our machines, we humans must always remain part of the process.