Society Insights

Trust and fake news: opinions on disinformation in the AI era

The trend

Artificial intelligence (AI) has by now become part of our daily routines, revolutionizing many aspects of our lives while also giving rise to momentous challenges and concerns. One is undoubtedly the proliferation of deepfakes: videos and other media created by AI so convincingly that they appear authentic. (For example, AI can superimpose one person’s facial features onto someone else’s body.) Thanks in part to increasingly accessible and easy-to-use tools such as ChatGPT and Midjourney, deepfakes have rapidly evolved into instruments for manipulating the public, further accentuating an already tangible erosion of trust in the media and political institutions among adults worldwide.


Trust is the focus of the new edition of the Ipsos Global Trustworthiness Monitor, based on a survey conducted in 29 countries on a sample of 21,816 adults. Findings show that, on average, only 14% of people believe politicians can be trusted, and just 25% say the same of journalists.


In this record-breaking election year, more than two billion people will have gone to the polls by the end of 2024. With recent advances in generative AI and its growing ability to manipulate videos and images to create false narratives, public opinion can be swayed in consequential ways, compromising people's ability to make informed decisions. In light of all this, it is essential that companies, governments and the media weigh both the potential and the risks of AI, and consider how to contribute effectively to upholding democracy.

Key takeaways

The most recent data from the Ipsos Global Trustworthiness Monitor show that 74% of survey respondents around the world agree that AI is capable of generating highly realistic fake content. This broad consensus across the countries in the study also reflects a shared understanding of AI's potential impact on the quality and reliability of information circulated by the media: 51% of those surveyed believe that AI bears major responsibility for the multiplication of misinformation at a global level.


Despite this general awareness of AI's capabilities and the global concern over its applications, most people think they themselves can tell real content from fake. That individual confidence drops dramatically when the focus shifts from one's own abilities to those of the average citizen, revealing marked geographical disparities in trust levels across the 29 countries in the study.


Indonesia stands out as the country with the most confidence in the average citizen's ability to tell fake news from real news, with a remarkable 70% of respondents expressing that trust. Among the least confident are Japan and the United States, both at a modest 26%, while Italy also sits near the bottom of the ranking at 38%, just ahead of the Netherlands and Canada (both at 37%). This range of perceptions highlights the need to understand which sociocultural dynamics shape trust in citizens' ability to recognize the authenticity of the content they are exposed to.


What these results underscore is the urgent need for countries around the world to reflect on how to manage artificial intelligence responsibly and combat disinformation. The divergence in attitudes and trust levels across the global population confirms the call for a balanced response, grounded in education, transparency and policies that promote the informed, critical circulation of information. A collaborative, enlightened approach is the only path to tackling these emerging challenges and ensuring the responsible use of AI.

Trustworthiness and fake news
