The most recent data from the Ipsos Global Trustworthiness Monitor show that 74% of survey respondents worldwide agree that AI is capable of generating highly realistic fake content. This broad consensus across the countries included in the study also reflects a shared understanding of how AI can affect the quality and reliability of information circulated by the media: 51% of those surveyed believe that AI is largely responsible for the proliferation of misinformation at a global level.
Despite this general awareness of AI's capabilities and global concerns about its applications, most people think they can tell the difference between real and fake content. This individual-level confidence changes dramatically when the focus shifts from trust in one's own abilities to trust in those of the average citizen, revealing a marked geographical disparity in trust levels across the 29 countries in the study.
Indonesia stands out as the country with the most confidence in the average citizen's ability to discern fake news from real news, with a remarkable 70%. Among the least confident countries are Japan and the United States, both with a modest 26%, while Italy also appears near the bottom of the ranking with 38%, trailed by the Netherlands and Canada (both at 37%). This range of perceptions highlights the need to understand what sociocultural dynamics influence trust in citizens' ability to recognize the authenticity of the content they are exposed to.
What these results underscore is the urgent need for all countries to reflect on how to responsibly manage artificial intelligence and combat disinformation. The divergence in attitudes and trust levels across the global population reinforces the call for a balanced response built on education, transparency, and policies that encourage the informed, discerning dissemination of information. A collaborative, well-informed approach is the only path to tackling these emerging challenges and ensuring the responsible use of AI.