
A new international survey by the Reuters Institute for the Study of Journalism at the University of Oxford explores how people in six countries (Argentina, Denmark, France, Japan, the UK and the US) are using and perceiving generative AI in everyday life, across sectors and in the news media. The findings reveal significant growth in public awareness and adoption of AI tools.
Public use of generative AI tools has surged dramatically in the past year. The proportion of respondents who have used standalone systems such as ChatGPT rose from 40% to 61%, with weekly use nearly doubling from 18% to 34%. ChatGPT remains the most widely used system (22% weekly), though usage patterns vary by age.
Awareness of generative AI tools has also expanded sharply, from 78% in 2024 to 90% in 2025, with only 10% of respondents saying they have heard of none. However, usage is unevenly distributed: while younger people are frequent users, older demographics remain less engaged. Among 18–24 year olds, 59% report weekly use of any generative AI tool, compared with only 20% among those aged 55 and above.
Information-seeking has overtaken creative tasks as the primary motivation for using AI. Weekly use of AI for finding information more than doubled from 11% to 24%, surpassing media creation (21%). News consumption through AI systems remains limited at 6%, though this figure has doubled year-on-year.
Despite these advances, most users remain occasional rather than regular adopters, and trust is concentrated in a few dominant brands. Across the six countries, 29% say they trust ChatGPT, compared with 18% for Gemini and 12% for both Copilot and Meta AI.
AI-generated search responses are now a routine feature for many users. More than half of respondents (54%) report encountering AI-generated answers in search results within the past week, with the highest exposure in Argentina (70%) and the lowest in France (29%).
Engagement with these AI-generated responses varies: one-third of users say they frequently click through to original sources, while 28% rarely or never do. Trust in AI search answers is moderate: half of users (50%) express trust, though often conditionally, particularly in high-stakes areas such as health or politics. Many respondents say they verify AI-provided information against traditional sources before accepting it.
Public opinion suggests that generative AI is now perceived as pervasive, with 41% of respondents believing it is used “always or often” across different sectors. This rises to 68% for social media, 67% for search engines and 51% for news media.
Optimism outweighs pessimism for AI’s potential impact in science, healthcare and search, but the reverse holds true for news, government and politics. On average, 29% of respondents are optimistic about AI improving their interactions with various sectors, while 22% are pessimistic.
A key divergence emerges between personal and societal expectations: while most respondents believe AI will benefit their own lives, a smaller proportion think it will benefit society as a whole. Notably, women report lower optimism levels across all measures, both in terms of personal benefit and societal outcomes.
The report highlights a continuing “comfort gap” between AI and human-produced news. Only 12% of respondents say they are comfortable with news created entirely by AI, compared with 62% for fully human-produced content. Acceptance increases when AI is used with human oversight—rising to 21% for AI-generated news with human review and to 43% when AI assists journalists under human direction.
Comfort levels are highest for back-end editorial applications, such as grammar correction (55%) and translation (53%), and lowest for front-facing uses, including rewriting content (30%), creating synthetic images (26%) or using AI-generated presenters (19%).
More people now believe that journalists regularly use AI, but only 33% think newsrooms “always” or “often” verify AI outputs before publication. Confidence in oversight varies widely, with higher levels in Japan (42%) and Argentina (44%) and lower levels in the UK (25%). This perception correlates closely with overall trust in news organisations.
Public expectations of responsible AI use also vary by outlet: 43% believe some news organisations will use AI more responsibly than others, while 28% foresee minimal differences. Despite widespread discussion of AI in journalism, public visibility remains low: 60% of respondents say they do not regularly encounter AI-driven features in news platforms. Only 19% report seeing labels identifying AI-generated content on a daily basis, even though 77% consume news daily, indicating a gap between newsroom implementation and audience awareness.
Public sentiment varies significantly by region. Respondents in Japan and Argentina are consistently more optimistic about the potential of AI in both society and journalism. In contrast, those in the UK, Denmark, France and the US tend to express greater caution, particularly around issues of transparency, bias and trust.
Across all six countries, people expect AI to make news cheaper to produce (a net +39 percentage points) and more up to date (+22), but also less transparent (–8) and less trustworthy (–19) than before.
The 2025 survey reveals a rapidly expanding engagement with generative AI but continuing concerns about its implications for credibility, oversight and accountability, especially within journalism.
While public understanding of AI is deepening, trust remains conditional, hinging on visible human involvement, transparency about AI use and responsible editorial practices. The data also point to growing generational and gender divides in perceptions of AI’s social value, factors that policymakers, regulators and media organisations will need to consider as AI technologies become increasingly embedded in everyday life.
Source: Dr Felix Simon, Prof Rasmus Kleis Nielsen and Dr Richard Fletcher, “Generative AI and News Report 2025: How People Think About AI’s Role in Journalism and Society” (Reuters Institute for the Study of Journalism, 7 October 2025). DOI: 10.60625/risj-5bjv-yt69.