New research conducted by the European Broadcasting Union (EBU) and the BBC reveals that leading AI assistants provide inaccurate information in close to 50% of their responses when queried about news content.
The study, which examined 3,000 responses from major AI assistants in 14 languages, assessed their accuracy, sourcing methods, and ability to differentiate between opinion and fact. Notable AI assistants included in the research were OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.
The findings showed that 45% of the AI responses analyzed contained significant issues, with 81% exhibiting some form of problem.
According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers and 15% of individuals under 25 rely on AI assistants for news consumption.
Following the release of the study, Reuters reached out to the companies involved for their feedback on the results.

Initiatives for Improvement by Companies
Google has said it welcomes feedback on its AI assistant, Gemini, to improve the platform’s user experience and functionality.
OpenAI and Microsoft have acknowledged the issue of “hallucinations,” where AI models generate incorrect information due to factors like inadequate data, and are working to address this concern.
Perplexity states on its website that one of its “Deep Research” modes achieves 93.9% factual accuracy.

Sourcing Errors and Accuracy Issues
According to the research, one-third of AI assistant responses contained significant sourcing errors, such as missing or misleading attribution.
Gemini fared worst on this measure, with sourcing issues in 72% of its responses, a rate significantly higher than that of the other assistants.
In terms of accuracy, 20% of responses across all AI assistants studied contained inaccuracies, including outdated information.
CBC/Radio-Canada, Postmedia, Metroland, the Toronto Star, the Globe and Mail, and The Canadian Press have filed a joint lawsuit against OpenAI, the creator of ChatGPT, for allegedly using their news content without permission.
