
AI Assistants Provide Inaccurate News Info: Study

New research conducted by the European Broadcasting Union (EBU) and the BBC reveals that leading AI assistants provide inaccurate information in close to 50% of their responses when queried about news content.

The study, which examined 3,000 responses from major AI assistants in 14 languages, assessed their accuracy, sourcing methods, and ability to differentiate between opinion and fact. Notable AI assistants included in the research were OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.

The findings showed that 45% of the AI responses analyzed contained significant issues, with 81% exhibiting some form of problem.

According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers and 15% of individuals under 25 rely on AI assistants for news consumption.

Following the release of the study, Reuters reached out to the companies involved for their feedback on the results.

AI assistants like Gemini, ChatGPT, Perplexity and Copilot were found to have at least one issue in nearly half of their responses related to news. (Dado Ruvic/Illustration/Reuters)

Companies' Efforts to Improve

Google's Gemini has indicated that it welcomes feedback to improve the assistant's user experience and functionality.

OpenAI and Microsoft have acknowledged the issue of “hallucinations,” where AI models generate incorrect information due to factors like inadequate data, and are working to address this concern.

Perplexity says on its website that one of its "Deep Research" modes achieves 93.9% accuracy for factuality.

The study analyzed 3,000 responses from AI assistants including ChatGPT. (Dado Ruvic/Illustration/Reuters)

Sourcing Errors and Accuracy Issues

According to the research, one-third of AI assistant responses contained significant sourcing errors, such as missing or misleading attributions.

The study noted that Gemini had sourcing issues in 72% of its responses, a rate significantly higher than that of the other assistants.

In terms of accuracy, 20% of responses across all AI assistants studied contained inaccuracies, including outdated information.

WATCH | Why Canadian news organizations are suing ChatGPT:

Canadian news organizations, including CBC, sue ChatGPT creator

November 30, 2024 | Duration 2:02

CBC/Radio-Canada, Postmedia, Metroland, the Toronto Star, the Globe and Mail and The Canadian Press have initiated a joint lawsuit against OpenAI, the creator of ChatGPT, for allegedly using their news content to train ChatGPT without permission.
