A groundbreaking BBC and European Broadcasting Union (EBU) investigation has exposed a worrying trend: AI assistants misrepresent news content in nearly half of their responses. As these systems increasingly replace traditional search engines, millions of people now rely on artificial intelligence for daily updates, often without realizing how unreliable it can be. According to the study, 7% of online users already turn to AI tools for news, a figure that climbs to 15% among people under 25. The research, which involved 22 public media organizations across 18 countries, evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity. Professional journalists assessed the answers for accuracy, source credibility, the distinction between opinion and fact, and contextual relevance.
The findings were eye-opening: 45% of the responses contained at least one major issue; 31% cited non-existent or misleading sources, while 20% included inaccurate, outdated, or fabricated information. Among the tested platforms, Gemini performed the worst, with 76% of its outputs showing significant problems, more than double the error rate of its competitors. Experts attributed this poor performance to weak integration with its search provider and inconsistent fact-verification mechanisms.
Despite minor improvements compared to results from earlier in the year, researchers concluded that the overall accuracy of AI assistants remains alarmingly low. “The findings show that these flaws are not isolated incidents. They are systemic, cross-border, and multilingual, undermining public trust in news media,” said Jean Philip De Tender, Deputy Director General of the EBU.
The BBC’s survey also revealed a troubling degree of public trust in AI-generated news. Just over one-third of British adults said they trust news summaries produced by AI, rising to nearly half among those under 35. This growing dependence highlights how easily misinformation could spread through automated platforms, reinforcing the need for transparency and accountability in AI-driven journalism.
Researchers emphasized that the goal is not to reject AI, but to develop better guardrails and educational tools to help users critically assess machine-generated news. The EBU’s team has already proposed a toolkit to improve the reliability of AI responses and enhance media literacy among the public.
The study ultimately underscores that while AI assistants have enormous potential to reshape how people access information, they also pose serious risks to accuracy and trust. As artificial intelligence becomes more embedded in the media landscape, the need for responsible AI governance, algorithmic transparency, and human oversight has never been more urgent.