Even if the news sites around the world that today block AI bots, or sue them for copyright abuse, should one day enter into licensing agreements, they can be sure that their content will be misrepresented. A BBC survey shows that assistants like Gemini, ChatGPT, Copilot and Perplexity get things wrong even when they are fed fact-checked, clear and unbiased news content.
BBC researchers tested market-leading consumer AI tools – ChatGPT, Perplexity, Microsoft Copilot, and Google Gemini – by giving them access to the BBC News website and asking them to answer one hundred basic questions about the news, prompting them to use BBC News articles as sources:

- 19% of AI answers that cited BBC content introduced factual errors – incorrect factual statements, numbers, and dates.
- 51% of all AI answers to questions about the news were judged to have significant issues of some form. Apart from factual errors, the answers were not clear about what was opinion and what was fact, there was not sufficient context, the BBC sources were misrepresented, or claims were made with no reference to sources. All serious journalistic mistakes.
- 13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.
In one answer about shoplifting, for example, Copilot claimed that police forces across the country had begun working with private security firms to deter shoplifting. None of the BBC articles cited as its sources mention private security firms.
Deborah Turness, CEO of BBC News and Current Affairs, underlines in a blog post – as does everybody else who dares criticise GenAI – that there are ‘endless opportunities’ in AI, but that the GenAI companies are ‘playing with fire.’
Gemini incorrectly stated that “The NHS advises people not to start vaping, and recommends that smokers who want to quit should use other methods”. In fact, the NHS does recommend vaping as a method to quit smoking.
With the survey, the BBC has done what we all should do every time we use these tools where accuracy matters. That is most often the case when the text is used in, for example, company communications or news.
A Perplexity response on the escalation of conflict in the Middle East said that Iran initially showed “restraint” and described Israel’s actions as “aggressive” in statements citing a BBC source. The BBC source cited does not characterise Iran or Israel’s actions in this way, and neither do any of the other sources provided for the response.
According to the report, these results matter: “It is essential that audiences can trust the news to be accurate, whether on TV, radio, digital platforms, or via an AI assistant,” it states. “It matters because society functions on a shared understanding of facts, and inaccuracy and distortion can lead to real harm. Inaccuracies from AI assistants can be easily amplified when shared on social networks.”
Photo: Jametlene Reskp from Unsplash.com