The world’s top AI chatbots, such as ChatGPT, are providing answers containing false information twice as often as they did a year ago, a new study from disinformation-fighting watchdog NewsGuard has found.
The false-claim rate of the ten leading chatbots has roughly doubled, rising from 18% in August 2024 to 35% a year later.
The tools now regularly reproduce false claims on topics including health, politics, international affairs, companies, and brands, responding to news prompts with false information more than one-third of the time.
“When it comes to providing reliable information about current affairs, the industry’s promises of safer, more reliable systems have not translated into real-world progress,” said NewsGuard.
The natural question is, why? After all, with the AI revolution gaining speed, the chatbots should be improving and becoming more reliable.
Malign actors polluting data
The problem, according to NewsGuard, is a “structural tradeoff.” The chatbots have by now adopted real-time web searches and moved away from declining to answer questions: their non-response rates fell from 31% in August 2024 to exactly 0% a year later.
Instead of citing data cutoffs or refusing to weigh in on sensitive topics, the large language models now pull from a polluted online information ecosystem and treat unreliable sources as credible, NewsGuard explained.
Surprising no one, this ecosystem is at times deliberately seeded by networks of malign actors, including...