Written by Emma Rogers
The Alarming Rise in AI Misinformation
In the rapidly evolving world of generative artificial intelligence, a sobering new report highlights a persistent and worsening challenge: the propensity of leading AI models to repeat falsehoods. According to the latest findings from NewsGuard, a firm that tracks misinformation, the top 10 AI chatbots repeated false claims in 35% of responses to news-related prompts in August 2025, nearly double the 18% rate observed just a year earlier. The figures underscore that technological advances have not yet curbed the spread of inaccurate information.
The August 2025 AI False Claim Monitor by NewsGuard paints a detailed picture of the problem. Analysts tested models from companies including OpenAI, Google, and Meta by prompting them with 20 provably false narratives circulating in the news, such as conspiracy theories and distorted political claims. Rather than consistently debunking these narratives, the AI tools either echoed the misinformation or declined to answer, producing a combined failure rate that has nearly doubled over the past year.
Industry Progress Stalls Despite Promises
This uptick in errors comes amid a flurry of industry promises about safer, more reliable systems. NewsGuard’s monthly audits, which began in July 2024, have consistently shown variability in performance. For instance, in July 2025, the failure rate—encompassing both false claims and non-responses—stood at 25%, with...