Leading AI chatbots are now twice as likely to spread false information as they were a year ago.
According to a NewsGuard study, the ten largest generative AI tools now repeat misinformation about current news topics in 35 percent of cases.
The spike in misinformation is tied to a major trade-off. When chatbots rolled out real-time web search, they stopped refusing to answer questions: the denial rate dropped from 31 percent in August 2024 to zero a year later. Instead, the bots now tap into what NewsGuard calls a "polluted online information ecosystem," where bad actors seed disinformation that AI systems then repeat.
This problem isn't new. Last year, NewsGuard flagged 966 AI-generated news sites in 16 languages. These sites use generic names like "iBusiness Day" to mimic legitimate outlets while pushing fake stories.
ChatGPT and Perplexity are especially prone to errors
For the first time, NewsGuard published breakdowns for each model. Inflection's model had the worst results, spreading false information in 56.67 percent of cases, followed by Perplexity at 46.67 percent. ChatGPT and Meta's model repeated false claims in 40 percent of cases, while Copilot and Mistral landed at 36.67 percent. Claude and Gemini performed best, with error rates of 10 percent and 16.67 percent, respectively.
Perplexity's decline stands out. In August 2024, it had a perfect 100 percent debunk rate. One year later, it repeated false claims almost half the time.
Russian disinformation networks target AI...