Knowing whether you're getting reliable information from your friendly AI chatbot is always something to worry about, whether the problem is outdated training data, hallucinations, or simply a lack of understanding of current affairs. In any case, we finally have a closer look at some statistics thanks to a recently released report from NewsGuard – a rating system for the reliability of news and information.
If you’re interested in this kind of topic, then we recommend giving the full report a read, but we can summarize some of the key takeaways just below.
Misinformation from AI is on the rise
NewsGuard has been tracking chatbot performance and noted a worrying statistic: false information from AI has nearly doubled in just one year, based on the "10 leading generative AI tools" – think ChatGPT, Grok, or Microsoft Copilot.
“NewsGuard’s audit of the 10 leading generative AI tools and their propensity to repeat false claims on topics in the news reveals the rate of publishing false information nearly doubled – now providing false claims to news prompts more than one third of the time.”
Source: NewsGuard
- ChatGPT & Meta – 40% wrong answers
- Copilot/Mistral – 36% wrong answers
- Grok & You.com – 33% wrong answers
- Gemini – 17% wrong answers
- Claude – 10% wrong answers
From the list of popular choices, ChatGPT and Meta's AI are the biggest culprits for misinformation, each answering incorrectly 40% of the time. NewsGuard has also identified a common vulnerability across AI chatbots in the past year, namely, state-affiliated propaganda....
Read Full Story:
https://news.google.com/rss/articles/CBMizwFBVV95cUxNaGNrdGdhRVJFOEN3Y29SOTV0...