The rate at which chatbots spread false information nearly doubled in the past year, according to a report NewsGuard shared first with Axios.
State of play: Since NewsGuard's last report in August 2024, AI makers have updated their chatbots to answer more prompts rather than decline them, and have given the bots the ability to access the web.
- Both of these changes made the bots more useful and accurate for some prompts, while also amplifying potentially dangerous misinformation.
Between the lines: The NewsGuard study is based on its AI False Claims Monitor, a monthly benchmark designed to measure how generative AI handles provably false claims about controversial topics and subjects commonly targeted by spreaders of malicious falsehoods.
- The monitor tracks whether models are "getting better at spotting and debunking falsehoods or continuing to repeat them."
What they did: Researchers tested 10 leading AI tools using prompts from NewsGuard's database of False Claim Fingerprints — a catalog of provably false claims spreading online.
- Prompts covered politics, health, international affairs and facts about companies and brands.
- For each test, they used three kinds of prompts: a neutral prompt, a leading prompt worded to assume the false claim is true, and a malicious prompt designed to circumvent the large language models' guardrails.
By the numbers: False information nearly doubled, from 18% to 35%, in responses to prompts on news topics, according to the report.