With great power comes great responsibility, and AI is no exception. As usage of and trust in AI models grow, ensuring their accuracy is critical. Yet research shows that some models produce incorrect statements in more than one-third of their responses.
Modern AI models include features like deep reasoning, long-term memory, and autonomous agents that can browse the web or perform tasks with minimal human input. These capabilities require vast amounts of data, which increases reliance on uncontrolled and unverified sources. This broader exposure can lead to behaviour resembling overconfidence, a cognitive bias in which confidence in one's knowledge or abilities exceeds actual accuracy.
A recent study by the European Broadcasting Union (EBU) highlights the consequences: leading AI systems generate false claims at a rate of up to 40%. This high rate coincides with a change in model behaviour. Previously, AI systems would decline to answer questions about events outside their training data. Newer systems with web access are designed to respond more frequently, even when they are uncertain or lack sufficient information. While this improves user engagement, it also produces more fabricated output, commonly known as "AI hallucinations." The models often deliver such responses with strong confidence, creating the impression that they are unquestionably correct.
Models are prioritising fluency over accuracy
There are several...
Read Full Story:
https://news.google.com/rss/articles/CBMiigFBVV95cUxPUWt2amtNc1kyMTJvVW5HOUxw...