WASHINGTON:
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said on Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilising AI-powered chatbots – including xAI's Grok – in search of reliable information, but their responses are often themselves prone to misinformation.
"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals, and avoiding unsubstantiated claims."
The DFRLab analysed around 130,000 posts in various languages on X, the platform into which the AI assistant is built, and found that Grok was "struggling to authenticate AI-generated media."
Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated – sometimes within the same minute – between denying the airport's destruction and confirming it had been destroyed.