Elon Musk’s artificial intelligence chatbot Grok has come under intense criticism for allegedly spreading inaccurate and misleading information about the deadly Bondi Beach attack in Sydney. The episode has once again raised concerns about the reliability of generative AI tools during fast-moving breaking news events.
Errors Surface During Breaking News Coverage
The controversy began on December 14, when users on the social media platform X questioned Grok’s responses to images and videos related to the shooting at a Hanukkah gathering near Bondi Beach. Australian authorities classified the incident as a terrorist attack and confirmed that at least 16 people were killed, including one of the assailants.
Misidentified Videos and Images
According to a report by Gizmodo, Grok repeatedly misidentified widely circulated footage tied to the attack. In one instance, when asked to explain a video showing Al Ahmed, a bystander credited with confronting one of the attackers, the chatbot incorrectly described the clip as an old viral video of a man climbing a palm tree in a parking lot. It also cast doubt on the clip’s authenticity and claimed there was no confirmation of injuries.
In another example, Grok reportedly mislabelled an image of injured Al Ahmed as that of an Israeli hostage taken during the October 7 Hamas attacks. Additionally, footage showing a police shootout with the attackers was mistakenly identified as video from Tropical Cyclone...
Read Full Story:
https://news.google.com/rss/articles/CBMiuwFBVV95cUxNRXU5djUwV2JNNHlLc1QtMkRl...