Monday, April 27, 2026

Can a 'bot' be held accountable for defamatory statements? - The Atlanta Journal Constitution

Brian Hood is a whistleblower who was praised for “showing tremendous courage” when he helped expose a worldwide bribery scandal linked to Australia’s Reserve Bank.

But if you ask ChatGPT about his role in the scandal, you get the opposite version of events.

Rather than heralding Hood’s whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, had pleaded guilty to bribery and corruption, and had been sentenced to prison.

When Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, found out, he was shocked.

In what could be the first defamation suit of its kind against the artificial intelligence chatbot, Hood plans to sue the company behind ChatGPT, saying it is telling lies about him.

The case is the latest in a growing list of incidents in which AI chatbots have published false statements about real people. The chatbot recently invented a fake sexual harassment story involving a real law professor, citing a Washington Post article that did not exist as its evidence.

If it reaches the courts, the case will test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.

On its website, ChatGPT prominently warns users that it “may occasionally generate incorrect information.” Hood believes that caveat is insufficient.

“Even a disclaimer to say we might get a few things wrong – there’s a massive...



Read Full Story: https://news.google.com/rss/articles/CBMib2h0dHBzOi8vd3d3LmFqYy5jb20vb3Bpbmlv...