The AI chatbot fabricated a sexual harassment scandal involving a law professor, and cited a fake Washington Post article as evidence.
One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.
The chatbot, created by OpenAI, said Turley had touched a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.
A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.
“It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”
Turley’s experience is a case study in the pitfalls of the latest wave of language bots, which have captured mainstream attention with their ability to write computer code, craft poems and hold eerily humanlike conversations. But this creativity can also be an engine for erroneous claims; the models can misrepresent key facts with great flourish, even fabricating...