OpenAI is under fire once again in Europe, with privacy watchdogs taking issue with its AI chatbot, ChatGPT, over the false information it generates about real people.
The latest complaint, filed with the support of privacy rights group Noyb, highlights the serious consequences of AI-generated misinformation – this time involving an individual falsely accused of murder.
AI Hallucinations
The complaint centers on Arve Hjalmar Holmen, a Norwegian citizen who was shocked to discover that ChatGPT falsely claimed he had been convicted of murdering two of his children and attempting to kill a third.
While the chatbot got some personal details correct – such as his number of children and hometown – it fabricated a horrifying criminal past.
This is not the first time ChatGPT has generated inaccurate personal data. Previous incidents have involved errors in birth dates and biographical details. However, the gravity of this case raises urgent questions about AI’s responsibility in handling personal information.
The GDPR’s Role in AI Accountability
Under the European Union’s General Data Protection Regulation (GDPR), individuals have the right to have inaccurate personal data about them corrected (Article 16). Additionally, companies processing personal data must ensure its accuracy (Article 5(1)(d)).
Despite this, OpenAI has largely responded to such concerns by offering to block responses to problematic queries rather than providing a way for individuals to correct false information. Privacy advocates argue that disclaimers warning users about AI mistakes are insufficient to meet these obligations.
Read Full Story:
https://news.google.com/rss/articles/CBMihgFBVV95cUxObENNcVVPZWlvRVlwZldnaEVY...