OpenAI, the company that created ChatGPT, recently announced that in the coming weeks it plans to roll out a voice recognition feature for its chatbot, which will make its artificial intelligence technology appear even more humanlike than before. Now the company appears to be encouraging users to think of this as an opportunity to use ChatGPT as a tool for therapy.
Lilian Weng, head of safety systems at OpenAI, posted on X, formerly known as Twitter, on Tuesday that she had held a “quite emotional, personal conversation” with ChatGPT in voice mode about “stress, work-life balance,” during which she “felt heard & warm.”
“Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool,” she said.
OpenAI president and co-founder Greg Brockman appeared to co-sign the sentiment — he reposted Weng’s statement on X and added, “ChatGPT voice mode is a qualitatively new experience.”
This is a disconcerting development. That the company’s head of safety and its president are encouraging the public to treat a chatbot as a source of therapy is deeply reckless. OpenAI profits from exaggerating and misleading the public about what its technology can and can’t do — and that messaging could come at the expense of public health.
Weng’s language anthropomorphized ChatGPT by talking about feeling “heard” and “...
Read Full Story:
https://news.google.com/rss/articles/CBMiTmh0dHBzOi8vd3d3Lm1zbmJjLmNvbS9vcGlu...