If you spend any time on social media, listen to podcasts, or generally pay attention to the news, chances are you’ve heard of ChatGPT. The chatbot, launched by OpenAI in November, can write code, draft business proposals, pass exams, and generate guides on making Molotov cocktails. It has rapidly become one of those rare technologies that attract so much attention and shape so many conversations that they seem to define a particular moment. It may also quickly become a threat to national security, and it raises a host of concerns over its potential to spread disinformation at an unprecedented rate.
ChatGPT, or Chat Generative Pre-trained Transformer, is an iteration of a language model that scans enormous volumes of web content to generate responses that emulate human language patterns. In just five days following the prototype launch, over one million users had signed up to explore and experiment with the chatbot. Although the release of ChatGPT was intended as a “research preview,” it still poses a potential risk to the many users who turn to it for answers on topics they do not fully grasp. A Princeton University computer science professor who tested the chatbot on basic information determined that “you can’t tell when it’s wrong unless you already know the answer.” This is why AI experts are concerned by the prospect of users employing the chatbot in lieu of conducting their own research. Their concerns are exacerbated by the fact that ChatGPT does not provide...
Read Full Story:
https://news.google.com/rss/articles/CBMiOmh0dHBzOi8vbXdpLnVzbWEuZWR1L2Rpc2lu...