Popular Artificial Intelligence (AI) programs like ChatGPT and Google Bard can produce impressive results. They can also produce results backed by references that are false or that simply do not exist. In some cases, AI has fabricated credible-sounding yet completely false claims against prominent people, complete with citations to news articles that were never written.
A famous example was ChatGPT's response when it was asked which lawyers had been accused of sexual harassment. It named Jonathan Turley as one of the accused, citing a non-existent Washington Post article, a school at which he does not teach, and a school trip he never took.
AI like ChatGPT does not currently understand the concepts behind the words it writes; it is a large language model. It looks for patterns in the language it encounters and reproduces those patterns in its output. To ChatGPT, a claim that would destroy a person's reputation carries the same weight as the color of the shirt that person is wearing.
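To see how text can be generated purely from patterns, with no grasp of truth, consider this toy sketch. It is not ChatGPT's actual algorithm (real large language models use neural networks over vast corpora); it is a minimal bigram model over a made-up corpus, illustrating the same principle: it records which word tends to follow which, then replays those patterns to produce fluent-sounding sequences it cannot fact-check.

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus for illustration only.
corpus = (
    "the professor wrote the article and the article cited the professor "
    "the reporter read the article and the reporter cited the professor"
).split()

# Record which words follow which -- the "pattern" the model learns.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length, seed=0):
    """Replay learned word-to-word patterns; no meaning involved."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(follows[word])  # pick any observed successor
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Every output reads grammatically, because each word pair was seen in the corpus, yet the model has no idea whether "the reporter cited the professor" is true. Scaled up enormously, this is why a language model can assert a fabricated accusation as fluently as a real fact.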
It can paraphrase information reasonably well, and sometimes its output seems genuinely insightful. It has access to a massive amount of information via the Internet, but the reliability of the information it ingests and produces can be questionable. Current AI programs push to the limits of their knowledge to generate impressive results, which greatly increases their chances of being wrong.
ChatGPT is capable of...
Read Full Story:
https://news.google.com/rss/articles/CBMiUWh0dHBzOi8vd3d3Lmhlcm5hbmRvc3VuLmNv...