You can fool some of the people some of the time.
That seems to be especially the case with the latest headline-blaring news about Artificial Intelligence (AI), namely the newly emerging AI apps that are supposedly a kind of kryptonite, as it were, to generative AI such as ChatGPT. These special-purpose AI apps are allegedly able to tell you whether any given passage of text came from a human writer or from a generative AI.
Generally, this is a bunch of smoke and mirrors.
I’ll be elaborating herein on why those special-purpose AI apps are pretty much Fool’s Gold. They are a kind of computer-techie trickery that in the end is relatively hollow and lacks any bona fide merit to back up their over-the-top claims.
Whatever you do, please do not fall for the outsized and misleading claims being made by those releasing these AI apps, and do not believe the misguided news reporters that have swallowed the falsehoods and blustery proclamations hook, line, and sinker. It’s sad. It’s a shame. All of this reinforces the need for greater awareness about AI Ethics and AI Law, topics that I continue to extensively explore in my column, such as at the link here and the link here, just to name a few.
Before I get ahead of things, let’s lay out the key issues at hand.
In today’s column, we are going on a debunking journey. We will mindfully take an in-depth look at a newly emerging round of so-called special-purpose AI apps trying to outdo...