At C2C Journal, AI management pro Gleb Lisikh warns in a long-form essay:
Amidst the flurry of new AI models, performance claims, capabilities, market implications and anxiety about what might come next, it is easy to overlook arguably the most important question: what quality of information and visual content are these AI engines actually providing to the user and, from there, the intended audience for whom the content is created? And how much of these AI engines’ prodigious and ever-growing output is actually true?
“Lies Our Machines Tell Us: Why the New Generation of ‘Reasoning’ AIs Can’t be Trusted,” April 16, 2025
It varies. Pomona business prof Gary Smith’s investigations have uncovered major problems in this area, as he has detailed here at Mind Matters News.
Given the deeply nested woke biases in Silicon Valley, Europe and Canada, the mere proliferation of AI offerings does not guarantee that the objectivity, balance and quality of information they generate will improve. The new competitors from Communist-run China only compound these concerns. Just because they have more choices, AI users and target audiences are nowhere near out of the woods; if anything, the threat is only growing, since AI is rapidly penetrating ever-more aspects of our professional and personal lives.
“Can’t be Trusted”
Indeed, people have come to depend on the bots. Smith reports that in one case, all the students in a class used chatbots to compose an answer to a problem and none of...
Read Full Story:
https://news.google.com/rss/articles/CBMiiAFBVV95cUxQb2t4b2p5VGNyZnFpeXNZSkJp...