Monday, January 19, 2026

AI Needs Whistleblowers to Expose Problems as Technology Grows - Bloomberg Law News

Artificial intelligence has become nearly ubiquitous. Sixty-two percent of American adults report using AI weekly, and 78% of organizations say they have adopted the technology’s tools as part of their operations.

Reports of AI dangers, however, continue to proliferate, including concerns of ignored safety risks, intellectual property theft, and discrimination.

In an effort to address such risks, a number of states have started enacting legislation regulating AI. Those efforts were thrown into question when President Donald Trump, following lobbying by AI companies such as OpenAI and Microsoft, signed a new executive order in December that aims to block states from regulating AI.

With states such as California set to challenge the legality of the order, we’re in for a pitched battle over the fate of these nascent state-level AI regulations. As a result, AI companies may continue to operate unchecked for the time being.

In the absence of meaningful oversight, we will have to continue to rely on whistleblowers to expose serious concerns about safety, privacy, ethics, or legal risks. For example, former OpenAI employee Suchir Balaji blew the whistle in 2024 on the company’s use of copyrighted material to build ChatGPT. Other former OpenAI employees have spoken publicly about the company’s reckless disregard for safety. There also is convincing evidence that AI magnifies stereotypes and bias.

These whistleblowers come forward at great personal...



Read Full Story: https://news.google.com/rss/articles/CBMiywFBVV95cUxQVjR4U3VabEE3cUNSRlAyZTc5...