
Opinion | Laws Need to Catch Up to Artificial Intelligence’s Unique Risks

Guest Essay

Sept. 29, 2024

Mr. Lovely is a freelance journalist.

For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.

These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they’re too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here: artificial intelligence can be dangerous without being illegal. The A.I. industry needs stronger protections, like those in place in parts of the public sector, finance and publicly traded companies, that prohibit retaliation and establish anonymous reporting channels.

OpenAI has spent the last year mired in scandal. The company’s chief executive was briefly fired after the nonprofit board lost trust in him. Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal. Safety researchers have left the company in droves. Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure...
