Workers at artificial intelligence companies want Congress to grant them specific whistleblower protection, arguing that advancements in the technology pose threats that they can’t legally expose under current law.
“What people should be thinking about is the 100 ways in which these companies can lose control of these technologies,” said Lawrence Lessig, a Harvard law professor who represented OpenAI employees and former employees raising issues about the company.
Current dangers range from deepfake videos to algorithms that discriminate, and the technology is quickly becoming more sophisticated. Lessig called the argument that big tech companies and AI startups can police themselves naïve.
“If there’s a risk, which there is, they’re not going to take care of it,” he said. “We need regulation.”
Congress may have a window to address the issue in the coming months. Leaders could push an AI package in the post-election lame-duck session that would boost research and development and promote guidelines for the technology’s use.
“As Congress considers AI legislation this year, we must ensure that consumers, regulators, and workers have the tools and safeguards to identify and address these harms,” Sen. Ed Markey (D-Mass.) said in a statement Nov. 1, when asked if he would seek whistleblower elements in any late-year AI bill. In September,...