California has passed the first state-level AI safety law in the United States, after seemingly making peace with former opponents of regulation such as Meta and OpenAI. The new law is pared down from the contents of a failed 2024 effort but nevertheless covers a broad range of areas, from forcing AI developers to be more transparent about risks and privacy issues to establishing new protections for whistleblowers.
Several of the AI safety law’s terms are even more stringent than comparable rules in force in the European Union. But despite seemingly broad support, there is still criticism of its expected negative impact on innovation, and on the privacy and security side some note that key obligations have yet to be placed on AI developers.
California AI safety law emerges after initial tech industry backlash
California first attempted to pass an AI safety law last year, but quickly ran into major blowback from the big developers that call the state home. Both measures were headed up by San Francisco state senator Scott Wiener, but Governor Gavin Newsom vetoed the 2024 version before signing this revised attempt.
Newsom called it a more “balanced” regulation, a sentiment that has been broadly echoed by the big AI players in the tech industry. Some of its elements make it surprising to see the likes of OpenAI and Anthropic embrace it (while Meta and others at least stayed out of the way this time). Chief among these are transparency requirements that are among...
Read Full Story:
https://news.google.com/rss/articles/CBMi3AFBVV95cUxQQnVuc3k5LVdBWmxqYzFmelpj...