California Governor Gavin Newsom has vetoed the controversial AI regulation bill SB 1047, which aimed to address the risks associated with advanced artificial intelligence, citing the need for a regulatory framework that can keep pace with the technology.

California Governor Gavin Newsom Vetoes Controversial AI Regulation Bill

On Sunday, California Governor Gavin Newsom vetoed SB 1047, a contentious bill that aimed to mitigate the catastrophic risks associated with highly advanced artificial intelligence models. The bill had attracted significant attention and public debate, dividing activists and AI companies into opposing camps.

The bill, one of the most controversial of the legislative session, raised alarms over the potential misuse of advanced AI, with its authors highlighting dangers such as the development of chemical or nuclear weapons capable of causing mass casualties. Proponents argued that regulation was necessary to curb these risks, while opponents warned that stringent rules might stifle AI innovation and drive companies out of California.

In his veto message, Governor Newsom acknowledged the legitimate concerns regarding AI but criticised the proposed bill for not establishing an optimal regulatory framework. “I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom stated. “Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”

One of the prominent supporters of SB 1047 was the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), which represents Hollywood actors. Additionally, a group named “Artists for Safe AI” issued an open letter endorsing the bill; its notable signatories included J.J. Abrams, Shonda Rhimes, Judd Apatow, Rob Reiner, Jane Fonda, Rian Johnson, Adam McKay, Mark Hamill, Mark Ruffalo and Don Cheadle, among others.

SAG-AFTRA has previously expressed concerns about AI cloning actors for movies or TV shows without consent, marking the first occasion the union has engaged with broader risks posed by advanced AI models. Jeffrey Bennett, the union’s general counsel, remarked, “It really stems from the fact we have experienced firsthand the dangers of one aspect of AI. This bill seems to be the one bill that targets only the incredibly powerful expensive systems that have the capability to cause a mass critical problem. Why not regulate at that level? Why not build in some sensible, basic safety protocols at this stage of the game?”

Alongside the veto, Governor Newsom announced plans to convene experts to develop regulations promoting the safe advancement of AI technology and committed to continuing work on the issue in the coming year.

The veto arrived after Newsom signed two other AI-related bills, supported by SAG-AFTRA, that regulate AI usage within the entertainment industry. Both bills were signed at the SAG-AFTRA headquarters earlier in the month. Although SAG-AFTRA initially steered clear of the SB 1047 debate, the union sent a letter on September 9 urging Newsom to approve the bill. Shane Gusman, the union’s lobbyist in Sacramento, wrote, “AI-assisted deepfake technology has been utilised to create fake nude and pornographic images of SAG-AFTRA members. In our view, policymakers have a responsibility to step in and protect our members and the public. SB 1047 is a measured first step to get us there.”

During the deliberations, other Hollywood unions and companies largely stayed out of the SB 1047 debate, as the bill focused on “frontier” AI models, which are still in development.

Separately from the SB 1047 veto, Newsom signed another AI-related bill, AB 2013. This legislation requires AI developers to disclose whether they have used copyrighted work to train their models. The Concept Art Association, which represents artists working in film, animation and video games, strongly supported the bill. These artists have increasingly seen their work utilised in AI models, jeopardising their professional roles.

AB 2013 does not require developers to reveal their entire data sets or to pay for the use of copyrighted content, an issue still under legal scrutiny. Instead, it requires developers to acknowledge the utilisation of copyrighted data or other “personal” information. Deana Igelsrud, legislative and policy advocate for the Concept Art Association, commented, “Any disclosure we can get is a good thing. It’s very general, but it’s a start.”

Further reflecting the ongoing legislative efforts, Hollywood unions have backed a similar bill proposed by Rep. Adam Schiff in Congress earlier this year. Igelsrud emphasised, “None of these AI systems would be able to output anything if they weren’t filled with all the art in the history of the world. I don’t think people really understand that real human beings are attached to the data. Everyone just assumes if you put it on the internet, it’s a free for all. It’s not.”

The developments illustrate the escalating attempts to balance innovation in AI with safeguarding public and sector-specific interests, a debate likely to intensify as technologies continue to evolve.

Source: Noah Wire Services
