California Governor Gavin Newsom has vetoed a bill that would have forced foundation or “frontier” model providers like OpenAI to test and audit their software, to be liable for harms their models cause and to create “kill switches” that would immediately halt a model’s work.
Instead, Newsom said he has signed 17 bills over the past 30 days that cover the deployment and regulation of generative AI technology.
“This was all a balancing act for Governor Newsom,” said John Cunningham, a corporate compliance and investigations partner at Dickinson Wright, in an interview. “This is about the cost-benefit issue and the balance between continued innovation in the AI area and reasonable regulatory oversight. If we can put reasonable reins around what we do with AI, this is going to be good for everybody.”
In explaining why he vetoed the bill, Newsom said its focus on the largest providers of AI models was misplaced. “SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” he said.
Some of the most critical decisions and most sensitive data are in financial services. Banks’ use of AI in lending and hiring decisions has been heavily scrutinized by their regulators, but it has not been subject to new laws. A revised bill that focuses on riskier use cases could have banks in its crosshairs.
California is one of several states trying to put guardrails around advanced AI in the absence of national laws. California, Pennsylvania, Massachusetts, New Jersey and the District of Columbia have had AI laws on the books for some time. Another five states – Colorado, Illinois, Maryland, New York and Utah – enacted AI legislation this year.
National AI laws have been floated in the U.S. Congress.
The vetoed bill
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, would have been the toughest state AI legislation in the country. California has long been known for coming up with strict consumer protections. For instance, it enacted the first state data protection law, the California Consumer Privacy Act.
“California, like New York, is often on the vanguard of regulation,” Cunningham said. “So a lot of folks will look to them to say, hey, how do we start to get our hands around the regulatory piece here before AI gets too far, from a regulatory perspective? A lot of folks rely on states like California and New York to be the deeper thinkers on that.”
SB 1047 would have required developers of large artificial intelligence models, like OpenAI, Anthropic, Google and Meta, to put safeguards and policies in place to prevent catastrophic harm. For instance, they would have had to provide a “kill switch” that could shut down their systems, along with safety plans and audit reports. The bill would also have provided protections for whistleblowers and established a state entity, the Board of Frontier Models, to oversee the development of these models.
Many of the companies that would have been affected by the bill, including OpenAI, Meta, Google and Anthropic, are based in California.
Newsom said the bill’s focus on only the most expensive and large-scale models could give the public a false sense of security, whereas smaller, specialized models could be just as dangerous.
Newsom said his thought process in this decision was guided by several AI experts, including Fei-Fei Li, a professor of computer science at Stanford University; Tino Cuéllar, a member of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research; and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley. He has asked these advisors to help develop responsible guardrails for the deployment of generative AI.
Banking experts say the shift from policing the foundational model providers to smaller, more specific laws governing generative AI makes sense.
“The regulation as written was too general and risked pushing innovative companies out of California with little concrete consumer protection to show for such potential impact,” said Ian Watson, research director at Celent. “Pushing it out to smaller teams of experts not only allows California more time to let a more national consensus form but it sets up the possibility to draft a series of more targeted industry specific regulations that address tangible pain points for state politicians’ constituents.”
Some thought the California bill’s focus on the existential risks of AI was misguided.
“AI can be very dangerous, but I strongly feel the immediate dangers are to consumers, via predatory practices and surveillance, and to our democratic institutions, via misinformation and surveillance, and not to humanity’s survival,” said Patrick Hall, assistant professor at George Washington University.
It’s too early to say what a new law might look like.
“Newsom’s messaging sounds like he would like a tougher and better bill, but that doesn’t mean he will get one,” Hall said. “My research and experience leads me to believe that regulating use cases, and the people around those use cases — such as establishing chief model risk officers — is much more effective than directly regulating a technology.”
Hall liked some aspects of the vetoed California bill, such as the kill switch requirement.
“I have argued for this for years as it’s clear that AI systems can malfunction sometimes and turning them off quickly is a good option in some cases,” he said.
The 17 bills California has passed
The 17 AI bills Newsom has signed are intended to crack down on deepfakes, require AI watermarking, protect children and workers, and combat AI-generated misinformation.
Several apply to foundation model developers and companies, like banks, that use those models.
For instance, one of the bills (AB 1008) clarifies that the California Consumer Privacy Act applies to personally identifiable information stored by AI systems. Another bill (AB 1836) prohibits anyone from producing, distributing or making available a digital replica of a deceased personality’s voice or likeness without prior consent.
A third, AB 2013, requires AI developers to post on their websites information about the data used to train their AI systems or services. A fourth, SB 942, requires developers of covered generative AI systems to include provenance disclosures in the original content their systems produce and to make tools available for identifying generative AI content produced by their systems.