Politics

Newsom signs AI safety law opposed by Meta, Google, OpenAI

The legislation is the first in the nation to put regulatory guardrails on the rapidly growing industry.

Gov. Gavin Newsom signed Senate Bill 53 after vetoing an earlier effort to rein in the AI industry. (Photo by Mario Tama/Getty Images)

Gov. Gavin Newsom on Monday signed the nation’s first extensive law on artificial intelligence safety, putting California in the driver’s seat on regulating a rapidly growing industry that the federal government has failed to address.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement. “This legislation strikes that balance.”

Senate Bill 53 — authored by state Sen. Scott Wiener (D-San Francisco) — establishes the Transparency in Frontier Artificial Intelligence Act, the most ambitious effort to date in regulating advanced AI systems. The law will be rolled out in phases, starting in January.

Newsom’s signature on SB 53 comes after he vetoed a more aggressive bill by Wiener last year that would have imposed harsher penalties on bad AI actors. That bill was opposed by many of the most powerful tech companies in Silicon Valley. In his veto message, Newsom announced a task force of AI experts that designed the framework used to write SB 53 and other AI-related bills this year. The task force’s recommendations focused more on transparency and mitigating risks than on penalizing companies.

Many of the biggest players in tech — Meta, Alphabet, OpenAI, and the trade group TechNet — lobbied against SB 53, saying they preferred uniform rules at the federal level.

Colin McCune, head of government affairs for venture capital firm Andreessen Horowitz, said in a social media post that SB 53 had some thoughtful provisions, but the “biggest danger” in Newsom signing it into law comes from the precedent it sets for more states — rather than Congress — to lead on AI regulation.

Federal lawmakers have not taken up the issue, but President Donald Trump this summer released an “AI Action Plan” that called for a moratorium on AI regulation by states, which many saw as a giveaway to tech companies.

Supporters of SB 53 said that in the absence of federal leadership on AI, California had a responsibility to act.

“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” Wiener said in a statement. “With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.”

Anthropic, an AI safety and research company based in San Francisco, and tech safety advocates supported SB 53.

Jack Clark, co-founder and head of policy for Anthropic, issued a statement saying the new law will “develop practical safeguards that create real accountability” for AI systems.

He added, “While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation.”

Sacha Haworth, executive director of Tech Oversight California, called the signing a “key victory” for holding Big Tech CEOs accountable while protecting whistleblowers.

Companies based in other states may not be as deeply affected, but proponents said the bill will have national and global impact.

In a May report, the governor’s office noted that 32 of the world’s top 50 AI companies call California home. The new law not only creates a framework for national legislation but also bars city and county agencies in California from enacting conflicting rules.

The law targets “frontier models,” AI systems trained with enormous computing power that Wiener and SB 53 supporters say pose risks such as enabling cyberattacks, creating dangerous weapons, or operating beyond human control. The law also applies to “large frontier developers,” defined as AI companies with more than $500 million in annual revenue.

Under the law, large frontier developers will be required to create and follow a robust AI safety framework that implements national and international best practices. That framework must be published online and updated annually. Before releasing or significantly modifying a frontier model, companies will need to issue public transparency reports describing the model’s capabilities, intended uses, limitations, and the results of risk assessments.

The bill assigns a major oversight role to the California Governor’s Office of Emergency Services (OES). Developers must submit summaries of catastrophic risk assessments to the agency on a regular basis and report any critical and urgent safety incidents. Starting in 2027, OES will publish anonymized annual summaries of those incidents.

Noncompliance, including failure to report or making false statements, could trigger civil penalties of up to $1 million per violation.

The law includes whistleblower protections that shield employees from retaliation if they disclose safety concerns or violations to state or federal authorities. Large developers must maintain internal channels through which employees can anonymously report concerns and receive updates on how those concerns are addressed.

Beyond oversight of private companies, the bill creates a consortium to design CalCompute, a state-backed public cloud platform that would expand access to high-powered computing resources for universities, researchers, and public-interest projects. The University of California system will be given priority in managing the consortium, which will be tasked with presenting a framework by 2027.

Beyond positioning California at the forefront of AI regulation, the new law gives Newsom — a likely 2028 presidential candidate — a key talking point on the campaign trail.