On Monday, Anthropic announced an official endorsement of SB 53, a California bill from state Senator Scott Wiener that would impose first-in-the-nation transparency requirements on the world's largest AI model developers. Anthropic's endorsement marks a rare and major win for SB 53, at a time when major tech groups such as the Consumer Technology Association (CTA) and Chamber of Progress are lobbying against the bill.
“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in a blog post. “The question isn’t whether we need AI governance; it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”
If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks and release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who raise safety concerns.
Senator Wiener’s bill specifically focuses on limiting AI models from contributing to “catastrophic risks,” which the bill defines as the death of at least 50 people or more than a billion dollars in damages. SB 53 targets the extreme end of AI risk, such as preventing AI models from providing expert-level assistance in creating biological weapons or being used in cyberattacks, rather than more near-term concerns like AI deepfakes or sycophancy.
California’s Senate approved an earlier version of SB 53, but it still needs to hold a final vote on the bill before it can advance to the governor’s desk. Governor Gavin Newsom has so far stayed silent on the bill, though he vetoed Senator Wiener’s last AI safety bill, SB 1047.
Bills regulating frontier AI model developers have faced significant pushback from both Silicon Valley and the Trump administration, both of which argue that such efforts could hamper America’s innovation as it competes with China. Investors such as Andreessen Horowitz and Y Combinator led some of the pushback against SB 1047, and in recent months the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.
One of the most common arguments against AI safety bills is that states should leave the matter to the federal government. Andreessen Horowitz’s head of AI policy, Matt Perault, and chief legal officer, Jai Ramaswamy, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state governments from passing laws that reach beyond their borders.
However, Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and can’t wait for the federal government to act.
“We have long said we would prefer a federal standard,” said Clark. “But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”
OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August arguing that he should not pass any AI regulation that would push startups out of California, though the letter did not mention SB 53 by name.
OpenAI’s former head of policy research, Miles Brundage, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy generally.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those that generated gross revenue of more than $500 million.
Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and former White House AI policy adviser, said in an August blog post that he believes SB 53 now has a good chance of becoming law. Ball, who criticized SB 1047, said SB 53’s drafters have “shown respect for technical reality,” as well as a “measure of regulatory restraint.”
Senator Wiener previously said SB 53 was heavily influenced by an expert policy panel Governor Newsom convened, co-led by leading Stanford researcher and World Labs co-founder Fei-Fei Li, to advise California on how to regulate AI.
Most AI labs already have some version of the internal safety policy that SB 53 requires. OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. However, these companies aren’t bound by anyone but themselves, so they sometimes fall behind their self-imposed safety commitments. SB 53 aims to turn these requirements into state law, with financial consequences if an AI lab fails to comply.
Earlier in September, California lawmakers amended SB 53 to remove a section of the bill that would have required AI model developers to be audited by third parties. Tech companies have previously fought against these types of third-party audits in other AI policy battles, arguing that they are overly burdensome.
