California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world’s largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.
If the bill is signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.
Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Governor Gavin Newsom. The governor then called on a group of AI leaders, including leading Stanford researcher and World Labs co-founder Fei-Fei Li, to form a policy group and set goals for the state’s AI safety efforts.
California’s AI policy group recently published its final recommendations, citing a need for “requirements for industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.
“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.
SB 53 aims to strike a balance that Governor Newsom said SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without stifling the rapid growth of California’s AI industry.
“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of state affairs for a nonprofit AI safety group, in an interview with TechCrunch. “Having companies explain to the public and the government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”
The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.
In addition, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.
Unlike SB 1047, Senator Wiener’s new bill does not hold AI model developers liable for the harms caused by their AI models. SB 53 was also designed not to burden startups and researchers that fine-tune AI models from leading AI developers or use open-source models.
With the new amendments, SB 53 now heads to California’s State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom’s desk.
On the other side of the country, New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.
State AI laws like the RAISE Act and SB 53 were briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.
“Ensuring AI is developed safely should not be controversial; it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”
To this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations of California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.
Leading AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also declined to publish a safety report for its GPT-4.1 model; a third-party study later suggested the model may be less aligned than previous AI models.
SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.