Policy Details

SB 53 – Artificial Intelligence Models: Large Developers

Date Passed

9/29/2025

Effective Date

1/1/2026

Summary

Regulates frontier (highly advanced, large-scale, general-purpose) models and their large developers. Requires a published safety framework, transparency reports with catastrophic-risk summaries, critical-incident reporting to California's Governor's Office of Emergency Services within 15 days, whistleblower protections, and civil penalties of up to $1M. Directs a consortium within the Government Operations Agency to develop a framework for “CalCompute,” a public cloud computing cluster (contingent on funding).

Healthcare Implications

Indirect but meaningful for health AI. Safety frameworks and transparency reports strengthen due diligence when procuring or integrating frontier models into clinical tools. Incident reporting to California's Governor's Office of Emergency Services and whistleblower protections help surface risks with public-health impact. CalCompute may expand academic and health-system access to compute for R&D and evaluation.

Impact Level

Low

Keywords

Safety & Risk; Transparency & Governance

Stakeholders

Providers & Health Systems; Developers & Vendors; Regulators & Government