Summary
Creates comprehensive duties for developers and deployers of “high‑risk” AI systems that make consequential decisions, including those affecting access to healthcare. Requires documented risk‑management programs, impact assessments, dataset and incident tracking, and clear consumer notices; grants the Attorney General enforcement authority. Most provisions take effect Feb 1, 2026, with phased requirements thereafter.
Healthcare Implications
Hospitals, payers, and digital health vendors using AI for diagnosis, triage, utilization review, claims processing, or care access are squarely within scope. Covered entities must stand up bias‑ and safety‑mitigation processes, retain detailed documentation, and provide explanations and disclosures to patients when AI informs determinations about their services. Expect legal exposure for inadequate governance and a strong push to formalize model validation and monitoring in clinical workflows.