Summary
Amends the state technology, education, and civil service laws to regulate automated decision-making (ADM) systems used by government agencies. The bill requires “meaningful human review” for high-stakes uses, mandates periodic impact assessments (including testing for bias and for privacy, cybersecurity, and public-health/safety risks), and requires agencies to disclose existing ADM tools to the governor and legislature.
Healthcare Implications
Directly affects state health, Medicaid, mental-health, and social-service programs that use AI or ADM systems for eligibility determinations, fraud detection, case prioritization, or enforcement. Before deploying an ADM system, agencies must evaluate its public-health/safety, bias, and privacy impacts, and they must keep humans in the loop for decisions affecting benefits, rights, and welfare, which reduces the risk of unsafe or discriminatory health-related decisions.
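As a purely illustrative sketch (the bill does not prescribe any particular methodology), the snippet below shows one common form the required bias testing might take in an impact assessment: comparing an ADM system's approval rates across demographic groups and flagging any group whose rate falls below the conventional four-fifths threshold relative to the best-performing group. The record format, group labels, and 0.8 threshold are assumptions made for the example, not requirements drawn from the bill.

```python
from collections import defaultdict

# Hypothetical example of a disparate-impact check an agency might run as part
# of an ADM impact assessment. Inputs and the 0.8 ("four-fifths") threshold are
# illustrative assumptions, not provisions of the bill.

def approval_rates(decisions):
    """Return the approval rate per demographic group.

    `decisions` is an iterable of (group, approved) pairs, where `approved`
    is True if the ADM system recommended granting the benefit.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the highest
    group's rate (the conventional four-fifths rule of thumb)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

if __name__ == "__main__":
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 55 + [("B", False)] * 45)
    print(approval_rates(sample))          # {'A': 0.8, 'B': 0.55}
    print(disparate_impact_flags(sample))  # {'B': 0.6875} -> below 0.8, flagged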