Summary
OCR issued a Dear Colleague letter reminding Section 1557–covered entities (health care providers, health plans, and other HHS-funded health programs) that they may not discriminate through their use of “patient care decision support tools,” including AI-driven algorithms and predictive analytics. The letter explains how OCR will apply federal civil rights laws to AI and other emerging technologies, and highlights risks such as biased training data, opaque model behavior, and inadequate human oversight.
Healthcare Implications
The guidance reinforces that hospitals, clinicians, and insurers remain legally responsible for outcomes when they rely on AI tools in clinical care or coverage decisions, even when those tools are built by vendors. It encourages governance steps such as conducting pre-deployment impact assessments, monitoring model performance across protected groups, updating contracts with AI vendors, and revising internal policies so that AI-assisted decisions do not result in discriminatory access, coverage, or quality of care; a sketch of one such monitoring check follows.
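In practice, covered entities often translate "monitoring model performance across protected groups" into periodic disparity checks on a tool's outputs. The Python sketch below is one minimal, hypothetical way to run such a check; the column names (group, y_pred), the use of selection rate as the metric, and the 0.8 ratio threshold are illustrative assumptions, not requirements stated in the OCR letter.

```python
# Minimal sketch of a post-deployment disparity check across protected groups.
# Column names, the selection-rate metric, and the 0.8 threshold are
# illustrative assumptions, not prescribed by the OCR guidance.
import pandas as pd


def selection_rate_by_group(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Fraction of cases the tool approves or flags, per protected group."""
    return df.groupby(group_col)[pred_col].mean()


def flag_disparities(rates: pd.Series, ratio_threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's rate to the highest-rate group and flag large gaps."""
    reference = rates.max()
    report = pd.DataFrame({
        "selection_rate": rates,
        "ratio_to_reference": rates / reference,
    })
    report["flagged"] = report["ratio_to_reference"] < ratio_threshold
    return report


if __name__ == "__main__":
    # Synthetic example: 1 = model recommends enrollment in a care-management program.
    decisions = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
        "y_pred": [1, 1, 0, 1, 0, 0, 0, 0],
    })
    rates = selection_rate_by_group(decisions, "group", "y_pred")
    print(flag_disparities(rates))
```

A flagged row in the report would be a prompt for further review (for example, examining the tool's inputs and clinical appropriateness), not by itself a legal conclusion about discrimination.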