Summary
The American Medical Association (AMA) has adopted a policy calling for explainable clinical AI tools with accessible information on their safety and effectiveness. The policy seeks greater transparency so that physicians and other qualified professionals can understand, and clearly communicate, how AI tools generate the outputs used in patient care.
Healthcare Implications
Health systems and clinicians should prioritize AI solutions that provide clear, clinically useful explanations and documented evidence of safety and efficacy. Procurement, validation, and monitoring processes should incorporate explainability criteria, independent evaluation where feasible, and clinician‑facing documentation that supports informed use.