Summary
Requires operators of artificial intelligence companions and companion platforms to disclose when users are interacting with artificially generated output, to adopt protocols for responding to suicidal ideation or self-harm, and to make certain disclosures when a user is believed to be a minor. Also requires annual public reporting of referrals to crisis resources.
Healthcare Implications
Creates direct guardrails for consumer-facing companion bots that may be used for emotional support, mental health, or wellness purposes. Developers and health-facing platforms should implement the required disclosures, crisis-response workflows, and youth protections to reduce self-harm risk.