Summary
Regulates artificial intelligence companion chatbots by requiring clear notice to users that they are interacting with artificially generated output rather than a human, protections for minors, and protocols for detecting and responding to suicidal ideation or self-harm.
Healthcare Implications
Applies to consumer-facing companion bots that may be used for emotional support or mental-health-adjacent interactions. Developers and health-facing platforms should build disclosure, escalation, and minor-protection workflows into their products to reduce compliance risk; a rough sketch of how these three workflows could fit together follows.
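As a loose illustration only, here is a minimal Python sketch of wiring the three workflows (disclosure, self-harm escalation, minor protection) into a chat loop. Every name here (Session, handle_message, SELF_HARM_PATTERNS) is hypothetical, the keyword screen is a simplistic stand-in for whatever detection method a real product would use, and the statute itself does not prescribe any particular implementation.

```python
import re
from dataclasses import dataclass

# Hypothetical keyword screen; a production system would use a classifier
# tuned for self-harm detection, not a regex list.
SELF_HARM_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bself[- ]harm\b",
    r"\bend my life\b",
]

AI_DISCLOSURE = (
    "Notice: you are chatting with an AI companion, not a human. "
    "Responses are artificially generated."
)

# Example crisis referral; the 988 Suicide & Crisis Lifeline is the US
# national hotline, though the statute may specify other resources.
CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

@dataclass
class Session:
    user_is_minor: bool
    disclosed: bool = False

def start_session(session: Session) -> str:
    """Surface the required AI disclosure before any companion interaction."""
    session.disclosed = True
    return AI_DISCLOSURE

def handle_message(session: Session, text: str) -> str:
    """Route a user message through self-harm screening before normal handling."""
    if any(re.search(p, text, re.IGNORECASE) for p in SELF_HARM_PATTERNS):
        # Escalation path: return crisis resources instead of a normal reply.
        return CRISIS_RESPONSE
    if session.user_is_minor:
        # Minor-protection hook: a stricter content policy would apply here.
        pass
    return generate_reply(text)

def generate_reply(text: str) -> str:
    # Stub for the normal companion response.
    return "(companion reply)"
```

The point of the structure is that screening and escalation sit in front of the normal reply path, so a crisis response cannot be bypassed by whatever the companion model would otherwise generate.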