Summary
Establishes the Conversational AI Safety Act, which requires operators of AI companions and companion platforms to disclose when users are interacting with AI-generated output rather than a human, to adopt suicide- and self-harm-prevention protocols, and to apply additional protections when a user appears to be a minor.
Healthcare Implications
Creates direct safety and disclosure duties for consumer AI systems that may be used for emotional support or mental-health-adjacent conversations. Health systems and vendors should treat the Act as a baseline compliance requirement for patient-facing chatbots, patient support tools, and crisis-routing workflows.