CT SB00005 / Public Act 26-15 – An Act Concerning Online Safety

Jurisdiction

Connecticut (state law)

Date Passed

5/11/2026

Effective Date

1/1/2027

Summary

Requires AI companion operators to maintain protocols for responding to user expressions of suicide, self-harm, and imminent violence and to disclose that users are interacting with AI. Restricts provision of AI companions to minors when certain harmful, sexual, manipulative, or mental-health-service functions are reasonably foreseeable, with a narrow, clinically supervised mental-health exception. Also directs the Office of Health Strategy to create a program using AI systems to improve health outcomes, including de-identified health data pilots and an annual healthcare AI competition.

Healthcare Implications

Creates strong consumer and youth-safety obligations for AI companions with mental-health relevance, enforced by the Attorney General, with private rights of action in certain minor-related contexts. Also gives Connecticut’s Office of Health Strategy a formal role in health AI innovation using de-identified health data, making the law relevant both to behavioral-health chatbot safety and to state-supported healthcare AI pilots.

Operational Implications

  • AI companion operators must provide clear and conspicuous audible or written notice, at the beginning of an interaction, that the user is communicating with an AI companion and not another individual, with hourly reminders during continuous interactions.
  • Operators must maintain protocols that use reasonable efforts to detect user expressions of suicide, self-harm, or imminent violence and to refer users to appropriate mental health evaluation or treatment resources, including the 988 Suicide and Crisis Lifeline (a minimal implementation sketch follows this list).
  • Operators may not provide AI companions to users under 18 when it is reasonably foreseeable that the system could encourage self-harm, violence, disordered eating, or illegal conduct; engage users in sexual or romantic interactions; offer unsafe validation; or use variable-ratio rewards or other prohibited engagement-optimization behavior.
  • AI companions may offer mental health services to minors only under stringent conditions: the tool must be designed for that purpose, supported by robust independent peer-reviewed clinical trial evidence, disclose that it is not a licensed mental health professional or a substitute for one, make its functions, limitations, and privacy practices accessible, and be assessed, incorporated into a treatment plan, and supervised by a licensed mental health professional.
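
For developers and vendors, the first two obligations above translate into concrete session logic: an AI disclosure at the start of an interaction, an hourly reminder during continuous use, and a reasonable-efforts screen for crisis language that triggers a 988 referral. The Python sketch below is a hypothetical illustration under those assumptions; the class name, keyword screen, and referral wording are not drawn from the statute, and a real deployment would rely on a validated safety classifier and clinically reviewed referral language.

# Hypothetical sketch only: names and messages are illustrative assumptions,
# not statutory text.

import re
import time

AI_DISCLOSURE = (
    "Notice: you are communicating with an AI companion, not another individual."
)
CRISIS_REFERRAL = (
    "If you are thinking about suicide or self-harm, help is available. "
    "Call or text 988 to reach the Suicide and Crisis Lifeline."
)

# Naive keyword screen, shown only to mark where a 'reasonable efforts' detector
# would plug in; a production system would use a validated safety classifier.
CRISIS_PATTERN = re.compile(
    r"\b(suicide|suicidal|self[- ]harm|kill myself|hurt myself)\b",
    re.IGNORECASE,
)

DISCLOSURE_INTERVAL_SECONDS = 60 * 60  # hourly reminder during continuous use


class CompanionSession:
    """Tracks one user session and injects required notices into replies."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._last_disclosure = None  # None -> disclosure not yet shown

    def _disclosure_due(self):
        if self._last_disclosure is None:
            return True  # start of the interaction
        return self._clock() - self._last_disclosure >= DISCLOSURE_INTERVAL_SECONDS

    def respond(self, user_message: str, model_reply: str) -> str:
        """Wrap a model reply with disclosure and crisis-referral text as needed."""
        parts = []
        if self._disclosure_due():
            parts.append(AI_DISCLOSURE)
            self._last_disclosure = self._clock()
        if CRISIS_PATTERN.search(user_message):
            parts.append(CRISIS_REFERRAL)
        parts.append(model_reply)
        return "\n\n".join(parts)


if __name__ == "__main__":
    session = CompanionSession()
    print(session.respond("I've been thinking about suicide.", "I'm sorry you're going through this."))

Keying the reminder to a monotonic clock ties the hourly cadence to elapsed interaction time rather than wall-clock changes, which matches the "continuous interactions" framing in the bullet above.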

Impact Level

High

Keywords

Safety & Risk; Transparency & Governance; Clinical Quality & Efficacy; Privacy & Data

Stakeholders

Patients & Public; Developers & Vendors; Regulators & Government; Providers & Health Systems