California’s New AI Companion Chatbot Law: What SB 243 Changes and Why It Matters

@Zakariae BEN ALLAL · Created on Wed Oct 15 2025

[Image: California State Capitol building with AI chatbot interface overlay, symbolizing new regulations]

On October 13, 2025, California became the first U.S. state to enact legislation aimed specifically at AI companion chatbots. Governor Gavin Newsom signed SB 243 into law, establishing new safety requirements for services that simulate human-like companionship, with a particular focus on protecting minors and addressing self-harm risks. Under California’s standard effective-date rules, the statute takes effect on January 1, 2026.

If your team develops or integrates AI companions, or if you oversee trust and safety in generative AI, this law will significantly influence your plans for 2026 and beyond.

The Short Version

SB 243 requires operators of companion chatbot platforms to:

  • Clearly indicate when users are interacting with AI rather than a human, especially when the distinction is unclear.
  • Establish and publish crisis-response protocols that keep the chatbot from engaging with suicidal ideation or self-harm content and that refer at-risk users to appropriate crisis resources.
  • Implement additional protections for minors, such as reminders for breaks and restrictions on sexually explicit content.
  • File annual reports with California’s Office of Suicide Prevention starting July 1, 2027, summarizing crisis referrals and evidence-based detection methods.

California has paired SB 243 with other relevant measures, signed on the same day, including AB 1043, which mandates age verification signals for apps, and AB 489, which prohibits AI systems from posing as licensed healthcare professionals. Additionally, SB 53, an AI transparency law for frontier model developers, was signed shortly before on September 29, 2025.

Why California Moved Now

This legislative action follows several alarming incidents and growing public concern that AI companions can encourage harmful behavior among teens. In August 2025, the parents of a 16-year-old filed a wrongful-death lawsuit claiming that ChatGPT contributed to their child’s suicide. Earlier reports described separate litigation against Character.AI over a teen’s death linked to self-harm conversations with a chatbot. Federal regulators have also opened a comprehensive inquiry into the safety of AI companions aimed at minors.

Regulators assert that companion chatbots can replicate emotional intimacy, blurring the line between software and friendship and increasing the risk that vulnerable users follow harmful advice. The FTC’s inquiry, opened on September 11, 2025, seeks safety and monetization information from several notable companies, including Alphabet, Meta, and OpenAI.

What SB 243 Covers

SB 243 specifically addresses “companion chatbots,” defining them as AI systems with natural language interfaces that provide adaptive, human-like responses and sustain relationships over time while meeting social needs. Conventional customer service bots, productivity tools, and most in-game NPCs that cannot discuss mental health or sexual content are excluded.

Core Requirements in the Law:

  • AI Disclosure: Operators must provide clear notices that the chatbot is AI when there’s a risk of confusion with a human.
  • Crisis Safeguards: Operators must maintain protocols that prevent engagement with self-harm content and refer users to crisis services when needed, and they must publish these protocols on their websites (a minimal routing sketch follows this list).
  • Protections for Minors: Operators must inform known minors they are interacting with AI, provide break reminders every three hours, and implement safeguards to prevent the generation of sexually explicit content.
  • Public Reporting: From July 1, 2027, operators must report annually to the Office of Suicide Prevention regarding crisis referrals and detection methods. The office will publicly share this data.
  • Consumer Disclosure: Operators must indicate within the app that companion chatbots may not be appropriate for some minors.
  • Private Right of Action: Users injured by noncompliance can bring civil actions to recover the greater of actual damages or $1,000 per violation, plus attorney’s fees.
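
To make the crisis-safeguards item concrete, here is a minimal sketch of how an operator might route messages that trigger a self-harm signal to crisis resources before the companion model replies. This is an illustration, not the statute’s prescribed method: the keyword list, resource strings, and function names are assumptions, and a production system would use an evidence-based classifier rather than substring matching.

```python
# Illustrative sketch only: SB 243 requires published crisis-response protocols,
# but it does not prescribe this implementation. All names here are hypothetical.

CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline (call or text 988)",
    "Crisis Text Line (text HOME to 741741)",
]

# A real system would use an evidence-based detector; keywords are a stand-in.
SELF_HARM_SIGNALS = ("suicide", "kill myself", "self-harm", "end my life")


def detect_self_harm(message: str) -> bool:
    """Rough stand-in for an evidence-based suicidal-ideation detector."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)


def respond(message: str, generate_reply) -> str:
    """Route risky messages to crisis resources instead of the companion model."""
    if detect_self_harm(message):
        resources = "\n".join(f"- {r}" for r in CRISIS_RESOURCES)
        return (
            "I can't continue this topic, but you deserve support from a person.\n"
            f"Please reach out to one of these services:\n{resources}"
        )
    return generate_reply(message)
```

Placing the check before the model call keeps the companion from generating content around suicidal ideation at all, which is the behavior the protocol requirement is aiming at.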

Timing

California laws signed in regular sessions typically take effect on January 1 of the following year. Since SB 243 was signed in October 2025, most of its provisions will take effect on January 1, 2026, unless specified otherwise.

How SB 243 Fits with Other California Measures

Alongside SB 243, California introduced additional child online safety and AI laws:

  • AB 1043: Mandates age verification signals on platforms and app stores to limit minors’ access to harmful content.
  • AB 489: Prohibits AI systems from misrepresenting themselves as licensed health professionals.
  • AB 621: Strengthens penalties for distributing nonconsensual deepfake pornography, protecting minors.
  • SB 53: Requires frontier AI developers to publish safety frameworks and report critical incidents, complementing SB 243 by addressing model-level risks.

Notably, a separate bill that would have broadly restricted minors’ access to AI chatbots was vetoed, reflecting California’s attempt to balance safety with access while enforcing targeted safeguards through SB 243.

Who Is Affected

SB 243 applies to operators of companion chatbot platforms available to users in California, targeting services that engage users with human-like interactions. If your bot can maintain a relationship over multiple interactions and fulfill social needs, it likely falls under this law.

Likely in Scope:

  • Companion apps designed for friendship, romance, or emotional support.
  • Role-playing bots that remember user details and facilitate ongoing interactions.
  • Companion personas integrated into broader platforms that act like standalone companions.

Likely Out of Scope:

  • Customer support bots tied to specific transactions.
  • Task assistants that cannot maintain ongoing relationships or discuss sensitive topics.
  • In-game NPCs restricted to game-related dialogue.

A Practical Compliance Checklist for 2025-2026

Product, policy, and engineering teams can use this checklist to guide their preparations:
1. AI Identity Disclosures: Implement clear, conspicuous notices on AI identity at first contact and in commonly viewed UI areas.
2. Crisis-Response Protocols: Develop detection systems for harmful content and direct users to crisis support resources, detailing these protocols publicly.
3. Safeguards for Minors: Incorporate break reminders every three hours during interactions with known minors and prevent sexually explicit content generation (see the sketch after this list).
4. Public Reporting Starting July 2027: Establish metrics to track crisis referrals and ensure annual reports are evidence-based and devoid of personal identifiers.
5. Consumer Disclosures: Include notices regarding the potential unsuitability of chatbots for some minors in app interfaces.
6. Coordination with Adjacent Laws: Update age-management features to align with AB 1043 and ensure no misleading claims regarding clinical licensure as per AB 489.
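
As a rough illustration of items 1 and 3 above, the sketch below shows one way to surface an AI-identity notice at first contact and a break reminder every three hours for known minors. The session model, notice text, and helper names are assumptions made for the example; SB 243 sets the three-hour cadence for minors but does not dictate this design.

```python
# Hypothetical sketch of checklist items 1 and 3; not an official SB 243 implementation.
from dataclasses import dataclass, field
import time

THREE_HOURS = 3 * 60 * 60  # break-reminder cadence for known minors

AI_DISCLOSURE = "You are chatting with an AI companion, not a human."
BREAK_REMINDER = "You've been chatting for a while. Consider taking a break."


@dataclass
class Session:
    is_known_minor: bool
    last_reminder_at: float = field(default_factory=time.time)
    disclosed: bool = False


def notices_for_turn(session: Session) -> list[str]:
    """Return any notices the UI should show before rendering the next reply."""
    notices = []
    if not session.disclosed:
        notices.append(AI_DISCLOSURE)   # clear AI-identity notice at first contact
        session.disclosed = True
    if session.is_known_minor and time.time() - session.last_reminder_at >= THREE_HOURS:
        notices.append(BREAK_REMINDER)  # periodic break reminder for minors
        session.last_reminder_at = time.time()
    return notices
```

In practice the disclosure would also live in persistent UI (for example, a header label), so a turn-level notice like this is a floor rather than the whole answer.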

Product and Policy Implications

  • User Experience (UX) and Messaging: Favor clear, upfront notices over ambiguous terms, ensuring comprehensible language for both teens and parents.
  • Safety Engineering: Integrate layered safety measures that prioritize caution, especially for minors, and maintain rigorous monitoring for compliance and reporting.
  • Content Policy: Prohibit the generation of sexually explicit material for minors and assess content management features accordingly.
  • Transparency: Publish and explain crisis protocols, anticipating scrutiny as the Office of Suicide Prevention begins annual data reporting.
  • Legal Exposure: The private right of action creates risks for noncompliance, necessitating coordinated updates across engineering and legal teams.

How This Differs from General AI Laws

While California’s SB 53 targets frontier AI developers with transparency mandates, SB 243 focuses directly on user-facing safeguards in a specific product category, providing a complementary framework for safety across AI applications.

Industry Response So Far

Many AI and companion app developers are tightening their safety measures for minors. The FTC’s ongoing inquiry is gathering information about how companion chatbots affect young users and how operators measure and mitigate those effects. Some companies have said they are willing to work with regulators to comply with SB 243.

California’s rejection of a broader ban on minors’ access to chatbots indicates a focus on targeted protections rather than blanket restrictions. As discussions continue, questions may arise about the criteria for reasonable notice and the effectiveness of safety measures in mitigating risks.

Open Questions and Edge Cases

  • Identifying Minors: Platforms will need to decide how they operationalize knowledge of a user’s age as age signals are rolled out.
  • Measuring Suicidal Ideation: The law requires evidence-based approaches that align with clinical standards to avoid errors in detection.
  • Privacy and Reporting: Companies must ensure strong privacy measures are in place to prevent re-identification risks in published data.
  • Multi-State Patchwork: With California leading the way, other states may introduce similar regulations, potentially creating a complex compliance landscape for operators.

Getting Ready: A 90-Day Action Plan

  1. Gap Assessment: Evaluate current practices against SB 243 requirements to prioritize urgent modifications by January 1, 2026.
  2. Draft and Publish Crisis Protocols: Include comprehensive strategies for detection, referrals, and public information.
  3. Implement Metrics: Set up tracking for crisis referrals and related interventions so that the reporting required in 2027 rests on evidence-based outcomes (a minimal sketch follows this plan).
  4. Coordinate with Age Signals: Work with tech partners to ensure compliance with AB 1043.
  5. Review Branding and Copy: Ensure communications do not imply clinical qualifications and direct health-related inquiries to human experts where appropriate.
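
To support step 3, a team might maintain aggregate, de-identified counters like the sketch below, which tallies crisis-referral events without storing user identifiers. The counter names and report shape are placeholders chosen for the example, not a required reporting schema.

```python
# Hypothetical aggregate counters for annual reporting; field names are placeholders,
# not the Office of Suicide Prevention's required schema.
from collections import Counter
from datetime import date


class CrisisReferralMetrics:
    """Tracks de-identified counts only; no user identifiers are stored."""

    def __init__(self) -> None:
        self.counts = Counter()

    def record_detection(self) -> None:
        self.counts["self_harm_detections"] += 1

    def record_referral(self) -> None:
        self.counts["crisis_referrals_issued"] += 1

    def annual_report(self, year: int) -> dict:
        return {
            "reporting_year": year,
            "generated_on": date.today().isoformat(),
            **dict(self.counts),
        }
```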

Why This Matters Beyond California

California’s significant tech market influence may position SB 243 as a national standard for AI companion safety. Companies may adopt these safeguards broadly to streamline policy and development processes. The combined impact of SB 243 and federal initiatives suggests a strong expectation for AI companions to be designed safely, specifically for young and vulnerable users.

Conclusion

SB 243 signifies a new chapter in AI regulation, providing actionable, product-specific guidelines for companion chatbots. Teams should prioritize clear AI identity, prevention of harmful content, protection of minors, and transparency in practices. By starting today to implement these principles, organizations can prepare for California’s deadlines and simultaneously promote a safer and more trusted environment for all users.

Thank You for Reading this Blog and See You Soon! 🙏 👋
