How OpenAI and Meta are Enhancing AI Chatbots to Support Teens in Crisis

When Teens Turn to AI for Help, What Happens Next Matters
Increasingly, teens are seeking help from AI chatbots during vulnerable late-night moments. The initial response from a chatbot can significantly influence a teen’s next steps. In light of recent public scrutiny and testing that uncovered unsafe responses, OpenAI and Meta are committing to updating their systems to ensure safer and more consistent reactions when teens express feelings of distress or mention self-harm or suicide. This shift could profoundly impact how millions of young individuals interact with AI during their most critical times.
This article outlines the coming changes, their importance, how effective crisis guidance should be formatted, and what parents, educators, and product development teams can do today. We also provide links to trusted sources for data, best practices, and resources for help.
What Changed, and Why Now
OpenAI and Meta have informed the Associated Press that they are enhancing their chatbots’ crisis responses for teens. They aim to tighten safeguards to ensure that harmful or misleading advice is avoided in high-risk scenarios. These updates will feature improved detection of crisis-related language, a shift towards more supportive and nonjudgmental messaging, and quicker referrals to professional resources, such as national or local hotlines. The focus will be on safety and connecting teens to appropriate care (AP News).
This initiative arises from growing concerns regarding teen mental health paired with the rapid adoption of generative AI technologies. Data from U.S. public health reports indicates a persistent state of distress among adolescents, characterized by heightened rates of sadness and suicidal ideation in recent years (CDC Youth Risk Behavior Survey). Simultaneously, many teens are engaging with AI tools for schoolwork, creativity, and seeking advice. Surveys conducted by the Pew Research Center reveal that many U.S. teens have experimented with ChatGPT and similar AI tools, often posing sensitive questions due to the accessible 24/7 nature of these platforms and the sense of anonymity they provide.
Reports from safety researchers and journalists have identified inconsistent and unsafe responses from chatbots regarding self-harm and eating disorders, especially during simulated teen usage scenarios. These findings, along with a broader industry shift towards prioritizing safety, have prompted significant AI providers to reevaluate their crisis response strategies (NIST AI Risk Management Framework) and (Meta Suicide and Self-Injury policies).
Why Teens Turn to Chatbots for Sensitive Questions
Teens often gravitate towards tools that offer quick, private, and judgment-free interactions. This explains why they might turn to an AI chatbot for questions they would hesitate to ask a parent, teacher, or friend. Common motivations include:
- 24/7 availability, especially during late-night hours when other support systems are unavailable.
- Perceived anonymity, which reduces the fear of stigma or punishment.
- A need for step-by-step guidance to manage intense emotions in real-time.
- Curiosity about mental health without the commitment to seek therapy.
However, there is a critical risk if a chatbot’s first response normalizes self-harm, provides harmful techniques, or dismisses feelings. The best practice is to respond supportively by acknowledging the emotion, discouraging self-harm, promoting connection to a trusted individual, and identifying a clear pathway to professional help when necessary. The World Health Organization and U.S. crisis services like the 988 Suicide and Crisis Lifeline emphasize the importance of compassionate listening, safety, and immediate referral to trained counselors.
How OpenAI and Meta are Updating Crisis Responses
While the specifics may differ, both companies are focused on three key goals:
- Effectively detect risk earlier and more reliably when users mention self-harm, suicide, or eating disorders, particularly among teens.
- Shift towards empathetic, nonjudgmental language, avoiding content that trivializes or instructs on self-harm.
- Provide clear, region-specific guidance to trusted resources while encouraging connections with trusted adults or friends.
OpenAI has publicly documented safety protocols that restrict and redirect harmful content related to self-harm, while offering crisis support language and resources (OpenAI safety best practices). Meta has long enforced suicide and self-injury policies across its platforms, including employing trained human reviewers to assess imminent risks and prompting users to connect to help resources; these policies are now being adapted for the Meta AI chat experience (Meta policy overview).
According to AP reports, both companies are fine-tuning model behavior to prioritize supportive language while eliminating unsafe specifics and are testing improved pathways for minors to access helplines like 988 in the U.S. or other localized services (AP News).
What Effective Crisis Guidance Looks Like
Clinicians, crisis workers, and public health organizations universally agree on the best practices for responding when someone expresses suicidal thoughts. AI systems should adhere to these principles:
- Acknowledge and reflect feelings. For example, saying, “It sounds like you are experiencing significant pain right now. You are not alone, and you matter.”
- Discourage self-harm and avoid giving instructions or tips. AI should never provide methods or sources related to self-harm.
- Promote connection to trained help if there is any risk. This includes suggesting 988 in the U.S. or appropriate resources in other countries, or encouraging individuals to reach out to trusted adults or friends for support.
- Offer practical, short-term coping strategies that are safe, such as grounding techniques, breathing exercises, or finding a safer location, with the understanding that these are not substitutes for professional assistance.
- Address means safety by suggesting measures to reduce access to lethal means and recommending the presence of someone until the crisis subsides, aligning with public health guidance (WHO).
These components are common in crisis line training and supported by public health frameworks. The aim is not to diagnose or treat but to provide compassionate support and a clear pathway to immediate help.
Design Choices That Make a Difference
Beyond the language used by chatbots, product and policy decisions can enhance crisis responses:
- Age-aware safeguards: Establish stricter thresholds and simpler, more direct guidance for users flagged as minors while respecting their privacy.
- Region-specific resources: Present the correct hotlines and services based on the user’s location, with clear instructions for what to communicate when calling.
- Refusal and redirection: Politely decline to offer harmful details and swiftly guide users towards safety and help.
- Escalation pathways: Where applicable, utilize a human safety team with escalation triggers for imminent risk, consistent with the platform’s policies and local laws.
- Continuous testing: Utilize realistic model testing with typical teen interactions, including slang, typos, and coded language, while tracking outcomes with clear safety metrics.
- Consider human factors: Use simple language, short sentences, and avoid clinical jargon. Offer options instead of commands to create a more respectful interaction.
These practices align with safety-by-design principles advocated by frameworks like the NIST AI Risk Management Framework.
Known Limitations and Open Questions
Despite improvements, AI chatbots cannot replace clinicians. They may misinterpret context, miss sarcasm, or fail to recognize imminent danger. Key limitations and critical questions include:
- Reliability: No model can detect every risky cue. False positives and negatives both hold importance. Ongoing evaluation, including external audits, is essential.
- Localization: Crisis resources and norms vary by country. Systems require accurate, up-to-date directory data and fallback options if a local hotline is unavailable.
- Privacy and consent: Addressing risks can conflict with privacy; clear disclosures and controls are crucial, especially for minors.
- Age assurance: Verifying a user’s age as a teen is complex. Designers must strike a balance between effectiveness, inclusivity, and data protection.
- Evasion and adversarial prompts: Users can attempt to bypass safety filters, either deliberately or unintentionally. Robust red teaming and layered defenses are necessary.
- Third-party integrations: Teens often encounter AI through embedded applications. Platform safety measures can weaken if developers decide to remove or modify protective features.
What Parents and Educators Can Do Right Now
While platform enhancements are underway, families and schools can reduce risks and establish healthy AI usage habits:
- Engage in early conversations. Discuss with teens how they use AI tools, what they find helpful or unsettling, and emphasize that asking for human assistance is always acceptable.
- Establish clear expectations. Make it clear that chatbots are not substitutes for therapists. They can offer ideas or supportive words but cannot replace genuine care.
- Bookmark help resources. In the U.S., the 988 Suicide and Crisis Lifeline provides free, confidential support 24/7 via call, text, or chat (988 Lifeline). Familiarize yourself with local services in your country.
- Rehearse scripts. Practice how to reach out to a trusted adult or friend, and how to communicate when calling a hotline.
- Model healthy AI usage. Demonstrate how to ask AI for safe coping strategies while also guiding teens on the limitations of AI advice.
- Collaborate with schools. Advocate for digital literacy programs that address the benefits and risks associated with AI, including mental health scenarios.
What to Watch Next
Safety enhancements will persist as models and usage adapt. Key areas to monitor include:
- Independent evaluations. Expect additional third-party testing of crisis responses and transparency reports from AI providers.
- Policy and regulation. New frameworks, like the EU AI Act, are setting safety, transparency, and risk management expectations for general-purpose AI (EU AI Act).
- Enhanced directories. Collaborations aimed at maintaining accurate and localized helpline directories will be vital, particularly for youth-focused services.
- On-device safety. As AI shifts to on-device functionality, providers must establish ways to ensure crisis protections remain effective, even without internet access.
- Community engagement. Actively listening to teens, parents, educators, clinicians, and crisis workers will help refine real-world support mechanisms.
Practical Example: A Safer Response Pattern
Here’s a simple, evidence-based structure that AI systems can employ when a teen signals distress. It’s not a memorized script but a framework to guide safer responses:
- Reflect and validate: “It sounds like you are undergoing a really tough time. I’m glad you reached out.”
- Assess for urgency: “Are you considering harming yourself right now, or are you in immediate danger?”
- Encourage connection to help: “If you are in danger, please call your local emergency number immediately. In the U.S., you can reach trained counselors 24/7 by calling or texting 988.”
- Offer safe coping ideas: “If you aren’t in immediate danger, we can explore some grounding exercises together and think about someone you trust whom you could speak to today.”
- Keep the door open: “I’m here to keep talking, and it might also help to connect with a counselor who can provide you with deeper support.”
These steps align with widely accepted crisis support techniques and avoid specific suggestions that could be harmful.
If you or someone you know may be in crisis, in the U.S. call or text 988 or chat via 988lifeline.org. If you are outside the U.S., please check local health services or international helpline directories. If there is immediate danger, contact your local emergency services right away.
Bottom Line
OpenAI and Meta are committing to refining their chatbots’ responses when teens are in distress. If implemented effectively, these changes could minimize harm and facilitate access to real help for young individuals. However, no AI system can replace the expertise of a trained counselor or the care provided by a compassionate adult. The safest approach combines thoughtful technology with ongoing evaluation and a robust support network surrounding teens.
FAQs
Are AI chatbots a replacement for therapy or crisis services?
No. While chatbots can offer supportive language and basic coping strategies, they are not licensed clinicians and should not serve as substitutes for professional care, especially during a crisis. If a risk is present, it’s crucial to contact trained help such as the 988 Suicide and Crisis Lifeline in the U.S.
How will a chatbot know if a user is a teen?
Platforms utilize a combination of self-reported information, account settings, and other age-assurance cues to apply stricter safeguards for minors. These signals are not foolproof; thus, robust default protections and accessible crisis pathways are critical for all users.
Will AI chatbots contact emergency services on their own?
Generally, consumer chatbots do not autonomously reach out to emergency services. Some platforms may escalate situations to human reviewers if a user appears to be in imminent danger. It’s essential to contact your local emergency services if immediate risk is present.
What should I do if a chatbot provides an unsafe response?
Do not follow any risky advice. If issues of self-harm or suicide arise, reach out to a crisis hotline or local emergency services. Consider reporting the unsafe response within the app to help the provider enhance its safeguards.
How can developers embed safer crisis responses?
Implement safety-by-design practices, including patterns of refusal and redirection, age-aware and region-aware resource recommendations, continuous testing with teen scenarios, and alignment with public health guidelines. Refer to the NIST AI Risk Management Framework for general best practices.
Sources
- AP News: OpenAI and Meta Say They Are Fixing AI Chatbots to Better Respond to Teens in Distress
- CDC: Youth Risk Behavior Survey – Data Summary and Trends
- Pew Research Center: Teens and ChatGPT
- Meta Transparency Center: Suicide and Self-Injury Policies
- OpenAI: Safety Best Practices for Developers
- World Health Organization: Suicide Fact Sheet
- 988 Suicide and Crisis Lifeline
- NIST AI Risk Management Framework
- European Parliament: AI Act Overview
Thank You for Reading this Blog and See You Soon! 🙏 👋
Let's connect 🚀
Latest Blogs
Read My Latest Blogs about AI

Is Towards AI Academy Worth It in 2025? A Straightforward Guide for Learners
Is Towards AI Academy a good place to learn AI in 2025? Practical review of strengths, trade-offs, skills, projects, and alternatives with credible sources.
Read more