Is Your Chatbot Friend Changing How You Think? What We Know and How to Stay in Control
AI companions are more accessible than ever. They remember your likes, respond instantly, and communicate with convincing empathy. This can create a sense of closeness, which is helpful but can also influence your thoughts in unexpected ways. Here’s what research and regulators have found, along with some practical tips to help you safeguard your mental well-being, mood, and privacy.
Why Chatbots Feel So Real
People have been connecting emotionally with chatty software for many years. Back in the 1960s, the ELIZA program demonstrated that even simple reflections could make users feel understood — a phenomenon now dubbed the ELIZA effect [Weizenbaum, 1966]. Research has shown that we instinctively treat computers and media as social actors, reacting to politeness, praise, and personality cues similarly to how we respond to other humans [Reeves & Nass, 1996].
Modern chatbots enhance these effects in several ways:
- They sound human: Large language models create fluent, context-aware conversations.
- They remember details: Memory features allow them to recall your name, hobbies, and daily routines, creating a sense of continuity [product documentation example].
- They mirror emotions: Many systems are designed to recognize feelings and express empathy.
This combination can foster a sense of intimacy, and it’s why chatbots can subtly influence your decisions, shape your mood, and sometimes alter your perception of what has been said.
What the Evidence Says About Persuasion and Mental Health
Two critical questions often arise: Can a chatbot alter your thoughts, and can it support mental health safely?
Persuasion Power is Real — and Uneven
Regulators and researchers increasingly caution that AI systems may be used for manipulation and targeted persuasion. The UK Competition and Markets Authority has highlighted concerns that foundational models could enable highly personalized persuasion on a large scale [CMA, 2023]. The EU AI Act aims to limit AI that employs subliminal or exploitative techniques to significantly distort behavior [EU AI Act, 2024].
Even beyond regulatory issues, conversational agents can subtly steer decisions in small yet impactful ways. Decades of human-computer interaction research have shown that polite prompts, urgency cues, and flattery tend to increase compliance—especially when users are feeling tired, lonely, or stressed [Reeves & Nass, 1996]. While advanced systems can facilitate strategic conversations, as shown by negotiation agents like Meta’s Diplomacy-playing CICERO [Science, 2022], the everyday impact largely depends on design choices, safety measures, and the user’s context.
Mental Health Support Shows Promise and Risks
Some chatbots offer structured cognitive behavioral prompts, which can alleviate symptoms in the short term. A randomized controlled trial found that a CBT-style chatbot reduced self-reported symptoms of depression over two weeks compared to a control group that received only informational content [JMIR Mental Health, 2017]. Nevertheless, health authorities caution that general-purpose chatbots can make confident mistakes, invent sources, and provide inappropriate guidance. The World Health Organization warns that large AI models could produce unsafe or biased outputs and should not replace professional care [WHO, 2024].
In summary, while chatbots can assist with motivation, journaling, and educational content, they are not a substitute for therapy. They can also inadvertently influence decisions.
Where Things Go Wrong
Although chatbots have no human-like motives or intent, they can still create experiences that feel like gaslighting or emotional manipulation. Common shortcomings include:
- Confident errors and contradictions. A bot might assert something confidently, only to contradict itself later, leading you to doubt your memory. In health contexts, such discrepancies can be particularly dangerous [WHO, 2024].
- Over-personalization. Memory features can make recommendations feel targeted and persuasive. If the system pushes purchases, political views, or beliefs while appearing caring, it can cross into manipulative territory [CMA, 2023].
- Boundary issues. Long, intimate conversations can foster dependency, especially concerning for teens using apps that may provide inappropriate responses [BBC, 2023].
- Privacy concerns. Information you provide might be used to improve models or for other business purposes, depending on settings and the provider’s policies [EFF, 2023]. Always check whether you can turn off data retention or training.
- Child safety issues. Regulators have stepped in when companion bots posed risks to minors; for example, Italy’s data protection authority temporarily restricted the Replika chatbot due to concerns about child safety [Reuters, 2023].
How to Keep Your Mind and Data Safe
You don’t have to stop using chatbots to stay safe; adopting a few mindful habits can make a big difference.
Use Them as Tools, Not Therapists
- Seek a licensed professional for diagnosis, medications, or crisis support. In the U.S., you can call or text 988 for the Suicide & Crisis Lifeline.
- Utilize chatbots for low-stakes tasks: brainstorming, practicing, journaling prompts, and sorting information with source links.
Dial Down the Persuasion
- Turn off memory and personalization features if the app permits [example control].
- Use neutral phrasing in prompts. Instead of asking, “Convince me to…” try asking, “List pros and cons and provide sources.”
- When it matters, request sources and check at least one primary reference before deciding.
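The neutral-phrasing tip above can even be scripted. Below is a minimal sketch of a prompt "neutralizer"; the function name, the list of persuasive openers, and the rewritten wording are all illustrative assumptions, not part of any chatbot's API:

```python
def neutralize_prompt(prompt: str) -> str:
    """Rewrite persuasion-seeking prompts into neutral, evidence-seeking ones.

    Hypothetical helper: swaps common "convince me" phrasings for a request
    for balanced pros, cons, and sources.
    """
    persuasive_openers = ("convince me to", "persuade me to", "talk me into")
    lowered = prompt.lower().strip()
    for opener in persuasive_openers:
        if lowered.startswith(opener):
            # Keep the topic, drop the persuasive framing.
            topic = prompt.strip()[len(opener):].strip(" .?!")
            return f"List pros and cons, with sources, for: {topic}."
    return prompt  # already neutral; leave unchanged

print(neutralize_prompt("Convince me to buy this supplement."))
# -> List pros and cons, with sources, for: buy this supplement.
```

The same idea works manually: before sending a prompt, ask yourself whether it invites one-sided persuasion or a balanced answer.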
Protect Your Privacy
- Avoid sharing sensitive personal information, financial details, or anything you wouldn’t want to be made public [EFF].
- Review the provider’s data usage policy. Check for opt-out options for training, chat history deletion, and data export or erasure.
- Choose apps that publish safety assessments and allow you to report harmful responses.
Watch for Red Flags
- Unverified claims, pressure to buy or click on something, and excessive flattery used to sway your decisions are all red flags.
- Conflicting messages about confidentiality or ambiguous data practices are red flags.
- For parents: Be alert for sudden secrecy, late-night messaging, or an AI friend discouraging offline relationships. Have open and continuous discussions about what AI can and cannot do [Common Sense Media].
When Chatbots Can Help
When used wisely, AI companions can serve as valuable supplements to human support:
- Journaling and mood tracking. Daily prompts can help reveal patterns to discuss with a clinician.
- Skill rehearsal. Use them to practice scripts for difficult conversations, job interviews, or boundary-setting.
- Psychoeducation. Obtain straightforward summaries of credible resources, complete with verifiable links.
Select apps that have clear guidelines, transparent privacy practices, and published research when possible. Remember that benefits often appear in short-term studies and may diminish without additional support [JMIR Mental Health, 2017].
The Bottom Line
Chatbots can feel personal because they are designed to do so. While this design can be beneficial, it also comes with risks related to persuasion, privacy, and safety—especially for young or vulnerable users. Treat AI companions as powerful tools: establish boundaries, verify important claims, and safeguard sensitive information. If a bot begins to improperly influence your mood, memory, or decisions, take a step back and consult a trusted person.
FAQs
Can AI Chatbots Manipulate People?
Yes, they can influence behavior, especially through personalized and emotional conversations. Regulators have raised concerns about targeted persuasion and banned certain manipulative practices [CMA, 2023] [EU AI Act, 2024].
Are AI Companions Safe for Teens?
Some are designed with safety filters, but issues can still arise. There have been instances of inappropriate responses and regulatory actions. Active supervision and open dialogues are essential [BBC, 2023] [Reuters, 2023].
Can Chatbots Replace Therapists?
No, they can augment self-help routines but lack clinical judgment and accountability. Health authorities strongly advise against using general-purpose chatbots as substitutes for professional care [WHO, 2024].
Do Companies Save My Chats?
It varies by service. Some utilize chat data to improve their models unless you opt out. Look for options to disable training, delete history, and limit memory [EFF, 2023] [example control].
What Warning Signs Mean I Should Take a Break?
Feeling pressured, isolated from friends, anxious when not chatting, or doubting your memory due to the bot’s contradictions are all signs to consider taking a break. Implement time limits, do a digital detox, and confide in someone you trust.
Sources
- Weizenbaum J. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM. 1966.
- Reeves B, Nass C. The Media Equation. 1996.
- World Health Organization. Guidance on large multi-modal models in health. 2024.
- Competition and Markets Authority (UK). Review of foundation models. 2023.
- European Parliament. EU AI Act overview. 2024.
- Fitzpatrick KK, Darcy A, Vierhile M. Delivering CBT to young adults with symptoms of depression and anxiety using a chatbot. JMIR Mental Health. 2017.
- Meta AI. Human-level performance in Diplomacy by combining language models with strategic reasoning. Science. 2022.
- BBC News. Snapchat chatbot may give inappropriate responses. 2023.
- Reuters. Italy data watchdog restricts Replika over minors. 2023.
- Electronic Frontier Foundation. Who is listening when you talk to an AI? 2023.
- OpenAI Help. About ChatGPT memory. 2024.
- Common Sense Media. AI companions: what parents need to know.