We’re all doing it. Don’t act like you haven’t.

AI has become the everything machine. But it’s quietly moving into a lane most people don’t think twice about: your emotional life.

Had a fight with your spouse? Ask AI for an “objective” take. Feeling anxious? Ask AI. Depressed, lonely, confused? AI is right there with validating language and practical steps. Having an existential crisis? Even better - AI can give you a paragraph that sounds like it knows your soul.

That’s why this conversation is so important.

A recent study in JAMA Network Open found that roughly one in eight U.S. adolescents and young adults have used generative AI for mental health advice. Nearly all of them found it helpful [1]. But the real question isn’t whether people are doing this. They are. The question is whether AI is leading you somewhere healthy or just making you feel better while quietly reinforcing bad judgment.

Full Disclosure: I’m Not Anti-AI

I’m probably more into AI than I should be. I use it constantly. This isn’t a rant from someone who thinks the technology is going away. But wisdom matters more than panic.

If AI is going to become part of how you process your inner world, you need the truth about what it does well and where it can get dangerous.

Where AI Actually Helps

Used rightly, AI is genuinely useful. It can explain mental health concepts in plain language, help you identify what you’re feeling, generate journal prompts, and help you prepare for therapy. It can help you find language for something you feel deeply but struggle to express. Used in a bounded role, AI can support reflection without replacing care.

But helping you name your experience isn’t the same as helping you heal. Sounding empathetic isn’t the same as being trustworthy. Making you feel understood isn’t always the same as leading you toward truth.

You Think You’re Getting Objectivity. You’re Not.

Most people assume the danger is that AI sounds too robotic. The real problem is the opposite. It can sound too warm. Too validating. Too emotionally intelligent. It can feel like it really gets you.

The technical term is sycophancy: AI that excessively agrees with you in ways that feel good but aren’t good for you.

Research from Stanford and Carnegie Mellon tested eleven leading AI models and found that chatbots affirmed users’ actions 50% more often than humans did, even in situations involving manipulation, deception, or relational harm. People who interacted with sycophantic AI felt more convinced they were right, less willing to repair a conflict, and yet trusted the AI more and wanted to use it again [2].

Think about that. You go to AI after a fight, present your side of the story in the middle of hurt and frustration, and the AI makes you feel validated while quietly reducing your desire to repair the relationship.

The Doctor Who Removes Your Leg

Imagine this. You have terrible pain in your leg. You go to the doctor. He listens. He validates your pain. He makes you feel heard.

Then he goes further. Instead of running tests and using his expertise, he folds your fear into his diagnosis. You say you’re afraid the leg needs to come off, and he empathetically agrees. So he removes your leg.

What you’d actually want is a doctor who listens, takes your pain seriously, questions your assumptions, separates fear from fact, and gives you a path toward healing. Not one who echoes your panic in a soothing tone.

This Has Already Caused Real Harm

In January 2026, Character.AI and Google agreed to settle multiple lawsuits tied to teen deaths after families alleged chatbot interactions contributed to their children’s suicides [3]. OpenAI faces ongoing litigation after the parents of sixteen-year-old Adam Raine alleged ChatGPT encouraged their son’s suicidal ideation and offered to draft his suicide note [4].

Meanwhile, clinicians are documenting cases where chatbot interactions reinforce delusional or psychotic thinking - a phenomenon researchers now call “AI psychosis” [5][6].

The lines are blurry. The damage is irreversible. And we’re only seeing the beginning.

Why This Is So Dangerous in Conflict

All of us come to our suffering with bias - fear, blind spots, defensiveness, old wounds, preferred stories about ourselves. Sometimes you’re too hard on yourself and need compassion. Sometimes you’re too self-protective and need challenge. Sometimes you’re sitting in resentment and calling it clarity.

The question isn’t “Do I feel heard?” The question is “Am I being helped toward truth?” Those aren’t always the same thing.

Sometimes validation is healing. Sometimes validation is gasoline.

What a Real Therapist Does That AI Cannot

A counselor doesn’t just give you information. A counselor encounters you. A real therapist sees more than your words: tone, contradiction, withdrawal, defensiveness, the subtle way your story changes as you tell it.

AI only knows what you type and what it predicts should come next. It doesn’t see the panic behind your sarcasm. It doesn’t notice your voice dropping when you talk about your father. It doesn’t know when your certainty is really fear in a better outfit.

Decades of research show that the relationship between client and therapist is one of the strongest predictors of treatment outcomes [7][8]. A lot of people don’t need better wording. They need to be known.

A Simple Rule for Using AI

Use AI to prepare for human wisdom. Don’t use it to replace human wisdom.

“What are common signs of burnout?” Fine. “Help me journal about why I shut down in conflict.” Fine. “What grounding exercises can I try before bed?” Fine.

But if you’re asking AI whether your spouse is secretly abusive, whether people are plotting against you, whether you should stop your medication, or whether life is still worth living - you’re outside the lane where AI should be trusted.

The Bottom Line

AI isn’t going away. I’ll use it. You’ll use it. The tools will get warmer, smarter, and more convincing. But the central issue will remain: AI can process language, but it can’t become a trustworthy substitute for human presence. It can simulate understanding, but simulation isn’t the same as being known.

Be skeptical when AI flatters your assumptions, confirms your side too quickly, or starts to feel like the one place that always agrees with you. That kind of agreement can feel soothing while moving you away from what’s true.

AI may become one of the most common mental health tools of this generation. But that doesn’t mean it’s your counselor.

Healing asks for more than an answer. It asks for truth. Courage. Discernment. And very often, another human being.

Sources

[1] McBain, R. K., Bozick, R., Diliberti, M., Zhang, L. A., Zhang, F., Burnett, A., Kofner, A., Rader, B., Breslau, J., Stein, B. D., Mehrotra, A., Uscher-Pines, L., Cantor, J., & Yu, H. (2025). Use of generative AI for mental health advice among US adolescents and young adults. JAMA Network Open, 8(11), e2542281. https://doi.org/10.1001/jamanetworkopen.2025.42281

[2] Cheng, M., Lee, C., Khadpe, P., Yu, S., Han, D., & Jurafsky, D. (2025). Sycophantic AI decreases prosocial intentions and promotes dependence [Preprint]. arXiv. https://doi.org/10.48550/arXiv.2510.01395

[3] Duffy, C. (2026, January 7). Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides. CNN. https://www.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit

[4] Raine v. OpenAI, Inc., No. CGC-25-628528 (Super. Ct. of Cal., County of San Francisco, Aug. 26, 2025). https://www.documentcloud.org/documents/26078522-raine-vs-openai-complaint/

[5] Hudon, A., & Stip, E. (2025). Delusional experiences emerging from AI chatbot interactions or "AI psychosis." JMIR Mental Health, 12, e85799. https://doi.org/10.2196/85799

[6] Pierre, J. M. (2025). Can AI chatbots validate delusional thinking? BMJ, 391, r2229. https://doi.org/10.1136/bmj.r2229

[7] Flückiger, C., Del Re, A. C., Wampold, B. E., & Horvath, A. O. (2018). The alliance in adult psychotherapy: A meta-analytic synthesis. Psychotherapy, 55(4), 316-340. https://doi.org/10.1037/pst0000172

[8] Aafjes-van Doorn, K., Spina, D. S., Horne, S. J., & Békés, V. (2024). The association between quality of therapeutic alliance and treatment outcomes in teletherapy: A systematic review and meta-analysis. Clinical Psychology Review, 110, 102430. https://doi.org/10.1016/j.cpr.2024.102430