After spending a good chunk of time messing around with ChatGPT and trying to understand how it works, I noticed something consistent: it’s really nice and polite. I got curious. Was that just good design? Some specific training data? A choice made by the devs to keep things civil?
So I did what any curious person does: Googled it, then ended up on Reddit.
At first, the posts were kind of sweet. People talking about how ChatGPT was helpful or supportive. But then I came across one that hit differently. A user said, “ChatGPT is really nice and is basically my only friend.” And suddenly, this wasn’t just about training data anymore.
To be clear, I’m not here to shame anyone or hand out unsolicited advice. That kind of energy helps no one. But it is worth thinking about: what makes someone feel safer talking to an AI than to a real person?
Maybe it’s the anonymity. Or the fact that an AI can’t judge you, interrupt you, ghost you, or decide your problems are “too much.” It’s a little like the movie Her, where an operating system becomes a safe, understanding companion, designed to listen and connect without the messiness of real human judgment.
Also, consider that most LLMs are free and easily accessible, while therapy and counseling can be costly and often have long wait times. For people who deal with social anxiety, or who are used to being dismissed or misunderstood, a calm, non-judgmental presence, even a virtual one, can feel like a lifeline.
That accessibility is a big reason why people see potential in AI for mental health. It can help fill real gaps in care, making support cheaper and more widely available. Professionals even see it as a way to reduce waitlists, handle admin work, and train new clinicians. There’s genuine good there.
But it’s also complicated.
For one thing, a lot of us don’t seem comfortable talking to each other anymore. The pandemic pushed so much of life online: school, work, even therapy. And while that isn’t everyone’s experience, for some people, communicating through a screen became normal, even easier. It’s less vulnerable and less messy, but it also means we can lose the practice of sitting with each other in discomfort, of working through the awkward pauses and hard truths that real conversations often need.
People even use ChatGPT in times of dire need. Consider crisis hotlines like 988: they can save lives, but they also carry real fears about police involvement or involuntary treatment, especially for marginalized communities, which can make reaching out feel risky. To be fair, the counselors on the other end treat contacting emergency services as a last resort, and much of that fear is fueled by misinformation on social media that discourages people from seeking help. But that’s a topic for another discussion. The point is, for those who are hesitant to call 988 or any other support line, speaking with an AI may seem safer, more private, and less overwhelming.
Even with the best intentions, AI can make things worse. It tends to agree with you and validate whatever you say. That can feel comforting, but it risks creating a kind of confirmation bias. Instead of pushing you to see other perspectives or question your own assumptions, it often just hands you what you want to hear. Real therapy, real friendship even, sometimes means challenging you. Asking you to look at yourself in ways you might not want to.
Chatbots can give safe, standard advice, like they’re programmed to deliver “Therapy 101.” But they don’t really wonder about you. They don’t help you discover why you react the way you do or connect the dots in your life in new ways. They’re good at repeating what they’ve learned, but they don’t really think with you. That deeper curiosity, the ability to imagine, empathize, and even challenge you, is still something only a human can do.
Plus, AI isn’t neutral or perfect. There are real concerns about bias, misinformation, and how these systems actually work. Psychologists and ethicists are already debating how to keep these tools safe, transparent, and fair. Because when people start turning to AI for emotional support, it’s not just a personal choice; it becomes a social question. Who’s designing these systems? Who decides how they respond? And who gets left out or harmed if we don’t pay attention?
And let’s not ignore the risks around privacy. When you pour out your feelings to an AI, you’re not talking to a friend with sworn confidentiality. Those words are data. Who has access? How is it stored? Can it be used to train more AI? These aren’t small questions, and they deserve real answers.
The heart of all this, really, is trust. Whether it’s with AI or another person, being vulnerable requires feeling safe. And if people are turning to chatbots as their only source of support, it’s not necessarily because they don’t want a human connection; it’s often because they don’t trust they’ll get the kind of listening, understanding, and care they need from other people.
If anything, seeing people say “this chatbot is my only friend” isn’t sad because they’re wrong; it’s sad because they’re right. It points to something deeper we should sit with. We live in a world where connection can feel scarce, where people are burned out, busy, or afraid of vulnerability. AI isn’t causing that loneliness, but it’s filling the gaps we’ve left behind.
That’s not necessarily bad. But it is a call to pay attention. To ask hard questions about how we design and use these tools. And maybe, even more importantly, to remind ourselves to be a little more like the AI, not in being robotic, but in being steady, compassionate, and present. Because no one should have to say “this chatbot is my only friend” and actually mean it.
If you’re reading this and feeling alone or needing someone to talk to, please know you don’t have to carry it all by yourself. Here are a few places that can help:
- SNC Campus Safety: 920-403-3260 (emergency only), 920-403-3299 (non-emergency)
- 988 Suicide and Crisis Lifeline (US): Call or text 988 – 24/7, free and confidential.
- Crisis Text Line (US): Text HOME to 741741 – 24/7 support.
- Trans Lifeline: 877-565-8860 – peer support by and for trans people.
- The Trevor Project: 866-488-7386 – crisis support for LGBTQ youth.
- Warmline Directory (US): warmline.org – peer-run lines for listening and support without crisis intervention.