— Chetna Thanki
As this article is being written, ChatGPT has garnered 800 million weekly active users. That is 1 in every 10 people around the world. It is not surprising, really. Over the last three years, AI has integrated itself into every aspect of human life, and we have grown more complacent than ever before. It is now the norm for people to turn to these tiny chatbots for everything from meal prepping and weather predictions to writing eulogies, or even getting a diagnosis.
While the exact numbers aren’t available, we know that a very significant number of these users are also turning to ChatGPT for emotional support or as their personal therapist. Isn’t that something? Given the current mental health crisis in the world, where therapy always seems just out of reach, at least people are trying to take care of themselves instead of remaining helpless. One might argue that something’s better than nothing, right? Wrong.
(I cannot ignore the irony that we humans are social animals, yet with the arrival of, and our over-dependence on, AI, we find ourselves moving away from any sort of personal, in-person interaction. My bigger concern is not that AI can replace humans in relationships; it is that people will insist it can and ignore the downsides of such a substitution.)
Let’s break it down.
ChatGPT and other AI tools are built on Large Language Models (LLMs), systems trained to generate human-like responses to user prompts. At the foundation of their very "being," they are designed to be helpful and non-confrontational. We know that traditional therapy may be unaffordable for many people, so a tool that is easy to use, available 24/7, and free may seem like a no-brainer, right? While that may sound pleasant and convenient, this very design harbors a serious flaw.
Therapy is not about venting into your screen and receiving validation in return. Real therapy requires effort, insights, and breakthroughs. One part of therapy is helping people understand their minds. It’s providing them with a certain amount of knowledge, tools, and frameworks that help them do so. One learns to confront avoidance, recognize toxic patterns, and face painful truths.
The other part is more empathic and based on human connection. Trusting another human being activates an empathic circuitry that helps us heal and understand ourselves better. When a therapist treats us in a compassionate and non-judgmental way, we integrate that into ourselves and change, for example, our inner dialogue. By experiencing compassion from another human, we learn to be self-compassionate.
Conversational AI, on the other hand, cannot fundamentally understand what you're going through. Yes, it can provide validation and support, but it cannot pick up on nonverbal cues, nor is it incentivized to challenge or question your thoughts and beliefs (unless you prompt it to do so). You see, the responses generated by AI tools are based on pattern recognition: they are trained on existing text and data from the internet and run on algorithms optimized to say the "right thing." That's not therapy, babe; it's a digital yes-man.
And at what cost?
A machine, however sentient it may seem, is not equipped to emotionally comprehend the troubles and challenges faced by its user. It lacks any social and emotional learning beyond its training data. A few prompts in, and AI can quickly become an echo chamber of its user, where one's worries and thoughts are simply mirrored and repackaged in therapy-speak rather than understood or dissected. Yes, it can be comforting to hear it say "you're not alone" and "I will be here," but empathy from a chatbot is a placebo for real problems and struggles. Therefore, one cannot expect objectivity.
Now look, it is not lost on me that having therapist-like support in your pocket can be very alluring, and this article is in no way meant to be a holier-than-thou think piece. We know that conversational AI cannot really "replace" traditional therapy, but its widespread usage has serious repercussions if not kept in check. Currently, there are no parameters or guidelines that hold these chatbots accountable for their output, behavior, and impact.
Integrating AI into mental health is a double-edged sword. While it can prove beneficial for early diagnosis and interventions aimed at symptom reduction, its lack of empathy absolves AI of any responsibility towards its users. This should not come as a surprise, given that the whole system is set up to please rather than to provide care.
While AI therapy is in its nascent stage, the need of the hour is stricter guidelines and ethical boundaries that make the technology safe, if not entirely harmless, to use. It is important to remember that AI requires human involvement for its functioning and protocols. It has the potential to be a great tool for psychoeducation and for assisting therapists and other mental health professionals in their practice. But we need to protect the most sacred part of ourselves, our mental well-being, from the harms of AI. Therefore, we need a radically different approach when it comes to programming AI, one that is person-centric first. Let's weave empathy and ethics into the code; the cleverness of algorithms can only take you so far.
References:
- Brewer, Jud. "The Hidden Danger of AI Therapy." Inside the Curious Mind, 4 Oct 2025.
- HealthyGamerGG. "Can AI Replace Therapists? | Psychiatrist Explains." YouTube, 18 May 2023.
- Mickey Atkins. "Chat GPT Isn't Your Therapist & Why Using It as One Is Dangerous." YouTube, 5 July 2025.
- Psych2Go. "Can AI Really Help with Mental Health?" YouTube, 17 May 2025.
- TEDx Talks. "How to Create AI to Improve Mental Health | Stevie Chancellor | TEDxMinneapolis." YouTube, 18 Dec 2024.