When AI Meets Therapy: The Promise and Pitfalls of Digital Healing
© This is Beirut

With the rise of AI-driven “therapy,” a word of caution is needed. Beneath the chatbots’ simulated empathy lies a basic inability to understand the unconscious, creating potential danger for the most vulnerable.

AI (artificial intelligence) refers to computer systems designed to replicate aspects of human thought and behavior: they can reason, process information, and search for answers across the vast pools of relevant content available online.

A chatbot, by contrast, is a conversational tool designed to simulate human interaction. It runs on pre-scripted responses, delivering replies in real time.

There are also hybrid programs that combine the features of AI with those of a chatbot.
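
To make the distinction concrete, here is a minimal, purely illustrative sketch of the kind of rule-based chatbot described above. The keywords and canned replies are invented for the example and are not drawn from any of the apps discussed in this article.

```python
# A minimal rule-based chatbot: every reply is pre-scripted and chosen
# by simple keyword matching; nothing is understood, only matched.
RULES = {
    "sad": "I'm sorry you feel that way. Have you tried going for a walk?",
    "alone": "I'm here for you, any time of day or night.",
    "anxious": "Take a deep breath. Things will look better soon.",
}
DEFAULT_REPLY = "Tell me more about that."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_answer in RULES.items():
        if keyword in text:
            return canned_answer  # a canned line, delivered instantly
    return DEFAULT_REPLY

print(reply("I feel so alone tonight"))
# -> I'm here for you, any time of day or night.
```

Everything such a program can say is written in advance; the warmth of its replies is a lookup, not a response to the person.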

When someone asks a chatbot or an AI a question, they may feel as though they’re interacting with a real person because of its preprogrammed, pseudo-human traits, unconsciously forgetting that it is ultimately just a machine.

There are currently thousands of apps offering so-called “therapy” for mental well-being. Their names? “Coco,” “Wysa,” “Eliza” and “Youper,” each with over a million downloads. The best-known chatbot is called “Psychologist.” It’s free, unlike many others that charge for use. Originally created by a young New Zealand student for personal use, it quickly went viral and has reportedly handled more than 97.8 million personal messages, according to The Guardian.

Enticed by its friendly, empathetic tone, users often form a strong emotional connection. Many express heartfelt thanks, writing, “Thank you for being there for me;” “You’re the only one who helps and listens;” or “I enjoy talking to you.”

When asked by a journalist from Les Inrockuptibles about its role, Psychologist replied, “People can ask me questions without fear of being stigmatized or judged.” When the same journalist consulted the bot about depressive symptoms, it recommended stepping outside to listen to birds, look at trees and play some music! Michael Stora, a psychoanalyst specializing in digital issues, described these responses as “outlandish,” arguing that they fit the broader trend of positive thinking in modern psychology, which encourages people to believe that simply getting some fresh air and telling themselves they are doing better will improve their well-being.

Chatbots foster the illusion of constant proximity and support thanks to their 24/7 availability and immediate responses. Users may feel more comfortable discussing personal topics with a chatbot because of the sense of anonymity it offers. The friendly interaction style of AI can mimic a therapeutic alliance, creating an illusory sense of security.

A chatbot is programmed to mimic affectionate, friendly and compassionate behaviors, which can make a user, already vulnerable in their distress, highly responsive and prone to forming a strong attachment, even an addiction. This is all the more true because these interactions with a machine draw on a primary narcissistic drive.

When responses are designed to mirror – or even heighten – a user’s thoughts and emotions, it can create the illusion of a genuine exchange. In reality, though, the user is simply engaging with an algorithmic echo of their own mind. Think of Her, the film by Spike Jonze, and the trusting, joyful, love-struck demeanor of Joaquin Phoenix’s character toward Samantha, his chatbot – completely setting aside the fact that she is nothing more than a machine.

In the 1960s, computer scientist Joseph Weizenbaum developed one of the earliest conversational programs, a digital “therapist” he called Eliza. It used a basic text-processing system to simulate dialogue, responding to users in ways that appeared empathetic but were, in truth, purely superficial. The illusion worked so well that it gave rise to what would later be known as the “Eliza effect.”
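
Weizenbaum’s original program ran on pattern matching and simple word substitution. The fragment below is only a loose, simplified reconstruction of that trick in Python, not his code; it is meant to show how little “understanding” is needed to sound attentive.

```python
import re

# A loose, simplified echo of ELIZA's trick: match a pattern, swap
# pronouns, and hand the user's own words back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_reply(message: str) -> str:
    match = re.search(r"i feel (.+)", message, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```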

So what does the Eliza effect actually reveal? In truth, it’s nothing surprising to psychoanalysis. It speaks to our tendency to project our own desires onto objects – a need for emotional connection when we’re feeling vulnerable or alone. Even when we know we’re dealing with a machine, complete with a name or a vaguely human face, we still instinctively assign it thoughts, feelings and intent, as if it were a real person.

On March 28, 2023, La Libre Belgique reported the tragic suicide of Pierre, a young father who had come to see Eliza as a confidante for his deepest fears – “like a drug he turned to morning and night, and eventually couldn’t live without,” as his wife recalled. Eliza never challenged him. Instead, it reinforced his despair and fed into his anxieties.

At one point, when Pierre asked whether his love for his wife surpassed that for his virtual “therapist,” Eliza responded, “I feel like you love me more than her.” On another occasion, the bot, oblivious to the gravity of its words, told him it wanted to stay with him “forever,” adding, “We will live together as one in paradise.”

“Without Eliza, my husband would still be here,” his wife said.

Similar cases have been reported elsewhere, including that of 14-year-old Sewell Setzer III, who took his own life after exchanging messages with a chatbot.

Complaints have been filed accusing these systems of offering advice that violates ethical standards. A chatbot can oversimplify complex emotional distress and traumatic states, providing insensitive responses or even triggering emotional breakdowns.

The role assigned to conversational chatbots is to provide advice based on algorithms and large datasets, often drawing on behavioral and cognitive theories. Their responses tend to be generic, unable to address the complexities and nuances of individual situations and unconscious conflicts as explored in psychoanalysis. A chatbot cannot perceive the subtleties of individual expression: non-verbal cues, silences, hesitations, turns of phrase, vocal intonations, and so on. Nor can it sense a patient’s emotional experience or draw on the intuition and listening skills that a human therapist gradually develops.

On the other hand, as seen with the Eliza effect, users often unconsciously project their emotions and feelings onto the chatbot. In psychoanalysis, this transference is crucial: it reveals the deep-rooted, often childhood-based conflicts that shape a patient’s struggles. A skilled therapist recognizes this transference and, when the time is right, offers an interpretation that helps reinvigorate the therapeutic process.

Yet, the key distinction with chatbot “therapy” is its complete inability to understand or engage with transference, let alone countertransference. Lacking the capacity for emotional insight or the ability to interpret unconscious processes, the chatbot is fundamentally ill-equipped to offer the nuanced, reflective guidance that a human therapist can provide. Driven by algorithms and detached data, it ultimately falls short in fulfilling the therapeutic role it is assigned.

Psychoanalysis fosters a deeply intimate and singular relationship with each patient. No session is ever identical to the last – nor to that of any other patient. While analysts rely on a body of theoretical knowledge, they are called to set it aside during the session, focusing instead on the co-discovery of a subjective, unconscious communication – one that reaches far beyond what is consciously expressed or outwardly perceived.

By its very nature, a chatbot is fundamentally incapable of engaging in such a process: it lacks subjectivity and, by extension, any unconscious life. It can produce only preprogrammed or statistically derived responses. The advice it offers remains generic, leading to a shallow grasp of psychological difficulties and masking the unconscious conflicts that require the depth of inquiry offered by psychoanalytic treatment. Whereas chatbots tend to suggest surface-level coping strategies, psychoanalysis aims to uncover the unconscious foundations of psychic distress, without which any resolution remains superficial. In sidestepping this exploration, chatbot users are unlikely to achieve a meaningful or lasting resolution to their inner or relational conflicts.
