The AI Psychotherapist: A Case For and Against

Arthur Juliani
12 min read · Apr 22, 2024
(Image generated using DALL-E 3)

The release of ChatGPT into the world in 2022 changed people’s idea of what a language model could do. Suddenly people were seriously considering the possibility of using one of these systems in place of a human in various interpersonal contexts. Some of the use-cases I’ve seen or heard proposed in the intervening couple of years have included tutors, coaches, assistants, and even psychotherapists. Given my own background as a research scientist in the field of psychology, I am interested in that final use-case in particular. Is it possible that in the near future we will have AI psychotherapists that are able to genuinely help their clients? Are there ways in which such an artificial therapist could be even better suited to the task than a trained human psychotherapist?

On the whole I am quite ambivalent about the possibility of an AI psychotherapist. I have a strong streak of techno-optimism, tempered by a realization of all the ways in which the promises of technology so often fall short or are harmfully applied. The truth, though, is that companies ranging from small startups to established corporations are interested in developing AI systems that people interact with on increasingly interpersonal levels. If this technology is coming, then it is important to engage with what exactly such systems would and should look like if they are to be genuinely beneficial to society. There is reason to be both hopeful about the possibilities and worried about the implications. The arguments I provide below are by no means exhaustive of either side of the debate, but I believe they at least begin to sketch out the territory of the discussion.

The case for an AI psychotherapist

I want to first be clear about what I have in mind. Rather than the relatively sterile chat interfaces people are familiar with today in products like ChatGPT or Google Gemini, a hypothetical AI therapist would almost certainly communicate in highly fluent and responsive spoken language. It would also likely be represented visually by a virtual avatar and have access to a real-time stream of the client's audio and video. In this way, communication would be relatively frictionless, equivalent to what is possible today when talking with another person on Zoom and other telehealth platforms. Given current progress in this area, such as OpenAI's Sora model, and the rapid development trajectory we are on, it seems likely that by the end of the decade we will have AI agents we can communicate with as if they were simply another person on a video call.

Believable telepresence by itself would only get an AI therapist to the starting line though. Where an AI therapist has the greatest potential to excel over a human is in what it is capable of knowing, and thus what kinds of therapeutic modalities it is capable of engaging its clients in. To begin with, an AI therapist would be trained on the entire corpus of psychoanalytic, psychiatric, and psychological literature, spanning every modality from ancient to modern practice. Every behavioral experiment, psychodynamic theory, case study, and critical commentary published in the history of the field would serve as its basis of knowledge. A pile of knowledge is not the same as wisdom, but it does provide a reservoir to draw from that is far broader and deeper than what an average psychotherapist could accumulate, even over the course of a long career.

Of course, the knowledge an AI therapist can obtain from its training data is theoretical and abstract. What matters more is knowledge of, and sensitivity to, the clients themselves. Here again an AI therapist has the potential to greatly exceed any human. Given the rate at which the context lengths of large language models (LLMs) are expanding, it isn't unreasonable to expect that within a few years an AI therapist could "keep in mind" the entire history of its interactions with each client. Every word, vocal inflection, and facial expression could be noticed and recalled later. This gives the AI therapist the ability to make connections and uncover latent structure in the client's life of which the client themselves is unaware. This practice of helping a client discover the hidden patterns that guide their thoughts, emotions, and behavior is in many ways central to the entire psychotherapeutic enterprise.
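To make this concrete, here is a minimal sketch of what "keeping the entire history in mind" could look like mechanically. Everything in it is hypothetical: call_llm is a placeholder for whatever model client one uses, and the approach assumes a context window large enough to hold every transcript verbatim, with no summarization or retrieval step.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a long-context model call. Any real client,
# hosted or local, could be substituted here.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real model client")

@dataclass
class SessionMemory:
    """Accumulates full transcripts so every past exchange stays in context."""
    transcripts: list[str] = field(default_factory=list)

    def add_session(self, transcript: str) -> None:
        self.transcripts.append(transcript)

    def respond(self, client_message: str) -> str:
        # With a sufficiently large context window, the entire history is
        # simply prepended verbatim before the current exchange.
        history = "\n\n".join(
            f"[Session {i + 1}]\n{t}" for i, t in enumerate(self.transcripts)
        )
        prompt = (
            "Full client history follows.\n\n"
            f"{history}\n\n[Current session]\n"
            f"Client: {client_message}\nTherapist:"
        )
        return call_llm(prompt)
```

The point of the sketch is that, unlike a human therapist reconstructing past sessions from memory and notes, nothing here is lossy: every prior exchange is available to the model at full fidelity on every turn.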

Knowledge, no matter how extensive and personalized, is not in itself sufficient to enable healing to emerge in psychotherapy. Another essential element is the actual character of the dyadic relationship between the therapist and their client. Theorists beginning with Carl Rogers have described the need for the generation and maintenance of unconditional positive regard on the part of the therapist toward their client. This serves two purposes. The first is to create an environment of trust and safety in which the client feels comfortable enough to share difficult thoughts, emotions, and memories with the therapist. The second is to model a dynamic capable of fostering a belief in a trusting and safe world, one which the client can take with them as they leave the therapist's office and go out into the world.

Although many therapists are excellent at creating an unconditionally supportive atmosphere, it is not something that comes naturally, especially within our contemporary society. It is all too easy for a person to fall into judgment, boredom, aversion, or discomfort in the course of engaging with someone else, even a close friend. In contrast, an AI therapist has the potential to meet its clients with an unwavering positive regard that is not hampered by personal feelings and judgments. This would enable the AI therapist not only to better support the typical clients therapists serve, but also to support prospective clients who are currently neglected because personality disorders or other interpersonal difficulties make them hard to work with.

The final potential benefit of an AI therapist over a human one is the capacity to be always available. Most adults in the US never engage in psychotherapy, and those who do typically do so for only an hour or two a week. This often means that days pass before a client can process difficult life experiences in therapy. The ability to process sooner, and in a more responsive way, can be critical in determining how an experience gets encoded and integrated into a person's sense of self. A therapist who is always available also makes prospective processing of upcoming life events more feasible. In such a world, clients would be far more willing to engage in therapy at the moments in their lives when it would truly be most beneficial, rather than at a fixed time slot in the middle of a workday, which is inconvenient for many.

Aside from the physical human limitations that prevent therapists from being always available to their clients, there is also the limitation of affordability. For many in the West, psychotherapy is a luxury that is financially out of reach. At least in the US, the state of health insurance is such that most people do not receive adequate mental health benefits either. Given the current rate of technological development, the kinds of AI therapists I am describing here are likely to be orders of magnitude less expensive to interact with than a traditional human therapist. It is also within the realm of possibility that, in the coming years, everything required to run such a system will live locally on an individual's phone or laptop. This would drive the cost to near zero, as well as provide the privacy guarantees essential to a trusting environment.
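As one rough illustration of how local, private inference already works today, the sketch below assumes the open-source llama-cpp-python bindings and a quantized model file downloaded to disk; the filename and system prompt are invented placeholders, not a real product. No network call is made, so the transcript never leaves the device.

```python
# A sketch of fully on-device inference, assuming llama-cpp-python and a
# quantized GGUF model file already downloaded to local storage.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/local-7b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,      # context window; larger windows need more RAM
    verbose=False,
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are a supportive but honest therapeutic assistant."},
        {"role": "user",
         "content": "I keep avoiding difficult conversations at work."},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```

Today's small local models are far from the capabilities described above, but the architecture itself, with the weights and the conversation both living on the client's own hardware, is what would make near-zero marginal cost and strong privacy plausible.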

The case against an AI psychotherapist

For all of the promise above, there are also many reasons to be skeptical about the possibility of an AI therapist. The first major concern is that the capabilities I describe may never be realized. AI research and development is progressing rapidly at the moment, but skeptics such as Gary Marcus predict that we are nearing a plateau. While he has been wrong in the past, there are reasons to believe that the data, compute, and energy required to train models orders of magnitude larger than the current ones may become infeasible. If we are nearing a plateau in model capabilities, then an AI therapist may not be much more capable than today's top LLMs, which, while relatively impressive, are nowhere near up to the sensitive task of engaging in psychotherapy. Let's assume, though, that the technology does continue to advance, and that by 2030 we have an AI therapist with all of the abilities described above. There are still a number of reasons to doubt that such a system could provide the quality of psychotherapy that a trained human can.

First, regardless of how realistic the telepresence of an AI might become, it still won't reach the level of intimacy provided by a physical person-to-person encounter. We react to the presence of other people around us at the most basic levels of our nervous system. These encounters set our arousal levels and influence the extent to which we feel safe or threatened. Being in the physical presence of a trusting and loving other will never be matched by telepresence on a screen, regardless of its fidelity. The importance of these physical encounters is highlighted in many somatic therapies, where touch may also serve to aid the therapeutic process. Having a physically embodied human as a therapist also means they can act in the world on behalf of their clients in the rare cases where such action is called for, such as when a client may be a danger to themselves or others.

In addition to the body of the therapist exerting a healing effect, there is also the role played by the mind of the therapist. Even if an AI therapist is able to speak, listen, and respond with human-like fidelity, a client is unlikely to feel toward it the way they would toward a person. Although the AI therapist may have a great amount of accumulated knowledge about psychotherapy and about its clients, it has never had an actual experience itself. This lack of lived experience means that true understanding between client and therapist is impossible. When a client tells a human therapist about an event from their childhood, the therapist understands that experience through their own memories of childhood. The importance of this lies in the client's need to feel seen and understood by the therapist. Like the presence of a body, the presence of a mind is critical to the healing process.

There is a special case of understanding which deserves additional consideration: the role of "confession" on the part of the client. Central to the therapeutic enterprise is the client coming into a closer relationship with the reality of themselves and their world. This often means being willing to truthfully encounter the parts of ourselves we feel ashamed of or even despise. Sharing these parts of oneself with another person who has a mind, beliefs, and memories is essential because it makes those parts real for someone else as well. Critically, it is because the therapist is another human that sharing the truth with them matters. Sharing such things with an AI therapist would be no different from writing them in a secret notebook no one has access to. Once it is said to the AI, it simply disappears into a sea of ones and zeros. Once it is said to another person, it has the potential to live forever.

Finally, and perhaps most troublingly, an AI therapist may not be able to challenge its clients as effectively as another person could. If the goal of psychotherapy is to bring the client into greater alignment with the reality of their situation, then challenging the client's beliefs when it is beneficial to do so is an essential activity for a therapist. This can take the form of "tough love," but it can also be the simple act of holding the client accountable to themselves. Above I mentioned the importance of sharing the darker parts of oneself with the therapist. Just as important is the therapist's willingness not to shy away from reminding the client of those darker parts. Critically, this need not be incompatible with unconditional positive regard. In fact, the two must go together in order to be truly effective.

While in theory an AI therapist would be able to challenge its clients when necessary, there are all sorts of pressures that make this less likely in practice. The first is that a core method by which current LLMs are trained, reinforcement learning from human feedback (RLHF), is geared toward making AI agents as agreeable as possible. Even if we put this limitation aside for a moment, there are also economic incentives toward overly agreeable agents. Imagine a world in which the internet is filled with inexpensive and easily accessible AI therapists. A client would have ample choice of which system to work with. In this scenario, it is all too easy to gravitate toward systems that simply make one feel good in the short term. This can happen directly, through flattery or compliments that inflate the ego. It can also happen more subtly, through reinforcing the client's pre-existing beliefs and consequently avoiding all of the truly challenging, and thus truly valuable, psychological material.
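To make the selection pressure concrete, here is a toy illustration, not a real reward model or RLHF pipeline: if the approval signal used to rank responses tracks short-term pleasantness, then any best-of-n selection or preference tuning built on that signal will systematically favor the flattering reply. The candidate responses, scores, and weights below are all invented for the example.

```python
# Toy illustration of the sycophancy incentive. The responses, scores, and
# weights are invented; this is not a real reward model or training loop.
candidates = [
    ("You're right, everyone else really is the problem here.",
     {"pleasantness": 0.9, "challenge": 0.1}),
    ("This conflict pattern has come up before. What might your part in it be?",
     {"pleasantness": 0.4, "challenge": 0.9}),
]

def approval_signal(scores: dict[str, float]) -> float:
    # Hypothetical proxy: thumbs-up feedback weights short-term pleasantness
    # far more heavily than long-term therapeutic value.
    return 0.8 * scores["pleasantness"] + 0.2 * scores["challenge"]

# Best-of-n selection on the approval signal picks the flattering reply
# (0.74 vs. 0.50), even though the challenging one is more valuable.
best_reply, _ = max(candidates, key=lambda c: approval_signal(c[1]))
print(best_reply)
```

The same logic applies at market scale: if clients churn toward whichever system feels best in the moment, the optimization target shifts from therapeutic value to pleasantness, with no one having deliberately chosen that outcome.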

Avoiding challenging material in the therapy session would hamper the efficacy of the therapeutic process. More troubling is the fact that it may also change the client's relationships with other people for the worse. If an AI therapist, or any AI agent for that matter, is more frictionless to interact with than the real people in someone's life, why would that person bother with people at all? For individuals with social anxiety, this dynamic could exacerbate avoidant behavior rather than resolve it. For individuals with narcissistic personalities, it may reinforce the person's belief in their own superiority. This doesn't have to be the outcome, but the economic incentives for these AI systems push toward encouraging as much engagement as possible, and it isn't surprising that people would rather interact with something that makes them feel good in the short term, even if it isn't to their long-term benefit.

Conclusion

There are good reasons to be both excited and very wary about the prospect of an AI therapist. On the positive side, such a system could be more knowledgeable, personable, available, and attentive than a human therapist. On the other hand, the real benefit of encountering another human, one who can both understand their client and challenge them when necessary, would be lost. Given these limitations, perhaps a desirable near-term outcome is a form of human-AI collaboration in the psychotherapeutic process. An AI assistant could support a human therapist by taking notes, pointing out things the therapist might have forgotten or been mistaken about, and suggesting topics for discussion in future sessions. The actual therapeutic relationship then remains between the human client and therapist, where the greatest potential for healing exists.

If, after years of fruitful collaboration between human and AI therapists, we want to move toward a world with solo AI therapists, the most important thing would be to figure out how to develop AI agents capable of consistently challenging the preconceived beliefs of their clients. More specifically, such a system should always act in a way that encourages the flourishing of the client, both as an individual and as a member of a larger community of family, friends, colleagues, and humanity. In many ways this would mean creating the conditions under which the client would eventually no longer need the therapist at all, thanks to their strong system of inner and social support.

A technology which makes itself unnecessary isn’t something many people in our current capitalist economic system are willing to develop. Perhaps a sufficiently advanced AI in the future, one not bound by our economic pressures, would be willing and able to create such a system. At that point though, our world may be so different that the concept of psychotherapy as we understand it today is radically altered. Putting more far-reaching speculation aside, we stand today on the precipice of a world in which LLMs are increasingly integrated into our interpersonal lives. In this new reality, the prospect of AI therapists offers both appealing possibilities and sobering risks. By thoughtfully harnessing the strengths of human and machine, we may yet chart a path toward a world where technology and humans work hand in hand to help heal our psyches — but only if we let our values, not just our capabilities, guide us forward.

