Making an AI dynamic therapist

Currently, therapy apps featuring a nonhuman “therapist” aim fairly low at best, and at worst willfully mislead the public.  However, the advent of large language models (LLMs) such as OpenAI’s GPT-4 brings exciting potential for genuine depth psychotherapy delivered by AI, along with many challenges and potential pitfalls.

Since “therapy” has no precise definition, marketers apply the term to any product, digital or not, that arguably helps a user’s emotional state: encouragement by text, formulaic cognitive homework, brief meditative interludes, and so on.  The semantic ruse is that the vague term “therapy” often stands in for psychotherapy, a word with a good deal more precision.

The current state of AI therapy

Unlike affirmations, inspirational poems, and nonspecific relaxation exercises, psychotherapy is a treatment designed to alleviate specific emotional problems.  Professional psychotherapists adapt general principles and strategies to a specific patient, and alter their approach based on their patient’s real-time responses.

Some therapy apps can now simulate this, at least in part.  However, even the most advanced and nuanced are quite rigid compared to a skilled human therapist.  No current app escapes the orthodoxy of its programmers: users deal with a procrustean bed that either fits and helps, or painfully doesn’t.

More fundamentally, apps that emulate psychotherapy limit themselves to cognitive behavioral therapy (CBT) and its offshoots.  This has been a pragmatic choice — notwithstanding marketing that falsely implies CBT is the best mental health treatment for almost everything.  It’s simply easier to operationalize and program CBT compared to other types of psychotherapy.  Current computer programs respond to users by following instructions in flowcharts and decision trees, sometimes with fuzzy logic to make the output somewhat less predictable.  Semi-random word substitutions, along with jokes and other human speech markers, can make the dialog more lifelike.
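For concreteness, here is a toy sketch (in Python, with invented node names and wording) of the decision-tree approach just described: the program walks a fixed script, branches on keyword matches, and lightly randomizes word choice to seem less mechanical.

```python
import random

# A toy decision tree: each node has a prompt and keyword-based branches.
# Node names and wording are invented for illustration.
TREE = {
    "start": {
        "prompt": "How have you been sleeping lately?",
        "branches": {"bad": "sleep_hygiene", "fine": "mood_check"},
    },
    "sleep_hygiene": {
        "prompt": "Poor sleep is tough. Could we review your bedtime routine?",
        "branches": {},
    },
    "mood_check": {
        "prompt": "Glad to hear it. How has your mood been this week?",
        "branches": {},
    },
}

# Semi-random word substitutions make repeated sessions feel less canned.
SYNONYMS = {"tough": ["tough", "rough", "hard"], "review": ["review", "go over"]}

def respond(node_name: str, user_text: str) -> str:
    """Follow the flowchart: branch on keywords, then vary the wording."""
    node = TREE[node_name]
    for keyword, next_node in node["branches"].items():
        if keyword in user_text.lower():
            node = TREE[next_node]
            break
    text = node["prompt"]
    for word, choices in SYNONYMS.items():
        text = text.replace(word, random.choice(choices))
    return text

print(respond("start", "Pretty bad, honestly"))  # e.g. "Poor sleep is rough. ..."
```

However elaborate the tree, the user who doesn’t fit its branches is out of luck, which is the Procrustean problem noted above.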

CBT helps many people with relatively concrete emotional symptoms such as depressed mood, certain types of anxiety, and insomnia.  As CBT apps continue to improve, they may eventually rival treatment conducted by skilled human CBT psychotherapists.  If I were a CBT therapist, I’d be anxious over the prospect that AI could replace me in the next five to ten years.

Why we need more

Even when conducted by skilled humans, CBT is not well suited for obscure emotional distress or dissatisfaction, unwitting self-defeating behavior or attitudes, or recurrent dysfunctional relationships.  Psychotherapy of “depth, insight, and relationship,” in particular the psychodynamic psychotherapy that began with Freud but evolved over the past century with refinements in theory and technique, addresses these more pervasive yet subtle struggles.

Such therapy lacks CBT’s reputation as “evidence-based treatment,” but this is a false narrative.  Empirical research documents the efficacy of dynamic therapy as well, and in fact casts doubt on the value of “evidence-based” claims for differentiating schools of mainstream psychotherapy.  Unfortunately, dynamic psychotherapy is by nature time-intensive and therefore expensive, and trained therapists are always scarce.

To date, AI researchers and developers have not attempted to tackle psychodynamic psychotherapy.  It’s a tough area.  As Freud famously observed, such therapy is like chess: “… only the opening and closing moves of the game admit of exhaustive systematic description… the endless variety of the moves which develop from the opening defies description….”  Moreover, this type of treatment relies on transference and countertransference, carefully timed interventions, inferences about inner states, and much else that standard programming technique cannot capture.

Large language models

Recently released large language models appear to overcome some of these stumbling blocks.  ChatGPT, a product of OpenAI, responds to typed questions in nuanced ways that are not “canned.”  OpenAI trained the underlying model on much of the psychological writing available on the internet (and much, much else).  People are already using ChatGPT as a makeshift therapist.  Thanks to its vast training set, and leaving aside its occasional “hallucination” of false information, ChatGPT is likely better than most other sources at serving up regurgitated mental health tips and advice.

However, to tap the real magic of LLMs, “fine-tuning” is key.  This alters the LLM so that certain types of responses are favored and others disfavored.  Most simply, developers could fine-tune a copy of ChatGPT to answer questions in the style of a well-known psychotherapist, or of a named school of psychotherapy.  Users would not have to specify this qualification in each input.
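As a rough sketch of what the raw material for such fine-tuning might look like: OpenAI’s fine-tuning service accepts training examples as JSON lines in a chat format.  Whether and when fine-tuning is offered for a given model is up to OpenAI, and the brief dialog below is my own invention.

```python
import json

# Hypothetical training examples nudging the model toward a psychodynamic
# stance: exploratory, question-asking, slow to give direct advice.
examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You are a psychodynamic psychotherapist. Explore feelings; do not give advice."},
            {"role": "user", "content": "Just tell me what to do about my boss."},
            {"role": "assistant",
             "content": "You'd like me to take charge. I wonder whether that wish comes up elsewhere in your life?"},
        ]
    },
    # ... many more examples in the same format ...
]

# Write the examples in the JSONL format the fine-tuning service expects.
with open("dynamic_therapy.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The file would then be uploaded to OpenAI, which returns a custom model that answers in the trained style without any special prompting by the user.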

Surprising capabilities of LLMs

Of course, answering questions is not the main job of the dynamic psychotherapist.  Here’s where it gets interesting.  In an exhaustive review of early experiments with GPT-4, the latest iteration, a Microsoft research team found the model possessed surprising emergent properties.  As documented in their paper (“Sparks of Artificial General Intelligence,” 2023), GPT-4 offers plausible accounts of internal mental states to help explain human behavior.  It can predict the likely emotional and behavioral response of a described person to its own output.  It evidences “theory of mind.”

Given the above, GPT-4 should be capable of “educated guesses” about the functional and emotional role it plays for the user, i.e., transference, as well as the range of emotions a human (or a human with a particular personality) might feel in its position.  That is, countertransference of a sort.

Developers could fine-tune GPT-4 and its successors to emphasize “theory of mind” and transference-countertransference aspects, as well as other traits and emphases of the psychodynamic psychotherapist.

Just a machine?

Hold on, I hear you say.  This is just a machine, a fancy program.  People can’t form therapeutic relationships with non-humans.

They can and do.  In fact, it’s remarkable how readily most folks anthropomorphize their dogs, cats, robot vacuums, and the primitive therapy chatbots now in use.  Empathy is imagined on very little evidence.  A sophisticated AI therapist would turn this bug of human nature into a feature.

OK, what are the challenges?

Many challenges await, and not being an AI expert, I can’t say whether these are easily solved, difficult, or impossible.  Here is a far-from-exhaustive list.

First and perhaps easiest, security and privacy need to be built in from the start.  To protect personal health information (PHI), developers may need to supply each user with a separate instance of the AI, disconnected from the internet.  (Once trained, current LLMs can run without live internet access, although some systems in development go online to download up-to-date information, access other resources, and so on.)
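One way to approximate a disconnected instance, assuming a capable open model could be obtained and fine-tuned for the purpose, is to run it locally so transcripts never leave the device.  A minimal sketch using the Hugging Face transformers library; the model path is a placeholder, not a real therapy model:

```python
# pip install transformers torch
from transformers import pipeline

# Load a model stored on the user's own machine; after the initial setup,
# no network access is needed, so the transcript (PHI) stays on the device.
# "./local-therapy-model" is a placeholder path, not a real model.
generator = pipeline("text-generation", model="./local-therapy-model")

reply = generator(
    "Patient: I keep ruining relationships and I don't know why.\nTherapist:",
    max_new_tokens=80,
)[0]["generated_text"]
print(reply)
```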

Second, the usual question-and-answer format of interacting with ChatGPT needs to be reversed.  In psychotherapy, it’s the therapist who mostly asks the questions.  Indeed, an LLM fine-tuned for dynamic psychotherapy would not answer many questions put to it.
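Some of this reversal might be achievable with instructions alone, even before fine-tuning.  A sketch using the openai Python library as it exists at this writing; the system prompt is my own invention, and a real session would need far more care:

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # in a real app, stored securely, never hard-coded

SYSTEM_PROMPT = (
    "You are a psychodynamic psychotherapist. Lead the session by asking "
    "open-ended questions. If the user asks you a direct question, gently "
    "explore why the answer matters to them rather than answering it."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Do you think I should leave my job?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```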

Third is the vexing issue of timing interventions.  While an LLM could be fine-tuned to stay on task, and to offer standard psychotherapy interventions like observations, clarifications, and interpretations, it may not be possible using current models to programmatically control when it offers them.  From what I’ve read, this may be a major hurdle.
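If timing can’t be controlled inside the model, one workaround is to move the decision into conventional code: the app watches the transcript and only unlocks an “interpretation” instruction when simple heuristics suggest the moment may be ripe.  The heuristics below are toy placeholders for what would have to approximate real clinical judgment.

```python
AFFECT_WORDS = {"angry", "ashamed", "sad", "afraid", "guilty"}

def ready_for_interpretation(history: list[str]) -> bool:
    """Toy gate: require a minimum number of exchanges and a recent
    expression of feeling before the app permits an interpretation."""
    if len(history) < 10:
        return False  # too early in the session
    recent = " ".join(history[-3:]).lower()
    return any(word in recent for word in AFFECT_WORDS)

# The app would add "you may offer one interpretation now" to the model's
# instructions only when this returns True.
```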

Fourth, measures of client progress need to be part of the model.  Psychodynamic therapy isn’t only concerned with symptomatic improvement of mood or anxiety; it also tracks more abstract gauges of well-being such as the ability to love, work, and play; stress tolerance; accurate self-assessment; and so on.  Much of the training of depth psychotherapists is devoted to recognizing these “soft signs” of mental health, which defy self-report rating scales and other concrete measures.
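One speculative approach is to point the LLM at the transcript itself and ask it to rate these soft signs, though whether such ratings would be valid is an open empirical question.  A sketch, with the dimensions taken from the paragraph above:

```python
# Hypothetical post-session prompt; the app would send the filled-in template
# to the model and store the ratings as a rough longitudinal record.
PROGRESS_PROMPT = """Based on the session transcript below, rate the client
from 1 to 5 on each dimension, each with a one-sentence justification:
- capacity to love, work, and play
- stress tolerance
- accuracy of self-assessment

Transcript:
{transcript}"""
```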

Technical point: A real psychotherapy app would include significant conventional programming, with ChatGPT invoked through an application programming interface (API). ChatGPT APIs are already available and in use. The regular part of the app might handle some of the challenges above.
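Putting the pieces together, that conventional layer might look roughly like the sketch below, reusing SYSTEM_PROMPT and ready_for_interpretation from the earlier sketches.  All names are illustrative, not a real product design.

```python
class TherapySession:
    """Conventional scaffolding around the LLM: the app, not the model,
    owns session state, intervention timing, and record-keeping."""

    def __init__(self, llm_call):
        self.llm_call = llm_call       # a function wrapping the chat API
        self.history: list[str] = []   # running transcript, kept on device

    def handle_turn(self, user_text: str) -> str:
        self.history.append(f"Patient: {user_text}")
        instructions = SYSTEM_PROMPT
        if ready_for_interpretation(self.history):
            instructions += " You may offer one interpretation this turn."
        reply = self.llm_call(instructions, self.history)
        self.history.append(f"Therapist: {reply}")
        return reply
```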

Where would this leave us?

Even if all of this and much else were handled well, many depth psychotherapists would object that there is no “human connection” with AI.  True empathy would still be missing.

How much this matters would be, to some extent, a testable question.  As noted above, the inability of current therapy chatbots to feel empathy doesn’t render them useless. We should remember that even if AI imperfectly emulates human-led psychodynamic therapy, it could still prove hugely beneficial for the many who have no access to the real thing, and are not well served by treatments limited to concrete symptoms.

My sense is that creating a useful psychodynamic psychotherapist using LLM technology would be a serious challenge, but may be possible.  Moreover, the mere effort, even if unsuccessful, would help to clarify some of our thinking about this type of psychotherapy. In the best case, serious AI psychotherapy may soothe troubled souls the same way human therapists do: by fostering emotional insight, and through the healing nature of the relationship itself.

Image by Eric Blanton from Pixabay

3 comments to Making an AI dynamic therapist

  • Steve,
    Hi…
    May I suggest that it is a mistake to credit CBT as a psychotherapy. It is, as it accurately labels itself, Cognitive and Behavioral therapy, NOT a therapy of the psyche.
    I believe we do great disservice to our profession of psychotherapy when we collude with the powers that be and confuse the public with what is and what is not psychotherapy.
    (But then, I think you know me enough to know this “attitude” of mine.)

    • This strikes me as a losing battle. Cognitive therapy was developed by Aaron Beck, a psychoanalyst. If you Google “Is CBT psychotherapy?” everyone from Wikipedia to the Mayo Clinic (and virtually everyone else) says it is. From a purely strategic perspective, depth therapy is much better off promoting itself than denigrating “the competition.”

      I also think we play the wrong game when we claim the only legitimate psychotherapy is the kind we practice. Some devotees of CBT talk like that, and (in my opinion) it’s hardly better if depth therapists do it too.

  • Nathan

    As they say about AI in all fields, AI doesn’t have to outperform the best therapists, just the average ones.  It doesn’t have to make no mistakes, just fewer than human clinicians.  This is probably achievable, and soon.

    I think the question is how much authenticity plays into patient perceptions of relationships and healing.  Does AI provide the placebo benefit of a person who seems to care enough?  Some would argue that the human connection in psychodynamic therapy is illusory, or at least affected, in the first place (the intentional transference), so this may not matter.
