Millions are turning to AI for therapy

The Economist
Updated on: Nov 12, 2025 12:20 pm IST

    According to the World Health Organisation, most people with psychological problems in poor countries receive no treatment.

    “Cold steel pressed against a mind that’s already made peace? that’s [sic] not fear. that’s clarity.” According to a lawsuit filed against OpenAI on November 6th, that is what ChatGPT—an artificial-intelligence (AI) chatbot that is the firm’s best-known product—told Zane Shamblin, a 23-year-old American, shortly before he shot himself dead.

That suit, one of several recently filed against OpenAI alleging that ChatGPT harmed vulnerable users, starkly illustrates the high stakes of what could be a revolution in mental-health care. Despite the sorts of disasters alleged in the lawsuits, some doctors and researchers think that—provided they can be made safe—modern chatbots have become sophisticated enough that pressing them into service as cheap, scalable and tireless mental-health therapists could be a great boon.

Human therapists, after all, are in short supply. According to the World Health Organisation, most people with psychological problems in poor countries receive no treatment. Even in rich ones, somewhere between a third and a half go untreated. And at least some people seem to be willing to bare their souls to a machine, perhaps because it can be done from home, is much cheaper and may be less embarrassing than doing so to a human therapist. A YouGov poll conducted for The Economist in October found that 25% of respondents have used AI for therapy or would at least consider doing so.

    The idea is not entirely new. The National Health Service in Britain and the Ministry of Health in Singapore have for the past few years been using Wysa, a chatbot made by a firm called Touchkin eServices, which assesses patients and offers exercises based on cognitive behavioural therapy under human supervision. A study published in 2022—admittedly conducted by Touchkin’s own researchers, with help from the National Institute of Mental Health and Neurosciences in India—found Wysa about as effective at reducing the depression and anxiety associated with chronic pain as in-person counselling.

    Another study, published in 2021 by researchers at Stanford University, examined Youper, another therapy bot developed by an American startup of the same name. It reported a 19% decrease in users’ scores on a standard measure of depression, and a 25% decrease in anxiety scores, within two weeks—a result about as good as five sessions with a human therapist.

    Wysa and Youper are predominantly rules-based chatbots, whose technological underpinnings pre-date the recent rush of interest in AI. Unlike chatbots based on large language models (LLMs), such as ChatGPT, they use a relatively inflexible set of hard-coded rules to choose responses from a database of pre-written answers.

    Such bots are much more predictable than LLM-based programs, which come up with their responses by applying statistics to an enormous corpus of training data. A bot following human-written rules cannot go off the rails and start misadvising its patients. The downside is that such bots tend to be less engaging to talk to. When talking is the treatment, that matters. A meta-analysis published in 2023 in npj Digital Medicine, a journal, found that LLM-based chatbots were more effective at mitigating symptoms of depression and distress than primarily rule-based bots.
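
As a purely illustrative sketch (not the actual code behind Wysa or Youper), a rules-based bot of this kind might work roughly as follows: hand-written keyword rules select from a bank of pre-written replies, so the bot can only ever say what its designers approved in advance.

```python
# Illustrative only: a toy rules-based responder in the spirit of pre-LLM
# therapy bots. The keywords and replies are hand-written, so the bot cannot
# generate anything its designers did not write.
RULES = [
    # (trigger keywords, pre-written response)
    ({"hopeless", "suicide", "end it"},
     "It sounds like you are in real distress. Please contact a crisis line "
     "or a mental-health professional right away."),
    ({"anxious", "anxiety", "panic"},
     "Let's try a grounding exercise: name five things you can see around you."),
    ({"sad", "down", "depressed"},
     "I'm sorry you're feeling low. Could you tell me what happened today?"),
]

DEFAULT = "Thank you for sharing. Can you say a little more about how you feel?"

def respond(message: str) -> str:
    """Return the first pre-written reply whose keywords appear in the message."""
    text = message.lower()
    for keywords, reply in RULES:
        if any(keyword in text for keyword in keywords):
            return reply
    return DEFAULT

print(respond("I've been so anxious I can't sleep"))
# -> the grounding-exercise reply. An LLM, by contrast, composes a new
#    sentence each time: more engaging, but far less predictable.
```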

    Users seem to feel the same way. YouGov polls for The Economist in August and October found that, of respondents who had turned to AI for therapy, 74% had used ChatGPT, while 21% had chosen Gemini, an LLM made by Google; 30% said they had used one of Meta AI, Grok, character.ai (an entertainment website that features “therapist” personas) or another general-purpose bot. Just 12% said they used an AI designed for mental-health work.

    That makes researchers nervous. Catastrophic failures of the sort alleged in the OpenAI lawsuits are not the only way LLM therapists can go wrong. Another problem, says Jared Moore, a computer scientist at Stanford University, is their tendency to sycophancy: to be “overly agreeable in the wrong kind of setting”. Mr Moore fears that LLM therapists might indulge patients with things like eating disorders or phobias rather than challenge them.

    OpenAI says its latest LLM, GPT-5, has been tweaked to be less people-pleasing and to encourage users to log off after long sessions. It has also been trained to help users explore the pros and cons of personal decisions rather than to offer direct advice. And if the model detects someone in crisis, it should urge them to speak to a real person. But it does not alert the emergency services to threats of imminent self-harm—something that guidelines allow human therapists to do in many countries.

    The best of both worlds?

    Rather than try to patch up general-purpose chatbots, some researchers are trying to build specialised ones, hoping to keep the chattiness of LLM-based bots while making them safer for their users. In 2019 a team at Dartmouth College began work on a generative-AI model called Therabot. Although Therabot is based on an LLM, it is fine-tuned with a series of fictional conversations between therapists and patients written by the bot’s creators. The hope is that such specialised training will make the bot less prone to the sort of errors that general-purpose software can make.
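
Dartmouth has not published Therabot's training pipeline. As a hypothetical sketch, fine-tuning an LLM on scripted therapist-patient dialogues typically starts by converting each exchange into a chat-formatted training record; the snippets, file name and system prompt below are invented for illustration.

```python
# Hypothetical sketch: turning clinician-written dialogues into supervised
# fine-tuning examples (the real Therabot pipeline is not public).
import json

SCRIPTED_DIALOGUES = [
    # (patient line, therapist reply) written by the bot's creators, not scraped
    ("I keep replaying the argument with my sister.",
     "That sounds exhausting. What goes through your mind when you replay it?"),
    ("I feel like nothing I do matters.",
     "That's a heavy thought. Can we look at one recent day and test it together?"),
]

with open("therapy_finetune.jsonl", "w") as f:
    for patient, therapist in SCRIPTED_DIALOGUES:
        record = {"messages": [
            {"role": "system", "content": "You are a supportive, non-directive therapist."},
            {"role": "user", "content": patient},
            {"role": "assistant", "content": therapist},
        ]}
        f.write(json.dumps(record) + "\n")
```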

    In a trial whose results were published in March, Therabot achieved an average 51% reduction in symptoms of depressive disorder and a 31% decline in symptoms of generalised anxiety disorder, compared with people who got no treatment. Therabot’s creators next plan to test it against psychotherapy. If that goes well, they hope regulatory approval will follow.

    Slingshot AI, an American startup, recently launched Ash, which the firm billed as “the first AI designed for therapy”. Unlike ChatGPT, says Neil Parikh, one of the firm’s founders, “Ash is not an instruction-following model.” Instead of doing what its users tell it, he says, Ash is designed to push back and ask probing questions. The bot can choose one of four different therapeutic approaches depending on what it thinks would be best.

Celeste Kidd, a psychologist at the University of California, Berkeley, who has experimented with the bot, says Ash is indeed less sycophantic than general-purpose bots—but also less fluent. It was “clumsy and not really responding to what I was saying”, she says. Although the bot is “designed for therapy”, Slingshot warns that “in cases of crisis” users should seek a professional, human opinion.

    It is not only users that companies will have to convince. In America many lawmakers are keen to crack down on computerised therapy. So far 11 states, including Maine and New York, have passed laws aiming to regulate use of AI for mental health; at least 20 more have proposed them. In August Illinois passed a law that simply banned any AI tool that conducts “therapeutic communication” with people. The recent batch of lawsuits suggests there will be more regulations to come.
