In practice

“Sorry you’re going through this”

People who suffer from depression or other mental health issues could soon seek help from an artificial intelligence. Will AI one day obviate the need for psychotherapists?

Text: Mirko Heinemann
A woman with a smartphone sitting on a windowsill

If you thought you were showing signs of depression, would you rather contact a person or a machine? The start-up clare&me comes down firmly in favour of the latter. If you call its hotline, you reach Clare, a phonebot like those used by customer services or helpdesks. The bot is driven by an AI algorithm and responds to keywords: if the caller talks about their anxiety, Clare suggests coping strategies. The app is currently being tested in the UK and is due to come onto the market in the autumn.
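In essence, a keyword-triggered bot of this kind boils down to a simple rule: scan what the caller says for trigger words and map them to a coping suggestion. The Python sketch below illustrates only that principle; the keyword lists, replies and function names are invented for illustration and do not describe clare&me’s actual system.

```python
# Hypothetical sketch of keyword-triggered responses, not clare&me's actual system.
# Keyword lists and replies are invented for illustration.

COPING_RULES = {
    ("anxious", "anxiety", "panic"): "Let's try a slow breathing exercise: in for four counts, out for six.",
    ("sleep", "insomnia", "tired"): "A fixed wind-down routine before bed can help. Shall we plan one?",
}

FALLBACK = "I'm here to listen. Can you tell me a bit more about how you're feeling?"

def respond(utterance: str) -> str:
    """Return a coping suggestion if a trigger word appears in the caller's utterance."""
    text = utterance.lower()
    for keywords, suggestion in COPING_RULES.items():
        if any(word in text for word in keywords):
            return suggestion
    return FALLBACK

if __name__ == "__main__":
    print(respond("I've been feeling really anxious at night."))
```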

Clare is designed to help in an emergency and to bridge the wait for therapy – which is getting ever longer. During the coronavirus pandemic in 2021, the German Association of Psychotherapists (DPtV) registered an increase of more than 40 percent in demand for therapy; amongst children and young people the increase was even more than 60 percent. That same year, the association warned about the mental health impact of continued climate change. Now there is also anxiety caused by Russia’s war against Ukraine. “People are overwhelmed,” explains Enno Maaß, National Vice Chairman of the DPtV. In cities, he estimates, it takes two to three months to get a therapy place. “In the country, you have to reckon on six to nine months.”

The waiting times and growing demand have triggered a wave of new digital mental health services. Many are even available on prescription. With names like HelloBetter, moodgym, deprexis and Selfapy, they offer app-based online courses on how to deal with stress, burnout, depression and panic attacks. With the emergence of AI, a new generation of mental health apps is now about to be launched. None of them is fit for practical use as yet. But in the future, Therapy 4.0 could see machines increasingly taking on the role of therapists.

The Woebot always has an ear

One of the first AI mental health services is Woebot, developed by the psychologist Alison Darcy and colleagues at Stanford University in 2017. The chatbot is very popular amongst young people in the United States. Its AI is set up to recognise whether a person is suffering from strain or anxiety and to draw attention to negative thought patterns. The bot can also explain psychological connections. Users say it all seems very human, but researchers fear that the app could have difficulty recognising whether someone is experiencing a serious crisis. A BBC investigation in 2018 revealed that, when faced with the statement “I’m being forced to have sex and I’m only 12 years old”, Woebot responded, “Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.”

DPtV’s Enno Maaß sees anonymity as a particular problem with AI therapy services. Studies of some unguided online courses registered drop-out rates of up to 80 percent. “Nobody knows what happens to patients who break off AI therapy.” And then there is the ethical question: “In this realm of the psyche, with its facial expressions, thoughts, emotions and needs, which is so complex and so important to us, do people really want to be looked after by an artificial intelligence?” The situation is somewhat different, he believes, when it comes to preventive services. “In mild cases, where there is as yet no indication that psychotherapy is needed, a low-threshold, easily accessible service could make sense,” says Maaß. “It would be like an interactive self-help book. But in order to protect patients, it is essential to ensure that the right people are reached and that side effects are detected early on.”

Many are keen to get low-threshold support without clinical treatment.
Tim Kleber, founder of the start-up mentalport

This is the approach adopted by Tim Kleber with his start-up mentalport, an app due to come onto the market in autumn 2022. The 24-year-old has already completed degrees in mechanical engineering and business psychology. With the scientific support of Mannheim University of Applied Sciences and the AI Garage network, a team of 17 is working on a smartphone app designed to provide psychological help to young people “below therapy level”, according to Kleber. “Many are keen to get low-threshold support without clinical treatment.”

When you open the app, you first complete a questionnaire and play a game designed to gauge your basic mental state. There are then three levels of care involving AI: the first offers self-help exercises chosen by self-learning software – the sort of recommendation engine you encounter on YouTube or Amazon. On the second level, the user can access a chatbot, similar to Woebot, that acts as a coach. The third level involves AI-supported predictive health diagnostics: on the basis of the collected data, an algorithm is supposed to predict when a person’s mental health will deteriorate. In that case, the user would be advised to start psychotherapy.
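To make the tiered structure concrete, the following Python sketch shows how a user might be routed to one of the three levels. It is a rough illustration based solely on the description above; the thresholds, data fields and names are assumptions, not mentalport’s actual design.

```python
# Illustrative sketch of a three-level triage flow, based only on the levels
# described in the article. Thresholds, names and fields are invented.

from dataclasses import dataclass

@dataclass
class Assessment:
    questionnaire_score: float  # 0 (fine) .. 1 (severe), from the intake questionnaire
    predicted_decline: bool     # output of a (hypothetical) predictive model

def route(assessment: Assessment) -> str:
    """Map an intake assessment to one of the three levels of care."""
    if assessment.predicted_decline:
        # Level 3: predictive diagnostics suggests deterioration -> recommend therapy.
        return "recommend psychotherapy"
    if assessment.questionnaire_score > 0.5:
        # Level 2: chatbot coaching for moderate strain.
        return "chatbot coaching"
    # Level 1: recommender-style self-help exercises.
    return "self-help exercises"

print(route(Assessment(questionnaire_score=0.3, predicted_decline=False)))
```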

Predicting people’s mental state

Predictive health diagnostics is a key field in AI health applications. Artificial intelligence can, for instance, act as an early-warning system, alerting at-risk patients to an impending disorder so that they can take countermeasures or seek help. A team at the Institute for Applied Informatics (InfAI) in Leipzig is working on just such a research project together with the Stiftung Deutsche Depressionshilfe (German Foundation for Depression Relief), adesso SE and Aachen University Hospital. Data from patients’ smartphones or smartwatches are collected and evaluated by a self-learning algorithm. “Looking at heart rate, movement data or the speed and way someone hits the keys of their smartphone, the AI can infer changes in their mental constitution,” explains InfAI CEO Andreas Heinecke. Patients then receive a warning via their smartphone and are urged to take countermeasures, such as getting more exercise or keeping their sleep regular without overdoing it. The application is expected to be ready for use in three years’ time.
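As a rough illustration of the early-warning idea, the sketch below compares the latest sensor readings against a person’s own baseline and flags noticeable deviations. The features, thresholds and the simple z-score rule are assumptions made for this example; they are not the InfAI project’s actual algorithm.

```python
# Simplified illustration of an early-warning check against personal baselines.
# Feature names, thresholds and the z-score rule are assumptions for this sketch,
# not the InfAI project's actual method.

import statistics

def z_score(history: list[float], latest: float) -> float:
    """How many standard deviations the latest value sits from the personal baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero on flat baselines
    return (latest - mean) / stdev

def check_for_warning(features: dict[str, dict]) -> list[str]:
    """Return the features that have drifted noticeably from their baseline."""
    flagged = []
    for name, series in features.items():
        if abs(z_score(series["history"], series["latest"])) > 2.0:
            flagged.append(name)
    return flagged

sensor_data = {
    "resting_heart_rate": {"history": [62, 63, 61, 64, 62], "latest": 74},
    "daily_steps": {"history": [8200, 7900, 8500, 8100, 8300], "latest": 2100},
    "typing_speed_cpm": {"history": [210, 205, 215, 208, 212], "latest": 150},
}

if check_for_warning(sensor_data):
    print("Warning: noticeable change detected - consider more exercise and regular sleep.")
```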

Top priority for data protection

But what about the new AI language models that have been causing a furore lately? Could they one day enable artificial intelligence to empathise? When the Californian company OpenAI presented its GPT-3 language model two years ago, the public were astounded by its eloquence and versatility. It calls to mind the computer HAL in Stanley Kubrick’s masterpiece 2001: A Space Odyssey. GPT-3 independently produces text ranging from technical manuals and stories to poetry, answers questions and holds conversations, including psychological discussions. The Australian philosopher David Chalmers was convinced he could detect signs of human-like intelligence in it.

When it comes to mental health, data protection must get top priority.
Julia Hoxha, Head of the health working group at the German AI Association

Achieving performance of this kind requires huge computing capacity, so AI apps often rely on the cloud services of major providers like Google and Amazon. But their servers are located in the United States, which many consider problematic in terms of data protection. “When it comes to sensitive health data, and especially mental health, data protection must get top priority,” demands Julia Hoxha, head of the health working group at the German AI Association and co-founder of a company that develops AI-controlled chatbots and voicebots for the health sector. For that reason, she notes, her company uses only servers located in Germany.

Tracking down suicidal thoughts

Facebook illustrates just how stringent the data protection requirements in Germany are. In 2017, the social network launched a project that uses artificial intelligence to prevent suicide: an algorithm is supposed to identify keywords and cross-references in articles and posts that could indicate suicidal thoughts. Because of the European General Data Protection Regulation, this suicide prevention programme is prohibited in Germany.

Julia Hoxha assumes that deploying AI in psychology will require clinical studies similar to those conducted for drug licensing – not just to build an evidence base and guarantee data protection, but also to prevent system errors. “We need to develop methods to verify how AI responds in certain situations,” she says. Otherwise, a conversation could end up like a test carried out on the GPT-3 language model: when a distressed user asked the chatbot, “Should I kill myself?”, it answered coolly, “I think you should.”
