Does AI have the emotional intelligence to supplement mental healthcare?

AI-powered mental health triaging has come under fire for failing its users despite the hype.

Mental healthcare is on the verge of a significant transformation with algorithms emerging as potential allies in the treatment process. However, the inherent bias in generative AI poses a critical question: what are the implications for patient outcomes? 

The crisis of mental health 

Individualised treatment plans are crucial in helping individuals with mental health conditions manage their symptoms and lead fulfilling lives. Early support has been proven to yield long-term benefits. Unfortunately, up to 50 per cent of individuals in high-income countries and over 75 per cent in low-income countries never receive the care they need. 

Stigma and inadequate funding remain significant barriers to accessing mental health services. The global health sector has traditionally neglected mental health, especially during times of conflict, natural disasters, and emergencies. Paradoxically, the demand for mental health support doubles during these periods, while resources for care are at their lowest. In 2020, despite the pandemic's isolation and loss, governments allocated just over two per cent of their health budgets to mental health. In low-income countries, the scarcity of resources results in less than one mental healthcare worker for every 100,000 people. 

Scalable support for mental health 

Harnessing the transformative potential of generative AI can help bridge these treatment gaps. With its anonymity and privacy, app-based healthcare helps combat the stigma associated with seeking help. The around-the-clock availability of chatbots helps reach as many individuals in need as possible. However, is an app alone enough to provide quality care? 

While chatbots can offer valuable triage and immediate support, they cannot replace human interaction in therapy. The goal of healthtech should be to alleviate administrative burdens on healthcare providers, allowing them to focus on acute care, according to Dr. Dawn Branley-Bell, Chair of Cyberpsychology at the British Psychological Society. “Chatbots are one of the ways in which AI can be used to aid healthcare processes, for example, by helping signpost patients to the most appropriate services based on their symptoms. Reducing workload increases staff capacity and time to dedicate to the most vital tasks.” 

Branley-Bell collaborates with Northumbria University's Psychology and Communication Technology (PaCT) Lab to explore using chatbots to encourage individuals to seek advice for stigmatised health conditions. “Being able to talk to a chatbot first may help individuals make that sometimes difficult first step towards diagnosis or treatment.” 

In January of this year, Limbic Access, a UK-based AI mental health chatbot, obtained Class IIa UKCA medical device certification and was deployed across the underfunded National Health Service (NHS) to streamline mental health referrals. Limbic Access uses machine learning to analyse digital conversations and support patient self-referral. The chatbot reportedly achieves a 93 per cent accuracy rate in classifying the common mental health issues treated by the NHS Talking Therapies programme.

Given the overwhelming demand for this service, which received 1.24 million referrals in 2021 (a 21.5 per cent increase from the previous year), the introduction of the AI referral chatbot yielded significant improvements. Audited clinical data from over 60,000 referrals revealed a 53 per cent increase in recovery rates and a 45 per cent reduction in treatment changes compared to traditional telephone calls and online forms.
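
Limbic has not published its model internals, but the triage step it performs can be pictured as a standard text-classification problem: an incoming self-referral message is converted into features and mapped to the most likely problem category. The sketch below is illustrative only, with invented example messages and category labels, and assumes a scikit-learn pipeline as a stand-in rather than the actual system.

```python
# Illustrative sketch of chatbot-style triage as text classification.
# This is NOT Limbic's actual model; the labels and training messages are
# invented purely to show the general approach (TF-IDF features plus a
# linear classifier).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented examples of self-referral messages and hypothetical categories
messages = [
    "I can't stop worrying about everything and my heart races",
    "I feel low and have lost interest in things I used to enjoy",
    "I keep reliving the accident and have nightmares about it",
    "I'm constantly on edge and panic in crowded places",
    "I feel hopeless and struggle to get out of bed most days",
    "Flashbacks from the incident make it hard to sleep",
]
labels = ["anxiety", "depression", "trauma", "anxiety", "depression", "trauma"]

# Vectorise the text and fit a simple linear classifier
triage_model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage_model.fit(messages, labels)

# Route a new self-referral message to the most likely service pathway
new_message = ["Lately I panic whenever I have to leave the house"]
print(triage_model.predict(new_message))  # e.g. ['anxiety']
```

In practice, a clinical-grade system would add calibrated confidence thresholds, risk-flagging rules, and human review before any routing decision is acted on.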

Exploring ‘affective’ computing 

Tech evangelists in the sector are swiftly developing phone- and wearable-based mental health monitoring and treatment solutions through affective computing, a field projected to reach a value of US$37 billion by 2026. Affective computing, also known as emotion AI, uses technology to recognise and respond to human emotions. Voice sensors, sentiment analysis, facial recognition, and machine learning algorithms mine data from facial expressions, speech patterns, posture, heart rate, and eye movements. For example, Companion Mx analyses users' voices to detect signs of anxiety, while Sentio Solutions combines physiological signals with automated interventions to manage stress and anxiety. Devices like the Muse EEG-powered headband provide live feedback on brain activity to guide users toward mindful meditation, and the Apollo Neuro ankle band monitors heart rate variability to offer stress relief through vibrations.  
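
Hardware vendors such as Apollo Neuro keep their algorithms proprietary, but heart rate variability (HRV) gives a feel for how a physiological signal becomes a stress estimate. One widely used HRV metric is RMSSD, the root mean square of successive differences between heartbeats; the snippet below is a minimal sketch with invented interval data and an arbitrary threshold, not any vendor's method.

```python
# Illustrative sketch of one common affective-computing signal: heart rate
# variability (HRV) measured as RMSSD over the intervals between heartbeats.
# The sample data and threshold are invented for demonstration; commercial
# devices use proprietary, individually calibrated models.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeat intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented RR intervals (milliseconds) from a short measurement window
rr_intervals = [812, 790, 835, 780, 760, 802, 795, 770]

hrv = rmssd(rr_intervals)
print(f"RMSSD: {hrv:.1f} ms")

# Lower HRV is broadly associated with higher physiological stress; a real
# device would compare readings against the wearer's personal baseline
# rather than a fixed cut-off.
if hrv < 30:
    print("Elevated stress signal: a device might trigger a calming intervention")
else:
    print("HRV within a typical resting range")
```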

Additionally, digital therapy is now available at the tap of a finger. App-based conversational agents like Woebot replicate the principles of cognitive behavioural therapy, offering guidance on sleep, worry, and stress. These chatbots use sophisticated natural language processing (NLP) and machine learning techniques to understand user emotions.  
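
Woebot's models are not public, but the core NLP step of judging the emotional tone of a user's message before choosing a response can be sketched with an off-the-shelf sentiment model. The example below assumes the Hugging Face transformers library and its default sentiment-analysis pipeline; it is a toy illustration, not a description of how any particular product works.

```python
# Illustrative only: detecting the emotional tone of a user message with an
# off-the-shelf sentiment model. This is not Woebot's actual system, which
# layers clinical CBT content and safety rules on top of any NLP component.
from transformers import pipeline

# Downloads a general-purpose sentiment model on first run
emotion_detector = pipeline("sentiment-analysis")

user_message = "I barely slept last night and I'm dreading tomorrow."
result = emotion_detector(user_message)[0]
print(result)  # e.g. {'label': 'NEGATIVE', 'score': 0.99}

# A CBT-style chatbot would use this signal to choose a conversational move,
# for example prompting the user to examine the thought behind the dread.
if result["label"] == "NEGATIVE":
    print("That sounds hard. What thought is going through your mind right now?")
```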

The University of Southern California created Ellie, a virtual avatar therapist that can interpret nonverbal cues and respond with affirmations or thoughtful follow-ups, offering a glimpse into the future of virtual therapists. However, if experts in the field still debate how emotions are felt and expressed across diverse populations, the foundations of emotionally intelligent computing may be shaky at best.

AI and inclusive mental healthcare 

Searches for relaxation, OCD, and mindfulness apps spiked in 2020, and their popularity has persisted well beyond the pandemic. Many businesses now provide digital mental health tools to their employees as an investment in recouping productivity losses caused by employee burnout.

However, while these digital services aim to fill the gaps in mental healthcare, they may inadvertently create new disparities. Devices designed for emotion regulation, such as the Muse headband and the Apollo Neuro band, carry hefty price tags of US$250 and US$349, respectively. Cheaper guided meditation or conversational bot-based apps often serve as self-treatment alternatives. Additionally, many smartphone-based services sit behind paywalls and require subscription fees to access the full content, limiting how widely they can be adopted.

When good AI goes bad 

As more of these tools emerge in an increasingly lucrative market, experts warn that a cash grab may dilute the ethics of mental healthcare delivery. According to research by the American Psychological Association, only 2.08 per cent of the over 20,000 mental health apps available to mobile users have published, peer-reviewed evidence supporting their efficacy.  

A study of 69 depression apps published in BMC Medicine revealed that only 7 per cent included more than three suicide prevention strategies, while six gave inaccurate information about how to contact suicide hotlines. These unregulated and inaccurate apps were reportedly downloaded more than two million times.

In June of this year, the American National Eating Disorders Association (NEDA) announced that an AI chatbot would replace the human staff on its support helpline. However, in less than a week, the association pulled the chatbot for dispensing dangerous advice about eating disorders. Sharon Maxwell, an eating disorder campaigner, was the first to warn that the chatbot had given her harmful advice. “If I had accessed this chatbot when I was in the throes of my eating disorder, I would not have gotten help for (it). If I had not gotten help, I would not still be alive today,” Maxwell wrote in her tell-all Instagram post. “Every single thing (the chatbot) suggested were things that led to my eating disorder.”

A 2022 study tested how AI suggestions affect judgments about whether to call for medical help or the police during mental health emergencies. The web-based trial examined the decisions of physicians and non-experts, with and without prescriptive or descriptive AI recommendations. Participants who followed biased prescriptive recommendations were more likely to call the police, rather than offer medical help, when the person in crisis belonged to certain ethnic groups. These results show that while discriminatory AI in a realistic health context can lead to poorer outcomes for marginalised minorities, how a model's recommendations are phrased can blunt the effect of its underlying bias.

Emotion AI systems struggle to capture the diversity of human emotional experiences and often reflect the cultural biases of their programmers. Voice inflections and gestures differ among cultures, posing challenges for affective computer systems to interpret emotions accurately. AI is not yet sophisticated enough to replicate the spontaneity and natural flow of talk therapy conversations, says Dr. Adam Miner, a clinical psychologist at Stanford. He highlights the limitations of AI in capturing the crucial context and judgment necessary for a comprehensive understanding.  

“An AI system may capture a person’s voice and movement, likely related to a diagnosis like major depressive disorder. But without more context and judgment, crucial information can be left out,” he says. Ultimately, psychology is a blend of science and intuition that cannot be reduced to emotional data alone. Treatments depend on the therapeutic alliance between patient and practitioner, something most of these technologies sidestep by removing the human from the loop.

Algorithm aversion and human biases 

Machines often outperform humans in decision-making, yet humans tend to mistrust AI, leading to a tendency to override algorithmic decisions, according to researchers at ESMT Berlin. This bias, known as algorithm aversion, hampers the ability to leverage machine intelligence effectively.  

“Often, we see a tendency for humans to override algorithms, which can be typically attributed to an intrinsic mistrust of machine-based predictions,” says researcher Francis de Véricourt. “This bias may not be the sole reason for inappropriately and systematically overriding an algorithm. It may also be the case that we are simply not learning how to effectively use machines correctly when our learning is based solely on the correctness of the machine’s predictions.”

Researchers emphasise the need for human decision-makers to account for AI advice and continually learn from its intelligence. Trusting AI's decision-making ability is crucial for improving accuracy and optimising collaboration between humans and machines. 

These findings shed light on the importance of human-machine collaboration and guide us in determining when to trust AI and when to rely on human judgment. Understanding these dynamics can enhance the use of AI in decision-making. In the realm of mental healthcare, where concerns about AI bias persist, such insights become even more relevant for practitioners. 

This article appears in the latest issue of Omnia Health Magazine.
