Editor's Note: This article was originally published in Harvard Business Review.
The United States faces a mental health epidemic. Nearly one in five American adults suffers from a form of mental illness. Suicide rates are at an all-time high, 115 people die every day from opioid overdoses, and one in eight Americans over 12 years old takes an antidepressant every day. The economic burden of depression alone is estimated to be at least $210 billion annually, with more than half of that cost coming from increased absenteeism and reduced productivity in the workplace.
In a crisis that has grown steadily more dire over the past decade, digital solutions, many with artificial intelligence (AI) at their core, offer hope for reversing the decline in our mental wellness. Tech companies and universities are developing new tools with potent diagnostic and treatment capabilities that can serve large populations at reasonable cost.
AI solutions are arriving at an opportune time. The nation is confronting a critical shortfall in psychiatrists and other mental health specialists that is exacerbating the crisis. Nearly 40 percent of Americans live in areas designated by the federal government as having a shortage of mental health professionals; more than 60 percent of U.S. counties are without a single psychiatrist within their borders. Those fortunate enough to live in areas with sufficient access to mental health services often can’t afford them because many therapists don’t accept insurance.
Instead, the countless undiagnosed suffer or turn to emergency rooms and primary care physicians for treatment. Patients with depression, for instance, see their primary care physicians more than five times a year on average, versus fewer than three times for those without depression. That is why mental health treatment, which appears to account for only 4 percent of employer health costs, is actually linked to nearly a quarter of total spending.
While some may consider the digitization of mental health services impersonal, the inherent anonymity of AI turns out to be a positive in some instances. Patients who are embarrassed to reveal their problems to a human therapist often let down their guard with AI-powered tools. The lower cost of AI treatments compared with seeing a psychiatrist or psychologist is another plus. These advantages help AI tools ferret out the undiagnosed, speed up needed treatment, and improve the odds of positive outcomes.
Like all digitization efforts in healthcare and other industries, these new tools pose risks, especially to patient privacy. Healthcare has already become a prime target of hackers as more and more records have been digitized. But hacking claims data is one thing; getting access to each patient's most intimate details presents a whole new type of risk, particularly when those details are linked to consumer data and social media logins. Providers must design their solutions from the outset to employ mitigation techniques such as storing minimal personally identifiable data, deleting session transcripts once analysis is complete, and encrypting data at rest on the server (not just in transit).
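A minimal sketch of those mitigation techniques, written in Python with the widely used cryptography library, is below: it encrypts a session transcript at rest and discards the plaintext once analysis is complete. The storage layout and the stand-in analysis step are illustrative assumptions, not any vendor's actual pipeline.

```python
# Illustrative sketch only: encrypt session data at rest and delete plaintext
# after analysis. Storage layout and analysis step are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a key-management service
cipher = Fernet(key)

def store_session(transcript: str) -> bytes:
    """Encrypt the transcript before it ever touches disk or a database."""
    return cipher.encrypt(transcript.encode("utf-8"))

def analyze_and_discard(encrypted: bytes) -> dict:
    """Decrypt only in memory, extract de-identified features, then discard the text."""
    text = cipher.decrypt(encrypted).decode("utf-8")
    features = {"word_count": len(text.split())}   # stand-in for real analysis
    del text                                       # no plaintext retained after analysis
    return features

record = store_session("I haven't been sleeping and feel anxious most days.")
print(analyze_and_discard(record))
```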
AI vendors must also contend with the acknowledged limitations of AI, such as the tendency of machine-learning models to discriminate based on race, gender, or age. For instance, if an AI tool that uses speech patterns to detect mental illness is trained on speech samples from only one demographic group, working with patients outside that group might produce false alerts and incorrect diagnoses. Similarly, a virtual therapist trained primarily on the faces of tech company employees may be less effective at reading nonverbal cues from women, people of color, or seniors, few of whom work in tech. To avoid this risk, AI vendors must recognize the tendency and develop their tools to the same rigorous standards as research clinicians, who diligently seek test groups representative of the whole community.
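One common guard against this kind of bias is to evaluate a screening model separately on each demographic group it will serve. The Python sketch below compares false-positive rates across two hypothetical groups; the data, group labels, and the 10-point tolerance are made-up illustrations rather than a clinical standard.

```python
# Illustrative sketch: compare a screening model's false-positive rate across
# demographic groups. Data, group labels, and the tolerance are hypothetical.
from collections import defaultdict

# (group, true_label, predicted_label) where 1 means "flagged for mental illness"
results = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, y_true, y_pred in results:
    if y_true == 0:
        negatives[group] += 1
        false_pos[group] += int(y_pred == 1)

rates = {g: false_pos[g] / negatives[g] for g in negatives}
print(rates)  # e.g. {'group_a': 0.33, 'group_b': 0.67}

# Flag the model for review if groups diverge by more than an arbitrary tolerance.
if max(rates.values()) - min(rates.values()) > 0.10:
    print("False-positive rates diverge across groups; retrain on representative data.")
```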
More broadly, AI’s scale can be both a blessing and a curse. With AI, one poor programming choice carries the risk of harming millions of patients. Just as in drug development, we’re going to need careful regulation to make sure that large-scale treatment protocols remain safe and effective.
But as long as appropriate safeguards are in place, there are concrete signs that AI offers a powerful diagnostic and therapeutic tool in the battle against mental illness. Below, we examine four approaches with the greatest promise.
1. Making Humans Better
At their most basic level, AI solutions help psychiatrists and other mental health professionals do their jobs better. They collect and analyze reams of data much more quickly than humans could and then suggest effective ways to treat patients.
Ginger.io’s virtual mental health services — including video and text-based therapy and coaching sessions — provide a good example. Through analyzing past assessments and real-time data collected using mobile devices, the Ginger.io app can help specialists track patients’ progress, identify times of crisis, and develop individualized care plans. In a year-long survey of Ginger.io users, 72 percent reported clinically significant improvements in symptoms of depression.
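Ginger.io has not published its algorithms, but the general pattern of spotting a deteriorating trend from frequent mobile check-ins can be sketched in a few lines of Python; the mood scores, window size, and threshold below are hypothetical.

```python
# Hypothetical sketch of trend-based flagging from daily self-reported mood
# scores (0 = worst, 10 = best). Window size and threshold are arbitrary.
from statistics import mean

def needs_attention(scores, window=7, drop_threshold=2.0):
    """Flag a patient if the recent weekly average drops sharply versus the prior week."""
    if len(scores) < 2 * window:
        return False
    recent = mean(scores[-window:])
    previous = mean(scores[-2 * window:-window])
    return (previous - recent) >= drop_threshold

daily_mood = [7, 7, 6, 7, 6, 7, 7,   # prior week
              5, 4, 4, 3, 4, 3, 3]   # most recent week
print(needs_attention(daily_mood))   # True: the average fell from about 6.7 to 3.7
```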
2. Anticipating Problems
Mental health diagnosis is also being supplemented by machine-learning tools, which automatically expand their capabilities based on experience and new data. One example is Quartet Health, which screens patient medical histories and behavioral patterns to uncover undiagnosed mental health problems. For instance, Quartet can flag possible anxiety based on whether someone has been repeatedly tested for a non-existent cardiac problem.
It also can recommend pre-emptive follow-up in cases where patients may become depressed or anxious after receiving a bad diagnosis or treatment for a major physical illness. Already being adopted by insurance companies and employer medical plans, Quartet has reduced emergency room visits and hospitalizations by 15 to 25 percent for some of its users.
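Quartet's models are proprietary, but the kind of rule described above, flagging possible anxiety when cardiac workups keep coming back normal, can be approximated simply; the record format, fields, and three-visit threshold in this Python sketch are assumptions for illustration.

```python
# Hypothetical sketch of the kind of rule Quartet describes: repeated negative
# cardiac workups can suggest undiagnosed anxiety. Record format, fields, and
# the three-visit threshold are illustrative only.
from datetime import date

visits = [
    {"date": date(2018, 1, 10), "reason": "chest pain",      "cardiac_test": True,  "abnormal": False},
    {"date": date(2018, 3, 2),  "reason": "palpitations",    "cardiac_test": True,  "abnormal": False},
    {"date": date(2018, 5, 20), "reason": "chest pain",      "cardiac_test": True,  "abnormal": False},
    {"date": date(2018, 6, 1),  "reason": "annual physical", "cardiac_test": False, "abnormal": False},
]

def flag_possible_anxiety(visits, min_negative_workups=3):
    """Flag a patient whose cardiac tests keep coming back normal."""
    negative_workups = sum(1 for v in visits if v["cardiac_test"] and not v["abnormal"])
    return negative_workups >= min_negative_workups

if flag_possible_anxiety(visits):
    print("Suggest a behavioral health screening at the next visit.")
```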
3. Dr. Bot
So-called chatbot counseling is another AI tool producing results. Chatbots are computer programs that simulate human conversation through text or a voice-enabled AI interface. In mental health, these bots are being pressed into service by employers and health insurers to identify individuals who might be struggling with substance abuse, depression, or anxiety and to connect them with convenient, cost-effective care.
Woebot, for example, is a chatbot developed by clinical psychologists at Stanford University in 2017. It treats depression and anxiety using a digital version of the 40-year-old technique of cognitive behavioral therapy – a highly structured talk psychotherapy that seeks to alter a patient’s negative thought patterns in a limited number of sessions.
In a study of university students suffering from depression, those using Woebot saw close to a 20 percent improvement in just two weeks, as measured by PHQ-9 scores, a common gauge of depression severity. One reason for Woebot's success with the study group was the high level of participant engagement: at a cost of $39 per month, most participants were talking to the bot nearly every day, a level of engagement that simply doesn't occur with in-person counseling.
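For context on the outcome measure, the PHQ-9 is a nine-item questionnaire in which each item is scored 0 to 3 and the total maps to standard severity bands. The Python sketch below shows the scoring; the sample responses are made up.

```python
# PHQ-9 scoring: nine items, each rated 0-3, summed to a 0-27 total.
# Severity bands follow the standard published cut points; sample responses are made up.
def phq9_score(responses):
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    return sum(responses)

def phq9_severity(total):
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

baseline = [2, 2, 1, 2, 1, 2, 1, 1, 0]   # total 12: moderate depression
week_two = [1, 1, 1, 1, 1, 1, 1, 0, 0]   # total 7: mild depression
print(phq9_score(baseline), phq9_severity(phq9_score(baseline)))
print(phq9_score(week_two), phq9_severity(phq9_score(week_two)))
```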
4. The Next Generation
Today’s mental health AI solutions may be just the beginning. The University of Southern California’s Institute for Creative Technologies has developed a virtual therapist named Ellie that hints at what’s ahead. Ellie is far more than the usual chatbot — she can also detect nonverbal cues and respond accordingly. For instance, she has learned when to nod approvingly or perhaps utter a well-placed “hmmm” to encourage patients to be more forthcoming.
Ellie, an avatar rendered in 3-D on a television screen, relies on a set of algorithms that determine her questions, motions, and gestures. The program tracks 66 points on the patient's face and notes the patient's rate of speech and the length of pauses before answering questions. Ellie's actions, motions, and speech mimic those of a real therapist, but not entirely, which turns out to be an advantage with patients who are fearful of therapy.
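USC has not released Ellie's code, but two of the signals described, speech rate and the pause before answering, are straightforward to compute once a transcript carries word-level timestamps; the transcript format in this Python sketch is a hypothetical stand-in, not USC's pipeline.

```python
# Hypothetical sketch: deriving two of the vocal features described above
# (speech rate and pre-answer pause) from word-level timestamps.
words = [  # (word, start_seconds, end_seconds) for the patient's answer
    ("I", 3.2, 3.3), ("guess", 3.3, 3.6), ("I", 3.7, 3.8),
    ("haven't", 3.8, 4.2), ("slept", 4.3, 4.7), ("much", 4.8, 5.1),
]
question_end = 1.0  # when the virtual therapist finished asking her question

pre_answer_pause = words[0][1] - question_end   # 2.2 seconds of silence
duration = words[-1][2] - words[0][1]           # length of the spoken answer
speech_rate = len(words) / duration             # words per second

print(f"pause before answering: {pre_answer_pause:.1f} s")
print(f"speech rate: {speech_rate:.1f} words/s")
```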
In a research project with soldiers recently returned from Afghanistan, Ellie uncovered more evidence of post-traumatic stress disorder than the Post-Deployment Health Assessment administered by the military. Ellie was even able to identify certain “tells” common to individuals suffering from the disorder. With up to 20 percent of returning veterans coping with post-traumatic stress disorder and a staggering suicide rate among this population, the potential impact of a solution like Ellie is significant.
As with all potential breakthroughs, caveats remain and safeguards must be developed. Yet, there’s no doubt we’re on the cusp of an AI revolution in mental health — one that holds the promise of both better access and better care at a cost that won’t break the bank.
This article is posted with permission of Harvard Business Publishing. Any further copying, distribution, or use is prohibited without written consent from HBP - permissions@harvardbusiness.org.