Most mental health apps lack any scientific backing

(Fast Company) -- Apps and AI-enabled digital tools were hailed as the long-sought solution to the mental health crisis, but how do they actually measure up?

A new study published in npj Digital Medicine suggests that these digital tools may be overstating their efficacy to consumers. In fact, most apps have no scientific evidence or peer-reviewed studies to back up their promises.

Researchers identified 1,435 mental health apps in stores such as Google Play and iTunes, then analyzed the claims made by 73 of those apps related to depression, self-harm, substance use, anxiety, and schizophrenia. Of those, 64% claimed effectiveness at diagnosing a mental health condition or at improving symptoms or self-management. Scientific language was the most common form of support for those claims, used by 44% of the apps.

However, only 14% of the apps described a design or development process involving real-world experience, and none referenced certification or accreditation. Just two apps offered "low-quality, primary evidence" from a study using the actual app, and only one included a citation to published literature.

“Scientific language was the most frequently invoked form of support for use of mental health apps; however, high-quality evidence is not commonly described,” the study states. The researchers note that while there are plenty of reviews attesting to consumer mobile health apps’ success in helping individuals, the majority of these apps are simply not evidence-based and can, in fact, “contain harmful content.”

AN UNREGULATED INDUSTRY

Silicon Valley has increasingly invested in mental health solutions over the past few years. The sector is flush with chatbots, AI-enhanced monitors, and even “robots” that aim to help people with a number of conditions, including anxiety, depression, and PTSD. Some now even live inside Facebook Messenger.

But as Fast Company has reported, there is concern that for all the supposed benefits of mental health and counseling bots, there has been little to no regulation. Currently, the FDA does not monitor their claims or results as it does with, say, medical devices.

Meanwhile, critics argue the field requires far more psychological research. Others wonder whether apps are a poor replacement for human communication and face-to-face counseling, which is generally considered the superior form of treatment.

As with many digital healthcare interventions, health apps raise privacy concerns, too. A study published last week in The BMJ found that 19 of 24 popular medicine-related Android apps shared user data, including medical conditions and even whether a user smokes or is pregnant, with third parties and potentially fourth parties.


Supporters say the new technology offers a more affordable option for those who can’t afford, or access, mental health professionals. Some companies, such as Woebot, see their tools as aids to mental health treatment rather than medical replacements.

Mental health certainly needs the sector’s support. There is still only one mental health professional per 1,000 individuals, reports the National Mental Health Association. Meanwhile, 1 in 5 Americans, or 43.8 million adults, experiences mental illness in a given year, according to the National Alliance on Mental Illness. However, only 41% ever receive mental health services or treatment, and of those struggling with a serious mental illness, just 62.9% receive proper care.

It’s a costly issue: the country is estimated to spend $2 billion a year on mental health treatment alone.

Still, some wonder: can an app ever really replace a human professional? The medical community remains largely apprehensive. Just as some patients are allergic to or respond poorly to certain medications, healthcare experts are concerned that without enough data, we don’t know how certain individuals will respond to robotic therapy.

“The idea behind it certainly makes sense, but human behavior, human emotion, and people are complex,” Dr. John Torous, co-director of the Digital Psychiatry Program at Beth Israel Deaconess Medical Center, previously told Fast Company. “How much can these conversational agents really understand? What can they really respond to? When do they work well, and when do they not work well?”
