AI intervenes to bring down suicide rates

While the medical literature lacks a reliable method for accurately identifying suicidal patients, university researchers are continually working on AI solutions to predict and prevent suicide attempts.

According to the Centers for Disease Control and Prevention (CDC), suicide is the 10th leading cause of death in the United States and one of only three leading causes that are on the rise.

AI tools are increasingly being deployed to address this rising risk.

Machine Learning for Identifying Suicidal Thoughts

In a recent study, researchers at Vanderbilt University applied machine learning (ML) to the electronic health records of adult patients, aiming to overcome the limitations of traditional methods for predicting suicide attempts. The models anticipated future suicide attempts with 84 to 92 percent accuracy within one week of a suicide event.
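The Vanderbilt models were trained on real clinical records; the sketch below is only a rough, hypothetical illustration of the general workflow of training a classifier on EHR-style tabular features and ranking patients by predicted risk. The feature names, synthetic data and choice of a random-forest model are assumptions for illustration, not details from the study.

```python
# Illustrative sketch only: trains a generic classifier on synthetic, EHR-style
# tabular features to flag elevated suicide-attempt risk. Features and labels
# are hypothetical and do not reflect the Vanderbilt study's actual model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Hypothetical EHR-derived features: age, prior emergency visits,
# depression diagnosis flag, number of active medications.
X = np.column_stack([
    rng.integers(18, 90, n),   # age
    rng.poisson(1.0, n),       # prior emergency visits
    rng.integers(0, 2, n),     # depression diagnosis flag
    rng.poisson(2.0, n),       # number of active medications
])

# Synthetic labels, weakly correlated with two of the features.
risk = 0.05 + 0.08 * X[:, 2] + 0.02 * X[:, 1]
y = rng.random(n) < np.clip(risk, 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank patients by predicted risk; in practice clinicians would review the top of the list.
scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out synthetic data:", round(roc_auc_score(y_test, scores), 3))
```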

In 2017, Carnegie Mellon University’s (CMU) Marcel Just and the University of Pittsburgh’s David Brent developed a promising approach to identifying suicidal intentions in individuals. Funded in part by the National Institute of Mental Health, their study analyzed variations in how participants’ brains discriminated and reacted to life- and death-related concepts such as ‘death’, ‘cruelty’, ‘trouble’, ‘carefree’, ‘good’ and ‘praise’.

Published in Nature Human Behaviour, the study relied on a machine-learning algorithm (Gaussian Naive Bayes); its central aim was to identify suicidal ideation from how participants’ brains responded to these death- and life-related concepts.
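The published classifier worked on fMRI-derived neural signatures; the sketch below is only a toy illustration of how a Gaussian Naive Bayes classifier separates two groups of feature vectors. The data here are synthetic stand-ins for brain responses, and the group sizes and feature count are arbitrary assumptions.

```python
# Illustrative sketch of the classifier type (Gaussian Naive Bayes) used in the
# study. The feature vectors below are synthetic, not real fMRI signatures.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical feature vectors: each row summarizes one participant's neural
# response pattern to concepts such as 'death' or 'carefree' (6 features here).
group_a = rng.normal(loc=0.0, scale=1.0, size=(17, 6))   # e.g. control participants
group_b = rng.normal(loc=0.6, scale=1.0, size=(17, 6))   # e.g. at-risk participants

X = np.vstack([group_a, group_b])
y = np.array([0] * 17 + [1] * 17)

# Cross-validated accuracy of the Gaussian Naive Bayes classifier.
clf = GaussianNB()
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"Cross-validated accuracy on synthetic data: {accuracy:.2f}")
```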

NLP to Identify Social Media Behavior

Identifying language patterns in users’ social media activity can also enable early intervention and prevent a suicide attempt.

After instances of people broadcasting their suicides on Facebook Live, Facebook began looking for signs of suicidal thoughts in posts. Using AI, Facebook scans users’ posts for patterns of suicidal behavior and flags them to human moderators, who respond by sending the user mental health resources. In urgent situations, the company contacts first responders who can try to locate the individual. Additionally, Facebook works with 80 local partners such as Save.org and the National Suicide Prevention Lifeline to develop policies on self-harm and suicide-related content.

But Facebook is not alone in helping people in their most vulnerable moments.

When a user searches for “ways to kill yourself” or “suicidal thoughts”, Google immediately surfaces a page with information about a 24/7 suicide prevention lifeline, along with an online chat option to help the user through the crisis. For searches about suicide methods or self-harm, autocomplete is disabled. Ordinary Google searches rely on predictive suggestions and autocomplete so the user can avoid typing the entire query into the search box.

Under Google’s autocomplete policy guidelines, predictions for suicide-related searches fall under harmful or dangerous content and are therefore suppressed.

Chatbots for Identifying Anxiety and Depression

According to the World Health Organization (WHO), an estimated 60 percent of people who die by suicide have major depression. Different suicide-prevention AI tools are being deployed to address symptoms, such as depression, that could lead to suicide.

Woebot, a conversational chatbot, aims to identify symptoms of anxiety and depression in young teens. Drawing on Cognitive Behavioral Therapy (CBT), the chatbot, which has a dorky sense of humor, tracks moods and displays weekly progress in graphs. In other words, it creates “the experience of a therapeutic conversation for all of the people that use him.”

Crisis Text Line, a free 24/7 text line, uses AI to detect the risk of suicide. Powered by deep neural networks and natural language processing (NLP), it can serve 94 percent of high-risk texters within five minutes. The essence of the model lies in its ability to predict risk by ‘reading variability in sentences and understanding context’, classifying thousands of high-risk words and word combinations. A real-time feedback loop from Crisis Counselors, combined with the model’s predictions, is key to retraining the model.
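Crisis Text Line’s production system relies on proprietary deep neural networks; as a loose illustration of the underlying idea of ranking incoming messages by predicted risk from word patterns, here is a minimal sketch using an off-the-shelf bag-of-words classifier with invented training examples and labels.

```python
# Illustrative sketch only: a simple bag-of-words text classifier that ranks
# incoming messages by predicted risk so the highest-risk ones are answered first.
# The training messages and labels are invented; Crisis Text Line's real system
# uses proprietary deep neural networks trained on its own data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "i have pills and a plan for tonight",
    "i want to end it all",
    "i feel a bit stressed about exams",
    "my day was okay, just tired",
]
train_labels = [1, 1, 0, 0]  # 1 = high risk, 0 = lower risk (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# New messages are scored and sorted so counselors see the most urgent first.
incoming = ["i cannot do this anymore", "anyone up for a chat about homework"]
scores = model.predict_proba(incoming)[:, 1]
for text, score in sorted(zip(incoming, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

In a real deployment, the feedback loop described above would feed counselors’ judgments back in as new labels for periodic retraining.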

Possible Roadblocks for AI

A larger debate concerns how AI solutions fit within a complex ethical and privacy landscape. Firms often struggle to balance privacy concerns against the goal of suicide prevention, and earning users’ trust in these applications remains critical.

In 2014, the UK charity Samaritans suspended its suicide-prevention Radar app following concerns about privacy and its potential misuse by online bullies. Deciding when and how to intervene also requires a degree of certainty before approaching a case, since identifying suicidal thoughts carries the risk of false positives.

But given the scale of ongoing research efforts, clinicians may soon find AI useful for identifying suicidal intentions, and that would mean saving lives.

If you’re facing distress or a suicidal crisis in the U.S., you can immediately talk with someone at the National Suicide Prevention Lifeline (800–273–8255, suicidepreventionlifeline.org) or the Crisis Text Line (text HOME to 741–741).

Republishing requires permission from the Author