
How Facebook’s AI is trying to assist suicide prevention


Facebook is putting its machine learning capabilities to work on suicide prevention, one of the leading causes of death among young people worldwide.

Each year, close to 800,000 people around the world take their own lives, one death every 40 seconds, according to the World Health Organization. Suicide is the second leading cause of death among 15 to 29-year-olds worldwide.

The Facebook AI doesn't prevent suicides by itself. It's a tool that helps Facebook's review team filter posts with suicidal intent and, in cases of imminent danger, contact the local authorities.

“We’re not doctors, and we’re not trying to make a mental health diagnosis,” says Dan Muriello, an engineer on the team that produced the tools. “We’re trying to get information to the right people quickly.”


How the Facebook AI is helping suicide prevention


The Facebook AI scans posts on the social network, analysing the combination of words in each post and in the comments on it.

It then scores the content against previously confirmed cases of suicidal expression. That score is combined with other numerical data, such as the time of day and the day of the week, and fed into a 'random forest' learning algorithm.

This machine learning algorithm works well with numerical data. After crunching the inputs, it predicts whether or not the post should be flagged for suicide prevention and sent to the human review team, Community Operations.
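Facebook hasn't published its implementation, but the pipeline described above can be sketched in a few lines of Python. In the illustrative example below, scikit-learn's RandomForestClassifier stands in for Facebook's model; the feature layout, toy data and settings are all assumptions made for the demo.

```python
# Illustrative sketch only: Facebook has not published its code.
# Assumes a text classifier has already produced a per-post risk
# score; we combine that score with simple temporal features and
# feed the result into a random forest, as the article describes.
from sklearn.ensemble import RandomForestClassifier

# Each row: [text_risk_score, hour_of_day, day_of_week (6 = Sunday)]
X_train = [
    [0.92, 3, 6],   # high text score, 3 a.m. on a Sunday
    [0.15, 14, 2],  # low text score, Wednesday afternoon
    [0.60, 2, 6],
    [0.10, 11, 4],
]
y_train = [1, 0, 1, 0]  # 1 = flagged for human review, 0 = not

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_post = [[0.75, 4, 6]]  # borderline text score, early Sunday morning
print(clf.predict(new_post))        # predicted class: 1 would mean route to reviewers
print(clf.predict_proba(new_post))  # class probabilities behind that call
```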

But the Facebook AI wasn't good at detecting suicide risks in the beginning, owing to a lack of context.

How the Facebook AI was trained and how it works

For the last few years, users of the social network have been able to flag posts they feel might carry suicidal expressions. In 2017, Facebook started using machine learning to sift through posts and identify the ones that might indicate a suicide risk.

The AI uses phrases in posts and concerned comments from friends and family to identify posts with suicidal intent.

But that didn't work well at first: words such as 'kill', 'die' and 'goodbye', which could signal a suicide risk, are too commonly used in other contexts. As a result, the AI returned many harmless false positives to the human reviewers.
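To see why that's a problem, consider a hypothetical keyword filter. The word list and sample posts below are invented for illustration.

```python
# Hypothetical illustration of the false-positive problem: a naive
# keyword filter flags harmless everyday posts alongside real risks.
RISK_WORDS = {"kill", "die", "goodbye"}

posts = [
    "I have so much homework I want to kill myself",  # harmless hyperbole
    "Goodbye everyone, thanks for the great party!",  # harmless farewell
    "My phone battery is about to die",               # harmless idiom
]

for post in posts:
    words = {w.strip(",.!").lower() for w in post.split()}
    if words & RISK_WORDS:
        print("FLAGGED:", post)  # all three get flagged, none are real risks
```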


According to the company, the team didn't have enough Facebook posts containing suicidal expressions (positive examples) but had a large volume of posts that didn't (negative examples).

This led to a lopsided data set with a very low volume of positive examples compared to the negative ones.

“The team’s big breakthrough was the realisation that they had a smaller and more precise set of negative examples,” writes Catherine Card, Director of Product Management.

“This set of negative examples contained a lot of the ‘I have so much homework I want to kill myself’ type, which led to more precise training of the classifiers on accurate suicidal expressions.”
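In other words, the negatives were 'hard' examples that share vocabulary with genuine suicidal expressions, forcing the classifier to learn context rather than individual keywords. Here is a rough, hypothetical sketch of that training idea; the sentences, labels and scikit-learn model are ours, not Facebook's.

```python
# Sketch of training on 'hard' negatives: non-risky posts that share
# vocabulary with genuine suicidal expressions. All text is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I can't go on anymore, nobody would miss me",    # positive example
    "I don't want to be here tomorrow",               # positive example
    "I have so much homework I want to kill myself",  # hard negative
    "This traffic makes me want to die, lol",         # hard negative
]
labels = [1, 1, 0, 0]  # 1 = suicidal expression, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# In a pipeline like the earlier sketch, this probability could serve
# as the per-post text risk score fed into the random forest.
print(model.predict_proba(["so tired of everything, goodbye"])[:, 1])
```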


“The smaller dataset helped us develop a much more nuanced understanding of what is a suicidal pattern and what isn’t,” says Muriello.

According to experts the company consulted, the early hours of the morning and Sundays are common times for people to contemplate suicide.

This gave the AI another signal to work with. Combining the post data, the comment data and the time of the post, the AI flags each post as a suicide risk or not.
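As an illustration, this is one way such temporal signals might be derived from a post's timestamp. It's a hypothetical feature-engineering sketch, not Facebook's actual code.

```python
# Hypothetical feature extraction: turn a post's timestamp into the
# temporal signals the article mentions (hour of day, day of week).
from datetime import datetime

def temporal_features(posted_at: datetime) -> list[int]:
    return [
        posted_at.hour,                 # early-morning hours matter
        posted_at.weekday(),            # 6 = Sunday
        int(posted_at.weekday() == 6),  # explicit Sunday flag
        int(posted_at.hour < 6),        # explicit early-morning flag
    ]

print(temporal_features(datetime(2018, 9, 16, 3, 30)))  # a Sunday, 3:30 a.m.
```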


Source


Prayank

Writes news mostly and edits almost everything at Candid.Technology. He loves taking trips on his bikes or chugging beers as Manchester United battle rivals. Contact Prayank via email: prayank@pm.me
