© 2024 WUTC
Facebook Increasingly Reliant on A.I. To Predict Suicide Risk

Facebook has been using artificial intelligence to detect if a user might be about to engage in self-harm. The same technology may soon be used in other scenarios.
Dado Ruvic/Reuters

A year ago, Facebook started using artificial intelligence to scan people's accounts for danger signs of imminent self-harm.

Facebook Global Head of Safety Antigone Davis is pleased with the results so far.

"In the very first month when we started it, we had about 100 imminent-response cases," which resulted in Facebook contacting local emergency responders to check on someone. But that rate quickly increased.

"To just give you a sense of how well the technology is working and rapidly improving ... in the last year we've had 3,500 reports," she says. That means AI monitoring is causing Facebook to contact emergency responders an average of about 10 times a day to check on someone — and that doesn't include Europe, where the system hasn't been deployed. (That number also doesn't include wellness checks that originate from people who report suspected suicidal behavior online.)

Davis says the AI works by monitoring not just what a person writes online, but also how his or her friends respond. For instance, if someone starts streaming a live video, the AI might pick up on the tone of people's replies.

"Maybe like, 'Please don't do this,' 'We really care about you.' There are different types of signals like that that will give us a strong sense that someone may be posting self-harm content," Davis says.
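To make the idea concrete, the kind of signal Davis describes could be caricatured as a simple phrase-matching scorer over a video's comment thread. This is purely an illustrative toy, not Facebook's system: the phrases, threshold, and function names below are invented for this sketch, and Facebook's actual classifier and features are not public.

```python
# Toy sketch only: flag a comment thread for human review when enough
# replies contain phrases that suggest friends are alarmed.
# The phrase list and threshold are invented for illustration.
CONCERN_PHRASES = [
    "please don't do this",
    "we really care about you",
    "are you ok",
]

def concern_score(comments):
    """Count how many comments contain any concern phrase (case-insensitive)."""
    return sum(
        any(phrase in comment.lower() for phrase in CONCERN_PHRASES)
        for comment in comments
    )

def flag_for_review(comments, threshold=2):
    """Escalate to human reviewers once enough concerning replies appear."""
    return concern_score(comments) >= threshold
```

A real system would of course use learned models over many more signals than keywords, and, as the article notes, the flagged cases go to Facebook staffers rather than triggering any automatic action.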

When the software flags someone, Facebook staffers decide whether to call the local police, and AI comes into play there, too.

"We also are able to use AI to coordinate a bunch of information on location to try to identify the location of that individual so that we can reach out to the right emergency response team," she says.

In the U.S., Facebook's call usually goes to a local 911 center, as illustrated in its promotional video.

Mason Marks isn't surprised that Facebook is employing AI this way. He's a medical doctor and research fellow at Yale and NYU law schools, and recently wrote about Facebook's system.

"Ever since they've introduced livestreaming on their platform, they've had a real problem with people livestreaming suicides," Marks says. "Facebook has a real interest in stopping that."

He isn't sure this AI system is the right solution, in part because Facebook has refused to share key data, such as the AI's accuracy rate. How many of those 3,500 "wellness checks" turned out to be actual emergencies? The company isn't saying.

He says scrutiny of the system is especially important because this "black box of algorithms," as he calls it, has the power to trigger a visit from the police.

"It needs to be done very methodically, very cautiously, transparently, and really looking at the evidence," Marks says.

For instance, Marks says, the outcomes need to be checked for unintended consequences — such as a potential squelching of frank conversations about suicide on Facebook's various platforms.

"People ... might fear a visit from police, so they might pull back and not engage in an open and honest dialogue," he says. "And I'm not sure that's a good thing."

But Facebook's Davis says releasing too many details about how the AI works might be counterproductive.

"That information could allow people to play games with the system," Davis says. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."

The ethics of using AI to alert police to people's online behavior may soon go beyond suicide prevention. Davis says Facebook has also experimented with AI to detect "inappropriate interactions" between minors and adults.

Law professor Ryan Calo, co-director of the University of Washington's Tech Policy Lab, says AI-based monitoring of social media may follow a predictable pattern for how new technologies gradually work their way into law enforcement.

"The way it would happen would be we would take something that everybody agrees is terrible — something like suicide, which is epidemic, something like child pornography, something like terrorism — so these early things, and then if they show promise in these sectors, we broaden them to more and more things. And that's a concern."

There may soon be a temptation to use this kind of AI to analyze social media chatter for signs of imminent crimes — especially retaliatory violence. Some police departments have already tried watching social media for early warnings of violence between suspected gang members, but an AI run by Facebook might do the same job more effectively.

Calo says society may soon have to ask important questions about whether to allow that kind of monitoring.

"If you can truly get an up-or-down yes or no, and it's reliable, if intervention is not likely to cause additional harm, and is this something that we think it is important enough to prevent, that this is justified?" Calo says. "That's a difficult calculus, and I think it's one we're going to have to be making more and more."


If you or someone you know may be considering suicide, contact the National Suicide Prevention Lifeline at 1-800-273-8255 (En Español: 1-888-628-9454; Deaf and Hard of Hearing: 1-800-799-4889) or the Crisis Text Line by texting 741741.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Martin Kaste is a correspondent on NPR's National Desk. He covers law enforcement and privacy. He has been focused on police and use of force since before the 2014 protests in Ferguson, and that coverage led to the creation of NPR's Criminal Justice Collaborative.