U.K. charity Samaritans recently launched an app designed to “help identify vulnerable people.” The software scans Twitter for certain keywords that could indicate that somebody is feeling despairing or suicidal and, if it spots them, emails that person’s friends. Sounds great, except that it’s based on an opt-out system, which endangers users who would prefer not to be monitored on Twitter, and the organization seems slow to understand why users are concerned about this.
Samaritans Radar, which uses the Twitter API, aims to “reduc[e] the chances of a person’s call for help being missed,” and the team behind the app asked academics to “identify the phrases that vulnerable people use on social media.”
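To give a sense of what this kind of keyword scanning involves — and to be clear, the Samaritans have not published Radar's actual phrase list or matching logic, so the phrases and code below are purely hypothetical stand-ins — a minimal sketch might look like this:

```python
# Hypothetical sketch of keyword-based tweet flagging.
# The phrases here are invented for illustration; the real app's
# list was compiled with academics and has not been made public.

WATCH_PHRASES = [
    "need someone to talk to",
    "can't go on",
    "hate myself",
]

def flag_tweet(text):
    """Return the matched phrase if the tweet contains one, else None."""
    lowered = text.lower()
    for phrase in WATCH_PHRASES:
        if phrase in lowered:
            return phrase
    return None

def alerts_for(tweets):
    """Collect (tweet, matched phrase) pairs that would trigger an alert.

    In the real app, a follower who installed Radar would receive an
    email for each flagged tweet from an account they follow.
    """
    return [(t, m) for t in tweets if (m := flag_tweet(t))]
```

Even this toy version makes the core privacy point visible: the person being scanned plays no part in the code path at all — the matching and the alert happen entirely on behalf of the follower.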
The launch of the app was met with a flurry of supportive tweets from everyone from soap stars to BBC journalists and, so far, hundreds of Twitter users have shared the message, “I want to be there for a friend when it matters. I’ve turned my social net into a safety net” after installing the app.
It certainly seems to come from a place of good intentions, but cracks quickly began to appear when some Twitter users with mental health problems started talking about their concerns.
In a time when many people are extremely concerned about Internet snooping and privacy issues, I was initially relieved to see that this was an opt-in service. I wasn’t comfortable with the idea of my tweets being scanned and action being taken, unbeknownst to me, as a result. While I am aware that thousands of pieces of software scan public tweets every day, the idea that one of them would send a sneaky email to anybody who had followed me was a bit creepy, to say the least.
Because it is not clear which words and phrases trigger the mechanism, I knew it would make me wary of saying anything when I felt low, just in case it set off a chain of events that I did not want to initiate. Thank goodness it was opt-in only, right?
It would make me too wary of using 'trigger words' so I wouldn't feel able to speak openly. Counter-productive and scary. #SamaritansRadar— incurablehippie (@incurablehippie) October 29, 2014
I quickly realised, however, that the ability to opt in was for those who wanted to receive the alert emails. Everybody else was having their tweets scanned whether they liked it or not.
I asked whether it was possible to opt out of having my tweets scanned, and the Samaritans’ Director of Policy, Research and Development replied, advising me to make my account and tweets private.
During many Twitter storms, people under attack have been told to make their accounts private to avoid unwanted attention, but this puts the onus in the wrong place. For many Twitter users, locking their accounts further isolates them and limits the number of people they can contact; ironically, for people with mental health conditions, isolation is the last thing they need. This kind of unintended consequence is unacceptable to many, as the resulting discussions showed.
Mixed feelings about this #SamaritansRadar I don't want to make my account private to avoid it, takes support away I.e mentelhealth chats— Foxie (@Blueeyedfoxie) October 29, 2014
As objections became more vocal, the Samaritans released a video to address people’s privacy concerns, looking in particular at the risk that individuals who prey on others’ vulnerability could exploit the app.
This was a concern I had seen raised by several Twitter users.
@CallumBuchanan3 if I was stalking someone I could just follow them & tell the app to notify me, & they wouldn't know, but *I* would know...— Rigg-our Mortis (@miss_s_b) October 29, 2014
I feel there is an interesting element of privilege around assuming those who use #SamaritansRadar will only use it for good.— Emsy (@elphiemcdork) October 29, 2014
@CallumBuchanan3 if it saves one but destroys many more? Without consent this thing is a stalkers dream.— Rigg-our Mortis (@miss_s_b) October 29, 2014
Simply “hoping” it would be used responsibly, which was what the Samaritans said in response to these concerns, is frankly inadequate.
Despite these concerns, many other Twitter users did show keen support for the app, welcoming the ability to take extra care of their friends and family and avoid the risk of people’s distressed messages getting lost in the melee of an ever-busy Twitter timeline.
It's nice to know that someone can get in touch if I tweet something that shows I'm struggling (even if they miss my tweet) #SamaritansRadar— Liz Scowcroft (@scoeliza) October 29, 2014
Can quibble about methods etc but I do believe that anything which tries to bring more empathy to social media is welcome #SamaritansRadar— Ciarán Mc Mahon (@CJAMcMahon) October 29, 2014
Certainly, despair is an incredibly lonely place. Sometimes it is difficult to ask for help, so having people reach out to you instead could feel like a very welcome intervention. We have this technology at our disposal, and a lot of people were excited about the idea of using it to potentially have a positive effect on someone’s life.
For me, it comes down to choice. A former Samaritans volunteer wrote in a blog post:
I find it baffling that the concept of choice has been completely taken away. Samaritans was always about choice. Choice to talk, choice to take action or not, even down to the choice to end your life or not. Sams never judged, they listened. This is the opposite of that. This takes away the choice of the tweeter to seek help themselves.
Samaritans Radar not only does not require an opt-in, it makes it alarmingly clear that the only privacy the app takes into account is that of the person who wants to be a helper, as Sarah Brown highlighted:
This paternalistic approach is not a helpful one when it comes to self-determination and personal choice. And coming just days after a man in the United States who called a suicide hotline ended up being shot by a police SWAT team, it is understandable that any automated, secretive, surveillance-based response to a cry for help might be treated with suspicion.
Protecting the identity of people who sign up for the app, but not those who tweet in distress, made some people ask who the app was really for. Was it for people who needed support? Or was it to make others feel better, giving them an opportunity to feel good about swooping in and “saving” somebody?
The fundamental premise of #SamaritansRadar is that potentially helping someone in need trumps their autonomy/consent.— Adrian Short (@adrianshort) October 29, 2014
#SamaritansRadar is a tech centred solution to a ppl centred problem. It's designed to help wannabe supporters rather than those with probs— Michelle Brook (@MLBrook) October 29, 2014
Can't help feeling #SamaritansRadar is more abt making the ppl using it feel good abt themselves than actually helping ppl with MH.— Spoonydoc (@Spoonydoc) October 29, 2014
Other users expressed concerns about the quality and usefulness of the interventions that might result from a #SamaritansRadar alert. However well-intentioned a person is, knowing how to deal with somebody who is suicidal on a public forum is not always easy.
Not convinced #SamaritansRadar is a great idea. People feel under surveillance so they fall silent & it prompts naive interventions. Hmm.— Alexandra Goldstein (@mokuska) October 29, 2014
In response to some of these concerns, the Samaritans finally took action and made it possible for individuals to opt out of having their tweets scanned. This can be done by sending them a Twitter Direct Message. I am glad they have taken on board at least some of people’s objections; however, a truly consensual service would require an opt-in, not an opt-out.
And even with an opt-out now available, it is effectively limited to users who follow the Samaritans on Twitter: although the organisation accepts DMs from accounts it does not follow, under Twitter’s rules a sender must still follow the Samaritans’ account before they can successfully send that DM.
I appreciate that #SamaritansRadar was probably well-intentioned but, as a disabled person, the phrase “well-intentioned” is one that gives me the chills. It is used to excuse bad behaviour and to justify unwanted interventions and, as it currently stands, Samaritans Radar is ticking those boxes.
For now, I’ll be DMing the Samaritans to opt out, I’ll be joining @SamCandour in blocking anyone who announces they have downloaded the app, and I’ll hope that nobody with oppressive or exploitative intentions has already taken advantage of the power that Samaritans Radar gives them to cause real harm to somebody when they are at their lowest.