Charities contribute to growing mistrust of mental health text support – Here’s why

Like many sectors of society, mental health care has radically changed following the pandemic. Forced to adapt to a growing demand for counseling and crisis services, mental health charities have had to rapidly scale up their digital services to meet the needs of their users.
Unfortunately, some charities have experienced growing pains as they transition to an unfamiliar environment that increasingly involves the use of data-driven technologies, such as machine learning, a type of artificial intelligence.
Recently, two charities faced public backlash over the way they used machine learning and handled the data of users who contacted their mental health crisis support services.
When it was revealed that the American charity Crisis Text Line had shared user data with Loris AI, a company specializing in the development of machine learning technologies, critical responses poured in on social media, denouncing the commercialization of sensitive data as a shocking betrayal of trust. In response, Crisis Text Line ended its data-sharing relationship with Loris AI and asked the company to delete the data it had received.
A few weeks later, it emerged that Shout, the UK’s largest crisis text line, had also shared anonymized data with researchers from Imperial College London and used machine learning to analyze patterns in the data. Again, the data came from deeply personal and sensitive conversations between people in distress and the charity’s volunteer advisers.
One of the main reasons for this partnership was to determine what could be learned from anonymized conversations between users and Shout volunteers. To investigate this, the research team used machine learning techniques to infer personal details about users from the text of the conversations, including characteristics such as age and non-binary gender.
The information inferred by machine learning algorithms does not personally identify individual users. However, many users were outraged when they found out how their data was being used. With social media attention turned to them, Shout replied:
We take the privacy of our texters very seriously and operate to the highest standards of data security. … We have always been completely transparent that we will use data and anonymized information from Shout both to improve the service, so that we can better meet your needs, and to improve mental health in the UK.
Without a doubt, Shout was transparent in a way – it directed users to permissive privacy policies before they accessed the service. But as we all know, these policies are seldom read, and they should not be considered meaningful forms of user consent in a crisis situation.
It is therefore unfortunate to see charities such as Shout and Crisis Text Line fail to recognize how their actions can contribute to a growing culture of mistrust, particularly as they provide essential support in a climate where poor mental health is on the rise and services are stretched as a result of underfunding.
Editor’s Note: This article is part of a partnership the Chronicle forged with The Conversation to expand coverage of philanthropy and nonprofit organizations. The three organizations receive support for this work from the Lilly Endowment. This article is republished from The Conversation under a Creative Commons license.