TY - GEN
T1 - A taxonomy of ethical tensions in inferring mental health states from social media
AU - Chancellor, Stevie
AU - Birnbaum, Michael L.
AU - Caine, Eric D.
AU - Silenzio, Vincent M.B.
AU - De Choudhury, Munmun
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/1/29
Y1 - 2019/1/29
N2 - Powered by machine learning techniques, social media provides an unobtrusive lens into individual behaviors, emotions, and psychological states. Recent research has successfully employed social media data to predict mental health states of individuals, ranging from the presence and severity of mental disorders like depression to the risk of suicide. These algorithmic inferences hold great potential in supporting early detection and treatment of mental disorders and in the design of interventions. At the same time, the outcomes of this research can pose great risks to individuals, such as issues of incorrect, opaque algorithmic predictions, involvement of bad or unaccountable actors, and potential biases from intentional or inadvertent misuse of insights. Amplifying these tensions, there are also divergent and sometimes inconsistent methodological gaps and under-explored ethics and privacy dimensions. This paper presents a taxonomy of these concerns and ethical challenges, drawing from existing literature, and poses questions to be resolved as this research gains traction. We identify three areas of tension: ethics committees and the gap of social media research; questions of validity, data, and machine learning; and implications of this research for key stakeholders. We conclude with calls to action to begin resolving these interdisciplinary dilemmas.
KW - Algorithms
KW - Ethics
KW - Machine learning
KW - Mental health
KW - Social media
UR - http://www.scopus.com/inward/record.url?scp=85061824457&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061824457&partnerID=8YFLogxK
U2 - 10.1145/3287560.3287587
DO - 10.1145/3287560.3287587
M3 - Conference contribution
AN - SCOPUS:85061824457
T3 - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
SP - 79
EP - 88
BT - FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
PB - Association for Computing Machinery, Inc
T2 - 2019 ACM Conference on Fairness, Accountability, and Transparency, FAT* 2019
Y2 - 29 January 2019 through 31 January 2019
ER -