TY - GEN
T1 - Inferring mood instability via smartphone sensing
T2 - 27th ACM International Conference on Multimedia, MM 2019
AU - Zhang, Xiao
AU - Zhuang, Fuzhen
AU - Li, Wenzhong
AU - Ying, Haochao
AU - Xiong, Hui
AU - Lu, Sanglu
N1 - Funding Information:
This work was partially supported by the National Key R&D Program of China (Grant Nos. 2017YFB1001801, 2018YFB1004300), the National Natural Science Foundation of China (Grant Nos. 61672278, 61832008, 61832005, 91746301, U1836206), the Key R&D Program of Jiangsu Province, China (Grant No. BE2018116), the science and technology project of the State Grid Corporation of China (Contract No. SGSNXT00YJJS1800031), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the Sino-German Institutes of Social Computing. Xiao Zhang was also supported by the Program A for Outstanding PhD Candidates of Nanjing University.
Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/10/15
Y1 - 2019/10/15
N2 - A high correlation between mood instability (MI), the rapid and constant fluctuation in mood, and mental health has been demonstrated. However, conventional approaches to measuring MI are limited owing to the high manpower and time costs required. In this paper, we propose a smartphone-based MI detection method that can automatically and passively detect MI with minimal human involvement. The proposed method trains a multi-view learning classification model using features extracted from the smartphone sensing data of volunteers and their self-reported moods. The trained classifier is then used to detect the MI of unseen users efficiently, thereby significantly reducing the human involvement and time cost. Based on extensive experiments conducted on a dataset collected from 68 volunteers, we demonstrate that the proposed multi-view learning model outperforms the baseline classifiers.
AB - A high correlation between mood instability (MI), the rapid and constant fluctuation in mood, and mental health has been demonstrated. However, conventional approaches to measuring MI are limited owing to the high manpower and time costs required. In this paper, we propose a smartphone-based MI detection method that can automatically and passively detect MI with minimal human involvement. The proposed method trains a multi-view learning classification model using features extracted from the smartphone sensing data of volunteers and their self-reported moods. The trained classifier is then used to detect the MI of unseen users efficiently, thereby significantly reducing the human involvement and time cost. Based on extensive experiments conducted on a dataset collected from 68 volunteers, we demonstrate that the proposed multi-view learning model outperforms the baseline classifiers.
KW - Attention
KW - Mood Instability Detection
KW - Multi-view Learning
KW - Smartphone Sensing
UR - http://www.scopus.com/inward/record.url?scp=85074839842&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074839842&partnerID=8YFLogxK
U2 - 10.1145/3343031.3350957
DO - 10.1145/3343031.3350957
M3 - Conference contribution
AN - SCOPUS:85074839842
T3 - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
SP - 1401
EP - 1409
BT - MM 2019 - Proceedings of the 27th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 21 October 2019 through 25 October 2019
ER -