TY - JOUR
T1 - Analyzing question quality through intersubjectivity
T2 - World views and objective assessments of questions on social question-answering
AU - Kitzie, Vanessa
AU - Choi, Erik
AU - Shah, Chirag
PY - 2013
Y1 - 2013
N2 - Social question-answering (SQA) allows people to ask questions in natural language and receive answers from others. While research on SQA has focused on the quality of answers provided, with implications for system-based interventions, few studies have examined whether the questions asked to elicit these answers accurately depict an asker's information need. To address this gap, the current study explores the viability of system-based interventions to improve questions by comparing human, non-textual assessments of question quality to automatic, textual features extracted from the questions' content, in order to determine whether there is a significant relationship between subjective judgments on one hand and objective ones on the other. Findings indicate not only that there is a significant relationship between human-based ratings of question quality criteria and extracted textual features, but also that distinct textual features contribute to explaining the variability of each human-based rating. These findings encourage further study of the relationship between the reasons why a question might be of poor quality and the textual features that can be extracted from the question. This relationship can ultimately inform the design of intervention-based systems that not only automatically assess question quality, but also provide reasons the asker can understand as to why the quality of his or her question is poor and suggest how to revise it.
KW - Question quality
KW - Satisfaction
KW - Social Q&A
UR - http://www.scopus.com/inward/record.url?scp=84903971418&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84903971418&partnerID=8YFLogxK
U2 - 10.1002/meet.14505001052
DO - 10.1002/meet.14505001052
M3 - Article
AN - SCOPUS:84903971418
SN - 1550-8390
VL - 50
JO - Proceedings of the ASIST Annual Meeting
JF - Proceedings of the ASIST Annual Meeting
IS - 1
ER -