TY - JOUR
T1 - Data requirements for evaluation of personalization of information retrieval - A position paper
AU - Belkin, Nicholas J.
AU - Hienert, Daniel
AU - Mayr, Philipp
AU - Shah, Chirag
N1 - Funding Information:
This work was partly funded by Deutsche Forschungsgemeinschaft (DFG), grant no. MA 3964/5-1; the AMUR project at GESIS; and, by the National Science Foundation, grant no. IIS-1423239.
PY - 2017
Y1 - 2017
N2 - Two key but usually ignored issues in evaluating methods of personalization for information retrieval are: that such evaluation must cover a search session as a whole; and that people, during the course of an information search session, engage in a variety of activities intended to accomplish different goals or intentions. Taking serious account of these factors has major implications not only for evaluation methods and metrics, but also for the nature of the data necessary both for understanding and modeling information search, and for evaluating personalized support for information retrieval (IR). In this position paper, we present a model of IR demonstrating why these factors are important; identify some implications of accepting their validity; and, on the basis of a series of studies in interactive IR, identify some types of data concerning searcher and system behavior that we claim are at least necessary, if not sufficient, for meaningful evaluation of personalization of IR.
AB - Two key but usually ignored issues in evaluating methods of personalization for information retrieval are: that such evaluation must cover a search session as a whole; and that people, during the course of an information search session, engage in a variety of activities intended to accomplish different goals or intentions. Taking serious account of these factors has major implications not only for evaluation methods and metrics, but also for the nature of the data necessary both for understanding and modeling information search, and for evaluating personalized support for information retrieval (IR). In this position paper, we present a model of IR demonstrating why these factors are important; identify some implications of accepting their validity; and, on the basis of a series of studies in interactive IR, identify some types of data concerning searcher and system behavior that we claim are at least necessary, if not sufficient, for meaningful evaluation of personalization of IR.
KW - Evaluation
KW - Information seeking
KW - Interactive IR
KW - Session
KW - Task
KW - User log data
UR - http://www.scopus.com/inward/record.url?scp=85034744753&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85034744753&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85034744753
SN - 1613-0073
VL - 1866
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 18th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2017
Y2 - 11 September 2017 through 14 September 2017
ER -