Data requirements for evaluation of personalization of information retrieval - A position paper

Nicholas J. Belkin, Daniel Hienert, Philipp Mayr, Chirag Shah

Research output: Contribution to journal › Conference article › Peer-reviewed


Two key but usually ignored issues for the evaluation of methods of personalization for information retrieval are: that such evaluation must treat a search session as a whole; and that people, during the course of an information search session, engage in a variety of activities intended to accomplish different goals or intentions. Taking serious account of these factors has major implications not only for evaluation methods and metrics, but also for the nature of the data that are necessary both for understanding and modeling information search, and for evaluating personalized support for information retrieval (IR). In this position paper, we: present a model of IR demonstrating why these factors are important; identify some implications of accepting their validity; and, on the basis of a series of studies in interactive IR, identify some types of data concerning searcher and system behavior that we claim are at least necessary, if not sufficient, for meaningful evaluation of personalization of IR.

Original language: English (US)
Journal: CEUR Workshop Proceedings
State: Published - 2017
Event: 18th Working Notes of CLEF Conference and Labs of the Evaluation Forum, CLEF 2017 - Dublin, Ireland
Duration: Sep 11, 2017 - Sep 14, 2017

All Science Journal Classification (ASJC) codes

  • Computer Science (all)


Keywords
  • Evaluation
  • Information seeking
  • Interactive IR
  • Session
  • Task
  • User log data


