In this study, we seek to understand how providing feedback to users about their performance with an interactive information retrieval (IIR) system affects their evaluations of that system. Sixty subjects completed three recall-based search tasks with an experimental IIR system and evaluated the system after each task and again after finishing all three tasks. Before completing the final evaluation, three-fourths of the subjects received feedback about their performance. Subjects were randomly assigned to one of four feedback conditions: a baseline condition in which no feedback was provided; an actual feedback condition in which subjects were shown their real performance; and two deception conditions in which subjects were told that they had performed very well or very poorly. Results show that the type of feedback significantly affected subjects' system evaluations; most importantly, in the actual feedback condition there was a significant difference between subjects' satisfaction ratings before and after feedback was provided.