CMU-CS-23-113
Computer Science Department
School of Computer Science, Carnegie Mellon University

Testing for Reviewer Anchoring in the
Ryan Liu
M.S. Thesis
May 2023
Peer review serves as a core component of the process for publishing and distinguishing computer science research. Many peer review frameworks involve multiple stages of reviewer scores, where reviewers are expected to provide new scores after viewing additional relevant information (e.g., a response to their initial review). Whether this stage achieves its full desired effect is uncertain: humans are known to under-adjust judgements after they are initially formed, a phenomenon known as the anchoring effect. We design a novel experiment to measure whether reviewers exhibit the anchoring effect in their initial and revised scores, comparing the outcomes when the reviewer initially sees a worse version of the paper that is later corrected (experimental condition) versus when the reviewer has the correct paper for the entire review (control condition). A key challenge is to ensure that the worse version of the paper receives lower scores than the corrected version, while the corrected version's scores are identically distributed to the control version's scores in the absence of anchoring. To achieve this, we construct a fake paper for reviewers to evaluate and deceive the experimental group into believing that the worse version was seen due to a browser error. Our design controls for a key confounder while avoiding any mention of anchoring, preserving the authenticity of participants' responses. Across 108 PhD-level participants, we find no statistically significant evidence that participants anchor toward their original scores (p = 0.35). In additional exploratory analyses, we find that reviewers who self-report low confidence show more signs of anchoring.
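The central comparison in this design is between the revised scores in the experimental condition and the scores in the control condition, which should be identically distributed if no anchoring occurs. As a minimal sketch of how such a between-group comparison could be run, the following Python snippet performs a two-sided permutation test on the difference in mean scores; the choice of test, the function names, and the score data are illustrative assumptions, not the analysis actually used in the thesis.

    import numpy as np

    def permutation_test(experimental, control, n_permutations=10_000, seed=0):
        """Two-sided permutation test on the difference in mean scores between
        the experimental (corrected-after-anchor) and control conditions."""
        rng = np.random.default_rng(seed)
        experimental = np.asarray(experimental, dtype=float)
        control = np.asarray(control, dtype=float)
        observed = experimental.mean() - control.mean()
        pooled = np.concatenate([experimental, control])
        n_exp = len(experimental)
        hits = 0
        for _ in range(n_permutations):
            rng.shuffle(pooled)  # randomly reassign scores to the two conditions
            diff = pooled[:n_exp].mean() - pooled[n_exp:].mean()
            if abs(diff) >= abs(observed):
                hits += 1
        return observed, (hits + 1) / (n_permutations + 1)

    # Hypothetical revised overall scores (1-10 scale) from each condition.
    experimental_scores = [5, 6, 6, 7, 5, 6, 7, 6]
    control_scores = [6, 7, 6, 7, 7, 6, 8, 7]
    diff, p_value = permutation_test(experimental_scores, control_scores)
    print(f"mean difference = {diff:.2f}, p = {p_value:.3f}")

Under the null hypothesis of no anchoring, the condition labels are exchangeable, so the permutation distribution of the mean difference yields a valid p-value without distributional assumptions.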
50 pages
Thesis Committee:
Srinivasan Seshan, Head, Computer Science Department