Psychology Ph.D. Dissertations


Estimating Performance Mean and Variability With Distributional Rating Scales: A Field Study Towards Improved Performance Measurement

Date of Award


Document Type


Degree Name

Doctor of Philosophy (Ph.D.)



First Advisor

Milton D. Hakel, PhD (Committee Chair)

Second Advisor

Michael J. Zickar, PhD (Committee Member)

Third Advisor

Dara Musher-Eizenman, PhD (Committee Member)

Fourth Advisor

Michael C. Carroll, PhD (Committee Member)


Abstract

Research on distributional rating scales is mixed as to whether they represent an improvement in performance measurement over traditional Likert-type scales. The present study sought to reconcile these mixed results by proposing that distributional ratings provide estimates of mean performance comparable to Likert-type ratings while also contributing conceptually critical estimates of performance variability that Likert-type ratings cannot capture. Approximately 2,090 undergraduate students in 95 classes rated their instructors' performance. Data were collected in a between-classes design with random assignment to either the distributional or the Likert-type rating scale condition. Results indicated no significant differences between the two scale types in estimates of mean performance or in interrater agreement on those estimates. Further, raters used the distributional scale to report some degree of performance variability, and, surprisingly, they agreed on variability estimates as much as or more than they agreed on mean estimates. Distributional rating scales thus have the potential to capture richer performance information than Likert-type scales.
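The contrast between the two scale types can be sketched with a short, hypothetical computation (this is an illustration of the general idea, not the dissertation's actual instrument or analysis code): a distributional rating allocates proportions of observed performance across the scale points, which yields both a mean and a variability estimate, whereas a Likert-type rating yields only a single point value.

```python
# Hypothetical example: a 5-point scale where the rater reports the
# proportion of time the instructor performed at each level.
scale_points = [1, 2, 3, 4, 5]

# Distributional rating: e.g., level 3 about 10% of the time,
# level 4 about 60%, level 5 about 30% (proportions sum to 1).
proportions = [0.0, 0.0, 0.10, 0.60, 0.30]

# Mean performance estimate (expected value over the distribution).
mean = sum(p * x for p, x in zip(proportions, scale_points))

# Performance variability estimate (standard deviation of the
# distribution) -- information a single Likert rating cannot provide.
variance = sum(p * (x - mean) ** 2 for p, x in zip(proportions, scale_points))
sd = variance ** 0.5

print(mean)  # 4.2
print(sd)    # 0.6
```

A Likert-type rater in the same situation would simply mark one point (perhaps "4"), so the mean can still be estimated across raters, but within-ratee variability is lost.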