Abstract

This paper addresses the effects of rater training on the rubric-based scoring of three preservice teacher candidate performance assessments. The project sought to evaluate the consistency of ratings assigned to student learning outcome measures used for program accreditation and to explore whether rater training is needed to increase rater agreement. The project had three phases: (1) authentic student work was rated by department faculty members without rubric training; (2) faculty were trained to administer the rubric scoring guides; and (3) additional student work was rated by faculty after training. Inter-rater agreement was calculated before and after rater training using side-by-side comparisons. Little to no improvement in rater agreement was observed after training. Implications and directions for future research on rater training in the application of rubrics are discussed.
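The abstract does not specify which agreement statistic was used; side-by-side comparisons of two raters' rubric scores are often summarized as exact percent agreement, sometimes supplemented by a chance-corrected index such as Cohen's kappa. The sketch below is illustrative only, assuming two raters scoring the same set of artifacts on an integer rubric scale; the score lists and the choice of statistics are hypothetical, not drawn from this study.

```python
from collections import Counter

def percent_agreement(rater_a, rater_b):
    """Proportion of artifacts on which the two raters assigned the same score."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa)."""
    n = len(rater_a)
    p_observed = percent_agreement(rater_a, rater_b)
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Expected agreement if the two raters' score distributions were independent
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical rubric scores (1-4 scale) for ten artifacts, scored by two raters
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 3, 4, 2, 1, 2, 4, 4, 2, 3]

print(f"Exact agreement: {percent_agreement(rater_a, rater_b):.2f}")
print(f"Cohen's kappa:   {cohens_kappa(rater_a, rater_b):.2f}")
```

Computing the same statistics on pre-training and post-training score sets would allow the kind of before/after comparison the abstract describes.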
