Newman L, Lown B, Jones R, Johansson A, Schwartzstein R. Developing a Peer Assessment of Lecturing Instrument: Lessons Learned. Academic Medicine. 2009;84(8):1104-1110.
Peer assessment of teaching can improve the quality of instruction and contribute to the summative evaluation of teaching effectiveness that informs high-stakes decision making. There is, however, a paucity of validated, criterion-based peer assessment instruments.
The authors describe the development and pilot testing of one such instrument and share lessons learned. The report describes how a task force of the Shapiro Institute for Education and Research at Harvard Medical School and Beth Israel Deaconess Medical Center used the Delphi method to engage academic faculty leaders in developing a new instrument for peer assessment of medical lecturing.
Newman and colleagues describe how they used consensus building to determine the criteria, scoring rubric, and behavioral anchors for the rating scale. To pilot test the instrument, participants assessed a series of medical school lectures. Statistical analysis revealed high internal consistency of the instrument's scores (alpha = 0.87, 95% bootstrap confidence interval [BCI] = 0.80 to 0.91), yet low interrater agreement across all criteria and the global measure (intraclass correlation coefficient = 0.27, 95% BCI = -0.08 to 0.44).
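For readers less familiar with these statistics, the sketch below shows how Cronbach's alpha, an intraclass correlation coefficient, and percentile bootstrap confidence intervals might be computed on a ratings matrix. The article summarized here does not specify which ICC variant or bootstrap procedure the authors used, so the choice of ICC(2,1) (two-way random effects, absolute agreement, single rater) and a simple percentile bootstrap over lectures are assumptions for illustration; the data are simulated, not the study's.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_lectures x k_criteria) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each criterion
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def icc_2_1(scores):
    """ICC(2,1) per Shrout & Fleiss: two-way random effects, absolute
    agreement, single rater. scores is (n_lectures x k_raters)."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def bootstrap_ci(stat, scores, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI, resampling lectures (rows) with replacement."""
    rng = np.random.default_rng(seed)
    n = scores.shape[0]
    reps = [stat(scores[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    tail = (1.0 - level) / 2.0
    return tuple(np.percentile(reps, [100 * tail, 100 * (1 - tail)]))

# Toy data: 10 lectures scored 1-5 by 6 raters (or on 6 criteria).
rng = np.random.default_rng(42)
ratings = np.clip(rng.normal(3.5, 1.0, size=(10, 6)).round(), 1, 5)

print("alpha    =", cronbach_alpha(ratings), bootstrap_ci(cronbach_alpha, ratings))
print("ICC(2,1) =", icc_2_1(ratings), bootstrap_ci(icc_2_1, ratings))
```

The contrast the authors report is worth noting: alpha can be high (the criteria hang together as a scale) while the ICC is low (individual raters disagree on the same lecture), which is exactly the pattern motivating their call for rater training.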
Finally, the authors emphasize the importance of faculty involvement in determining a cohesive set of criteria for assessing lectures. They discuss how providing evidence that a peer assessment instrument is credible and reliable increases faculty trust in the feedback it produces. The authors point to the need for proper peer-rater training to achieve high interrater agreement, and posit that once such agreement is obtained, reliable and accurate peer assessment of teaching could inform the academic promotion process.