Tetsuro Watari, Soichiro Koyama, Yusaku Kato, Yonho Paku, Yoshikiyo Kanada, Hiroaki Sakurai
Fujita Medical Journal, 8(3), 83-87, Aug 2022. Peer-reviewed; lead author.
Objectives: Objective structured clinical examinations (OSCEs) are used to assess clinical competence in medical education. Evaluating OSCEs from video recordings reduces the time and human resources required. To improve inter-rater reliability, such evaluations undergo moderation: a discussion between the raters to reach consistent grading against the rubric criteria. We examined the effect of rubric-based moderation on the inter-rater reliability of a video-recorded OSCE with real patients.
Methods: Forty OSCE videos in which students performed range-of-motion testing of shoulder abduction on real patients were assessed by two raters. The two raters scored videos 1 to 10 without moderation, then videos 11 to 40 with moderation each time. Inter-rater reliability was calculated using the weighted kappa coefficient.
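As an illustration of the statistic used here, the sketch below computes a weighted kappa between two raters with scikit-learn. The rubric scale (0-4), the example scores, and the linear weighting scheme are assumptions for illustration only; the paper does not specify them.

```python
# Minimal sketch of a weighted kappa calculation between two raters.
# The 0-4 rubric scale, the scores, and the linear weighting are
# illustrative assumptions, not the study's data or settings.
from sklearn.metrics import cohen_kappa_score

# Hypothetical rubric scores assigned by each rater to ten videos.
rater_a = [3, 2, 4, 1, 3, 2, 0, 4, 3, 2]
rater_b = [3, 3, 4, 1, 2, 2, 1, 4, 3, 3]

# weights="linear" penalizes disagreements in proportion to their
# distance on the ordinal scale, as is typical for ordinal rubrics.
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```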
Results: The mean weighted kappa coefficients were 0.49 for videos 1 to 10, 0.57 for videos 11 to 20, 0.66 for videos 21 to 30, and 0.82 for videos 31 to 40.
Conclusions: Video-recorded OSCEs with real patients were assessed in a real clinical setting. Repeated moderation improved inter-rater reliability, suggesting that moderation is effective in OSCEs with real patients.