TY - JOUR
T1 - How Reliable are Single-Question Workplace-Based Assessments in Surgery?
AU - Gates, Rebecca S.
AU - Krumm, Andrew E.
AU - ten Cate, Olle
AU - Chen, Xilin
AU - Marcotte, Kayla
AU - Thelen, Angela E.
AU - Deal, Shanley B.
AU - Alseidi, Adnan
AU - Swanson, David
AU - George, Brian C.
N1 - Publisher Copyright:
© 2024 Association of Program Directors in Surgery
PY - 2024/7
Y1 - 2024/7
AB - OBJECTIVE: Workplace-based assessments (WBAs) play an important role in the assessment of surgical trainees. Because these assessment tools are used by many different faculty, inter-rater reliability is important to consider when interpreting WBA data. Although there is evidence supporting the validity of many of these tools, inter-rater reliability evidence is lacking. This study aimed to evaluate the inter-rater reliability of multiple operative WBA tools used in general surgery residency. DESIGN: General surgery residents and teaching faculty were recorded during 6 general surgery operations. Nine faculty raters each reviewed 6 videos and rated each resident on performance (using the Society for Improving Medical Professional Learning, or SIMPL, Performance Scale and the Operative Performance Rating System, or OPRS, Scale), entrustment (using the ten Cate Entrustment-Supervision Scale), and autonomy (using the Zwisch Scale). Ratings were analyzed for inter-rater reliability using percent agreement and intraclass correlations. PARTICIPANTS: Nine faculty members viewed the videos and assigned ratings for multiple WBAs. RESULTS: Absolute intraclass correlation coefficients for each scale ranged from 0.33 to 0.47. CONCLUSIONS: All single-item WBA scales had low to moderate inter-rater reliability. While rater training may improve inter-rater reliability for single observations, many observations by many raters are needed to reliably assess trainee performance in the workplace.
KW - general surgery
KW - inter-rater reliability
KW - reliability
KW - workplace-based assessment
UR - http://www.scopus.com/inward/record.url?scp=85194553168&partnerID=8YFLogxK
DO - 10.1016/j.jsurg.2024.03.015
M3 - Article
C2 - 38816336
AN - SCOPUS:85194553168
SN - 1931-7204
VL - 81
SP - 967
EP - 972
JO - Journal of Surgical Education
JF - Journal of Surgical Education
IS - 7
ER -