TY - JOUR
T1 - Effect of surgical experience and spine subspecialty on the reliability of the AO Spine Upper Cervical Injury Classification System
AU - Lambrechts, Mark J.
AU - Schroeder, Gregory D.
AU - Karamian, Brian A.
AU - Canseco, Jose A.
AU - Oner, F. Cumhur
AU - Benneker, Lorin M.
AU - Bransford, Richard J.
AU - Kandziora, Frank
AU - Rajasekaran, Shanmuganathan
AU - El-Sharkawi, Mohammad
AU - Kanna, Rishi
AU - Joaquim, Andrei Fernandes
AU - Schnake, Klaus
AU - Kepler, Christopher K.
AU - Vaccaro, Alexander R.
N1 - Publisher Copyright:
© 2023 The authors, CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
PY - 2023/1
Y1 - 2023/1
N2 - OBJECTIVE The objective of this paper was to determine the interobserver reliability and intraobserver reproducibility of the AO Spine Upper Cervical Injury Classification System based on surgeon experience (< 5 years, 5-10 years, 10-20 years, and > 20 years) and surgical subspecialty (orthopedic spine surgery, neurosurgery, and “other” surgery). METHODS A total of 11,601 assessments of upper cervical spine injuries were evaluated based on the AO Spine Upper Cervical Injury Classification System. Reliability and reproducibility scores were obtained twice, with a 3-week time interval. Descriptive statistics were utilized to examine the percentage of accurately classified injuries, and Pearson's chi-square or Fisher's exact test was used to screen for potentially relevant differences between study participants. Kappa coefficients (κ) determined the interobserver reliability and intraobserver reproducibility. RESULTS The intraobserver reproducibility was substantial for surgeon experience level (< 5 years: 0.74 vs 5-10 years: 0.69 vs 10-20 years: 0.69 vs > 20 years: 0.70) and surgical subspecialty (orthopedic spine: 0.71 vs neurosurgery: 0.69 vs other: 0.68). Furthermore, the interobserver reliability was substantial for all surgical experience groups on assessment 1 (< 5 years: 0.67 vs 5-10 years: 0.62 vs 10-20 years: 0.61 vs > 20 years: 0.62), and only surgeons with > 20 years of experience did not have substantial reliability on assessment 2 (< 5 years: 0.62 vs 5-10 years: 0.61 vs 10-20 years: 0.61 vs > 20 years: 0.59). Orthopedic spine surgeons and neurosurgeons had substantial interobserver reliability on both assessment 1 (0.64 vs 0.63) and assessment 2 (0.62 vs 0.63), while other surgeons had moderate reliability on assessment 1 (0.43) and fair reliability on assessment 2 (0.36).
CONCLUSIONS The international reliability and reproducibility scores for the AO Spine Upper Cervical Injury Classification System demonstrated substantial intraobserver reproducibility and interobserver reliability regardless of surgical experience and spine subspecialty. These results support the global application of this classification system.
KW - AO Spine
KW - neurosurgeon
KW - orthopedic spine surgeon
KW - reliability
KW - reproducibility
KW - trauma
KW - upper cervical spine
UR - http://www.scopus.com/inward/record.url?scp=85145425558&partnerID=8YFLogxK
U2 - 10.3171/2022.6.SPINE22454
DO - 10.3171/2022.6.SPINE22454
M3 - Article
C2 - 35986731
AN - SCOPUS:85145425558
SN - 1547-5654
VL - 38
SP - 31
EP - 41
JO - Journal of Neurosurgery: Spine
JF - Journal of Neurosurgery: Spine
IS - 1
ER -