There is no such thing as a validated prediction model

Ben Van Calster, Ewout W. Steyerberg, Laure Wynants, Maarten van Smeden*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer review

Abstract

Background: Clinical prediction models should be validated before implementation in clinical practice. But is favorable performance at internal validation or one external validation sufficient to claim that a prediction model works well in the intended clinical context?

Main body: We argue to the contrary because (1) patient populations vary, (2) measurement procedures vary, and (3) populations and measurements change over time. Hence, we have to expect heterogeneity in model performance between locations and settings, and across time. It follows that prediction models are never truly validated. This does not imply that validation is not important. Rather, the current focus on developing new models should shift to a focus on more extensive, well-conducted, and well-reported validation studies of promising models.

Conclusion: Principled validation strategies are needed to understand and quantify heterogeneity, monitor performance over time, and update prediction models when appropriate. Such strategies will help to ensure that prediction models stay up-to-date and safe to support clinical decision-making.

Original language: English
Article number: 70
Journal: BMC Medicine
Volume: 21
Issue number: 1
DOIs
Publication status: Published - 24 Feb 2023

Keywords

  • Calibration
  • Discrimination
  • External validation
  • Heterogeneity
  • Internal validation
  • Model performance
  • Predictive analytics
  • Risk prediction models

