Abstract
Background: Clinical prediction models should be validated before implementation in clinical practice. But is favorable performance at internal validation or one external validation sufficient to claim that a prediction model works well in the intended clinical context?

Main body: We argue to the contrary because (1) patient populations vary, (2) measurement procedures vary, and (3) populations and measurements change over time. Hence, we have to expect heterogeneity in model performance between locations and settings, and across time. It follows that prediction models are never truly validated. This does not imply that validation is unimportant. Rather, the current focus on developing new models should shift to more extensive, well-conducted, and well-reported validation studies of promising models.

Conclusion: Principled validation strategies are needed to understand and quantify heterogeneity, monitor performance over time, and update prediction models when appropriate. Such strategies will help to ensure that prediction models stay up-to-date and safe to support clinical decision-making.
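To make the performance concepts concrete, the sketch below shows one common way to quantify discrimination (c-statistic) and calibration (intercept and slope) when evaluating an existing model on an external validation cohort. This is an illustrative example, not code from the article: the simulated data, variable names (`y`, `p`), and the choice of `statsmodels`/`scikit-learn` are assumptions.

```python
# Minimal sketch: discrimination and calibration of an existing prediction model
# on an external validation dataset. Data here are simulated for illustration;
# in practice `y` are observed 0/1 outcomes and `p` the model's predicted risks.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# --- Hypothetical external validation cohort (placeholder for real data) ---
n = 2000
lp_true = rng.normal(-1.0, 1.2, n)                  # "true" linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-lp_true)))     # observed binary outcomes
p = 1 / (1 + np.exp(-(0.4 + 0.8 * lp_true)))        # deliberately miscalibrated predicted risks

# Discrimination: c-statistic (area under the ROC curve)
c_stat = roc_auc_score(y, p)

# Calibration: regress outcomes on the logit of the predicted risks
logit_p = np.log(p / (1 - p))

# Calibration slope: coefficient of logit(p) in a logistic regression (ideal: 1)
slope_fit = sm.GLM(y, sm.add_constant(logit_p), family=sm.families.Binomial()).fit()
cal_slope = slope_fit.params[1]

# Calibration-in-the-large: intercept with logit(p) as an offset (ideal: 0)
int_fit = sm.GLM(y, np.ones(n), family=sm.families.Binomial(), offset=logit_p).fit()
cal_intercept = int_fit.params[0]

print(f"c-statistic:           {c_stat:.3f}")
print(f"calibration slope:     {cal_slope:.3f}")
print(f"calibration intercept: {cal_intercept:.3f}")
```

In the spirit of the abstract, such metrics would be computed not once but repeatedly, across sites, settings, and calendar time, to expose heterogeneity in performance and to signal when a model needs updating.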
| Original language | English |
|---|---|
| Article number | 70 |
| Journal | BMC Medicine |
| Volume | 21 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 24 Feb 2023 |
Keywords
- Calibration
- Discrimination
- External validation
- Heterogeneity
- Internal validation
- Model performance
- Predictive analytics
- Risk prediction models