Evaluation of performance measures in predictive artificial intelligence models to support medical decisions: overview and guidance

  • Ben Van Calster
  • Gary S. Collins
  • Andrew J. Vickers
  • Laure Wynants
  • Kathleen F. Kerr
  • Lasai Barreñada
  • Gael Varoquaux
  • Karandeep Singh
  • Karel G. M. Moons
  • Tina Hernandez-Boussard
  • Dirk Timmerman
  • David J. McLernon
  • Maarten van Smeden
  • Ewout W. Steyerberg*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

Numerous measures have been proposed to illustrate the performance of predictive artificial intelligence (AI) models. Selecting appropriate performance measures is essential for predictive AI models intended for use in medical practice. Poorly performing models are misleading and may lead to wrong clinical decisions that can be detrimental to patients and increase financial costs. In this Viewpoint, we assess the merits of classic and contemporary performance measures when validating predictive AI models for medical practice, focusing on models that estimate probabilities for a binary outcome. We discuss 32 performance measures covering five performance domains (discrimination, calibration, overall performance, classification, and clinical utility), along with corresponding graphical assessments. The first four domains address statistical performance, whereas the fifth domain covers decision-analytical performance. We discuss two key characteristics when selecting a performance measure and explain why these characteristics are important: (1) whether the measure's expected value is optimised when calculated using the correct probabilities (ie, whether it is a proper measure) and (2) whether the measure solely reflects statistical performance or also reflects decision-analytical performance by properly accounting for misclassification costs. 17 measures showed both characteristics, 14 showed one, and one (F1 score) showed neither. All classification measures were improper for clinically relevant decision thresholds other than when the threshold was 0·5 or equal to the true prevalence. We illustrate these measures and characteristics using the ADNEX model, which predicts the probability of malignancy in women with an ovarian tumour.
We recommend the following measures and plots as essential to report: area under the receiver operating characteristic curve, calibration plot, a clinical utility measure such as net benefit with decision curve analysis, and a plot showing probability distributions by outcome category.
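The clinical utility measure recommended above, net benefit, weighs true positives against false positives using the odds of the decision threshold: NB = TP/n − (FP/n) × pt/(1 − pt). A minimal sketch of this calculation (illustrative names and toy data; not code from the article) is:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit = TP/n - FP/n * (pt / (1 - pt)), with pt the threshold."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Toy data: outcomes (1 = malignant) and predicted probabilities.
y_true = [1, 0, 1, 0, 1, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.6, 0.1]

# A decision curve plots net benefit over a range of clinically relevant
# thresholds, compared with "treat all" and "treat none" (net benefit 0).
for pt in (0.1, 0.3, 0.5):
    nb_model = net_benefit(y_true, y_prob, pt)
    nb_all = net_benefit(y_true, [1.0] * len(y_true), pt)  # treat-all strategy
    print(f"pt={pt}: model={nb_model:.3f}, treat-all={nb_all:.3f}")
```

At each threshold, a useful model should exceed the net benefit of both treating everyone and treating no one; plotting these three curves across thresholds gives the decision curve analysis the abstract recommends.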

Original language: English
Article number: 100916
Number of pages: 13
Journal: The Lancet Digital Health
Volume: 7
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2025
