A comparison of hyperparameter tuning procedures for clinical prediction models: A simulation study

Zoë S. Dunias*, Ben Van Calster, Dirk Timmerman, Anne Laure Boulesteix, Maarten van Smeden

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Tuning hyperparameters, such as the regularization parameter in Ridge or Lasso regression, is often aimed at improving the predictive performance of risk prediction models. In this study, various hyperparameter tuning procedures for clinical prediction models were systematically compared and evaluated in low-dimensional data. The focus was on out-of-sample predictive performance (discrimination, calibration, and overall prediction error) of risk prediction models developed using Ridge, Lasso, Elastic Net, or Random Forest. The influence of sample size, number of predictors, and events fraction on the performance of the hyperparameter tuning procedures was studied using extensive simulations. The results indicate important differences between tuning procedures in calibration performance, while discriminative performance was generally similar. The one-standard-error rule applied to cross-validation (1SE CV) often resulted in severe miscalibration. Standard non-repeated and repeated cross-validation (both 5-fold and 10-fold) performed similarly well and outperformed the other tuning procedures. Bootstrap-based tuning showed a slight tendency toward more severe miscalibration than standard cross-validation-based tuning procedures. Differences between tuning procedures were larger for smaller sample sizes, lower events fractions, and fewer predictors. These results imply that the choice of tuning procedure can have a profound influence on the predictive performance of prediction models. They support the application of standard 5-fold or 10-fold cross-validation that minimizes out-of-sample prediction error. Despite its increased computational burden, repeated cross-validation showed no clear benefit over non-repeated cross-validation for hyperparameter tuning. We warn against the potentially detrimental effects on model calibration of the popular 1SE CV rule for tuning prediction models in low-dimensional settings.
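
To make the compared procedures concrete, below is a minimal sketch of tuning the Lasso penalty for a logistic model in two of the ways the abstract contrasts: standard 5-fold cross-validation that minimizes out-of-sample log loss, and the one-standard-error (1SE CV) rule, which picks the most heavily penalized model within one standard error of that minimum. This is not the authors' simulation code; the sample size, events fraction, and penalty grid are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): 5-fold CV tuning of a Lasso
# logistic model, comparing the loss-minimizing penalty with the 1SE rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold

# Assumed low-dimensional setting: n = 500, 10 predictors, events fraction ~0.2.
X, y = make_classification(n_samples=500, n_features=10, weights=[0.8],
                           random_state=0)

Cs = np.logspace(-3, 2, 30)  # grid of inverse regularization strengths (assumed)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Mean and standard error of the out-of-sample log loss for each candidate C.
fold_losses = np.empty((len(Cs), cv.get_n_splits()))
for i, C in enumerate(Cs):
    for j, (tr, te) in enumerate(cv.split(X, y)):
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        model.fit(X[tr], y[tr])
        fold_losses[i, j] = log_loss(y[te], model.predict_proba(X[te])[:, 1])
mean_loss = fold_losses.mean(axis=1)
se_loss = fold_losses.std(axis=1, ddof=1) / np.sqrt(cv.get_n_splits())

# Standard CV: the C that minimizes mean out-of-sample prediction error.
i_min = int(mean_loss.argmin())
# 1SE rule: the smallest C (strongest penalty) whose mean loss stays within
# one standard error of the minimum; Cs is ascending, so take the first such C.
within_1se = mean_loss <= mean_loss[i_min] + se_loss[i_min]
i_1se = int(np.argmax(within_1se))

print(f"C_min = {Cs[i_min]:.4g}, C_1SE = {Cs[i_1se]:.4g}")
```

On a typical run, the 1SE rule selects a smaller C (a stronger penalty) than the loss-minimizing choice; this extra shrinkage toward overly conservative risk estimates is the mechanism behind the miscalibration the abstract warns about in low-dimensional settings.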

Original language: English
Article number: 9932
Pages (from-to): 1119-1134
Number of pages: 16
Journal: Statistics in Medicine
Volume: 43
Issue number: 6
Publication status: Published - 15 Mar 2024

Keywords

  • cross-validation
  • hyperparameter tuning
  • penalized regression
  • prediction models
  • Random Forest
