Abstract
OBJECTIVES: To assess improvement in the completeness of reporting of coronavirus disease 2019 (COVID-19) prediction models after the peer review process.
STUDY DESIGN AND SETTING: Studies included in a living systematic review of COVID-19 prediction models, with both preprint and peer-reviewed published versions available, were assessed. The primary outcome was the change in percentage adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) reporting guideline between preprint and published manuscripts.
RESULTS: Nineteen studies were identified, including seven (37%) model development studies, two (11%) external validations of existing models, and ten (53%) papers reporting both development and external validation of the same model. Median percentage adherence among preprint versions was 33% (min-max: 10 to 68%). Percentage adherence to TRIPOD items increased from preprint to publication in 11 of 19 studies (58%) and was unchanged in the remaining eight. The median change in adherence was just 3 percentage points (pp; min-max: 0 to 14 pp) across all studies. No association was observed between the change in percentage adherence and preprint score, journal impact factor, or time between journal submission and acceptance.
CONCLUSIONS: The reporting quality of preprints of COVID-19 prediction modeling studies was poor and improved only marginally after peer review, suggesting that peer review had a trivial effect on the completeness of reporting during the pandemic.
Original language | English
---|---
Pages (from-to) | 75-84
Number of pages | 10
Journal | Journal of Clinical Epidemiology
Volume | 154
Early online date | 14 Dec 2022
DOIs |
Publication status | Published - Feb 2023
Keywords
- Adherence
- COVID-19
- Peer review
- Prediction modeling
- Prognosis and diagnosis
- Reporting guidelines
- TRIPOD