Abstract
Background: Drawing conclusions from systematic reviews of test accuracy studies without considering the methodological quality (risk of bias) of included studies may lead to unwarranted optimism about the value of the test(s) under study. We sought to identify to what extent the results of quality assessment of included studies are incorporated in the conclusions of diagnostic accuracy reviews.

Methods: We searched MEDLINE and EMBASE for test accuracy reviews published between May and September 2012. We examined the abstracts and main texts of these reviews to see whether and how the results of quality assessment were linked to the accuracy estimates when drawing conclusions.

Results: We included 65 reviews, of which 53 contained a meta-analysis. Sixty articles (92%) had formally assessed the methodological quality of included studies, most often using the original QUADAS tool (n = 44, 68%). Quality assessment was mentioned in 28 abstracts (43%), with a majority (n = 21) mentioning it in the methods section. In only 5 abstracts (8%) were the results of quality assessment incorporated in the conclusions. Thirteen reviews (20%) presented results of quality assessment in the main text only, without further discussion. Forty-seven reviews (72%) discussed the results of quality assessment; the most frequent form was as limitations in assessing quality (n = 28). Only 6 reviews (9%) further linked the results of quality assessment to their conclusions, 3 of which did not conduct a meta-analysis because of limitations in the quality of the included studies. Of the reviews with a meta-analysis, 19 (36%) incorporated quality in the analysis. Eight reported significant effects of quality on the pooled estimates; in none of these were the effects factored into the conclusions.

Conclusion: While almost all recent diagnostic accuracy reviews evaluate the quality of included studies, very few consider the results of quality assessment when drawing conclusions.
The reporting of systematic reviews of test accuracy should improve if readers want to be informed not only about the limitations of the available evidence, but also about the associated implications for the performance of the evaluated tests.
| Original language | English |
| --- | --- |
| Article number | 33 |
| Number of pages | 8 |
| Journal | BMC Medical Research Methodology [E] |
| Volume | 14 |
| DOIs | |
| Publication status | Published - 3 Mar 2014 |
Keywords
- Bias (Epidemiology)
- Cross-Sectional Studies
- Data Interpretation, Statistical
- Diagnostic Errors
- Diagnostic Tests, Routine
- Humans
- Quality Assurance, Health Care
- Research Design
- Sensitivity and Specificity