An empirical assessment of the validity of uncontrolled comparisons of the accuracy of diagnostic tests

Authors
Takwoingi Y, Dinnes J, Leeflang M, Deeks J
Abstract
Background: Cochrane reviews of diagnostic test accuracy aim to provide evidence to support the selection of diagnostic tests by comparing the performance of tests or test combinations. Studies that directly compare tests within patients or between randomized groups are preferable but uncommon. Consequently, between-study uncontrolled (indirect) comparisons of tests may provide the only available evidence. Such comparisons are likely to be prone to bias, as are indirect comparisons between healthcare interventions (Glenny et al 2005), and possibly more severely so, owing to considerable heterogeneity between studies and the lack of a common comparator test.

Objectives: To estimate the bias and reliability of meta-analyses of uncontrolled comparisons of diagnostic accuracy studies relative to meta-analyses of comparative studies.

Methods: Meta-analyses that included test comparisons with both comparative and uncontrolled studies were identified from a cohort of higher-quality diagnostic reviews (Dinnes et al 2005) indexed in the Database of Abstracts of Reviews of Effects up to December 2002, supplemented by more recent searches. The hierarchical summary ROC (HSROC) model was used to synthesize pairs of sensitivity and specificity in each meta-analysis and to estimate and compare accuracy measures for both the uncontrolled test comparisons and the comparative studies.

Results: Ninety-four comparative reviews were identified, of which 30 provided data to conduct both direct and uncontrolled test comparisons. The degree of bias and the variability of relative sensitivities, specificities, and diagnostic odds ratios between comparative and uncontrolled comparisons were analysed. Further results will be available at the Colloquium.

Conclusions: Test selection is critical to health technology assessment. In the absence of comparative studies, selection has often relied on comparisons of meta-analyses of uncontrolled studies.
Limitations of such comparisons should be considered when making inferences on the relative accuracy of competing tests, and when encouraging funders to ensure that future test accuracy studies address important comparative questions.
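To make the accuracy measures concrete, the following is a minimal sketch of the per-study quantities that feed such a meta-analysis: sensitivity, specificity, and the diagnostic odds ratio (DOR) from a 2x2 table, and a relative DOR for an uncontrolled comparison of two tests. All counts are hypothetical, and this sketch is not the HSROC model itself, which is a hierarchical (nonlinear mixed) model normally fitted with specialised software.

```python
# Illustrative sketch only: per-study accuracy measures for a diagnostic
# test. The HSROC meta-analysis model is NOT implemented here; all cell
# counts below are hypothetical.

def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table.

    Applies the conventional 0.5 continuity correction when any cell is zero.
    """
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    dor = (tp * tn) / (fp * fn)    # (sens/(1-sens)) / ((1-spec)/spec)
    return sens, spec, dor

# Hypothetical uncontrolled comparison: each test evaluated in a
# different study population, with no common comparator.
sens_a, spec_a, dor_a = accuracy_measures(tp=90, fp=20, fn=10, tn=80)
sens_b, spec_b, dor_b = accuracy_measures(tp=80, fp=10, fn=20, tn=90)

relative_dor = dor_a / dor_b  # >1 favours test A on overall accuracy
```

Note how the two tests here trade sensitivity against specificity yet share the same DOR, which is one reason the paper compares relative sensitivities and specificities alongside relative diagnostic odds ratios.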