Reporting and methods in systematic reviews of comparative accuracy

Authors
Takwoingi Y1, Riley R1, Deeks J1
1University of Birmingham, UK
Abstract
Background: Systematic reviews in which the accuracy of two or more tests is compared can provide evidence to support the clinical validity of each test and aid test selection. Because test evaluation is often limited to the assessment of test accuracy, it is vital that, in the rapidly expanding evidence base, reviews and meta-analyses that compare the accuracy of multiple tests are conducted and reported appropriately, to avoid misleading conclusions and recommendations.

Objectives: To provide a descriptive survey of current practice with a view to identifying good practice and problems, and to make suggestions for the improvement of future reviews.

Methods: Systematic reviews of test accuracy published between 1994 and October 2012 were identified in the Database of Abstracts of Reviews of Effects (DARE). We placed no restrictions on language of publication, test type, purpose of the test (screening, staging, diagnosis, etc.), setting, or disease area. We extracted information on the target condition, patient population, tests evaluated, purpose of the tests, analysis methods, and reporting characteristics of each review. Descriptive statistics were computed. We also compared reporting characteristics against the most relevant reporting guideline, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist.

Results: We included 248 reviews that evaluated the accuracy of two or more tests. The reviews contained 6915 studies (a study may contribute to more than one meta-analysis). Initial results indicate that tests are rarely compared formally within the same meta-analysis; instead, a separate meta-analysis is typically performed for each test, and comparisons are made informally by contrasting summary estimates across meta-analyses. Data analysis is still ongoing and results will be available for presentation at the colloquium.

Conclusions: Initial findings highlight the need for a better understanding of methods and strategies for comparing tests in meta-analysis, and for specific guidance on reporting reviews of comparative accuracy.