Abstract
Background: Indirect comparison (IC) is often used in meta-analysis to evaluate the relative effects of competing interventions when head-to-head randomized controlled trials are lacking, and to synthesize complex evidence. Such an indirect approach is more susceptible to bias, especially selection bias, than direct comparison.
Objectives: To understand the properties and potential problems of statistical methods commonly used in IC meta-analysis.
Methods: We searched The Cochrane Library (Issue 4, 2007), MEDLINE (Dec 2007) and the Chinese Biomedical Disk (CBM, Dec 2007). All methodological papers relevant to IC, or papers aimed at identifying and adjusting for bias when performing IC, were included. Meta-analyses that claimed to be ICs in the title or abstract were also included.
Results: (1) Twenty methodological papers suggested five statistical methods for IC: the naïve method, adjusted indirect comparison (AIC), meta-regression, methods using generalized linear (mixed) models, and Bayesian methods. None of the papers recommended the naïve method. (2) We found 58 IC meta-analysis papers: 3 used the naïve method, 26 performed AIC, 6 used meta-regression, 4 used mixed models and 9 used Bayesian methods. Forty papers assessed the quality of the included trials and 48 reported a heterogeneity test. Of the 57 papers concerned with the baseline comparability of the included trials, 27 had comparable baselines or adjusted for the impact of non-comparable factors on treatment effects. (3) A significant discrepancy between the direct and adjusted indirect estimates was found in 3 of the 9 comparisons reported in 8 papers, and the direction of the discrepancy was unpredictable.
Conclusions: When conducting an IC meta-analysis, an adjusted method should be used, and the methodological quality, heterogeneity and similarity of the included trials should be assessed to ensure the validity and reliability of the estimates. More research and experience are required before the scope and role of IC can be established.
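
Note: The adjusted indirect comparison (AIC) referred to above is commonly attributed to Bucher et al. (1997), in which two interventions A and B are compared through a common comparator C. As a minimal illustration only (the function name and numbers below are hypothetical, not taken from the review), the following Python sketch computes an indirect estimate and its 95% confidence interval on the log odds ratio scale, where the point estimates subtract and the variances of the two direct estimates add.

import math

def adjusted_indirect_comparison(d_ac, se_ac, d_bc, se_bc):
    # Bucher method: on an additive scale (e.g. log odds ratio),
    # the indirect A-vs-B effect is the difference between the two
    # direct effects against the common comparator C, and the
    # variances of the two direct estimates add.
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)  # 95% CI
    return d_ab, se_ab, ci

# Hypothetical log odds ratios from two pairwise meta-analyses
# sharing placebo (C) as the common comparator.
d_ab, se_ab, ci = adjusted_indirect_comparison(-0.40, 0.12, -0.25, 0.15)
print(f"A vs B: logOR={d_ab:.2f}, SE={se_ab:.2f}, "
      f"95% CI=({ci[0]:.2f}, {ci[1]:.2f})")

Because the two variances add, the indirect estimate is always less precise than either direct estimate, which is one reason the abstract stresses assessing the similarity of the included trials before relying on such comparisons.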