Abstract
Background: In many areas, randomised controlled trials (RCTs) may not have directly compared specific treatments of interest. For example, each of two drugs may have been compared to placebo in RCTs, but not with each other directly.
Methods: (a) The Database of Abstracts of Reviews of Effectiveness (1994-1998) was searched for systematic reviews involving meta-analysis of RCTs that reported both direct and indirect comparisons, or indirect comparisons alone. (b) A systematic review of Medline and other databases was carried out to identify published methods for analysing indirect comparisons. (c) Study designs were created using data from the International Stroke Trial. Random samples of patients receiving aspirin, heparin or placebo in 16 centres were used to create meta-analyses in which half the trials compared aspirin with placebo and half compared heparin with placebo. Methods for indirect comparisons were used to estimate the contrast between aspirin and heparin. The whole process was repeated 1000 times, and the results were compared with direct comparisons and with theoretical results.
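The repeated-sampling design in (c) can be sketched in miniature as follows. This is an illustrative simulation only: the event probability, odds ratios, arm sizes, and number of replications are assumed for the example and are not taken from the International Stroke Trial. Each replication builds one A-vs-placebo and one B-vs-placebo trial and forms the indirect log odds ratio for A vs B as the difference of the two trial-level log odds ratios.

```python
import math
import random

random.seed(0)

def trial(p_control, or_active, n=200):
    """Simulate one two-arm trial; return (events_active, events_control, n per arm)."""
    odds = or_active * p_control / (1 - p_control)
    p_active = odds / (1 + odds)
    ea = sum(random.random() < p_active for _ in range(n))
    ec = sum(random.random() < p_control for _ in range(n))
    return ea, ec, n

def log_or(ea, ec, n):
    """Log odds ratio with a 0.5 continuity correction to guard against zero cells."""
    a, b = ea + 0.5, n - ea + 0.5
    c, d = ec + 0.5, n - ec + 0.5
    return math.log((a / b) / (c / d))

# One A-vs-placebo and one B-vs-placebo trial per replication;
# the indirect log OR for A vs B is the difference of the two log ORs.
reps = []
for _ in range(1000):
    la = log_or(*trial(0.30, 0.7))   # A vs placebo, true OR 0.7 (assumed)
    lb = log_or(*trial(0.30, 0.7))   # B vs placebo, true OR 0.7 (assumed)
    reps.append(la - lb)             # indirect log OR for A vs B (true value 0)

mean_ab = sum(reps) / len(reps)
```

Because both drugs are given the same true odds ratio against placebo here, the indirect estimates should scatter around zero on the log scale; the spread of `reps` illustrates the extra imprecision of indirect comparison relative to a head-to-head trial of the same size.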
Results: Of reviews which included meta-analyses of two or more RCTs, 26/237 (11%) included indirect comparisons. Few studies had carried out a formal analysis. Some reviews based analysis on the naive addition of data from the treatment arms of interest. Interpretation of indirect comparisons was not always appropriate. Very few methodological papers were identified, of which only one suggested a simple method: Bucher et al (J Clin Epidemiol 1997) proposed using the ratio of two separate odds ratios. Simulation studies showed that the naive method is liable to bias and also produces over-precise answers. Several methods provide correct answers if strong but unverifiable assumptions are fulfilled. Four times as many similarly sized trials are needed for the indirect approach to have the same power as directly randomised comparisons.
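The adjusted method attributed to Bucher et al above can be sketched as follows: on the log scale the indirect odds ratio for A vs B is the difference of the two placebo-controlled log odds ratios, and the variances add. The function name and the numeric inputs in the usage line are illustrative, not from the paper.

```python
import math

def bucher_indirect(or_ap, se_log_ap, or_bp, se_log_bp):
    """Adjusted indirect comparison of A vs B via a common comparator P.

    log OR_AB = log OR_AP - log OR_BP; variances of the log ORs add,
    which is why the indirect estimate is less precise than a direct one.
    Returns (OR_AB, SE of log OR_AB, 95% CI for OR_AB).
    """
    log_or_ab = math.log(or_ap) - math.log(or_bp)
    se_ab = math.sqrt(se_log_ap ** 2 + se_log_bp ** 2)
    ci = (math.exp(log_or_ab - 1.96 * se_ab),
          math.exp(log_or_ab + 1.96 * se_ab))
    return math.exp(log_or_ab), se_ab, ci

# Illustrative inputs: OR 0.8 for A vs placebo, OR 1.0 for B vs placebo,
# each log OR with standard error 0.1.
or_ab, se_ab, ci_ab = bucher_indirect(0.8, 0.1, 1.0, 0.1)
```

The variance addition is also the intuition behind the power result quoted above: with equal-sized trials the indirect variance is roughly double that of a direct comparison per trial pair, so substantially more trials are needed for the indirect approach to match the power of directly randomised comparisons.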
Conclusions: Systematic reviews often include indirect comparisons. Appropriate methods of analysis should be used, and interpretations should be more cautious in view of the observational nature of the data.