Indirect comparisons for evaluating healthcare interventions: review of published systematic reviews and discussion of methodological problems

Authors
Song F, Yoon Y, Walsh T, Glenny A, Eastwood A, Altman D
Abstract
Background/Objectives: Adjusted indirect comparison (AIC) and mixed treatment comparison (MTC) have been increasingly used to evaluate competing healthcare interventions. The validity of AIC and MTC depends on a set of key assumptions concerning moderators of relative treatment effect. This paper investigates methodological problems in the application of AIC and MTC in systematic reviews.

Methods: We searched PubMed to identify systematic reviews published between 2000 and 2007 in which an indirect comparison approach was explicitly used. Data extracted from the identified reviews included the comprehensiveness of the literature search, the method used for indirect comparison, and whether the similarity assumption was explicitly discussed.

Results: Eighty-one independent review reports were included. Indirect comparison has been increasingly, or at least more explicitly, used in research syntheses to evaluate a wide range of healthcare interventions. AIC using classical frequentist methods was the most commonly used approach (46/81). More complex methods (e.g., network or Bayesian hierarchical meta-analysis and MTC) were used in 17 reports. In 12 reports, the indirect comparison was informal. In four reports, results from different trials were naively compared without a common treatment control. The key assumption of trial similarity was explicitly stated or discussed in only 39 of the 81 reports. Explicit reference to the similarity assumption was associated with efforts to examine or improve similarity between trials (28/39 vs. 14/42). The assumption of consistency was not made explicit in most cases where direct and indirect evidence were compared or combined (19/30). In nine reports, evidence from head-to-head comparison trials was not systematically searched for or was not included.

Conclusions: The methodological problems identified include unclear understanding of the underlying assumptions, inappropriate searching for and inclusion of relevant trials, use of inappropriate or flawed methods, lack of objective and validated methods to assess or improve trial similarity, and inadequate comparison or inappropriate combination of direct and indirect evidence. These methodological problems need to be adequately resolved. In particular, similarity between trials and consistency of evidence from different sources should be assessed more explicitly and systematically when indirect approaches are applied.
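
For readers unfamiliar with the classical frequentist approach mentioned in the Results, the following is a standard textbook formulation of adjusted indirect comparison (commonly attributed to Bucher and colleagues) and of the consistency relation. It is provided only as background and is not reproduced from the review itself; here d denotes a pooled relative effect on the log scale (e.g., a log odds ratio) and C is the common comparator linking treatments A and B.

    \[
    \hat{d}^{\,\mathrm{ind}}_{AB} = \hat{d}_{AC} - \hat{d}_{BC},
    \qquad
    \operatorname{Var}\!\bigl(\hat{d}^{\,\mathrm{ind}}_{AB}\bigr)
      = \operatorname{Var}\!\bigl(\hat{d}_{AC}\bigr)
      + \operatorname{Var}\!\bigl(\hat{d}_{BC}\bigr)
    \]

    \[
    \text{Consistency: } \hat{d}^{\,\mathrm{dir}}_{AB} \approx \hat{d}^{\,\mathrm{ind}}_{AB},
    \qquad
    z = \frac{\hat{d}^{\,\mathrm{dir}}_{AB} - \hat{d}^{\,\mathrm{ind}}_{AB}}
             {\sqrt{\operatorname{Var}\!\bigl(\hat{d}^{\,\mathrm{dir}}_{AB}\bigr)
                   + \operatorname{Var}\!\bigl(\hat{d}^{\,\mathrm{ind}}_{AB}\bigr)}}
    \]

The first identity is valid only under the similarity assumption discussed in the abstract, i.e., that the A-versus-C and B-versus-C trials do not differ systematically in factors that modify the relative effect; the z statistic is one simple check of the consistency assumption when direct evidence is also available.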
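
A minimal computational sketch of the same calculation is given below, assuming that pooled log odds ratios and standard errors for A versus C and B versus C (and, for the consistency check, a direct A-versus-B estimate) are already available from conventional pairwise meta-analyses. The function names and all numerical inputs are hypothetical illustrations, not data or code from the reviewed reports.

    from math import sqrt, erf

    def norm_cdf(x):
        # Standard normal CDF via the error function (keeps the sketch stdlib-only).
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def adjusted_indirect(d_ac, se_ac, d_bc, se_bc):
        # Adjusted indirect estimate of A vs B via the common comparator C
        # (log scale), with its standard error.
        d_ab = d_ac - d_bc
        se_ab = sqrt(se_ac ** 2 + se_bc ** 2)
        return d_ab, se_ab

    def inconsistency_test(d_dir, se_dir, d_ind, se_ind):
        # Simple z-test of the difference between direct and indirect estimates.
        diff = d_dir - d_ind
        se_diff = sqrt(se_dir ** 2 + se_ind ** 2)
        z = diff / se_diff
        p = 2.0 * (1.0 - norm_cdf(abs(z)))
        return diff, se_diff, p

    # Hypothetical pooled log odds ratios and standard errors.
    d_ind, se_ind = adjusted_indirect(d_ac=-0.40, se_ac=0.15, d_bc=-0.10, se_bc=0.20)
    low, high = d_ind - 1.96 * se_ind, d_ind + 1.96 * se_ind
    print(f"Indirect log OR, A vs B: {d_ind:.2f} (95% CI {low:.2f} to {high:.2f})")

    # Hypothetical direct head-to-head estimate for the consistency check.
    diff, se_diff, p = inconsistency_test(d_dir=-0.20, se_dir=0.18, d_ind=d_ind, se_ind=se_ind)
    print(f"Direct minus indirect: {diff:.2f} (SE {se_diff:.2f}), two-sided p = {p:.2f}")

Using only the Python standard library keeps the sketch self-contained; in practice the pooled inputs would come from a meta-analysis package, and the more complex network or Bayesian hierarchical (MTC) models mentioned in the Results require a dedicated modelling framework rather than this two-step calculation.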