Abstract
Background: Comparing the effectiveness of interventions is now a requirement for regulatory approval in several countries and also aids clinical and public health decision-making. Certain agencies now mandate a particular analytic approach over others. However, in the absence of head-to-head randomized controlled trials (RCTs), determining the relative effectiveness of interventions is challenging.
Objectives: We aimed to determine the comparative validity of adjusted indirect comparisons of RCTs against the more complex Bayesian mixed treatment comparison.
Methods: Using systematic searching, we identified all meta-analyses evaluating more than three interventions for a similar disease state. We abstracted data from each clinical trial, including population size (n) and outcomes. We conducted fixed-effects meta-analyses of each intervention versus a common comparator and then applied the adjusted indirect comparison. We also conducted a mixed treatment meta-analysis on all trials and compared the point estimates and confidence/credible intervals (CIs/CrIs) to identify important differences.
Results: We included data from seven reviews that met our inclusion criteria, allowing a total of 51 comparisons. According to our a priori consistency rule, the estimated odds ratios and associated uncertainty intervals differed importantly between the two approaches for only 10 evaluations.
Conclusions: More often than not, the adjusted indirect comparison yields estimates of relative effectiveness consistent with those of the mixed treatment comparison. In the absence of head-to-head evidence, it is impossible to determine which approach is most likely correct. The choice of approach should be made by the study teams, not mandated by regulatory agencies.
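The adjusted indirect comparison referred to in the Methods is commonly attributed to Bucher et al.: when treatments A and B have each been compared with a common comparator C but not with each other, the indirect log odds ratio for A versus B is the difference of the two direct log odds ratios, and its variance is the sum of their variances. A minimal sketch, with made-up illustrative inputs (the abstract does not report trial-level estimates):

```python
import math

def bucher_indirect_comparison(log_or_ac, se_ac, log_or_bc, se_bc, z=1.96):
    """Adjusted indirect comparison of A vs B via a common comparator C.

    The indirect log odds ratio is the difference of the two direct
    log odds ratios; its standard error follows from summing variances.
    Returns the odds ratio and an approximate 95% confidence interval.
    """
    log_or_ab = log_or_ac - log_or_bc
    se_ab = math.sqrt(se_ac ** 2 + se_bc ** 2)
    lower = math.exp(log_or_ab - z * se_ab)
    upper = math.exp(log_or_ab + z * se_ab)
    return math.exp(log_or_ab), (lower, upper)

# Hypothetical inputs: A vs C gives OR 0.80 (SE 0.15 on the log scale),
# B vs C gives OR 1.10 (SE 0.20 on the log scale).
or_ab, (lo, hi) = bucher_indirect_comparison(
    math.log(0.80), 0.15, math.log(1.10), 0.20
)
```

Note that the standard error of the indirect estimate is always larger than either direct standard error, which is one reason indirect comparisons are less precise than head-to-head trials.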