Exploring and accounting for the impact of interventions with scarce evidence in network meta-analysis

Authors
Chaimani A1, Salanti G1
1Department of Hygiene & Epidemiology, School of Medicine, University of Ioannina, Greece
Abstract
Background: Previous empirical evidence has revealed that interventions evaluated in very few trials are often placed among the most effective interventions in a network meta-analysis (NMA). Although this phenomenon can be expected for recently marketed treatments, older treatments with scarce evidence could show exaggerated results due to publication bias or because they have been compared only to suboptimal alternatives. If the network is poorly connected, such treatments are informed mainly by their potentially biased direct comparisons and may rank spuriously high in the overall ranking.
Objectives: To explore the impact of interventions with scarce evidence in an NMA and to estimate the relative ranking of treatments as a function of the amount of available information in the network.
Methods: We developed a network meta-regression model using the contribution of each direct comparison to the network as a predictor of the relative effects. We assumed that interventions poorly connected to the rest of the network might be favored in a direct comparison. We applied the model to a network comparing eight different stents for myocardial infarction, in which results are highly affected by the direct comparison between bare-metal stents (BMS) and control.
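The idea that poorly connected interventions are driven by their own direct comparisons can be illustrated with a minimal sketch. The example below is not the authors' exact meta-regression model; it uses a hypothetical triangle network (treatments A, B, C) with made-up effect sizes and variances, combines the direct and indirect estimates for A vs B by inverse-variance weighting, and reports the "contribution" of the direct comparison to the network estimate:

```python
# Minimal illustrative sketch (not the authors' model): in a triangle
# network A-B-C, pool the direct A-vs-B estimate with the indirect one
# obtained via C, and compute how much the direct comparison contributes.

def pooled(direct, var_direct, indirect, var_indirect):
    """Inverse-variance pooling of a direct and an indirect estimate.

    Returns the network estimate, its variance, and the fraction of
    weight contributed by the direct comparison.
    """
    w_dir = 1.0 / var_direct
    w_ind = 1.0 / var_indirect
    est = (w_dir * direct + w_ind * indirect) / (w_dir + w_ind)
    var = 1.0 / (w_dir + w_ind)
    contribution_direct = w_dir / (w_dir + w_ind)
    return est, var, contribution_direct

# Hypothetical log odds ratios and variances
d_AB, v_AB = 0.50, 0.04   # direct A vs B
d_AC, v_AC = 0.30, 0.10   # direct A vs C
d_CB, v_CB = 0.10, 0.10   # direct C vs B

# Indirect A vs B via C: effects add, and so do the variances
ind_AB, v_ind = d_AC + d_CB, v_AC + v_CB

est, var, contrib = pooled(d_AB, v_AB, ind_AB, v_ind)
print(round(est, 3), round(contrib, 2))  # direct comparison dominates
```

With these numbers the direct comparison carries about 83% of the weight, so any bias in that single comparison propagates almost untouched into the network estimate, which is the situation the meta-regression model is designed to detect and down-weight.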
Results: Based on the cumulative ranking probabilities, the NMA model indicated BMS as the best intervention, although newer treatments are well known to be safer. After applying our meta-regression model, BMS was placed at a lower rank and results were closer to those expected from clinical practice.
Conclusions: In poorly connected networks, the contribution of indirect information might be low and some interventions may appear high in the ranking because they are informed primarily by their direct comparisons. Modeling the amount of available evidence for each intervention might be a useful tool for evaluating the robustness of NMA results. A sensitivity analysis might also explore how results change when suspicious comparisons are given a lower weight.