Abstract
Background:
Reporting bias, or “non-reporting bias” as defined in the latest Cochrane Handbook for Systematic Reviews of Interventions (Version 6, 2019), can seriously compromise the results of systematic reviews and meta-analyses and, as a consequence, potentially affect clinical decision-making. Various graphical and statistical methods are available to assess the risk of reporting bias. However, these approaches have mostly been developed for pairwise meta-analysis, making it difficult to assess the impact of reporting bias on the results of network meta-analysis (NMA).
Objectives:
To develop a conceptual and methodological framework for evaluating the impact of reporting bias on NMA results.
Methods:
The framework combines comparison-adjusted funnel plots, regression techniques, selection models and threshold analysis. We produce comparison-adjusted funnel plots in which the direction of potential bias in each comparison is informed by pairwise contour-enhanced funnel plots and by regression slopes for small-study effects. The limit meta-analysis model for adjusting for small-study effects (Rücker et al., 2011) is extended to multiple treatment comparisons. To explore the impact of publication bias, we use the extension of the Copas selection model to NMA (Mavridis et al., 2014). For comparisons with fewer than 10 studies, a qualitative assessment of the risk of bias is performed following the framework described in Chapter 13 of the Cochrane Handbook. Threshold analysis to assess the sensitivity of treatment recommendations to bias (Phillippo et al., 2019) is also applied: for each relative effect, a threshold is calculated indicating how much the pairwise evidence could change due to bias before a different treatment would be favoured, and the plausibility of such a change is then judged qualitatively.
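As a minimal sketch of the pairwise diagnostics and the comparison-adjusted funnel plot (assuming the meta, metasens and netmeta R packages; the data frames pairdat and nmadat, the column names and the treatment order are illustrative placeholders rather than the project's actual implementation), the workflow could look as follows:

library(meta)      # pairwise meta-analysis, contour-enhanced funnel plots, regression tests
library(metasens)  # limitmeta(): Rücker limit meta-analysis; copas(): Copas selection model
library(netmeta)   # network meta-analysis and comparison-adjusted funnel plots

# Pairwise meta-analysis for one comparison (generic inverse-variance method)
m <- metagen(TE = yi, seTE = sei, studlab = study, data = pairdat, sm = "OR")

# Contour-enhanced funnel plot and regression test for small-study effects
funnel(m, contour = c(0.9, 0.95, 0.99))
metabias(m, method.bias = "linreg")

# Limit meta-analysis adjusting for small-study effects (Rücker et al., 2011)
limitmeta(m)

# Copas selection model for the same pairwise comparison
copas(m)

# Network meta-analysis and comparison-adjusted funnel plot; 'order' encodes the
# assumed direction of bias (e.g. newer or active treatments favoured in small studies)
nm <- netmeta(TE, seTE, treat1, treat2, studlab, data = nmadat, sm = "OR")
funnel(nm, order = c("placebo", "drugA", "drugB"), linreg = TRUE)

The NMA extension of the Copas selection model and the threshold analysis are not shown here; the former is fitted in a Bayesian framework and the latter is available, for example, through the nmathresh R package.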
Results and Conclusions:
We demonstrate the feasibility and applicability of the methods using illustrative examples of previously published NMAs accessed through the nmadb R package (Papakonstantinou, 2019). We plan to implement these strategies in the Confidence in Network Meta-Analysis (CINeMA) framework (Nikolakopoulou et al., 2019) and web application (https://cinema.ispm.unibe.ch/). This will allow a more systematic evaluation of the reporting bias domain and produce better-informed confidence ratings of NMA findings.
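As a minimal sketch of how such an illustrative dataset could be retrieved for re-analysis (assuming the nmadb functions getNMADB() and readByID() and a hypothetical record identifier; none of these values refer to a specific example analysed in the project):

library(nmadb)    # database of published network meta-analyses

catalogue <- getNMADB()   # catalogue of indexed NMAs with descriptive metadata
rec <- readByID(501344)   # hypothetical record identifier; retrieves the dataset and its metadata
str(rec)                  # inspect the retrieved object before re-running the NMA

Depending on the stored format, arm-level data would first be converted to contrast-based format (e.g. with netmeta::pairwise()) before re-running the NMA and applying the bias assessments outlined above.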
This project is funded by the Swiss National Science Foundation under grant agreement No. 179158.
Patient or healthcare consumer involvement: Not relevant