Abstract
Background: Several methods exist for bias adjustment of meta-analysis results, but there has so far been no comprehensive comparison of these against unadjusted methods.
Objectives: In this article, we compare six bias adjustment methods against two unadjusted methods to examine how the different bias adjustment methods perform.
Methods: We reanalysed a meta-analysis that included 10 randomized controlled trials. Two data-based methods—i) Welton’s data-based approach (DB) and ii) Doi’s quality effects model (QE)—and four opinion-informed methods—iii) opinion-based approach (OB), iv) opinion-based distributions combined statistically with data-based distributions (O+DB), v) numerical opinions informed by data-based distributions (OID (num)), and vi) opinions obtained by selecting areas from data-based distributions (OID (select))—were used to incorporate methodological quality information into the meta-analytical estimates. The results of these six methods were compared against two unadjusted models, the DerSimonian-Laird random effects (RE) model and Doi’s inverse variance heterogeneity (IVhet) model.
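The two unadjusted baselines differ mainly in how study weights are formed: the DerSimonian-Laird random effects (RE) model redistributes weight using the between-study variance τ², whereas IVhet retains inverse-variance (fixed-effect) weights for the point estimate and only inflates the variance by τ². A minimal sketch of this distinction, using made-up effect sizes and variances (not data from the reanalysed meta-analysis), is:

```python
import numpy as np

# Hypothetical log-scale effect sizes and within-study variances
y = np.array([-0.8, -0.1, 0.3, -0.6, 0.2])
v = np.array([0.04, 0.09, 0.25, 0.06, 0.16])

w = 1 / v                                   # inverse-variance (fixed-effect) weights
y_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate

# DerSimonian-Laird estimator of the between-study variance tau^2
k = len(y)
Q = np.sum(w * (y - y_fe) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# RE model: weights redistributed using v_i + tau^2, shifting the estimate
w_re = 1 / (v + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
var_re = 1 / np.sum(w_re)

# IVhet model: point estimate keeps the fixed-effect weights,
# but the variance is inflated to reflect heterogeneity
y_ivhet = y_fe
p = w / np.sum(w)                           # normalized fixed-effect weights
var_ivhet = np.sum(p**2 * (v + tau2))

print(f"RE: {y_re:.3f}, IVhet: {y_ivhet:.3f}")
```

With heterogeneous data the two pooled estimates diverge, which is why the choice of unadjusted baseline matters when judging whether a bias adjustment method has moved the estimate at all.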
Results: When the RE model was taken as the unadjusted baseline, no bias adjustment occurred for the opinion-based methods, while the DB and QE methods adjusted towards the null. When the IVhet model was taken as the unadjusted baseline, there was some bias adjustment away from the null, except for the opinion-based methods, whose apparent bias adjustment reflected conformity with the random effects model. In short, the data-based and opinion-based methods aligned with different unadjusted models: the data-based methods agreed with the IVhet model with minimal adjustment, whereas the opinion-based methods tracked the random effects results.
Conclusions: The difference between data- and opinion-based methods can be attributed to the robustness of data-based methods to small study effects.
Relevance and importance to patients: This is a contribution to methods that will result in more robust evidence production. This research is important for clinicians and their patients, who require trustworthy meta-analyses in their decision-making. Several methods exist to 'adjust' the results of a meta-analysis for the potential for bias in the included studies. Not all of them produce the same results, and this research suggests that methods relying on expert opinion may not adequately adjust for bias.