Abstract
Background: Research into publication bias suggests that it is the ‘interest level’, or statistical significance, of findings, not study rigour or quality, that determines which research gets published and subsequently becomes available. Hence, a meta-analysis that combines only published studies, missing those that remain unpublished, is vulnerable to publication bias. Publication bias in meta-analyses can lead to misleading conclusions with potentially devastating consequences. Many methods exist for detecting publication bias, but they frequently conclude only that ‘caution must be exercised when interpreting the meta-analysis’, a statement that is deficient if results are to be used within a decision-making framework.

Objectives: What is required is a reliable way to adjust pooled estimates for publication bias. Here, we present a comprehensive simulation study designed to assess a large number of previously described and novel adjustment methods in order to identify those with the most desirable statistical properties.

Methods: The methods under evaluation include different versions of the Trim and Fill algorithm and several regression-based methods, which are more commonly applied for detecting publication bias (rather than adjusting for it). These regression methods include those proposed by Egger et al., the modified variants of Harbord et al. and Peters et al., and some less well known approaches. Moreover, more complex novel Bayesian semi-parametric regressions are implemented in the hope of improving on the performance of the simpler approaches.

Results: Results are encouraging: several of the regression methods display good performance profiles, with no single method consistently outperforming all others, but with some of the more common approaches superseded by novel ones.
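To illustrate the regression-based family of methods, the following is a minimal sketch of an Egger-style regression, in which the standardized effect (effect divided by its standard error) is regressed on precision (the reciprocal of the standard error): the intercept measures funnel-plot asymmetry, while the slope can be read as a bias-adjusted pooled effect. This is not the authors' implementation; the function name and the synthetic data below are our own, and the sketch omits the standard-error and significance machinery of the full test.

```python
import numpy as np

def egger_regression(effects, ses):
    """Egger-style regression for funnel-plot asymmetry.

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    A non-zero intercept suggests asymmetry consistent with publication
    bias; the slope estimates the (bias-adjusted) pooled effect.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses        # standardized effects
    prec = 1.0 / ses         # precisions
    # Ordinary least squares: z = intercept + slope * precision
    X = np.column_stack([np.ones_like(prec), prec])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope

# A perfectly symmetric example: identical effects at varying precision,
# so the intercept is ~0 and the slope recovers the common effect of 0.5.
intercept, slope = egger_regression([0.5, 0.5, 0.5, 0.5],
                                    [0.1, 0.2, 0.3, 0.4])
```

In practice the intercept would be tested against zero (e.g. with a t-test on its standard error), and the Harbord and Peters variants replace the regressors with score-based or sample-size-based quantities to reduce the well-known false-positive rate of the original test for binary outcomes.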
Conclusions: Adjusting for publication bias seems promising, and ongoing validation simulation studies are underway to further characterise the properties of the latest Bayesian methods. In addition to presenting the results of the simulation studies, we will propose a consensus simulation framework in which future detection and adjustment methods can be evaluated. This should alleviate the previous problem of methods being evaluated under different (and arguably favourable) simulation conditions.
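The kind of data-generating process used in such simulation frameworks can be sketched as follows: studies are simulated around a true effect, and non-significant results are suppressed with some probability, producing the asymmetric sample that adjustment methods must correct. All parameter values (the 5% significance threshold, the 20% publication rate for null results, the SE range) are hypothetical choices of ours, not the consensus settings proposed in the paper.

```python
import numpy as np

def simulate_biased_meta(n_studies=50, true_effect=0.3, seed=0):
    """Simulate a meta-analysis subject to publication bias.

    Each study draws an observed effect around `true_effect` with its own
    standard error; studies that are not statistically significant at the
    5% level are published with only 20% probability (assumed values).
    Returns the effects and SEs of the *published* studies only.
    """
    rng = np.random.default_rng(seed)
    ses = rng.uniform(0.05, 0.5, n_studies)        # within-study standard errors
    effects = rng.normal(true_effect, ses)         # observed study effects
    significant = np.abs(effects / ses) >= 1.96    # two-sided 5% test
    published = significant | (rng.random(n_studies) < 0.2)
    return effects[published], ses[published]
```

Under this selection mechanism the naive pooled estimate of the published studies is biased away from zero, which is exactly the distortion that Trim and Fill and the regression-based adjustments attempt to undo; a consensus framework would fix these selection rules so that competing methods are compared on equal footing.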