Abstract
Introduction: If publication bias is present in a meta-analysis, the estimate of intervention effectiveness may be biased. Meta-analysts need to know how to detect publication bias and, if it is present, adjust for it in the analysis.
Objectives: We conducted a systematic review to identify methods to detect, assess the impact of, and adjust for publication bias. We also compared these methods using an unbiased sampling frame.
Methods: We searched Medline and the Science Citation Index (1966-99) and MathSciNet (1940-99) for relevant articles. After an initial screening (n=332), the remaining articles (n=67) were reviewed independently by two reviewers (BP, RP) using the following criteria: basic supporting theory, assumptions, method outcomes, estimation, limitations, simplicity, and generality. To evaluate the performance of the included methods, we used 26 meta-analyses comprising 400 randomized trials, 73 of which were unpublished.
Results: Thirty-one methods were identified and classified into four groups according to their underlying concepts: file-drawer (7 methods), funnel plot (9 methods), selection model (11 methods), and selection model with data augmentation (4 methods). Building on Rosenthal's fail-safe number, more recent file-drawer methods estimate the number of unpublished studies. Graphical inspection of a funnel plot can be supplemented with a rank-correlation test, linear or logistic regression analyses, and a simple rank-based data augmentation technique. "Trim and fill" methods estimate the treatment effect while adjusting for the number and outcomes of missing studies. Selection models estimate the treatment effect while allowing the non-random selection of studies to be modeled explicitly. Parameter estimation in these models used maximum likelihood, the expectation/maximization (EM) algorithm, and Markov chain Monte Carlo (MCMC) simulation. We discuss Bayesian hierarchical selection models and the data augmentation technique, and their application to modeling both the selection process and sensitivity to unobserved studies. Thirteen of the 26 meta-analyses of published trials had statistically significant results. Of these, none became non-significant with the inclusion of unpublished trials; four became non-significant after adjusting for publication bias with the "trim and fill" method, three with the "simple, graphical" method, and two with a selection model. On average, estimates from published studies overestimated the treatment effect by 6% (interquartile range -3% to 43%). The "trim and fill" method overcompensated, underestimating the treatment effect by 6% (-39% to 18%) on average. The "simple, graphical" method overestimated by 21% (-19% to 76%) and a selection model by 47% (-22% to 104%).
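To convey the flavor of the simpler methods discussed above (not the specific implementations compared in this review), the following is a minimal Python sketch of Rosenthal's fail-safe number, a Begg-style rank-correlation test, and an Egger-style regression test. It assumes each study supplies an effect estimate and its standard error; the function names and toy data are hypothetical, and the tests are simplified textbook versions.

```python
# Illustrative sketches of three publication-bias methods.
# Each study i supplies an effect estimate y[i] and standard error se[i].
import numpy as np
from scipy import stats

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe number: how many unpublished null studies
    would be needed to raise the combined one-sided p-value above alpha."""
    z = np.asarray(z_scores, dtype=float)
    k = z.size
    z_crit = stats.norm.ppf(1 - alpha)      # 1.645 for alpha = 0.05
    return z.sum() ** 2 / z_crit ** 2 - k

def begg_rank_test(y, se):
    """Begg-style test: Kendall's tau between standardized effects and
    their variances; tau far from zero suggests funnel-plot asymmetry."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    tau, p = stats.kendalltau(y / se, se ** 2)
    return tau, p

def egger_test(y, se):
    """Egger-style test: regress y/se on 1/se; an intercept far from
    zero suggests small-study (funnel-plot) asymmetry."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    z, precision = y / se, 1.0 / se
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    dof = len(z) - 2
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])        # t-statistic for the intercept
    p = 2 * stats.t.sf(abs(t), dof)
    return beta[0], p

# Toy data: log odds ratios and standard errors from hypothetical trials.
y  = np.array([0.41, 0.35, 0.60, 0.22, 0.55, 0.30])
se = np.array([0.10, 0.15, 0.30, 0.12, 0.35, 0.18])
print(fail_safe_n(y / se))
print(begg_rank_test(y, se))
print(egger_test(y, se))
```

Note that the original Begg test correlates deviates centered at the pooled fixed-effect estimate with their variances; the uncentered version here is kept deliberately short. Trim-and-fill and the selection models are iterative or likelihood-based and are not sketched here.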
Discussion: We identified a large number of methods developed to detect and adjust for publication bias. The methods are diverse and, when compared with one another, can yield estimates of publication bias that differ in both direction and magnitude.