Systematic review of the empirical evidence of study publication bias and outcome reporting bias

Authors
Dwan K, Altman D, Chan A, Decullier E, Von Elm E, Gamble C, Ioannidis J, Williamson P
Abstract
Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial (RCT). Publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias (ORB) has received less attention.

Objectives: To review and summarise the evidence from a series of cohort studies that have assessed publication bias and ORB in clinical studies.

Methods: Empirical studies were included if they assessed a cohort of clinical studies for publication bias or ORB. The cohort had to be an inception cohort, in which a protocol for each study is registered before the study starts, as this prospective design is more reliable. For cohorts that included non-RCTs, information was not available for RCTs separately. Because of this, and because of variability across studies in the time lapse between protocol approval and the censoring of data for analysis, it was not considered appropriate to combine the results from different cohorts statistically. This review therefore provides a descriptive summary of the included cohort studies.

Results: Sixteen studies were eligible for this review, of which five examined ORB. Several studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). When trial publications were compared with their protocols, 40% to 62% of studies had at least one primary outcome that was changed, introduced, or omitted.

Conclusions: Recent work provides direct empirical evidence for the existence of publication bias and ORB. There is strong evidence of an association between significant results and publication: studies reporting significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of both types of bias, and efforts should be concentrated on improving the reporting of trials.
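As an illustration of how odds ratios of this kind are derived (the 2×2 counts below are hypothetical and are not taken from any of the included cohorts), let $a$ and $b$ denote the numbers of statistically significant outcomes that were fully reported and not fully reported, and $c$ and $d$ the corresponding counts for non-significant outcomes. The odds ratio for full reporting is then

\[
  \mathrm{OR} \;=\; \frac{a/b}{c/d} \;=\; \frac{ad}{bc}.
\]

For example, hypothetical counts $a = 80$, $b = 20$, $c = 50$, $d = 50$ give $\mathrm{OR} = (80 \times 50)/(20 \times 50) = 4.0$, which falls within the 2.2 to 4.7 range reported in the Results above.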