Abstract
Background: The validity of research synthesis based on the published literature will be threatened if published studies comprise a biased selection of all studies conducted.

Objective: To examine whether the extent of publication bias is consistent across different stages of research dissemination, from research inception to conference presentation and manuscript submission.

Methods: Bibliographic databases were searched to identify empirical studies that tracked a cohort of studies and reported the rate of formal publication according to study results. Odds ratios were used to measure the association between formal publication and significant or positive results. Results from different cohort studies were quantitatively combined.

Main findings: We identified 11 inception cohort studies that followed up research from its beginning, three regulatory cohort studies of trials submitted to regulatory authorities, 23 abstract cohort studies of abstracts presented at conferences, and four manuscript cohort studies of manuscripts submitted to journals. Within the four subgroups there was significant clinical diversity and heterogeneity across studies. The pooled odds ratio for publication bias (preferential publication of studies with positive results) was 2.66 (95% CI 1.95 to 3.62) in inception cohort studies, 10.87 (95% CI 1.44 to 81.85) in regulatory cohort studies, 1.64 (95% CI 1.34 to 2.00) in abstract cohort studies, and 1.06 (95% CI 0.80 to 1.39) in cohorts of manuscripts submitted to journals.

Conclusions: Despite many caveats about the available empirical evidence on publication bias, there is little doubt that the dissemination of research findings is likely to be a biased process. Publication bias may occur mainly before the presentation of findings at conferences and the submission of manuscripts to journals.
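As an illustration of the outcome measure named in the Methods, a study-level odds ratio can be formed from a 2x2 cross-classification of study results (positive versus negative) against publication status (published versus unpublished). The sketch below uses the standard formula with a Wald-type 95% confidence interval; the counts a, b, c, d and the interval construction are illustrative assumptions, since the abstract does not report the exact calculation or pooling method used.

```latex
% Illustrative only: a = positive results, published; b = positive, unpublished;
% c = negative results, published; d = negative, unpublished.
\begin{align}
  \mathrm{OR} &= \frac{a/b}{c/d} = \frac{ad}{bc}, \\
  \mathrm{SE}\!\left(\ln \mathrm{OR}\right) &= \sqrt{\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}}, \\
  95\%\ \mathrm{CI} &= \exp\!\left(\ln \mathrm{OR} \pm 1.96\,\mathrm{SE}\!\left(\ln \mathrm{OR}\right)\right).
\end{align}
```

An OR greater than 1 indicates that studies with significant or positive results are more likely to reach formal publication; the pooled estimates quoted in the Main findings combine such study-level odds ratios across the cohort studies in each subgroup.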