How much study participant data is missing from our trials?

Authors
Kirkham J1, Dwan K2, Blümle A3, von Elm E4, Williamson P1
1University of Liverpool, United Kingdom
2Cochrane Editorial Unit, United Kingdom
3German Cochrane Center, Germany
4Cochrane Switzerland, Switzerland
Abstract
Background: Study publication bias and outcome reporting bias (ORB) are recognised threats to the validity of systematic reviews, both arising from missing outcome data. There is strong evidence of an association between significant results and publication: studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported [1].
Objectives: To estimate the proportion of outcome data missing because whole studies remain unpublished and the proportion of outcome data missing within published studies.
Methods: Data were taken from protocols of randomised controlled trials submitted to the research ethics committee of the University of Freiburg (Germany) between 2000 and 2002 and from the associated full publications. The total amount of published and unpublished outcome data from all study participants was extracted, and the proportions of missing data from unpublished and published studies were computed. For unpublished studies, the amount of missing data was derived from the sample sizes planned in the study protocols. A sensitivity analysis was undertaken to account for partially reported outcome data.
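As a rough illustration of the pooling described in the Methods, the sketch below shows how such proportions might be computed from per-study counts. The data structure and field names (expected, reported, published) are assumptions for illustration only; the abstract does not describe the actual extraction forms or analysis code.

```python
# Illustrative sketch only: pooling participant outcome data across a study cohort.
# Field names and structure are assumed, not taken from the study itself.
from dataclasses import dataclass
from typing import Iterable, Tuple

@dataclass
class Study:
    expected: int    # participant outcome data expected from the protocol (planned sample size)
    reported: int    # participant outcome data fully reported in the publication (0 if unpublished)
    published: bool  # whether a results publication was identified

def missing_proportions(studies: Iterable[Study]) -> Tuple[float, float]:
    """Return (proportion missing from published studies,
               proportion missing from unpublished studies),
    both as fractions of all expected participant outcome data."""
    studies = list(studies)
    total = sum(s.expected for s in studies)
    # Published studies: the shortfall between expected and reported data is missing.
    missing_published = sum(s.expected - s.reported for s in studies if s.published)
    # Unpublished studies: all expected data count as missing.
    missing_unpublished = sum(s.expected for s in studies if not s.published)
    return missing_published / total, missing_unpublished / total
```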
Results: Of the 259 studies in the cohort, 167 were published and 92 were unpublished. Half (51%; 1,288,719 of 2,522,010) of the participant outcome data from these studies was missing: 39% of all outcome data was missing from published studies and 12% from unpublished studies. The sensitivity analysis revealed that most of the data missing from published studies was missing because it was reported in a way that could not be included in a meta-analysis.
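The headline figures are internally consistent (all numbers as reported above):

\[
\frac{1{,}288{,}719}{2{,}522{,}010} \approx 0.511 \approx 51\%, \qquad 39\% + 12\% = 51\%.
\]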
Conclusions: Missing participant outcome data is common in both published and unpublished studies. Preventative measures and potential statistical solutions for reducing the impact of missing data in meta-analyses will be discussed.
Reference:
[1] Dwan KM, Gamble C, Williamson PR, Kirkham JJ. Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS ONE 2013;8(7):e66844.