Empirical Assessment of Publication Bias: Lessons From Two Meta-Analyses

Authors
Devine E
Abstract

Objectives: To determine the extent to which publication bias was evident in two meta-analyses.

Methods: For two meta-analyses of the effects of psychoeducational care, effect size values from published studies (journals or books) were contrasted with those from unpublished studies (theses or dissertations that were not published elsewhere). The differences were noted and tested using the meta-analytic analogue to regression, with publication status as the moderator.

Results: There were 182 studies of surgical patients with an experimental or a quasi-experimental design. Forty-four percent of studies were published and 56% were unpublished. For two of the three global outcomes, published studies yielded somewhat larger average estimates of effect (d+): recovery (N=169; published d+=.51 and unpublished d+=.44) and pain (N=119; published d+=.37 and unpublished d+=.36). The difference was much larger for psychological well-being (N=101; published d+=.47 and unpublished d+=.21). There were 78 studies of cancer patients with an experimental or a quasi-experimental design. Forty-five percent of studies were published and 55% were unpublished. Published studies yielded somewhat larger average estimates of effect on anxiety (N=48; published d+=.66 and unpublished d+=.57) and nausea (N=8; published d+=.75 and unpublished d+=.68). This difference was much larger for pain (N=9; published d+=.61 and unpublished d+=.29). Interestingly, unpublished studies yielded somewhat larger estimates of effect (d+) on depression (N=37; published d+=.48 and unpublished d+=.53) and knowledge (N=13; published d+=1.07 and unpublished d+=1.19). This difference was even larger for vomiting (N=8; published d+=.30 and unpublished d+=.47). Small sample sizes limited some of the analyses.

Discussion: Most differences in d+ by source of study were very small, but in two cases d+ values were twice as large in published studies as in unpublished studies. Publication bias is a plausible, but not inevitable, threat. It is important to seek unpublished studies and to examine the threat empirically.
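The subgroup contrast described in the Methods can be sketched in code. The following is a minimal illustration of a fixed-effect weighted mean effect size (d+) per subgroup and the analogue-to-regression moderator test (weighted least squares of effect size on a published/unpublished dummy, with a Q statistic for the moderator). All effect sizes, variances, and study counts below are hypothetical placeholders, not data from the paper.

```python
# Sketch of the analogue-to-regression moderator test for publication status.
# Effect sizes (d), sampling variances (v), and the published indicator are
# illustrative assumptions only.
import numpy as np

def weighted_mean_d(d, v):
    """Fixed-effect weighted mean effect size d+ with weights 1/v."""
    w = 1.0 / v
    return np.sum(w * d) / np.sum(w)

def moderator_q_test(d, v, x):
    """Weighted least squares of d on a moderator (published=1, unpublished=0).

    Returns the moderator slope and Q_between (df=1, approximately
    chi-square under the null of no moderator effect)."""
    w = 1.0 / v
    X = np.column_stack([np.ones_like(d), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
    # Q_between = total weighted heterogeneity minus residual heterogeneity
    q_total = np.sum(w * (d - weighted_mean_d(d, v)) ** 2)
    q_resid = np.sum(w * (d - X @ beta) ** 2)
    return beta[1], q_total - q_resid

# Six hypothetical studies: three published, three unpublished
d = np.array([0.50, 0.45, 0.60, 0.20, 0.25, 0.30])   # effect sizes
v = np.array([0.04, 0.05, 0.06, 0.04, 0.05, 0.06])   # sampling variances
published = np.array([1, 1, 1, 0, 0, 0], dtype=float)

d_pub = weighted_mean_d(d[published == 1], v[published == 1])
d_unpub = weighted_mean_d(d[published == 0], v[published == 0])
slope, q_between = moderator_q_test(d, v, published)
```

With a single dummy moderator, the WLS slope equals the difference between the two subgroup d+ values, so the regression analogue and the direct subgroup contrast agree exactly.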