Publication of economic evaluations alongside clinical trials: preliminary evidence of bias

Authors
Gilbody S, Sutton A, Bower P
Abstract
Background: Publication bias occurs when studies that report statistically significant or otherwise noteworthy results, or that are large, well funded, or of higher quality, are more likely to be published, and to be published more rapidly, than work without these characteristics. This phenomenon has been well demonstrated in the epidemiological literature but has received less scrutiny in the health economic literature.

Objectives: To examine whether randomised economic evaluations are based on clinical effectiveness estimates that are unrepresentative of the totality of the research literature.

Methods: Using meta-regression, we compared pooled clinical effect sizes in studies that published a concurrent economic evaluation with those in studies that did not. Funnel plots were used to assess the likelihood of publication bias.

Results: Our data set comprised 36 RCTs (12,294 patients) evaluating the effectiveness of enhanced depression management, of which 12 RCTs (6,798 patients) reported a concurrent economic evaluation. Economic evaluations were more likely to accompany trials with effect sizes favouring enhanced depression management. The pooled clinical effect size of studies publishing an economic evaluation was almost twice as large as that of studies that did not (pooled standardised mean difference in RCTs with an economic evaluation = 0.34, 95% CI 0.23 to 0.46; pooled standardised mean difference in RCTs without an economic evaluation = 0.17, 95% CI 0.10 to 0.25). This difference was statistically significant (SMD between-group difference = -0.17; 95% CI -0.31 to -0.02; p = 0.02). None of the six studies with the lowest effect size estimates reported a cost-effectiveness estimate. There was little variation in sample size, making funnel plot analysis difficult to interpret.

Conclusions: Publication of an economic evaluation of enhanced care for depression was associated with a larger clinical effect size. Cost-effectiveness estimates should be interpreted with caution, and the representativeness of the clinical data on which they are based should always be considered. Further research is needed to explore this observed association and potential bias in other areas.
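
As a rough illustration of the Methods, the sketch below pools standardised mean differences with a DerSimonian-Laird random-effects model and then tests the between-subgroup difference with a z-test, which is one common way to realise the meta-regression comparison described above. The per-trial effects and variances are invented for illustration and are not data from the review.

```python
import math

def pool_smd(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardised mean differences."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance estimate tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each trial by 1 / (within-study variance + tau^2)
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Hypothetical per-trial SMDs and variances (illustrative only, not review data)
with_econ = ([0.40, 0.30, 0.35], [0.020, 0.030, 0.025])
without_econ = ([0.15, 0.20, 0.10], [0.020, 0.030, 0.025])

smd1, se1 = pool_smd(*with_econ)
smd2, se2 = pool_smd(*without_econ)

# Between-subgroup difference: z-test on the two pooled estimates
diff = smd2 - smd1
se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
z = diff / se_diff
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"with econ: {smd1:.2f} (SE {se1:.2f}); without: {smd2:.2f} (SE {se2:.2f})")
print(f"difference = {diff:.2f}, z = {z:.2f}, p = {p:.3f}")
```

A negative difference here, as in the Results, would indicate that trials without an economic evaluation have the smaller pooled effect.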
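A funnel plot, as used in the Methods, charts each trial's effect size against a precision measure (here the standard error, on an inverted axis); asymmetry around the pooled estimate suggests publication bias. A minimal matplotlib sketch, again with hypothetical data:

```python
import matplotlib.pyplot as plt

# Hypothetical per-trial SMDs and standard errors (illustrative only)
smds = [0.40, 0.30, 0.35, 0.15, 0.20, 0.10]
ses = [0.14, 0.17, 0.16, 0.14, 0.17, 0.16]

plt.scatter(smds, ses)
plt.gca().invert_yaxis()           # most precise trials plot at the top
plt.axvline(0.25, linestyle="--")  # pooled estimate for reference
plt.xlabel("Standardised mean difference")
plt.ylabel("Standard error")
plt.title("Funnel plot (hypothetical data)")
plt.show()
```

When sample sizes vary little across trials, as the Results note, the standard errors cluster in a narrow horizontal band, so asymmetry is hard to judge visually.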