The effect of study sponsorship on a systematically evaluated body of evidence was modest: secondary analysis of a systematic review

Authors
Gartlehner G, Morgan L, Thieda P, Fleg A
Abstract
Background: A growing body of literature has found associations between industry funding and the reporting of positive results. Systematic reviews of randomized controlled trials are commonly viewed as the best study design for determining causal effects such as the efficacy or effectiveness of interventions. However, the validity of a systematic review's findings depends largely on the validity of its component studies. As the proportion of clinical studies funded by industry sources grows, the potential for bias arising from funding sources may rise as well and may adversely affect the conclusions of systematic reviews.

Objectives: The objective of this study was to determine the effect of industry bias on a systematically reviewed body of evidence from head-to-head trials.

Methods: We limited our analysis to published head-to-head randomized controlled trials of selective serotonin reuptake inhibitors (SSRIs) identified in a systematic review. Two reviewers independently determined the funding status of each trial. We classified drugs into one of two groups: 1) drugs associated with the funding source and 2) drugs not associated with the funding source. To determine the effect of any underlying industry bias, we conducted relative benefit meta-analyses comparing the response rates of drugs when they were associated with the funding source against the response rates of the same drugs when they were not.

Results: Results, based on data from 4706 patients, indicate a modest effect of industry bias. Although the pooled response rate of SSRIs when associated with the funding source was significantly greater than that of the same SSRIs when not associated with the sponsor (relative benefit: 1.07; 95% CI, 1.02 to 1.11), the effect is likely not clinically relevant.

Conclusions: The effect of industry bias on a systematically evaluated body of evidence might often be overestimated.
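
The Methods describe pooling trial-level response rates into a relative benefit (a risk ratio of response rates) with a 95% confidence interval. As a rough illustration only, the sketch below shows how such a pooled estimate could be obtained with standard inverse-variance fixed-effect pooling on the log scale; the trial counts are hypothetical, and this is not the authors' actual analysis code.

```python
# Minimal sketch: pooled relative benefit (risk ratio of response rates)
# via inverse-variance fixed-effect meta-analysis on the log scale.
# All trial counts below are hypothetical placeholders.
import math

# (responders, total) for the drug when associated with the funding source,
# and for the same drug when not associated with the funding source.
trials = [
    ((120, 200), (105, 200)),  # hypothetical trial 1
    ((95, 150), (90, 150)),    # hypothetical trial 2
    ((60, 100), (55, 100)),    # hypothetical trial 3
]

sum_weights = 0.0
sum_weighted_log_rr = 0.0
for (r_spon, n_spon), (r_other, n_other) in trials:
    rr = (r_spon / n_spon) / (r_other / n_other)  # trial-level relative benefit
    log_rr = math.log(rr)
    # Approximate variance of the log risk ratio
    var = 1 / r_spon - 1 / n_spon + 1 / r_other - 1 / n_other
    weight = 1 / var                              # inverse-variance weight
    sum_weights += weight
    sum_weighted_log_rr += weight * log_rr

pooled_log_rr = sum_weighted_log_rr / sum_weights
se = math.sqrt(1 / sum_weights)
pooled_rr = math.exp(pooled_log_rr)
ci_low = math.exp(pooled_log_rr - 1.96 * se)
ci_high = math.exp(pooled_log_rr + 1.96 * se)
print(f"Pooled relative benefit: {pooled_rr:.2f} "
      f"(95% CI, {ci_low:.2f} to {ci_high:.2f})")
```

A pooled relative benefit of 1.07 (95% CI, 1.02 to 1.11), as reported in the Results, would indicate a statistically significant but small difference in response rates, consistent with the authors' conclusion of a modest effect.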