Abstract
Background: Randomised controlled trials (RCTs) often attempt to determine the efficacy of a treatment or intervention under ideal conditions. Although RCTs may investigate the effectiveness of interventions in actual practice, observational studies are often used to measure the effects in 'real world' scenarios.
Objectives: To assess the impact of study design on effect measures estimated in observational studies and RCTs.
Methods: We aimed to identify and select systematic and non-systematic reviews published since 1990 that were designed as methodological studies comparing quantitative effect size estimates, measuring the efficacy or effectiveness of interventions, between trials and observational studies, or between different designs of observational studies. Using results from RCTs as the reference, we examined whether odds ratios (ORs) were relatively under- or over-estimated in the observational studies. We pooled summary estimates from the identified reviews to calculate a summary pooled odds ratio comparing the effects by study design.
Results: We initially identified 2968 references, assessed the full text of 53, and identified 15 reviews that met our inclusion criteria. Eleven studies, contributing 216 separate meta-analyses, were included in the quantitative analysis. Our results showed little difference between the evidence obtained from RCTs and observational studies. Despite several pre-specified subgroup analyses, and after taking the risk of bias of the included studies into account, no significant differences in effect were noted between study designs. Our primary quantitative analysis showed that the pooled OR comparing effects from observational studies with effects from RCTs was 1.10 (95% CI 0.95–1.28).
Conclusions: Our results across all studies are very similar to results reported by previous investigators. As such, we have reached similar conclusions: there is little evidence of significant effect estimate differences between observational studies and RCTs, regardless of specific observational study design features, heterogeneity, or inclusion of drug studies.
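The pooling step described in the Methods can be illustrated with a standard inverse-variance calculation on log odds ratios. The sketch below is purely illustrative and rests on assumptions: the summary ORs and confidence intervals are hypothetical placeholders, not estimates from the included reviews, and the DerSimonian-Laird random-effects approach is assumed here rather than taken from the paper's stated analysis.

```python
import math

# Hypothetical summary odds ratios with 95% CIs from individual reviews
# (placeholders for illustration only; not the estimates of the included reviews).
summaries = [
    (1.05, 0.90, 1.22),
    (1.18, 0.97, 1.43),
    (0.98, 0.82, 1.17),
    (1.12, 0.94, 1.33),
]

# Work on the log-OR scale; recover each standard error from the CI width.
y = [math.log(or_) for or_, lo, hi in summaries]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for or_, lo, hi in summaries]
v = [s ** 2 for s in se]

# Fixed-effect (inverse-variance) pooling, needed for the heterogeneity statistic Q.
w = [1 / vi for vi in v]
pooled_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, y))

# DerSimonian-Laird estimate of the between-review variance tau^2.
df = len(y) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling with the between-review variance added to each weight.
w_re = [1 / (vi + tau2) for vi in v]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Pooled OR: {math.exp(pooled_re):.2f} "
      f"(95% CI {math.exp(pooled_re - 1.96 * se_re):.2f}"
      f"-{math.exp(pooled_re + 1.96 * se_re):.2f})")
```

Exponentiating the pooled log OR and its confidence limits returns the result to the odds-ratio scale, matching the form in which the abstract reports its pooled estimate.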