Abstract
Background: There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data compared to data from observational studies in systematic reviews of adverse effects.
Objectives: This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analyses of different study designs.
Methods: Searches were carried out in 10 databases, supplemented by reference checking, contacting experts, citation searching, and handsearching of key journals, conference proceedings and websites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from one study design could be directly compared, using the ratio of odds ratios (ROR), with the pooled estimate for the same adverse effect arising from another study design.
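The comparison statistic can be sketched as follows, assuming the standard method for contrasting two independent pooled estimates (a log-scale difference with the variances summed); the symbols are illustrative rather than taken from the paper:

\[
\mathrm{ROR} = \frac{\mathrm{OR}_{\mathrm{RCT}}}{\mathrm{OR}_{\mathrm{obs}}}, \qquad \mathrm{SE}(\ln \mathrm{ROR}) = \sqrt{\mathrm{SE}(\ln \mathrm{OR}_{\mathrm{RCT}})^{2} + \mathrm{SE}(\ln \mathrm{OR}_{\mathrm{obs}})^{2}}
\]

A 95% confidence interval is then \(\exp\bigl(\ln \mathrm{ROR} \pm 1.96\,\mathrm{SE}(\ln \mathrm{ROR})\bigr)\); an ROR of 1 indicates identical pooled estimates of harm across the two designs.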
Results: Thirty-nine studies, yielding 111 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios for RCTs compared to observational studies was estimated to be 1.03 (95% CI 0.93-1.15), with less discrepancy among larger studies. Other meta-analyses of meta-analyses comparing different types of observational study (such as cohort studies and case-control studies) likewise indicated no significant difference in estimates of adverse effects across study designs. In almost all instances, the estimates of harm from meta-analyses of the different study designs had overlapping 95% confidence intervals. In terms of statistical significance, nearly two-thirds of the meta-analyses were in agreement (both designs showing a significant increase, a significant decrease, or no significant difference). In only two meta-analyses did the results show opposing statistical significance.
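The pooled ROR reported above implies a meta-analysis of the individual log RORs; a minimal sketch, assuming conventional inverse-variance random-effects weighting (not stated explicitly in the abstract), is:

\[
\ln \widehat{\mathrm{ROR}}_{\mathrm{pooled}} = \frac{\sum_{i} w_i \ln \mathrm{ROR}_i}{\sum_{i} w_i}, \qquad w_i = \frac{1}{\mathrm{SE}(\ln \mathrm{ROR}_i)^{2} + \tau^{2}}
\]

where \(\tau^{2}\) is the between-comparison heterogeneity variance; exponentiating the pooled log ROR and its confidence limits returns an estimate on the odds-ratio scale, such as the 1.03 (95% CI 0.93-1.15) reported here.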
Conclusions: Empirical evidence from this overview indicates that, on average, there is no difference in the risk estimates of adverse effects derived from meta-analyses of different study designs. This suggests that systematic reviews of adverse effects need not be restricted to specific study designs.