So what the hell is an odds ratio? Can we make the results of reviews more easily understood?

Authors
Deeks J, Dooley G, Altman D, Sackett D
Abstract
Introduction: Systematic reviews in CDSR and elsewhere have traditionally used odds ratios (OR) to analyze and express treatment effects when outcomes are measured as event rates. Interpreting the OR as an approximation to the relative risk (RR) will lead to exaggeration of the treatment effect unless event rates are very low. Statistical methods exist (and are now available in CDSR) for combining relative risks. We have investigated the effects of using these methods in the reviews in CDSR as compared with using methods based on the odds ratio.

Objective: To examine the effect of analyzing and reporting the results of meta-analyses as relative risks rather than odds ratios, noting apparent and real exaggerations of treatment benefit and increases in between-study heterogeneity.

Methods: Systematic reviews contained within CDSR were analyzed using Mantel-Haenszel methods for combining odds ratios and relative risks. Apparent exaggerations in treatment effects were assessed by comparing the OR and RR common-effect estimates. Real exaggerations were estimated by comparing numbers-needed-to-treat calculated separately from each pooled measure. Heterogeneity statistics were also calculated for both methods, together with a summary of the distribution of control group event rates in the contributing clinical trials.

Results: There are 1507 presentations of syntheses of dichotomous outcomes in the CDSR. We report on the apparent and real exaggerations of treatment benefit in these reviews comparing odds ratio and relative risk analyses, and relate our findings to the distribution of control group event rates.

Discussion: We will produce recommendations on the suitability of using RRs rather than ORs as summaries of effect size for dichotomous data, and note circumstances when this approach is likely to introduce bias and excessive heterogeneity.