Evidence-based sensitivity analyses for systematic reviews and meta-analyses

Authors
Sterne J, Carlin J
Abstract
Background: A number of studies have provided empirical evidence that specific dimensions of trial quality are associated with intervention effect estimates [e.g. 1,2]. Studies included in meta-analyses are usually of varying quality, so choosing which studies to include in the primary analysis is difficult. Restricting to high quality studies may discard much information, while including low quality studies may bias the summary effect estimate. The information available in an individual meta-analysis is usually insufficient to allow a precise comparison of high and low quality studies [3].

Objectives: To develop statistical methods for evidence-based sensitivity analyses for systematic reviews.

Methods: We considered how to combine evidence from two types of studies, for example studies with adequate (high quality, HQ) or inadequate/unclear (low quality, LQ) reporting of allocation concealment. Based on previous work, we assumed that prior estimates of both the average bias in intervention effects from LQ studies, and the variance of that bias, are available. For example, re-analyses of the data of Schulz et al. found that the ratio of odds ratios (ROR) comparing inadequately/unclearly concealed with adequately concealed studies was 0.67, while the between-meta-analysis variance of the log ROR was 0.065 [3].
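As a hypothetical numerical sketch (the function and variable names are ours, not from the abstract), the quoted ROR and variance translate into a prior for the bias on the log odds ratio scale, which can then be used to adjust an LQ study estimate:

```python
import math

# Prior on the bias of LQ (inadequately/unclearly concealed) studies,
# on the log odds ratio scale, using the values quoted above.
ror = 0.67                 # ratio of odds ratios, LQ vs HQ studies
bias_mean = math.log(ror)  # mean bias in log OR, approx -0.40
bias_var = 0.065           # between-meta-analysis variance of log ROR

def bias_correct(log_or, var):
    """Bias-correct one LQ study estimate: subtract the mean bias and
    inflate the variance to reflect uncertainty about the bias itself."""
    return log_or - bias_mean, var + bias_var

# Illustrative LQ study: log OR -0.30 with variance 0.04.
adj_est, adj_var = bias_correct(-0.30, 0.04)
```

Note that the correction both shifts the point estimate and widens its variance; the variance inflation is what downweights LQ studies in the combined analysis.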

Results: The posterior mean of the intervention effect is a weighted average of the effect from the HQ studies, and the bias-corrected effect from the LQ studies. The weight for the LQ studies depends on the variance of the bias. If the amount of bias is known precisely, we can correct the intervention effect estimates from the LQ studies and combine them with those from the HQ studies in the usual way. Thus, including all studies in a meta-analysis corresponds to assuming that the bias is precisely zero. The less precise our assumptions about the magnitude of the bias (the larger the prior variance of the bias) the smaller the weight for the LQ studies. Thus, excluding all LQ studies corresponds to assuming we know nothing about the amount of bias in these studies.
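The weighting described above can be sketched as an inverse-variance combination in which the bias variance inflates the LQ variance; this is a minimal fixed-effect illustration under our own notation, not the authors' implementation:

```python
def combine(hq_est, hq_var, lq_est, lq_var, bias_mean, bias_var):
    """Combine the HQ estimate with the bias-corrected LQ estimate
    by inverse-variance weighting (log OR scale)."""
    lq_adj = lq_est - bias_mean
    lq_adj_var = lq_var + bias_var  # bias uncertainty inflates LQ variance
    w_hq, w_lq = 1.0 / hq_var, 1.0 / lq_adj_var
    est = (w_hq * hq_est + w_lq * lq_adj) / (w_hq + w_lq)
    var = 1.0 / (w_hq + w_lq)
    return est, var

# bias_var = 0: bias known precisely, LQ studies get their usual weight,
#   so including all studies corresponds to assuming the bias is exactly zero.
# bias_var -> infinity: the LQ weight tends to zero, recovering the
#   HQ-only analysis, i.e. excluding all LQ studies.
```

The two limiting cases in the comments correspond to the two conventional analysis choices: pooling everything and restricting to HQ studies.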

Conclusions: Collections of meta-analyses should be used to derive estimates of the amount of bias, and its variability, associated with specific dimensions of trial quality. Based on these, our methods provide evidence-based criteria for deriving the best estimate of the intervention effect using all available evidence, and for sensitivity analyses examining how the conclusions of a meta-analysis are affected by different assumptions about the bias. Examples from a number of published meta-analyses will be presented.

References
1. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273:408-12.
2. Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, et al. Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet. 1998;352:609-13.
3. Sterne JAC, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in meta-epidemiological research. Stat Med. 2002;21:1513-24.