Abstract
Introduction: Several methods can be used for the meta-analysis of data from clinical trials with binary endpoints. The most widely used are those of Mantel and Haenszel, Cochran, Peto, DerSimonian and Laird, the rate difference and the logarithm of the odds ratio. These methods are based either on an additive effect model, where the treatment effect is measured by the risk difference, or on a multiplicative effect model, where the measures used are the relative risk or the odds ratio. In addition, the DerSimonian and Laird method is based on a random-effects model, whereas the others are based on a fixed-effects model. In general, there is no clear evidence for choosing the correct model before performing the meta-analysis.
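To make the two effect scales concrete, here is a minimal Python sketch of the effect measures mentioned above, computed from a single trial's 2x2 table; the function and variable names are illustrative and are not taken from the paper.

import math

def effect_measures(events_t, n_t, events_c, n_c):
    """Illustrative effect measures from one trial's 2x2 table.
    events_t, n_t: events and sample size in the treated group
    events_c, n_c: events and sample size in the control group
    """
    risk_t = events_t / n_t
    risk_c = events_c / n_c
    risk_difference = risk_t - risk_c                                # additive scale
    relative_risk = risk_t / risk_c                                  # multiplicative scale
    odds_ratio = (risk_t / (1 - risk_t)) / (risk_c / (1 - risk_c))   # multiplicative scale
    log_odds_ratio = math.log(odds_ratio)
    return risk_difference, relative_risk, odds_ratio, log_odds_ratio

# Example: 30/100 events under treatment versus 45/100 under control
print(effect_measures(30, 100, 45, 100))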
Objective: To study the consequences of using an unsuitable effect model for meta-analysis.
Methods: Using Monte-Carlo simulation techniques, fictitious samples of trials with a known effect model were generated and analysed using all the methods. Sensitivity analyses were conducted by varying the main trial characteristics (basic risk in the control group, treatment effect size, trial size, and the variability of these parameters).
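As an illustration of the kind of simulation described, the following Python sketch generates fictitious two-arm trials whose true effect model (additive or multiplicative) is known; the parameter names, ranges and default values are assumptions chosen for illustration, not the settings used in the study.

import numpy as np

rng = np.random.default_rng(2024)

def simulate_trials(n_trials, model, effect, base_risk_range=(0.1, 0.5), trial_size=200):
    """Generate fictitious two-arm trials with a known effect model.
    model='additive'       -> control risk shifted by a constant risk difference
    model='multiplicative' -> control risk scaled by a constant relative risk
    """
    trials = []
    for _ in range(n_trials):
        p_control = rng.uniform(*base_risk_range)            # basic risk in the control group
        if model == "additive":
            p_treated = float(np.clip(p_control + effect, 0.01, 0.99))
        else:
            p_treated = float(np.clip(p_control * effect, 0.01, 0.99))
        events_c = rng.binomial(trial_size, p_control)
        events_t = rng.binomial(trial_size, p_treated)
        trials.append((events_t, trial_size, events_c, trial_size))
    return trials

# Example: 10 trials whose true effect is a constant relative risk of 0.8
sample = simulate_trials(10, model="multiplicative", effect=0.8)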
Results: We observed that inadequacy of the effect model chosen for the analysis increased the value of the heterogeneity statistic (Cochran Q statistic) for both the fixed-effects and random-effects models, in particular when the range of the basic risk (observed failure rate in the control group) across trials was broad.
Discussion: In practice, since the true effect model is unknown, all available methods should be applied and the one giving the lowest value of the heterogeneity statistic should be retained. Large differences in the value of the heterogeneity statistic across methods should prompt exploration of the effect model.
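For reference, a sketch of the Cochran Q heterogeneity statistic computed on the log odds ratio scale under an inverse-variance fixed-effect model; this is the standard textbook form and is only assumed, not stated, to be the exact computation used in the study. Applying it to trial samples generated under each effect model, and repeating the calculation on the risk-difference scale, mimics the comparison of heterogeneity across methods recommended above.

import math

def cochran_q_log_or(trials):
    """Cochran Q for the log odds ratio under an inverse-variance
    fixed-effect model (textbook formulation, assumed for illustration).
    trials: list of (events_t, n_t, events_c, n_c) tuples.
    """
    effects, weights = [], []
    for events_t, n_t, events_c, n_c in trials:
        a, b = events_t, n_t - events_t
        c, d = events_c, n_c - events_c
        if 0 in (a, b, c, d):                    # 0.5 continuity correction for empty cells
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        effects.append(math.log((a * d) / (b * c)))
        weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    return q                                     # compare to chi-square with k - 1 degrees of freedom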