Abstract
Background: The achievements of meta-analysis in clinical trial research are impressive. However, its results are not always trustworthy. This has prompted research into the numerous ways in which bias may be introduced, and the development of methods to detect such bias [1]. Schulz et al. [2] introduced an approach to detecting bias in trial results that has come to be known as meta-epidemiology. We examined previous meta-epidemiological studies and found that sample size calculations were not reported. Because no clear method of sample size calculation for meta-epidemiological research has been available, research on this issue is needed. A sample size calculation ensures that a study includes enough meta-analyses to have high power to detect a bias in the results if one is present. This matters because collecting meta-analyses suitable for meta-epidemiological studies, and extracting data from them, is time consuming and expensive.
Objectives: To investigate approaches to sample size calculation for meta-epidemiological studies. Methods: Two meta-epidemiological methods are commonly used to detect bias: one is the Schulz logistic regression method [2]; the other is the weighted mean method [4]. We started with the sample size calculation for a logistic regression model developed by Hsieh et al. [3]. Since the parameter of interest in the Schulz model is an interaction term (treatment by trial characteristic) rather than a simple covariate, the Hsieh et al. sample size formula had to be adapted. An alternative approach, based on the weighted mean method, was to adapt the sample size formula for a t-test. These formulas can be applied using results from previous meta-epidemiological studies, based on fixed- or random-effects methods. Data from previous meta-epidemiological analyses were used to examine the performance of the sample size calculations. Simulations under a broad range of settings were also conducted to assess the adequacy of the sample size calculations in terms of achieved power, while controlling for type I error.
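To illustrate the t-test-based approach, the sketch below applies the standard normal-approximation sample size formula for a two-sample comparison of means, n per group = 2(z_{1-α/2} + z_{1-β})²σ²/δ². This is a hedged illustration of the general technique only, not the authors' exact adapted formula; the choices of δ (difference in mean bias, e.g. on the log odds ratio scale) and σ (between-meta-analysis standard deviation) are hypothetical inputs.

```python
import math
from statistics import NormalDist

def n_meta_analyses(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per group for a two-sample t-test (normal approximation).

    delta: smallest difference in mean effect (e.g. mean log odds ratio
           between adequately and inadequately concealed trials) to detect.
    sigma: standard deviation of the effect across meta-analyses.
    Returns the number of meta-analyses needed in each group.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Hypothetical planning values: detect a bias of 0.1 on the log odds
# ratio scale with SD 0.2, at 5% two-sided alpha and 80% power.
print(n_meta_analyses(delta=0.1, sigma=0.2))  # 63 meta-analyses per group
```

In practice δ and σ would be estimated from previous meta-epidemiological analyses, as the Methods describe, and the adaptation for the Schulz interaction term would modify this basic formula.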
Results: We developed two sample size formulas for the number of meta-analyses needed in a meta-epidemiological study. The two formulas gave similar results in the examples we investigated. Simulations suggest that the sample sizes specified by these formulas provide sufficient power. Conclusions: This study permits calculation of the sample sizes needed for future meta-epidemiological studies, which may be very helpful for planning purposes.
Acknowledgements: This research was supported by a grant from the Canadian Institutes of Health Research (CIHR).
References: 1. Egger M, Ebrahim S, Smith GD. Where now for meta-analysis? Int J Epidemiol. 2002; 31(1):1-5. 2. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995; 273:408-412. 3. Hsieh FY, Bloch DA, Larsen MD. A simple method of sample size calculation for linear and logistic regression. Stat Med. 1998; 17:1623-34. 4. Sampson M, Barrowman NJ, Moher D, Klassen TP, Pham B, Platt R et al. Should meta-analysts search Embase in addition to Medline? J Clin Epidemiol. 2003; 56:943-955.