Abstract
Background: Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We argue that it is also important to probe whether there is extreme between-study homogeneity.
Objectives: To develop a framework for and assess the importance of extreme between-study homogeneity in meta-analysis.
Methods: Extreme between-study homogeneity may be evaluated with the same statistics as large heterogeneity, but using left-sided statistical significance for inference. Tests with asymptotic assumptions are commonly used, but exact tests may be more appropriate for evaluating extreme homogeneity. We present an example of a Monte Carlo simulation test of extreme homogeneity in risk ratios across studies, using the empiric distribution of the summary risk ratio and heterogeneity statistic. We propose a left-sided p = 0.01 threshold for claiming extreme homogeneity, in order to minimize Type I error.
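As a rough illustration of what such a Monte Carlo test might look like, the sketch below assumes inverse-variance pooling of log risk ratios with a 0.5 continuity correction, Cochran's Q as the heterogeneity statistic, and binomial resampling of each study's arms under the common pooled risk ratio; the function names and implementation details are illustrative assumptions and may differ from the procedure actually used.

```python
# Hypothetical sketch of a Monte Carlo test for extreme homogeneity of risk ratios.
# Assumptions (not taken from the paper): inverse-variance fixed-effect pooling of
# log risk ratios, 0.5 continuity correction, binomial resampling under a common RR.
import numpy as np

def log_rr_and_weight(e1, n1, e0, n0):
    """Log risk ratio and inverse-variance weight for one study (0.5 zero-cell correction)."""
    e1, e0 = e1 + 0.5, e0 + 0.5
    n1, n0 = n1 + 1.0, n0 + 1.0
    log_rr = np.log((e1 / n1) / (e0 / n0))
    var = 1.0 / e1 - 1.0 / n1 + 1.0 / e0 - 1.0 / n0
    return log_rr, 1.0 / var

def cochran_q(events1, n1, events0, n0):
    """Fixed-effect pooled log risk ratio and Cochran's Q across studies."""
    stats = [log_rr_and_weight(*s) for s in zip(events1, n1, events0, n0)]
    y = np.array([s[0] for s in stats])
    w = np.array([s[1] for s in stats])
    pooled = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled) ** 2)
    return pooled, q

def left_sided_mc_p(events1, n1, events0, n0, n_sim=10000, seed=0):
    """Left-sided Monte Carlo p-value: Pr(Q_sim <= Q_obs) under a common risk ratio."""
    rng = np.random.default_rng(seed)
    events1, n1 = np.asarray(events1), np.asarray(n1)
    events0, n0 = np.asarray(events0), np.asarray(n0)
    pooled_log_rr, q_obs = cochran_q(events1, n1, events0, n0)
    # Null model: keep each study's observed control risk and apply the common pooled RR
    # to its treated arm, then resample event counts from binomial distributions.
    p0 = events0 / n0
    p1 = np.clip(p0 * np.exp(pooled_log_rr), 0.0, 1.0)
    q_sim = np.empty(n_sim)
    for i in range(n_sim):
        sim_e1 = rng.binomial(n1, p1)
        sim_e0 = rng.binomial(n0, p0)
        _, q_sim[i] = cochran_q(sim_e1, n1, sim_e0, n0)
    return np.mean(q_sim <= q_obs)
```

The left-sided Monte Carlo p-value is the proportion of simulated Q values at or below the observed Q; values near zero indicate that the study results are more similar to one another than sampling error alone would be expected to produce.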
Results: Among 11,803 meta-analyses with binary contrasts from The Cochrane Library, 143 (1.21%) had a left-sided p-value <0.01 for the asymptotic Q statistic and 1004 (8.50%) had a left-sided p-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies included in the meta-analyses. Extreme between-study homogeneity may result from chance (Type I error) or inappropriate statistical inference (asymptotic vs. Monte Carlo) and may depend on the use of a specific effect metric; alternatively, implausibly extreme homogeneity may hint at the presence of correlated data, stratification by strong predictors of outcome, biases, and potential fraud. We present and discuss examples of meta-analyses with extreme between-study homogeneity that cover these diverse possibilities.
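For context, the asymptotic left-sided p-value referred to above is simply the lower tail of a chi-square distribution with k − 1 degrees of freedom (k studies) evaluated at the observed Q; this follows from standard distributional theory rather than anything specific to this paper. A minimal sketch:

```python
# Left-sided asymptotic p-value for Cochran's Q with k studies:
# p_left = P(chi2_{k-1} <= Q_obs); small values suggest less spread
# between study results than expected under within-study sampling error alone.
from scipy.stats import chi2

def left_sided_asymptotic_p(q_obs: float, k: int) -> float:
    return chi2.cdf(q_obs, df=k - 1)

# Example: Q = 1.2 across k = 10 studies gives a left-sided p of roughly 0.001,
# i.e., the studies agree far more closely than chance would predict.
```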
Conclusions: Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.