Trial Sequential Analysis controls random error risks in systematic review meta-analyses

Article type
Authors
Wetterslev J1, Turner R2, Bender R3, Jakobsen JC4, Imberger G5, Gluud C6
1Copenhagen Trial Unit, Rigshospitalet, Copenhagen
2Medical Research Council Biostatistics Unit
3IQWiG
4Cochrane Hepato-Biliary Group
5Department of Anaesthesia, Western Health, Melbourne, Australia
6Cochrane Hepato-Biliary Group
Abstract
Objective: To discuss random errors, required information size calculation (the ‘sample size’ of a meta-analysis), and Trial Sequential Analysis in Cochrane Reviews.
Description: Just as 10 tosses of a fair coin can yield 3 heads and 7 tails, a meta-analysis may show a difference between interventions when none exists (type I error, false positive, alpha error), or no difference when one does exist (type II error, false negative, beta error). Several studies have shown that the traditional criteria used in meta-analyses (95% confidence intervals (CIs) excluding no effect, or P values < 0.05) do not sufficiently control the risks of type I and type II error. Most meta-analyses are underpowered, and in many the accumulating data are tested repeatedly as new trials are added. Sample sizes for clinical trials are calculated to ensure adequately powered results, but systematic reviews rarely consider the analogous required information size. This issue was recently debated in The BMJ (by Roberts et al. and respondents), and more guidance is needed. Trial Sequential Analysis is frequentist analytic software that can estimate the required information size of a meta-analysis; control the risks of random error; declare futility before the required information size is reached; provide adjusted CIs and an estimate of diversity; help in dimensioning future trials; and assess imprecision for GRADE. This workshop will facilitate debate through a series of 5-minute presentations on key issues, each followed by discussion.
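To make the required information size concrete, the sketch below computes a diversity-adjusted required information size for a binary outcome, following the general approach described by Wetterslev and colleagues: a conventional two-group sample size calculation inflated by 1/(1 − D²), where D² is the diversity of the meta-analysis. This is an illustrative sketch only, not the TSA software itself; the function name, the default alpha and beta, and the example values (20% control event proportion, 20% relative risk reduction, D² = 0.25) are assumptions chosen for demonstration.

```python
from statistics import NormalDist  # standard library normal distribution (Python 3.8+)

def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
    """Diversity-adjusted required information size (total participants)
    for a binary outcome. Illustrative sketch of the published approach,
    not the TSA software itself. `diversity` is D^2 in [0, 1)."""
    p_exp = p_control * (1 - rrr)      # event proportion assuming the relative risk reduction
    p_bar = (p_control + p_exp) / 2    # average event proportion across groups
    delta = p_control - p_exp          # absolute risk difference to detect
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_b = NormalDist().inv_cdf(1 - beta)       # type II error (power = 1 - beta)
    # Conventional total sample size for two equal groups (fixed-effect case)
    n_fixed = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    # Inflate for between-trial diversity D^2
    return n_fixed / (1 - diversity)

# Example: 20% control event proportion, 20% relative risk reduction, D^2 = 0.25
ris = required_information_size(0.20, 0.20, diversity=0.25)
```

With these example inputs the diversity adjustment inflates the fixed-effect requirement by a third, which is why meta-analyses with substantial heterogeneity need far more information than a single well-powered trial.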