Abstract
Background: Stopping studies early because of an apparent treatment benefit (truncated studies) may lead to overestimation of the treatment effect and thus a risk of bias. GRADE guidelines recommend sensitivity analyses in which truncated studies are omitted from meta-analyses, to assess whether early stopping has caused overestimation bias. However, the GRADE recommendations do not address the assessment of studies that were subjected to interim monitoring but did not stop early (nontruncated studies). Such studies lead to underestimation, which may balance the overestimation from truncated studies.
Objectives: To investigate how sensitivity analyses of nontruncated studies should be undertaken to adjust treatment effect estimates for the underestimation that results from statistical conditioning on nontruncation.
Methods: Simulation studies generated conditional and unconditional probability distributions of treatment effect estimates for randomised controlled trials (RCTs) that were monitored for early stopping due to benefit, with a maximum number of equally spaced analyses between two and five. Outcomes were assumed to have a normal distribution and a moderate effect size of 0.25. Analyses were based on 100 000 simulations of RCTs with 90% power and a 5% significance level, corresponding to a sample size of approximately 350 per treatment group. For each simulated RCT, early stopping due to benefit occurred if the estimated treatment effect at an interim analysis was sufficiently large (using the O'Brien-Fleming rule) in the direction of benefit. Meta-analyses were then conducted on collections of 4, 12, and 24 simulated studies. In each collection, a proportion of studies (from 25% to 75%) was subjected to interim monitoring for benefit, with up to a maximum of three equally spaced interim analyses. For each scenario, 1000 meta-analyses were performed using each of four meta-analysis strategies: omitting nontruncated studies (crude), restricting to studies with no interim monitoring (restricted), adjusting nontruncated studies (adjusted), and including all studies (all-study), with both fixed-effect and random-effects models.
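The conditioning effect described above can be illustrated with a minimal simulation sketch. This is not the authors' simulation code; it assumes a three-look design (two interim analyses plus a final analysis), the standard two-sided O'Brien-Fleming boundary constant 2.004 for three looks at alpha = 0.05, and the abstract's effect size (0.25) and per-group sample size (350). It compares the mean estimate from truncated studies, from nontruncated studies, and from a fixed-effect inverse-variance pool of all studies:

```python
import numpy as np

rng = np.random.default_rng(0)

delta = 0.25        # true standardized effect (moderate, per the abstract)
n = 350             # approx. per-group sample size (90% power, 5% alpha)
K = 3               # equally spaced analyses: 2 interim looks + 1 final
# Two-sided O'Brien-Fleming z-boundaries for K=3, alpha=0.05;
# 2.004 is the standard OBF constant for three looks (an assumption here).
bounds = 2.004 * np.sqrt(K / np.arange(1, K + 1))   # high early, lower later

n_sims = 20000
trunc, nontrunc, est, wts = [], [], [], []

for _ in range(n_sims):
    treat = rng.normal(delta, 1.0, n)
    ctrl = rng.normal(0.0, 1.0, n)
    for k in range(1, K + 1):
        n_k = k * n // K
        d = treat[:n_k].mean() - ctrl[:n_k].mean()   # effect estimate at look k
        z = d / np.sqrt(2.0 / n_k)
        if k < K and z >= bounds[k - 1]:             # stop early for benefit
            trunc.append(d)
            break
    else:                                            # trial ran to the final look
        nontrunc.append(d)
    est.append(d)
    wts.append(n_k / 2.0)                            # inverse-variance weight

pooled = np.average(est, weights=wts)                # fixed-effect "all-study" pool
print(f"truncated studies:    mean estimate {np.mean(trunc):.3f}")
print(f"nontruncated studies: mean estimate {np.mean(nontrunc):.3f}")
print(f"all-study pooled:     {pooled:.3f}  (true effect {delta})")
```

Truncated studies overestimate the true effect (only large interim estimates trigger stopping), nontruncated studies underestimate it (conditioning on nontruncation removes the largest trajectories), and pooling all studies lands between the two conditional means, consistent with the balancing argument in the Background.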
Results: The Figure illustrates that the crude strategy led to underestimation of treatment effects (red box plots). The other three strategies yielded meta-analysis estimates that were approximately unbiased. The all-study strategy (blue) yielded estimates that were the least variable. The adjusted strategy (green) exhibited less variation than the restricted strategy (yellow).
Conclusions: The primary meta-analysis in a systematic review should involve all studies, including those that stopped early for benefit. If a sensitivity analysis is conducted, treatment effect estimates from nontruncated studies subjected to interim analyses should first be statistically adjusted to ensure the meta-analysis is unbiased. Researchers should report all details required for statistical adjustment of treatment effect estimates when reporting studies that had interim monitoring.
Patient or healthcare consumer involvement: Nil - methodological study.