Comparison of statistical methods used to meta-analyse results from interrupted time series studies: an empirical study

Authors
Korevaar E1, Turner SL1, Forbes AB1, Karahalios A2, Taljaard M3, McKenzie JE1
1Monash University
2University of Melbourne
3Ottawa Hospital Research Institute and University of Ottawa
Abstract
Background:
The interrupted time series (ITS) design is commonly used to evaluate large-scale policy changes or public health interventions when randomisation is infeasible. In ITS studies, measurements are collected at regular intervals before and after an interruption. The pre-interruption period is used to estimate an underlying time trend that, when projected into the post-interruption period, creates a counterfactual for what would have occurred without the interruption. The impact of the interruption can then be quantified using a variety of metrics, such as immediate (level-change) and long-term (slope-change) effects. Several statistical methods are available for the analysis and meta-analysis of ITS studies; however, there has been no empirical evaluation of the impact of using different statistical methods to analyse ITS studies and to meta-analyse their results.
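To illustrate the general idea (this is a minimal sketch, not the authors' code, and the abstract does not specify which ITS estimation methods were used), one common approach to analysing a single ITS is a segmented ordinary least squares regression, in which the level-change and slope-change parameters capture the immediate and long-term effects relative to the projected pre-interruption trend. The series values and interruption point below are invented for illustration.

```python
# Sketch of a segmented regression for a single ITS (illustrative data only).
import numpy as np
import statsmodels.api as sm

y = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.8,   # pre-interruption measurements
              15.1, 15.9, 16.8, 17.9, 18.5, 19.6])  # post-interruption measurements
n_pre = 6                                            # number of pre-interruption points (assumed)

t = np.arange(len(y))                        # time since start of series
post = (t >= n_pre).astype(float)            # indicator for the post-interruption period
t_post = np.where(post == 1, t - n_pre, 0)   # time since the interruption

# Model: y = b0 + b1*t + b2*post + b3*t_post
X = sm.add_constant(np.column_stack([t, post, t_post]))
fit = sm.OLS(y, X).fit()

level_change = fit.params[2]   # immediate effect at the interruption
slope_change = fit.params[3]   # change in trend (long-term effect)
print(level_change, slope_change)
```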
Objectives:
To empirically compare meta-analysis results obtained from different combinations of ITS analysis and meta-analysis methods when applied to real-world ITS data.
Methods:
ITS datasets were sourced from published meta-analyses and reanalysed using two ITS estimation methods. Level- and slope-change effect estimates were calculated and combined using a fixed-effect meta-analysis method and four random-effects meta-analysis methods. We compared the meta-analytic effect estimates, 95% confidence intervals, p-values and estimates of between-study heterogeneity across the combinations of statistical methods.
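As a rough illustration of the combination step (again a sketch, not the authors' code; the abstract does not name the four random-effects methods, so a DerSimonian-Laird estimator is shown here as one widely used example, and the study-level estimates are invented), level-change estimates from several ITS studies can be pooled with inverse-variance weighting.

```python
# Sketch of fixed-effect and DerSimonian-Laird random-effects meta-analysis
# of level-change estimates (illustrative numbers only).
import numpy as np

est = np.array([1.8, 2.4, 0.9, 3.1])   # level-change estimates from each ITS study
se = np.array([0.6, 0.8, 0.5, 1.1])    # their standard errors

# Fixed-effect: inverse-variance weighted average
w = 1 / se**2
fe = np.sum(w * est) / np.sum(w)
fe_se = np.sqrt(1 / np.sum(w))

# DerSimonian-Laird estimate of the between-study variance (tau^2)
k = len(est)
Q = np.sum(w * (est - fe)**2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects: weights incorporate tau^2
w_re = 1 / (se**2 + tau2)
re = np.sum(w_re * est) / np.sum(w_re)
re_se = np.sqrt(1 / np.sum(w_re))
print(fe, fe_se, re, re_se, tau2)
```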
Results:
Of 40 eligible reviews, data from 17 meta-analyses (including 283 ITS studies) were obtained and analysed. We found that the choice of meta-analysis method did not systematically impact the meta-analytic effect estimates, standard errors or between-study variance estimates, irrespective of the ITS analysis method. However, the meta-analytic confidence intervals and p-values were impacted by the method used to construct the meta-analysis confidence interval, and the ITS analysis method used may modify this impact.
Conclusions:
The effect estimates, standard errors and between-study variance estimates were minimally impacted by the choice of ITS analysis and meta-analysis methods. However, confidence intervals and p-values varied depending on the statistical methods used, which may affect the interpretation of a meta-analysis. In conjunction with evidence from numerical simulation, this study provides insight into which methods to use in different scenarios, and may assist researchers undertaking evidence syntheses of public health or policy interventions.
Patient, public and/or healthcare consumer involvement:
No patients/consumers were involved in the design/reporting of this study.