Judging the impact of missing participant continuous data on risk of bias in systematic reviews of randomized trials

Authors
Ebrahim S1, Johnston B2, Akl EA3, Mustafa RA4, Sun X5, Walter SD1, Heels-Ansdell D1, Alonso-Coello P6, Guyatt GH1
1Department of Clinical Epidemiology & Biostatistics, McMaster University, Canada
2Department of Anesthesia and Pain Medicine, The Hospital For Sick Children, Canada
3Department of Internal Medicine, American University of Beirut, Lebanon
4Department of Medicine, University of Missouri-Kansas City, USA
5Center for Clinical Epidemiology and Evidence-Based Medicine, Xinqiao Hospital, China
6Iberoamerican Cochrane Centre, CIBERESP-IIB Sant Pau, Spain
Abstract
Background: We developed an approach to address missing participant data for continuous outcomes in meta-analyses.

Objectives: To assist systematic review authors and guideline panels in judging the impact of missing participant data on risk of bias.

Methods: Our approach involves a complete case analysis complemented by sensitivity analyses applying four increasingly stringent imputation strategies (Table 1). When the minimally important difference (MID) is available, we calculate the proportion of patients who benefit from the treatment. Systematic review authors should test a range of thresholds that guideline panels might consider an important effect; the guideline panel itself should choose the threshold for recommending treatment. If the entire confidence interval for the proportion lies above the threshold for all plausible imputation strategies, a panel should not rate down for risk of bias. If the confidence interval includes the threshold, confidence in the importance of the treatment effect decreases. We applied our approach to a systematic review of respiratory rehabilitation for chronic obstructive pulmonary disease.
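One common way to estimate the proportion of patients benefiting, given a pooled mean effect and standard deviation, is to assume individual improvements are approximately normally distributed; the sketch below illustrates that calculation. This is an illustrative assumption for exposition, not necessarily the estimation method used in the review; the function name and parameters are hypothetical.

```python
from math import erf, sqrt

def proportion_above_mid(mean_effect: float, sd: float, mid: float) -> float:
    """Estimate the proportion of patients whose individual improvement
    exceeds the MID, assuming improvements ~ Normal(mean_effect, sd).
    (Illustrative normality assumption, not the review's exact method.)"""
    z = (mid - mean_effect) / sd
    # P(X > mid) for X ~ Normal(mean_effect, sd), via the error function
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))
```

For example, with a mean improvement equal to the MID the estimated proportion is 50%, and it rises as the mean effect moves further above the MID.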

Results: In the complete case analysis, the proportion of patients who achieved an improvement greater than the MID was 29% (95% CI 21–37%) (Fig. 1). Strategies 1–3 resulted in point estimates ranging from 24% to 18%, with lower confidence limits from 17% to 11% (Fig. 1). Strategy 4 was not considered a plausible scenario. In the complete case analysis, the lower confidence limit suggests that at least 21% of patients will achieve an important improvement. The conclusion would be similar for strategies 1 and 2. For strategy 3, if 11% benefiting would be insufficient to recommend treatment, a panel would rate down the quality of evidence for risk of bias.
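The panel's decision rule described above can be sketched with the lower confidence limits reported in the results; the 15% threshold is a hypothetical choice used only to illustrate how the rule would play out.

```python
def panel_decision(lower_ci_limit: float, threshold: float) -> str:
    """Apply the rating rule: do not rate down only when the entire
    confidence interval (hence its lower limit) exceeds the panel's
    threshold for an important effect."""
    if lower_ci_limit > threshold:
        return "do not rate down"
    return "rate down for risk of bias"

# Lower 95% CI limits from the example (complete case and strategy 3);
# the 0.15 threshold is hypothetical.
complete_case = panel_decision(0.21, 0.15)  # lower limit above threshold
strategy_3 = panel_decision(0.11, 0.15)     # lower limit below threshold
```

Under this hypothetical threshold the complete case analysis would not be rated down, whereas strategy 3 would be, mirroring the reasoning in the results.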

Conclusions: We provide a useful approach for judging the impact of missing participant data for continuous outcomes on confidence in estimates of treatment effects.