Abstract
Background: No methods directly address the impact that missing participant data for continuous outcomes has on risk of bias in systematic reviews.
Objectives: To develop approaches for addressing missing participant data for continuous outcomes in systematic reviews.
Methods: We conducted a consultative, iterative process to develop a framework for handling missing participant data for continuous outcomes. We applied various assumptions to a systematic review evaluating cognitive behavioural therapy in patients with depression receiving disability benefits (Fig. 1).
Results: The primary studies used the Beck Depression Inventory (scale of 0–63, with higher scores representing worse outcomes) (Fig. 1). We used four sources of data for imputing the mean and standard deviation (SD) for participants with missing data: [A] the control arm of the same trial (e.g. DeGraaf: 19.69 (±9.62)), [B] the worst outcome among intervention arms of all included trials (DeGraaf: 17.87 (±10.72)), [C] the best outcome among control arms of all included trials (Misri: 5.25 (±4.98)), and [D] the worst outcome among control arms of all included trials (Naeem: 28.5 (±8.7)). Worst and best outcomes were based on means, not SDs. We developed three approaches that used different combinations of these four sources for imputation (Table 1). Analysis excluding participants with missing data (range 4–40%) showed a significant effect (p = 0.001). Results were robust to the assumptions made in approach 1 (p = 0.001) but not to those in approach 2 (p = 0.22), and approach 3 changed the direction of effect (p = 0.68) (Fig. 1).
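The abstract does not provide code, but the arithmetic behind the imputation can be sketched: after assigning participants with missing data the mean and SD from one of sources A–D, the observed and imputed subgroups are combined into a single arm mean and SD using the standard two-subgroup pooling formula. The Python snippet below is a minimal illustration of that step; the observed sample sizes and statistics are hypothetical, and only the source-D values (28.5, SD 8.7) come from the abstract.

```python
import math

def combine_observed_and_imputed(n_obs, mean_obs, sd_obs, n_miss, mean_imp, sd_imp):
    """Combine observed participants with participants whose missing outcomes
    were imputed from one of the sources (A-D), returning the arm's overall
    mean and SD via the standard two-subgroup pooling formulas."""
    n_total = n_obs + n_miss
    mean_total = (n_obs * mean_obs + n_miss * mean_imp) / n_total
    # Pooled variance: within-subgroup variances plus the spread of the
    # subgroup means around the combined mean.
    var_total = (
        (n_obs - 1) * sd_obs ** 2
        + (n_miss - 1) * sd_imp ** 2
        + n_obs * (mean_obs - mean_total) ** 2
        + n_miss * (mean_imp - mean_total) ** 2
    ) / (n_total - 1)
    return mean_total, math.sqrt(var_total)

# Hypothetical example: an intervention arm with 40 observed participants
# (mean 12.0, SD 8.0) and 10 participants with missing data imputed from
# source D, the worst control-arm outcome among included trials (28.5, SD 8.7).
mean_arm, sd_arm = combine_observed_and_imputed(40, 12.0, 8.0, 10, 28.5, 8.7)
print(f"Arm mean after imputation = {mean_arm:.2f}, SD = {sd_arm:.2f}")
```

Repeating this calculation for each arm under each of the three approaches, then re-running the meta-analysis, yields the sensitivity analyses reported above.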
Conclusions: Conducting sensitivity analyses under plausible assumptions about missing participant data provides insight into the robustness of the results and helps establish the extent to which missing data may reduce confidence in estimates of effect. In this case, the results are vulnerable to missing participant data, suggesting that confidence in the estimates presented in summary of findings tables should be rated down for risk of bias.