Reducing Bias and Increasing Power by Imputing Missing Standard Deviations for Continuous Outcomes in Systematic Reviews

Authors
Wolf FM, Guevara JP
Abstract
Objectives: To examine 1) methods for imputing standard deviations for continuous outcomes that are not reported in primary studies, 2) the impact of including effect sizes with imputed standard deviations in the pooled results of meta-analyses and systematic reviews, and 3) the potential biasing effects of including or excluding these estimated effects in reviews.

Methods: Investigators sometimes fail to report estimates of dispersion in primary studies. Often only means, F- or t-statistics, or p-values are provided for continuous outcome measures. Unless estimates of standard deviations (SDs) can be imputed, effect sizes cannot be estimated and these findings cannot be pooled and included in meta-analyses. Sometimes findings are reported only as significant (p < .05) or non-significant, or not at all, even though the outcomes are described in the methods sections. A reasonable assumption is that such unreported results were statistically non-significant, or they would have been reported. We suggest the following approaches when standard deviations are not reported in primary studies (a brief code sketch follows the list):

* If either standard errors of the mean or confidence intervals are provided, use standard statistical formulas to compute SDs.
* If t-statistics are provided, impute estimates of pooled SDs from the general formula for this statistic.
* If p-values, but not t-statistics, are reported, use exact p-values to identify the corresponding t-statistics (with appropriate df) and proceed as in approach 2.
* If only significance or non-significance is reported, use t-statistics (with appropriate df) corresponding to p = .05 (for results reported as p < .05) or p = .50 (for non-significant results) and proceed as in approach 2.
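
A minimal sketch of the four approaches, assuming two independent groups with reported means and sample sizes; the function names and the use of scipy are our illustration, not part of the review's methods:

```python
# Minimal sketch of the four imputation approaches, assuming two independent
# groups with reported means (m1, m2) and sample sizes (n1, n2). The function
# names and the use of scipy are illustrative, not part of the review itself.
import math
from scipy import stats

def sd_from_se(se, n):
    """Approach 1a: if the standard error of the mean is reported, SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, level=0.95):
    """Approach 1b: recover the SD of a single-group mean from its confidence interval."""
    z = stats.norm.ppf(1 - (1 - level) / 2)  # ~1.96 for a 95% interval
    return math.sqrt(n) * (upper - lower) / (2 * z)

def pooled_sd_from_t(t, m1, m2, n1, n2):
    """Approach 2: invert the independent-samples t-statistic
    t = (m1 - m2) / (SD_pooled * sqrt(1/n1 + 1/n2))."""
    return abs(m1 - m2) / (abs(t) * math.sqrt(1 / n1 + 1 / n2))

def pooled_sd_from_p(p, m1, m2, n1, n2):
    """Approaches 3 and 4: convert a two-sided p-value (exact, or assumed as
    p = .05 / p = .50 when only significance is reported) to a t-statistic
    with df = n1 + n2 - 2, then proceed as in approach 2."""
    df = n1 + n2 - 2
    t = stats.t.ppf(1 - p / 2, df)
    return pooled_sd_from_t(t, m1, m2, n1, n2)
```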

We applied these approaches to a Cochrane systematic review of the effects of asthma self-management education on patients' physiological, functional status, and health care utilization outcomes, and assessed the impact of imputing the missing standard deviations.

Results: Of 30 controlled trials eligible for inclusion in the review, SDs for outcomes were missing in 11 trials (37%). When we imputed estimates for these missing SDs, we were able to include additional data for 8 of the 16 primary outcomes in the review (50%), thereby increasing the sample size (and statistical power) and the precision of the effect estimates (narrowing confidence intervals for standardized or raw weighted mean differences, d).
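
For context, a minimal sketch (not the review's analysis code) of how each additional study with an imputed SD contributes an effect size and inverse-variance weight to a fixed-effect pooled estimate; adding studies increases the total weight, which is what narrows the pooled confidence interval:

```python
# Illustrative sketch, not the review's analysis code: a fixed-effect,
# inverse-variance pooled standardized mean difference. Each study whose SD
# can be imputed adds a weight 1/var_d, which shrinks the pooled CI.
import math

def smd_and_variance(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d and its approximate sampling variance."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

def fixed_effect_pool(studies):
    """Pool (d, var_d) pairs with inverse-variance weights; return estimate and 95% CI."""
    weights = [1 / v for _, v in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```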

Conclusions: Systematic bias can occur if study results for continuous outcomes are not included in pooled analyses. Standard deviations were unreported in over one-third of the primary studies included in the Cochrane systematic review that we examined in detail, and these missing data pertained to one-half of the primary outcomes. Excluding individual study effect sizes typically leads to underestimation of pooled effect sizes when the excluded findings are statistically significant, and to overestimation when they are not significant. In both instances, including these outcomes increases statistical power and the precision of effect estimates. It is preferable to impute values for missing SDs for continuous outcomes in primary studies, so that effect sizes can be estimated and pooled in reviews, rather than to exclude results because of missing values. When imputing missing values, it is important to describe clearly the methods used.