Abstract
Objectives:
To examine 1) methods for imputing missing standard deviations for continuous outcomes that are not included in primary studies, 2) the impact of including effect sizes with imputed standard deviations in pooled results of meta-analyses and systematic reviews, and 3) potential biasing effects of including or excluding these estimated effects in reviews.
Methods:
Investigators sometimes fail to report estimates of dispersion in primary studies. Often only means, F- or t-statistics, or p-values are provided for continuous outcome measures. Unless estimates of standard deviations (SDs) can be imputed, effect sizes cannot be estimated and these findings cannot be pooled and included in meta-analyses. Sometimes findings are reported only as significant (p < .05) or non-significant, or not reported at all even though the outcomes are described in the methods sections. A reasonable assumption is that such unreported results were probably statistically non-significant or they would have been reported. We suggest taking the following approaches when standard deviations are not reported in primary studies:
* If either standard errors of the mean or confidence intervals are provided, use standard statistical formulas to compute SDs.
* If t-statistics are provided, impute estimates of pooled SDs from the generalized formula for this statistic.
* If p-values, but not t-statistics, are reported, use the exact p-values to identify the corresponding t-statistics (with the appropriate df) and proceed as in approach 2.
* If only significance or non-significance is reported, use t-statistics (with the appropriate df) corresponding to p = .05 (for results reported as p < .05) or p = .50 (for non-significant results) and proceed as in approach 2.
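The approaches above can be sketched in code. The following is an illustrative Python implementation (not the authors' own software); the function names are hypothetical, and it assumes SciPy for the t-distribution quantile function. The SE and CI conversions use the standard formulas SD = SE * sqrt(n) and SE = (upper - lower) / (2 * t_crit); the t-statistic conversion rearranges the two-sample t formula with a pooled SD.

```python
import math
from scipy import stats

def sd_from_se(se, n):
    """Approach 1a: recover the SD from the standard error of the mean."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, level=0.95):
    """Approach 1b: recover the SD from a confidence interval for a single mean."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    se = (upper - lower) / (2 * t_crit)
    return se * math.sqrt(n)

def pooled_sd_from_t(mean1, mean2, n1, n2, t_stat):
    """Approach 2: rearrange t = (m1 - m2) / (SD_pooled * sqrt(1/n1 + 1/n2))
    to solve for the pooled SD."""
    return abs(mean1 - mean2) / (abs(t_stat) * math.sqrt(1 / n1 + 1 / n2))

def t_from_p(p, n1, n2):
    """Approaches 3 and 4: recover |t| from a two-sided p-value with
    df = n1 + n2 - 2. For results reported only as significant, pass
    p = 0.05; for non-significant results, pass p = 0.50 (conservative)."""
    return stats.t.ppf(1 - p / 2, df=n1 + n2 - 2)
```

For example, a result reported only as "p < .05" in a trial with two groups of 20 would be imputed via `t_from_p(0.05, 20, 20)`, and the resulting t-statistic fed to `pooled_sd_from_t` along with the reported group means.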
We used the above approaches to examine the impact of imputing missing standard deviations in a Cochrane systematic review of the effects of asthma self-management education on patients' physiological outcomes, functional status, and health care utilization.