Abstract
Background: The evidence-based practice movement and the need to summarize data from single-subject experimental designs (SSEDs) within meta-analyses have prompted the development of techniques for calculating SSED effect sizes (ES). The two most promising approaches appear to be the "family of non-overlap metrics" and the application of hierarchical linear modeling (HLM) (Parker, Hagan-Burke, & Vannest, 2007; Shadish & Rindskopf, 2007). Non-overlap metrics such as Non-overlap of All Pairs (NAP; Parker & Vannest, in press) and Percentage of Non-overlapping Data (PND; Scruggs, Mastropieri, & Casto, 1987) use the amount of non-overlapping data in an SSED as an indicator of performance differences: the extent to which data in the baseline and intervention phases do not overlap is an accepted indicator of the magnitude of the treatment effect. Because these approaches are non-parametric, they are unaffected by assumptions of normal distribution, equal variance, and serial independence (assumptions that SSED data commonly violate). Their disadvantage, on the other hand, is that they cannot describe trend or variability in the data and are insensitive to the magnitude of mean level shift. HLM procedures allow a more fine-grained analysis of effects on level and slope, as well as a more accurate analysis of the overall treatment effect that takes inter- and intra-subject variability into account.

Objectives: This presentation will demonstrate how to combine HLM and non-regression ES procedures within a meta-analysis of SSED data.

Methods: A sample data set of 15 published SSED studies comprising 42 participants was taken from a recent meta-analysis of intervention research in autism spectrum disorders (Wendt, 2009). The interventions applied graphic symbols to increase communicative development. HLM procedures described by Van den Noortgate and Onghena (2003, 2007) were applied to combine data from the individual cases.
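As a rough illustration of the two non-overlap metrics described above (the function names are my own, and the sketch assumes higher scores indicate improvement), PND and NAP can be computed directly from the baseline- and intervention-phase data points:

```python
def pnd(baseline, intervention):
    # Percentage of Non-overlapping Data: share of intervention points
    # that exceed the single highest baseline point, as a percentage.
    ceiling = max(baseline)
    return 100.0 * sum(y > ceiling for y in intervention) / len(intervention)


def nap(baseline, intervention):
    # Non-overlap of All Pairs: compare every baseline point with every
    # intervention point; a pair counts 1 if the intervention value is
    # higher, 0.5 for a tie, and 0 otherwise.
    pairs = [(a, b) for a in baseline for b in intervention]
    score = sum(1.0 if b > a else (0.5 if b == a else 0.0) for a, b in pairs)
    return score / len(pairs)
```

The all-pairs construction is what makes NAP sensitive to overlap throughout both phases, whereas PND depends entirely on the single most extreme baseline point, so one outlier in baseline can drive PND to zero while NAP remains high.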
The level-1 model measures the within-subject effect of the change from baseline to treatment phases. The level-2 model explains why some subjects show more change than others and assesses higher-order mean effects. The intervention effect for each participant was estimated using empirical Bayes (EB) techniques (Morris, 1983). To assess data overlap between phases as a supplemental measure of treatment effect, the non-regression NAP and PND metrics were calculated as well.

Results: Although the HLM procedures yielded a fine-grained analysis of the overall effect of the intervention phases within the SSEDs, they could not be applied to the generalization and maintenance phases because too few cases reported these data. In addition, cases that used slightly different outcome measures could not be included in the HLM analysis. These instances were more accurately described by the NAP and PND metrics.

Conclusions: For more heterogeneous data sets, it is recommended to supplement an HLM synthesis of SSEDs with non-regression metrics. With only a small number of cases available, it is difficult to obtain reliable estimates of population characteristics such as the mean effect and its variation over cases. EB estimates of individual participant effects are informative only if there are enough cases to combine; they tend to be biased (shrunken) toward the overall estimate and are not directly comparable to NAP and PND.
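The two-level structure described in the Methods can be sketched in a simplified form (without trend terms; the notation here is illustrative rather than taken verbatim from Van den Noortgate and Onghena):

```latex
% Level 1 (within case j): D_{ij} = 0 in baseline, 1 in intervention
y_{ij} = \beta_{0j} + \beta_{1j} D_{ij} + e_{ij},
    \qquad e_{ij} \sim N(0, \sigma_e^2)

% Level 2 (between cases): case-specific level and treatment effect
% vary around overall means
\beta_{0j} = \theta_{00} + u_{0j}, \qquad
\beta_{1j} = \theta_{10} + u_{1j}
```

Here \(\theta_{10}\) is the overall treatment effect across cases, and the EB estimate of each case-specific effect \(\beta_{1j}\) is shrunken toward \(\theta_{10}\), which is why, as noted in the Conclusions, those estimates are informative only when enough cases are combined.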