Abstract
Background:
While the RCT is widely regarded as the design of choice for evaluating the effectiveness of interventions in healthcare, within the wider area of public policy there has been considerable debate about the suitability of experimental evaluation methods. Studies, largely from clinical areas, have identified detailed design features of rigorous RCTs that reduce systematic bias in estimating effect sizes, but this evidence base is lacking for public policy evaluation. This study therefore aims to investigate the extent to which methodology (particularly study design) influences a study's findings, by calculating the effect sizes resulting from sets of 'comparable policy evaluations' (i.e. evaluations of very similar policies) conducted with randomised and non-randomised study designs.
Objectives:
To examine whether RCTs of policy interventions produce significantly different results from other study designs, or whether any heterogeneity found can be explained by other factors.
Methods:
Sets of comparable policy evaluations were included if they were sourced from systematic reviews completed or published between 1999 and 2004, addressed policy interventions, and permitted comparison within the source review between RCTs and other study designs.
Within each review, differences between randomised and non-randomised designs were explored. To allow for unexplained heterogeneity between studies, as well as the known uncertainty in the estimated effect sizes (measured by their standard errors), random-effects meta-regression techniques were used.
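For illustration only, the sketch below shows one way such a random-effects meta-regression could be set up: study effect sizes are regressed on an indicator for randomised versus non-randomised design, with a between-study variance added to each study's sampling variance. The data, function name and method-of-moments estimator of the between-study variance are assumptions made for this example, not the procedures reported in the study.

```python
import numpy as np

def random_effects_meta_regression(effects, std_errs, is_rct):
    """Illustrative random-effects meta-regression of effect sizes on study design.

    effects  : estimated effect size for each study
    std_errs : standard error of each effect size
    is_rct   : 1 for a randomised design, 0 for a non-randomised design
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(std_errs, dtype=float) ** 2          # within-study variances
    X = np.column_stack([np.ones_like(y), np.asarray(is_rct, dtype=float)])

    # Step 1: inverse-variance (fixed-effect) fit to measure residual heterogeneity Q
    w = 1.0 / v
    W = np.diag(w)
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta_fe
    Q = float(resid @ (w * resid))

    # Step 2: method-of-moments estimate of the between-study variance tau^2
    k, p = X.shape
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (Q - (k - p)) / np.trace(P))

    # Step 3: refit with combined weights 1 / (v_i + tau^2)
    w_re = 1.0 / (v + tau2)
    W_re = np.diag(w_re)
    cov = np.linalg.inv(X.T @ W_re @ X)
    beta_re = cov @ (X.T @ W_re @ y)
    se = np.sqrt(np.diag(cov))
    return beta_re, se, tau2

# Hypothetical data: six evaluations of a comparable policy
effects  = [0.30, 0.25, 0.40, 0.10, 0.15, 0.05]
std_errs = [0.10, 0.12, 0.15, 0.08, 0.09, 0.11]
is_rct   = [1, 1, 1, 0, 0, 0]

beta, se, tau2 = random_effects_meta_regression(effects, std_errs, is_rct)
print("intercept and design coefficient:", beta)
print("standard errors:", se, "tau^2:", tau2)
```

In this sketch the coefficient on the design indicator estimates the average difference in effect size between randomised and non-randomised evaluations within a set, while tau^2 captures heterogeneity not explained by the studies' standard errors.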