Abstract
Background: While the RCT is widely regarded as the design of choice for evaluating the effectiveness of interventions in healthcare, within the wider area of public policy there has been considerable debate about the suitability of experimental evaluation methods. Studies, largely from clinical areas, have identified detailed design features of rigorous RCTs that reduce systematic bias in estimating effect sizes, but this evidence base is lacking for public policy evaluation. This study therefore aims to investigate the extent to which methodology (particularly study design) influences a study's findings, by calculating the effect sizes resulting from 'comparable policy evaluations' (i.e. evaluations of very similar policies) drawn from sets of studies with randomised and non-randomised designs.
Objectives: To examine whether RCTs of policy interventions produce significantly different results when compared with other study designs or whether heterogeneity, if found, can be explained by other factors.
Methods: Sets of comparable policy evaluations meeting the following criteria were included:
Sourced from systematic reviews completed or published between 1999 and 2004
Addressing policy interventions
Permitting comparison between RCTs and other study designs within the source review
Within each review, differences between randomised and non-randomised designs were explored. To allow for unexplained heterogeneity between studies as well as the known uncertainty in estimated effect sizes (measured by their standard errors), random-effects meta-regression techniques were used.
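As a rough sketch of the model underlying such an analysis (a standard random-effects meta-regression; the notation below is illustrative rather than taken from the study), each estimated effect size is modelled as a function of study-level covariates plus a between-study heterogeneity term and within-study sampling error:

\[
\hat{\theta}_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i + \varepsilon_i,
\qquad u_i \sim N(0, \tau^2),
\qquad \varepsilon_i \sim N(0, s_i^2),
\]

where \(\hat{\theta}_i\) is the effect size estimated by study \(i\), \(s_i\) its reported standard error, \(\mathbf{x}_i\) a vector of explanatory factors including an indicator for randomised versus non-randomised design, \(\boldsymbol{\beta}\) the regression coefficients, and \(\tau^2\) the residual between-study heterogeneity.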
Results: More than 200 studies were included in the analysis across a range of policy areas: sexual health, workplace health promotion, peer-delivered health promotion, and the promotion of mental health, physical activity and healthy eating. Policy interventions were delivered mainly within institutions and communities, and more rarely at regional or national levels. They were predominantly the provision of health promotion services and, occasionally, included environmental modification or regulation/legislation. Findings from 400 effect sizes available for the meta-regression will be presented.
Conclusion: The effect sizes of interventions are potentially influenced by a range of confounders associated with the design of the interventions, the providers of the interventions, and the design and quality of the evaluations.