Abstract
Background: Randomised and non-randomised studies of the same intervention may produce different impact estimates, owing both to the methods used to identify causal effects and to the samples included when estimating the treatment effect. Researchers in the social sciences and public health have conducted a large number of internal replication studies to test the validity of non-randomised approaches. These studies compare randomised treatment estimates with estimates from non-randomised comparisons.
Objectives: To compare impact quantities estimated in randomised field trials and non-randomised replication studies, and quantify differences according to approaches used.
Methods: We conducted a systematic review of non-experimental internal replication studies of randomised field trials in the social sciences and public health. We assess bias and explore correlations between effect sizes and study characteristics.
Results: Surveys of internal replication studies are already available in labour economics (1), psychology (2), international development (3), and education (Wong et al., 2016). We update these studies using systematic methods of data collection and meta-analysis to provide new evidence on impact estimates from non-experimental approaches.
Conclusions: The study appraises common sources of bias in non-randomised studies in social sciences and public health, and attempts to quantify deviations from unbiased treatment effects arising from different methodological sources.
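The quantification described above — treating the within-study difference between the non-experimental and randomised estimates as a bias estimate, then pooling across replication studies — can be sketched as below. This is a minimal illustration only: the protocol does not specify the pooling model, so the DerSimonian–Laird random-effects estimator, the independence assumption for the two arms' standard errors, and the function name are all assumptions.

```python
import numpy as np

def pooled_bias(rct_effects, rct_se, nonexp_effects, nonexp_se):
    """Pool within-study bias estimates (non-experimental minus randomised
    effect size) with a DerSimonian-Laird random-effects model.

    Assumes the two estimates within a study are independent, so the
    variance of their difference is the sum of the squared standard errors.
    Returns the pooled bias and its standard error.
    """
    d = np.asarray(nonexp_effects, dtype=float) - np.asarray(rct_effects, dtype=float)
    v = np.asarray(nonexp_se, dtype=float) ** 2 + np.asarray(rct_se, dtype=float) ** 2
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * d) / np.sum(w)             # fixed-effect pooled bias
    q = np.sum(w * (d - fixed) ** 2)              # Cochran's Q heterogeneity statistic
    k = d.size
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance, truncated at 0
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se
```

A pooled estimate near zero with a narrow confidence interval would suggest the non-experimental design replicates the randomised benchmark well; a systematic deviation quantifies the bias the review sets out to appraise.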
References
1) Glazerman, 2004.
2) Cook et al., 2008.
3) Hansen et al., 2011.