Abstract
"Background
This research addresses the disparity in how failure is handled between the applied social sciences and domains such as medicine and engineering, where rigorous failure analysis is standard practice. Publication bias towards success (i.e., positive findings), well documented in the evaluation literature, constrains the ability of policymakers and managers to learn from and address past mistakes.
Objectives
We aim to offer a taxonomy of purported reasons for failure, incidence estimates of those causes, and augmented logic models, to help fill, in a structured manner, the evidence gap on failure analysis in the social sector.
Methods
We tackle this gap through two complementary means. The first is to better exploit systematic reviews and meta-analyses for learning about failures to detect expected effects in randomized trials. This empirical base includes trial reports in which the authors propose, post hoc, reasons for the failure to detect effects. The second is to produce a practical framework for anticipating failure at the level of program design. This theoretical base includes theories of change about a priori risks, as represented, for instance, in logic models with counterfactuals.
Results and Conclusions
Our systematic compilation of evidence on failure is expected, on the one hand, to support researchers in improving the robustness of evaluation designs, thereby reducing the chances of inconclusive findings, and in opening the so-called “evaluation black box,” thereby increasing transparency in the evaluation process. On the other hand, it should enable policymakers and public managers to reduce the risk of avoidable failures, counteract threats to program success, and, ultimately, contribute to a brighter future for all.