ICT4D: Does the evidence match the hype?

Authors
Brown AN¹, Skelly HS¹
¹FHI 360
Abstract
Background: Digital and data solutions are increasingly promoted as ways to enhance the benefits of development programs, yet their achievements are often reported in terms of uptake or publicity rather than outcomes. It is important to know whether an evidence base exists that demonstrates attributable outcomes.

Objectives: In this study, we explore the breadth and depth of the evidence from impact evaluation studies of digital and data interventions. We identify clusters and gaps in the evidence and synthesize the evidence within those clusters.

Methods: We use the technology subset of studies collected under a systematic search and screening protocol covering science, technology, innovation, and partnerships. The dataset is restricted to studies that use counterfactual methodologies to estimate effect sizes. We rate each included study with a quality rating tool and then catalogue the studies into intervention categories, which group the evidence according to theories of change. Within each group of studies, we assess the breadth and depth of the evidence in terms of geographic coverage, scale of the evaluated programs, duration of implementation, and other characteristics. Where sufficient homogeneity exists, we conduct synthesis.

Results: We find a large cluster of studies on mHealth interventions, although the interventions within this broad category are heterogeneous. Our synthesis suggests that several types of mHealth interventions are generally effective. A significant share of the evidence base consists of studies of short or small pilot programs that provide no evidence of effectiveness at scale. Additional results are to be determined, as the research is in progress.

Conclusions: There is a growing base of evidence on digital and data interventions for development, and some intervention categories show sufficient quality and consistency across studies to support synthesis. More research is needed to demonstrate the effectiveness of these programs at scale and over time.