Abstract
Background: Traditionally, evaluations were commissioned mainly to address the Organisation for Economic Co-operation and Development (OECD) Development Assistance Committee (DAC) evaluation criteria of relevance, effectiveness, efficiency, impact and sustainability. With a growing focus on continuous learning, adaptation and improvement, organisations are increasingly incorporating a strong learning element within evaluations.
Objectives: To foster the use of evaluations as programme-improvement tools.
Methods: At Provide and Equip (P&E), a consultancy firm based in Uganda, the evaluations that served as programme-improvement tools were characterised by the following: stand-alone learning research questions; a review of the theory of change, with improvements suggested where necessary; data-collection methodology and tools with elaborate learning questions; approaching communities as learners and listeners rather than as experts; documentation of life-changing stories and the interventions behind them; establishment of both intended and unintended results and the reasons for them; and an evaluation report with a dedicated section on learning.
Results: Some evaluations served as practical learning tools rather than ends in themselves, fostering shared learning and programme improvement more than others did.
Conclusions: For evaluations to serve as change agents for learning and performance improvement, the learning element has to be clearly embedded in the design, methodology, and presentation of findings and recommendations.
Recommendation: Evaluation designs, methodologies, tools and report outlines should be assessed against criteria that determine whether they meet the learning agenda before they are implemented, ideally at the inception-report phase.