A Study of the Family Care First in Cambodia Developmental Evaluation (Final Version) 


This report is made possible by the support of the American People through the United States Agency for International Development (USAID). The contents of this report are the sole responsibility of the DEPA-MERL consortium and do not necessarily reflect the views of USAID or the United States Government.

The Developmental Evaluation Pilot Activity (DEPA-MERL) consortium conducted a developmental evaluation with Family Care First in Cambodia (FCF) from November 2016 to March 2018. FCF aims to increase the number of children living in safe, nurturing family-based care. The DEPA-MERL consortium, funded by USAID's MERLIN program, consists of:

- Search for Common Ground (Search), which implemented the developmental evaluation with FCF, including hiring, managing, and supporting the Developmental Evaluator;
- Social Impact, which served as the prime awardee on the consortium and provided support to Search on the FCF developmental evaluation; and
- the William Davidson Institute at the University of Michigan (WDI), which studied the effectiveness of this approach in FCF.

Developmental evaluation supports the continuous adaptation of programs by providing evaluative insight and timely feedback to inform ongoing adaptation in complex, dynamic situations. This is done by embedding an evaluator into the program for the duration of the evaluation.

The Developmental Evaluator, in close collaboration with other stakeholders, uses a variety of monitoring and evaluation methods and tools to collect and share data. The Developmental Evaluator enables real-time, evidence-based reflection and decision-making consistent with USAID's Collaborating, Learning, and Adapting approach. See Figure 1 for differences between traditional and developmental evaluation.

WDI's role in the DEPA-MERL consortium is to facilitate learning on the implementation of the developmental evaluation approach in USAID programming and contexts. To accomplish this objective, WDI studied the FCF developmental evaluation during all 15 months of its implementation. Through the data collected, the DEPA-MERL consortium aims to build on existing literature focused on developmental evaluation in practice.i Readers of this report, including USAID stakeholders, other organizations implementing developmental evaluation, and Developmental Evaluators themselves, can use the data and recommendations to improve the effectiveness of the approach. Additionally, the findings from this study will be compared to findings from other developmental evaluation pilots conducted by DEPA-MERL for a cross-case comparison report, forthcoming in September 2019.

Figure 1: Developmental evaluation differs from traditional evaluation in that it supports the continuous adaptation of programs, whereas traditional evaluation, for the purposes of this study, is typically formative or summative in nature

Traditional Evaluation | Developmental Evaluation
Render definitive judgments of success or failure. | Provide feedback, generate learnings, support changes in direction.
Measure success against predetermined goals. | Develop new measures and monitoring mechanisms as goals emerge and evolve.
Position the evaluator outside to assure independence and objectivity. | Position evaluation as an internal team function, integrated into action and ongoing interpretive processes.
Design the evaluation based on linear cause-and-effect logic models. | Design the evaluation to capture system dynamics, interdependencies, and emergent interconnections.
Aim to produce generalizable findings across time and space. | Aim to produce context-specific understandings that inform ongoing innovation.
Accountability focused on and directed to external authorities, stakeholders, and funders. | Accountability centered on the innovators' deep sense of fundamental values and commitment.
Accountability to control and locate responsibility. | Learning to respond to lack of control and stay in touch with what's unfolding, and thereby respond strategically.
Evaluator determines the design based on the evaluator's perspective about what is important; the evaluator controls the evaluation. | Evaluator collaborates with those engaged in the change effort to design an evaluation process that matches philosophically with the organization's principles and objectives.
Evaluation results in an opinion of success or failure, which creates anxiety in those evaluated. | Evaluation supports ongoing learning.