Reflect and Act on Monitoring Feedback

FEATURED

USAID/Liberia hosts Government of Liberia and USAID Portfolio Review

In January 2010, USAID/Liberia broke new ground when it transformed its Portfolio Review process into a transparent country partnership event.

“Portfolio review to be adopted as Government of Liberia model for all future reviews of international aid.”

ALSO SEE

Aid for Trade: Where next in Monitoring and Evaluation?
For Aid for Trade programs, the WTO and OECD track both donor and partner country performance monitoring activities, as well as joint monitoring efforts, which are still relatively new.

By involving beneficiaries, partners, stakeholders, and other USAID and USG entities in performance management steps, including collecting, interpreting, and sharing performance monitoring information, USAID Missions can strengthen the use of that information. Regularly sharing performance data with partners can foster learning and adaptive management at every level.

USAID has a well-established Portfolio Review process for bringing together monitoring data and other pertinent information to determine whether progress toward a Development Objective (DO) is on track, or whether changes in country conditions, faulty assumptions, or other factors have put success at risk. This process, which Missions conduct at least annually and often semiannually, is broadly outlined in ADS 201.

Portfolio Reviews are a key link between monitoring and learning in the USAID Program Cycle. In Liberia and Serbia, to highlight just two Missions, USAID's portfolio is now reviewed as part of a joint monitoring and learning activity that fosters country partnerships in line with current USAID guidance. As announced in 2013, USAID is planning to develop a standard Mission Order on conducting Portfolio Reviews.

As more USAID projects involve Government-to-Government arrangements, perhaps in tandem with project activities managed under USAID contracts, cooperative agreements, or grants, additional opportunities for joint monitoring, transparent information sharing, and learning are likely to arise. This should give Missions wide latitude to extend Mission-level learning approaches, such as USAID/Uganda's Collaborating, Learning and Adapting approach, to the project and activity levels. As the chart below suggests, learning and action build on monitoring (and evaluation) through a progression of steps and different types of interactive and intellectual activities that improve performance.

The monitoring » learning » action progression

Monitor and Share Data (office-to-office)
  • Collect
  • Aggregate
  • Distribute

Interpret in Context (workshop)
  • Review
  • Discuss
  • Question

Reflect and Re-Frame (cell & emails)
  • Ideas
  • Alternatives
  • Implications

Decide and Act (meeting)
  • Options
  • Decisions
  • Actions
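Read as a workflow, the chart lends itself to a simple data structure. The sketch below, in Python, is a minimal illustration only: the stage names, activities, and interaction modes come from the chart, while the list layout and the print_review_agenda helper are hypothetical conveniences, not part of USAID guidance.

    # Minimal sketch: the four-stage progression from the chart above as an
    # ordered structure a review team could adapt for agenda planning.
    MEL_PROGRESSION = [
        ("Monitor and Share Data", "office-to-office", ["Collect", "Aggregate", "Distribute"]),
        ("Interpret in Context", "workshop", ["Review", "Discuss", "Question"]),
        ("Reflect and Re-Frame", "cell & emails", ["Ideas", "Alternatives", "Implications"]),
        ("Decide and Act", "meeting", ["Options", "Decisions", "Actions"]),
    ]

    def print_review_agenda():
        """Print one agenda line per stage: stage (interaction mode): activities."""
        for stage, mode, activities in MEL_PROGRESSION:
            print(f"{stage} ({mode}): {', '.join(activities)}")

    print_review_agenda()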

Asking questions is an important way to foster learning from monitoring data. The data are a starting point, but as the discussion topics suggested in USAID's Portfolio Review guidance in the ADS indicate, more information about the larger context is often needed.
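To make that starting point concrete, the sketch below shows one simple way a review team might turn indicator actuals and targets into a first round of questions. It is a minimal illustration in Python; the indicators, figures, and 10% threshold are hypothetical, not drawn from USAID guidance.

    # Minimal sketch: flag indicators whose actuals diverge from targets and
    # attach a starter question for the portfolio review discussion.
    # The indicators, figures, and 10% threshold are hypothetical examples.
    indicators = [
        {"name": "Firms receiving trade training", "target": 200, "actual": 140},
        {"name": "Border posts automated", "target": 10, "actual": 12},
    ]

    THRESHOLD = 0.10  # flag deviations larger than 10% of target

    for ind in indicators:
        achievement = ind["actual"] / ind["target"]
        if achievement < 1 - THRESHOLD:
            note = "Target not met: do we know why? Was the target too high?"
        elif achievement > 1 + THRESHOLD:
            note = "Target exceeded: do we know why? Was the target too low?"
        else:
            note = "On track: confirm that context and assumptions still hold."
        print(f"{ind['name']}: {achievement:.0%} of target. {note}")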

Similarly, questions raised, particularly in an interactive process that involves USAID implementing partners, other collaborating U.S. government agencies, and country partners, including both government and civil society, help ensure that monitoring's learning function is maximized locally. While there is no single list of "good questions" for learning, one that might help stimulate discussion can be constructed from the criteria the OECD/DAC developed for critically reviewing development projects, as the table of learning questions developed for this kit, below, illustrates. The OECD/DAC criteria are all "must review" items every time a project is reviewed, though some apply more readily at the start of a project and others are more useful later.

Criteria to Consider and Illustrative Questions for Fostering the Monitoring » Learning » Action Transition
Effectiveness
  • For targets that were not met, do we know why?
  • For targets that were exceeded, do we know why?
  • Were the targets too high? Too low?
  • Given results to-date, can we achieve the project Purpose?
Relevance
  • Has the context changed?
  • What is the status of the assumptions? Do we know?
  • Was the need/interest correctly judged at the beginning?
  • What is known about beneficiary participation/satisfaction?
Efficiency
  • Is the cost per unit service/benefit as expected? Higher? Lower?
  • Are there more economical ways of achieving the results?
  • Are there ways, within the budget, to achieve results faster?
Impact
  • Are more/fewer people affected than planned? Do we know why?
  • Are men/women participating/benefiting equally?
  • Are there spread effects beyond the target area or groups?
  • Have we heard about any unanticipated negative effects?
Sustainability
  • What are we seeking: sustained services/benefits/funding?
  • Is there a clear and feasible plan in place?
  • What actions have been taken on the plan and with what success?
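A question set like this is easy to keep in reusable form so that each Portfolio Review can pull the criteria most relevant to the project's stage. The sketch below is a minimal illustration in Python: the criteria and questions are abbreviated from the table above, while the dictionary layout and the build_agenda helper are hypothetical conveniences, not a USAID tool.

    # Minimal sketch: the OECD/DAC criteria and an abbreviated set of the
    # table's illustrative questions, kept as a reusable checklist.
    LEARNING_QUESTIONS = {
        "Effectiveness": [
            "For targets that were not met, do we know why?",
            "Given results to date, can we achieve the project Purpose?",
        ],
        "Relevance": [
            "Has the context changed?",
            "What is the status of the assumptions?",
        ],
        "Efficiency": [
            "Are there more economical ways of achieving the results?",
        ],
        "Impact": [
            "Are men/women participating/benefiting equally?",
            "Have we heard about any unanticipated negative effects?",
        ],
        "Sustainability": [
            "Is there a clear and feasible plan in place?",
        ],
    }

    def build_agenda(criteria=None):
        """Return a numbered discussion agenda for the selected criteria."""
        lines = []
        for criterion in (criteria or LEARNING_QUESTIONS):
            lines.append(criterion.upper())
            for i, question in enumerate(LEARNING_QUESTIONS[criterion], 1):
                lines.append(f"  {i}. {question}")
        return "\n".join(lines)

    # Early in a project, Relevance and Efficiency questions may dominate:
    print(build_agenda(["Relevance", "Efficiency"]))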
