Evaluation and Studies Project

This project is the principal vehicle of the Policy Development and Studies Branch (PDSB) for promoting greater accountability and improving the effectiveness of humanitarian action. The unit’s brief includes managing a range of evaluative activities; documenting lessons identified; promoting institutional learning and knowledge sharing; achieving greater accountability in the humanitarian sector; and focusing on the achievement of results and the use of results information.

To achieve this, the unit initiates and manages evaluations, reviews and studies for OCHA and its humanitarian partners. In addition, the unit has been entrusted with helping to design and implement monitoring and evaluation (M&E) standards and systems.

Key objectives

  • Improve OCHA’s results-based reporting
  • Increase the impact of OCHA’s evaluative activities

While these objectives remained the same, a third objective – to create a sector-wide learning and accountability initiative in response to the tsunami – had a direct impact on the unit’s work. Much energy was spent on promoting sector-wide accountability and learning through the creation of the sector-wide Tsunami Evaluation Coalition (TEC), under the aegis of the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), and through two other evaluative activities on the tsunami response.

Activities

In 2005, the Evaluation and Studies Unit (ESU) was involved in 15 evaluation activities. Seven of these were directly managed or conducted by the unit, including two inter-agency real-time evaluation missions to Darfur, an evaluation of the ISDR Secretariat, a lesson learning review (LLR) of OCHA’s initial response to the tsunami, an evaluation of tsunami emergency coordination efforts, and a review of the impact of selected OCHA training courses. While this is one evaluation fewer than foreseen, the figure masks the complexity of the Darfur and tsunami evaluations. The unit also participated in an inter-agency evaluation of the UNJLC, led by WFP, and the Geneva-based ESU staff supported and coordinated the Humanitarian Response Review throughout its duration.

Eight (53 percent) of the 15 evaluation activities were fully implemented in 2005, and another four (27 percent) are ongoing and should be finalised in 2006. The latter include a review of the effectiveness of Humanitarian Response Funds and a facilitated self-evaluation of the merger of OCHA’s headquarters emergency response capacities, which has been transferred to the independent management of OIOS. Three (20 percent) of the planned evaluation activities were cancelled, mainly owing to shifting work plan priorities, unforeseen additional work plan items and a lack of capacity. They included a comparative review of global needs assessment methods and resources, which was implemented by an IASC working group and did not require a separate study; a review of exit strategies; and a review of the 2002-2005 strategic plan.

A key but unplanned activity in 2005 revolved around the sector-wide Tsunami Evaluation Coalition (TEC), which the ESU was instrumental in creating. ALNAP provides the institutional platform for the TEC, which comprises five cross-cutting evaluations on needs assessment; coordination; local and regional capacities; the link between relief, recovery and development; and donor response. The TEC initiative experienced some teething problems but has been embraced by ALNAP and has raised expectations that similar coalitions will be created to review future emergency responses. The challenge will be to ensure concrete action and improvements based on its recommendations.

The ESU’s work on strengthening OCHA’s monitoring and evaluation capacity showed more mixed results. Staff provided internal advice on OCHA’s results-oriented monitoring and reporting, but implementation was limited because time-intensive evaluation activities, such as supporting the Humanitarian Response Review and the TEC, took priority.

On inter-agency M&E issues, progress was made through the development of a strategic monitoring and evaluation framework, to be piloted in several CAP countries in 2006, which should lead to improved performance monitoring and evaluation of the CAP.

A communications strategy was not developed, although efforts to disseminate evaluation products increased through an updated website presence and the development of a key insight series.

The ESU is now staffed with a chief, four evaluation officers and one assistant: four in New York (three professionals and one assistant) and two in Geneva (two professionals). Three of these are short-term positions funded by Switzerland, Denmark and Germany.

Performance evaluation

The performance indicators for 2005 focused mostly on results-based reporting, although the majority of the activities were evaluation-related. Advances were made on the second objective, increasing the impact of evaluative activities, and on the third, through the TEC. The first objective, improved results-based reporting, remains to be achieved.

Improvements in OCHA’s results-based reporting were modest. The percentage of “theoretically” measurable and observable indicators rose five percentage points, from 92 to 97 percent. However, year-end reports often fail to include feedback on indicator achievements in their performance assessments because the indicators are not monitored throughout the year. Most indicators continue to measure products or services but do not demonstrate whether the stated objectives were achieved. This is partly because OCHA’s work planning process shifted in 2005, weakening the link between performance indicators and stated objectives. In recognition of this, the ESU prepared a strategy for improving M&E within OCHA and intends to implement it in 2006. Feedback on the current planning guidelines has been mixed: many staff find them very useful, but some find them confusing and too long. Revision of the guidelines is envisaged as part of the M&E implementation strategy.

As in 2004, 75 percent of the evaluation activities were undertaken on an inter-agency or joint basis, reflecting OCHA’s emphasis on promoting and supporting inter-agency collaboration. While the bulk of the work in 2005 focused on inter-agency activities, future work programming also needs to reflect internal learning and accountability needs.

Despite efforts to increase the effectiveness and use of evaluations, the implementation of recommendations remains uneven. The unit tracks the implementation of recommendations dating back to 2002: to date, about 58 percent of evaluation and review recommendations have resulted in concrete action, and another 25 percent are in the process of being implemented. The degree of compliance varies from evaluation to evaluation, and management needs to ensure that recommendations reported as implemented are, in fact, fully addressed.

The creation of a sector-wide learning and accountability coalition on the tsunami was achieved: by mid-year, over 40 UN agencies, donors, NGOs and non-profit entities had agreed to join and fund the initiative and to jointly evaluate sector-wide performance. The effectiveness of this effort remains to be seen, but the TEC is recognised as an innovative modality for cross-sectoral inter-agency evaluation.
