
The Peacebuilding Evaluation Paradox

Melanie Kawano-Chiu is the Program Director for the Alliance for Peacebuilding, where she currently manages both the Peacebuilding Evaluation Project and the Peacebuilding Systems Project. Previously, Melanie was the Program Manager for the BEFORE Project, a program that applies collaborative peacebuilding in countries and regions known to be vulnerable to violence. Prior to joining the Alliance for Peacebuilding, she worked at a university in northern Viet Nam, the International Career Advancement Program, the Center for China-US Cooperation, and IFES.

Despite the initial belief that good intentions are sufficient proof of a job well done, the peacebuilding field has accepted the necessity of evaluating its work. Peacebuilding demands a long-term mindset. True to this, many in the field have been slowly chipping away at the technical aspects of peacebuilding evaluation for more than a decade. Building on conversations about evaluation, randomized controlled trials, and statistical analysis in fields beyond peacebuilding, the United States Institute of Peace (USIP) and the Alliance for Peacebuilding (AfP) launched the Peacebuilding Evaluation Project (PEP) more than two years ago.

While technical questions were addressed, peacebuilders were still developing clarity around the purpose of evaluation. How should organizations balance the learning, accountability, and compliance functions of evaluation? How can organizations manage the politics of evaluation? Given the power dynamics inherent in evaluation, it is perhaps not surprising that there has been a lack of transparency and dialogue between funders and implementers on this issue. As a forum for funders and implementers, PEP is enabling a discussion of the fundamental issues regarding how evaluation happens within the peacebuilding field.

After a year-long series of smaller, focused PEP meetings, USIP and AfP convened more than 70 peacebuilding funders, implementers, and evaluation specialists at the first Peacebuilding Evaluation Project Evidence Summit earlier this month. It was designed as a safe space for transparent conversation. The Summit aimed to showcase successful efforts at evidence-gathering and to bring together a community to provide input on how the evidence of impact can be made even stronger.

The Summit was initially planned for 35 participants, so it was a surprise when more than 160 people expressed interest in attending. Clearly, there is a need for dialogue on evaluation that goes beyond a funder’s evaluation guidelines or the submission of a final evaluation report. And clearly, these conversations are not happening organically and regularly.

Representatives from nine organizations, chosen through a competitive review process, presented their evaluations and received feedback from a panel of funders, implementers, and evaluation experts. Participants came from Kenya, Israel, Northern Ireland, the Philippines, Canada, and across the US. Organizations included the World Bank, the United Nations, large and small private foundations, US government agencies, and evaluation research centers, among others. Quite a few lessons were learned, and even more questions emerged from the participants.

Many of the discussions boiled down to two overarching themes. The first was the tension between using evaluations for organizational learning versus accountability to donors and local stakeholders. The reality is that these goals often require different evaluation methodologies.

The second was the challenge both funders and implementers face in adapting complex evaluation methodologies for their constituencies. Funders want to be able to explain how methodologies support evidence-based conclusions. Implementers confront the hurdles of varying familiarity with evaluation methodologies and the need for constant capacity building.

We didn’t expect the Summit to fully and satisfyingly resolve these issues, but we did gain a more nuanced understanding of some of the core dynamics at the heart of better peacebuilding evaluation and, hopefully as a consequence, better peacebuilding practice. In the spring of 2012, USIP and AfP will publish an in-depth report on the Summit covering just these topics, so stay tuned for more.

For now, we can say that both funders and implementers – perhaps funders in particular – are more open to the evaluation conversation and to building partnerships on this issue than is often assumed.

For more on learning that has emerged from the Peacebuilding Evaluation Project, including the most recent AfP PEP Lessons Report or the USIP Special Report on Improving Peacebuilding Evaluation, visit the Alliance for Peacebuilding website.
