How can we Measure the Impact of Peacebuilding?

Melanie Kawano-Chiu is the Program Director for the Alliance for Peacebuilding, where she currently manages both the Peacebuilding Evaluation Project and the Peacebuilding Systems Project. Previously, Melanie was the Program Manager for the BEFORE Project, a program that applies collaborative peacebuilding in countries and regions known to be vulnerable to violence. Prior to joining the Alliance for Peacebuilding, she worked at a university in northern Viet Nam, the International Career Advancement Program, the Center for China-US Cooperation, and IFES.

If building peace is at times a nebulous concept, then measuring the impact of that work can be even more abstract. Discussions of theories of change, methodologies such as impact evaluation, logframes, and indicators often leave out the tangible constraints practitioners face when applying these methods in the field.

To foster a concrete conversation on evaluation methodologies, the United States Institute of Peace (USIP) and the Alliance for Peacebuilding (AfP) hosted the first Peacebuilding Evidence Summit at USIP in December 2011, with support from the Carnegie Corporation of New York and USIP. The day-long Summit revolved around the strengths and challenges of nine different efforts that used a range of methodologies, from action research to randomized controlled trials (RCTs), to gather credible evidence of impact.

As the organizers of the Summit, Andrew Blum, Director of Learning and Evaluation at USIP, and I wanted to share the lessons from the Summit with as broad an audience as possible, and we have produced a report entitled “Proof of Concept: Learning from Nine Examples of Peacebuilding Evaluation.” One of the case studies features Building Markets’ work in Afghanistan, where over $1 billion was facilitated and an estimated 100,000 jobs were created. The report includes summaries of the nine evaluations discussed, in the form of case studies, and a synthesis of the major recurring themes from the Summit.

With the proliferation of calls for quantitative data, and with some asserting that it is the only credible form of evidence, the conversation on RCTs was particularly enlightening. Based on several attendees’ experiences, RCTs were most useful for proving impact to external audiences, particularly donors. For program managers within an organization, however, qualitative research proved much more useful, offering tangible examples of how a program was, or was not, reaching its intended goals.

This conversation circled back to the perennial question of how to use evaluation resources: should these finite resources go toward meeting donors’ accountability requirements or toward learning and improving one’s own practice? The tension between accountability to donors and organizational learning within implementing organizations was at times stark during the Summit conversations. Although no consensus emerged on whether these two goals are simply irreconcilable, there is clearly a continued need for dialogue between donors and implementing partners about what it is realistic to expect from an evaluation.

For the full Summit report, “Proof of Concept: Learning from Nine Examples of Peacebuilding Evaluation,” click here.
