How to better understand violent conflict and atrocity prevention: Mapping evaluation evidence inclusively

Since Integrity’s inception, our work has focussed on supporting clients to deliver effective programming in challenging contexts. This year’s UK Evaluation Society (UKES) Conference theme, ‘Rising to challenges – how can our discipline respond to important policy and practice on key societal issues’, therefore resonated with our presenters Ada Sonnenfeld, Head of Fragility and Violent Conflict, and Nick Moore, Senior Expert, MEL. They presented on Integrity’s forthcoming Evidence and Gap Map (EGM) of conflict and atrocity prevention (CAP).

What is an Evidence and Gap Map (EGM)?

EGMs are thematic collections of impact evaluations and systematic reviews that measure the effects of policies and programmes. They present a visual summary of the studies in terms of the interventions evaluated and the outcomes sought, mapped onto an interactive grid framework. EGMs are produced by systematically searching, screening, and analysing the literature using recognised international best practices. An EGM is an important tool for getting relevant, credible evidence into the hands of policy makers and programmers. Finished EGMs led by Nick and Ada can be found here and here.
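
As a loose illustration of the grid idea (not the CAP EGM’s actual data model), an EGM can be thought of as a matrix of intervention categories against outcome categories, where each cell collects the studies evaluating that combination and empty cells reveal evidence gaps. The categories and study IDs below are invented placeholders:

```python
from collections import defaultdict

# Illustrative sketch only: an EGM grid as a mapping from
# (intervention category, outcome category) -> list of studies.
# Category names and IDs are invented, not the CAP EGM's framework.
studies = [
    {"id": "S1", "intervention": "Mediation", "outcome": "Violence reduction"},
    {"id": "S2", "intervention": "Mediation", "outcome": "Social cohesion"},
    {"id": "S3", "intervention": "Economic support", "outcome": "Violence reduction"},
]

grid = defaultdict(list)
for study in studies:
    grid[(study["intervention"], study["outcome"])].append(study["id"])

# Each populated cell shows where evidence exists; unpopulated
# intervention/outcome combinations are the "gaps" in the map.
for (intervention, outcome), ids in grid.items():
    print(f"{intervention} x {outcome}: {len(ids)} study/studies -> {ids}")
```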

Why a conflict and atrocity prevention (CAP) EGM?

The Global Development Network, in association with Integrity and the London School of Hygiene and Tropical Medicine, was commissioned by the FCDO to identify and map evidence of the effects of CAP interventions. After a relatively peaceful period beginning in the mid-1990s, there has been a global increase in armed conflict since 2011. The FCDO has made commitments to help countries escape cycles of conflict and violence, underpinned by a developing understanding of what works.

Our CAP EGM approach

Integrity led the development of the EGM’s conceptual framework (the grid of interventions and outcomes) and advised on analysis, while our partners delivered the EGM, which will be published later this year. To develop the conceptual framework, we reviewed key literature and consulted experts. We then supported the team to search and screen academic and grey literature, extract study data, and analyse the distribution of the evidence base; in total, we screened over 40,000 studies and identified over 450 to include in the map. At UKES, we shared three key points from this experience for discussion.

1. We included a broader range of evaluation evidence than is typical, which changed our understanding of the CAP effectiveness evidence base

EGMs tend to focus on large-sample (large-n) statistical impact evaluations: randomised controlled trials (RCTs) and quasi-experimental designs (QEDs). We knew that many evaluations of CAP interventions adopt small-sample (small-n) evaluation designs. These use qualitative, theory-based approaches to measure impact, such as process tracing, contribution analysis, or qualitative comparative analysis. For our EGM, we included an evaluation if it sought to answer a causal question (did the programme or project being evaluated cause a change in an outcome, at least in part?) and used a sufficiently transparent and rigorous method.
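
As a minimal sketch, this inclusion rule can be expressed as a two-part test. The record fields below are hypothetical, not our actual screening form:

```python
# Illustrative sketch of the two-part inclusion rule described above.
# Field names are hypothetical placeholders.
def include_in_egm(record: dict) -> bool:
    """Include an evaluation only if it asks a causal question
    (did the programme cause a change in an outcome, at least in part?)
    and uses a sufficiently transparent and rigorous method."""
    asks_causal_question = record.get("asks_causal_question", False)
    method_ok = (record.get("method_transparent", False)
                 and record.get("method_rigorous", False))
    return asks_causal_question and method_ok

# Example: a qualitative evaluation that explicitly stops short of
# measuring impact fails the first test and is screened out.
print(include_in_egm({"asks_causal_question": False,
                      "method_transparent": True,
                      "method_rigorous": True}))  # False
```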

We included 47 small-n evaluations, but had expected more. To try to explain this, we looked at how we had screened small-n studies. Many relevant small-n studies were excluded because they did not meet the criteria above; many qualitative evaluations explicitly stop short of measuring impact. But we also think there is a publication bias issue, in that much CAP evidence is sensitive and not in the public domain. A UKES participant also pointed out that publication of evaluations is not always a priority during a crisis or humanitarian response.

Question for readers: What can we do to safely and sensitively increase the publication of CAP evidence?

2. The policy context led to an informative analysis of how conflict prevention and atrocity prevention interventions work

The FCDO was interested in how interventions addressing violent conflict and those addressing atrocities might differ and interact in a given context.

Our initial theory was that the same indirect interventions (those that target underlying drivers of violence) are used for both conflict and atrocity prevention, and that interventions aiming to address violence directly look similar in contexts of latent or dormant conflict. In active conflict contexts, however, we expected to see differences between the approaches used for atrocity prevention and those used for conflict prevention.

Analysis of our EGM suggested some differences between the approaches used to prevent conflict and those used to prevent atrocities, but not in the way we expected, as summarised here. In contexts of active conflict, where violence was ongoing, we found evaluations of conflict prevention interventions using the full range of both direct and indirect approaches. However, we did not find any evaluations of interventions aiming explicitly to prevent atrocities that used indirect approaches, or that used conflict management and mediation approaches.
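
For illustration only, this is the kind of cross-tabulation involved in such an analysis. The records and resulting counts below are invented placeholders, not EGM results:

```python
import pandas as pd

# Purely illustrative: cross-tabulating prevention aim against
# intervention approach for evaluations in active conflict contexts.
# These records are invented placeholders, not EGM findings.
evaluations = pd.DataFrame([
    {"aim": "Conflict prevention", "approach": "Direct"},
    {"aim": "Conflict prevention", "approach": "Indirect"},
    {"aim": "Conflict prevention", "approach": "Mediation"},
    {"aim": "Atrocity prevention", "approach": "Direct"},
])

# A zero cell (e.g. atrocity prevention x indirect) flags a
# combination with no evaluations found, i.e. a potential gap.
print(pd.crosstab(evaluations["aim"], evaluations["approach"]))
```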

To fully test this theory, we would need to look at the distribution of all CAP interventions by category, not just those interventions that have been evaluated.

Question for readers: Does this theory make sense? How do you see conflict and atrocity prevention efforts differing?

3. Including broader literature creates solvable challenges for critical appraisal

Because we were inclusive on study design, the EGM includes evidence from multiple disciplines, which meant we needed to draw on more tools to critically appraise these studies. While many critical appraisal tools exist for quantitative impact evaluation designs, few exist for small-n designs. We therefore adapted an existing impact evaluation appraisal tool to create a comparable critical appraisal tool for small-n evaluations. Applying this tool highlighted that many qualitative evaluations did not clearly report how data were triangulated or how risks of bias were addressed. One UKES attendee suggested this may be due to reporting limits set by publishers and/or evaluation commissioners, but it could also reflect a lack of common standards.
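
As a hypothetical sketch of how a checklist-style appraisal tool works, with invented criteria rather than those in our adapted tool:

```python
# Hypothetical sketch of a small-n critical appraisal checklist.
# The criteria are invented examples, not the tool we adapted.
APPRAISAL_CRITERIA = [
    "Causal question and theory of change stated",
    "Method described transparently",
    "Data sources triangulated",
    "Risks of bias identified and addressed",
]

def appraise(reported_items: set) -> dict:
    """Return which criteria a report satisfies.
    Anything a report does not clearly describe counts as unmet,
    which is why weak reporting depresses appraisal results."""
    return {criterion: criterion in reported_items
            for criterion in APPRAISAL_CRITERIA}

# Example: a report that never mentions triangulation or bias checks
# can only be scored as not meeting those criteria.
result = appraise({"Causal question and theory of change stated",
                   "Method described transparently"})
print(result)
```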

Question for readers: What can we do to improve reporting on triangulation processes and bias checks in qualitative evaluations? Can we set standards as a community?

What next for the EGM?

We are working with the FCDO and our expert advisory group to finalise the results and implications of our map. We expect to publish an interactive version of the map, along with a descriptive analysis of the evidence base and our view of the most important gaps to fill.

Get in touch

To talk about this project, efforts to prevent violent conflict and atrocities, or evidence synthesis, or to share your thoughts on the questions above, please contact Ada ([email protected]) or Nick ([email protected]).
