
The Prosperity Fund: Lessons from a complex evaluation and learning programme

Caitlin Smit, Senior Expert in Monitoring, Evaluation and Learning

Caitlin Smit is a Senior Expert in Monitoring, Evaluation and Learning. In this edition of Integrity Insights, she shares her top three lessons from her role on the leadership team supporting the evaluation of the UK Prosperity Fund.

In early 2020, I started a new role as Data and Evidence Lead on the Evaluation and Learning team supporting the cross-government UK Prosperity Fund. The Fund had an allocation of £1.2 billion and comprised 26 global and bilateral programmes operating in 46 middle-income countries. The parallel Evaluation and Learning contract was one of the largest ever procured by the UK government. The team from Integrity, along with TetraTech International Development (as prime contractor) and NIRAS-LTS, included over 90 team members who engaged with over 600 stakeholders across the Fund over four years.

As you can imagine, this was a formidable task to take on, made more so when the Covid-19 pandemic hit and we moved to home working and a reliance on video conferencing. The challenges were plentiful, but in hindsight there were several key lessons that Integrity and I have taken from the experience. These insights have proven invaluable in how we approach other large portfolio and programme evaluations:

Lesson 1: Well-executed remote evaluations can deliver the same value to programmes as in-person evaluations

As the Covid-19 pandemic escalated, travel and other restrictions posed significant constraints on normal ways of working. Our team had to adapt our approach to evaluation and engagement quickly. We developed guidance on how to conduct evaluations remotely and, given the size of the team, had to ensure it was applied consistently across all evaluations.

At the end of 2020, there was unanimous agreement from the Prosperity Fund programme teams that the remote evaluations were effective and useful, and that being remote did not adversely impact the quality or balance of evaluation findings. In fact, working remotely had some benefits, such as greater flexibility during data collection and an opportunity to sample more broadly.

This success was driven by having a strong team of evaluators in place, early and targeted planning and engagement with stakeholders, seeking additional evidence sources where stakeholders were not available, and the increased involvement of local evaluation team members. Of course, any possible biases and limitations introduced by working remotely had to be documented and considered when assessing our evaluation evidence.

Lesson 2: Evaluators are particularly valuable as ‘critical friends’ to programmes

During the contract, our team introduced a system of flexible support, where Prosperity Fund programme teams could draw down a certain number of days of evaluators’ time to support them on a range of tasks. These included developing theories of change (a key tool in programme design and evaluation), results frameworks, Annual Reviews and assessing programme approaches to key cross-cutting issues, such as gender, inclusion and value for money. This service was in addition to the formal annual evaluations.

The flexible support was incredibly valuable to programme teams, many of whom were new to Official Development Assistance (ODA) programming. They valued having an independent voice in the room, and the ability to sense-check and bounce ideas off a technical expert who understood their programme. It strengthened the relationship between evaluators and programme teams and enhanced our ability to inform ongoing programme adaptation in a way that lengthy formal evaluations can sometimes struggle to do.

Lesson 3: Evaluation efficacy is underpinned by strong learning, knowledge management and communications

Our team included a dedicated group of experts focused on supporting learning within and across the Fund, as well as the uptake of evaluation findings and recommendations. At the outset, the team undertook a learning diagnostic to inform our learning strategy and plan.

A key deliverable was PFLearning, a website and digital learning platform accessible to the global network of civil servants working on Prosperity Fund programmes, which supported peer learning, networking and content publishing (ranging from evaluation reports to Covid-19 guidance). The team also developed a set of learning ‘touchpoints’ throughout the evaluation cycle. These involved participatory consultations and workshops with evaluators and programme teams that helped to build relationships, strengthen awareness and ownership of evaluations, and lay the groundwork for using evaluation findings and recommendations in programme decisions.

All evaluation teams produced a learning product capturing a key lesson from a programme, supporting programmes to learn from each other’s experiences. These outputs were produced in a range of formats, such as podcasts, infographics, blogs and animations. The key to their relevance and utility was a good understanding of the audiences and their needs, and clarity and agreement on scope and delivery upfront.

Being part of a leadership team responsible for such a large and complex contract was an interesting, challenging, and ultimately rewarding experience. It presented opportunities to think creatively and truly commit to adaptation and responsiveness in fast changing situations. It provided essential learning that has shaped how we approach new projects and opportunities, particularly those focused on evaluating large and complex portfolios and programmes.