Almost a decade ago, during one of the early editions of the Latin American Semana de la Evidencia (Evidence Week), I joined a Mesa Verde on youth and rural development at the Instituto de Estudios Peruanos. This was a different kind of Mesa Verde. The speakers were not academics, experts or policymakers; they were young rural Peruvians, and they took turns sharing short but poignant PowerPoint presentations about their lives.
One after the other, they told us about their villages, their communities, their upbringing, the challenges, but also the joys of their lives. They posed questions and made recommendations for us in the audience to consider.
Across from them at the far end of the room sat the deputy minister for rural development.
It occurred to me that even though their stories were just anecdotes (it would take many more of them to be considered reliable and representative evidence), the deputy minister could not dismiss them. He could not toss them aside in favour of a representative survey or a systematic review of academic evidence.
The scene made an impression on me. These young men and women had a legitimate right to be heard by the government. And their evidence carried the weight that legitimacy conferred.
This particular event strengthened my view that evidence-informed policy has to acknowledge and embrace political, economic and social complexities. It is never neutral.
And therefore, any effort to impose a hierarchy of evidence that does not consider the legitimacy of individuals to be heard and to be taken seriously will struggle to develop into an institution.
For over two decades, I’ve worked at the intersection of research and policy, supporting think tanks, policy entrepreneurs, and often governments and international development agencies in designing and delivering strategies to influence change: policy change, narrative change, systemic change.
My work has spanned continents, sectors, and political contexts, always grounded in the belief that evidence, when used wisely, can help shape better futures.
Much of this work has revolved around helping organisations plan their influencing strategies — from identifying key audiences and allies, to crafting persuasive, context-sensitive arguments rooted in evidence, and carefully defining objectives. Just as importantly, I’ve spent considerable time helping them understand whether their strategies are working, through monitoring, learning, and evaluation.
I’ve been deeply influenced by approaches like Outcome Mapping (OM), which emphasise non-linearity and contribution rather than the pursuit of narrow cause and effect relationships and attribution. During my time at ODI’s RAPID programme, I co-founded the Outcome Mapping Learning Community and developed the Rapid Outcome Mapping Approach (ROMA), a practical framework for planning, monitoring and evaluating policy engagement.
ROMA encouraged policy research organisations and teams to stop chasing simplistic notions of impact (e.g. health or education outcomes) and instead focus on the many intermediary outcomes they could reasonably influence: changes in the behaviour, relationships, activities, or actions of key stakeholders they had direct engagement with. This reflects a broader shift in the evaluation field led by thinkers associated with OM and complexity sciences, like Simon Hearn, who leads the Outcome Mapping Learning Community and has made important contributions to its application to policy influence and systems change.
This framework, like others developed by peers across the field, offered, in part, a response to the impossibility of cleanly attributing complex policy change to a single actor or intervention. The idea that a think tank, a research programme, or even the intervention of a donor or a coordinated campaign can single-handedly cause a policy reform — let alone a shift in population health or economic welfare — has always felt analytically and ethically unsound. And in countless evaluations I’ve led or contributed to, we’ve reiterated this point: policy change is emergent, political, negotiated, and non-linear. Attribution is a mirage.
The message was clear: don’t even think of claiming a contribution to policy outcomes, and education or health outcomes are well beyond your influence!
This mirrors the contributions of Rick Davies, who has long challenged linear models of change and developed methods such as Most Significant Change, which invites stakeholders to articulate how they believe change has occurred rather than attributing it to any one actor — highlighting interpretation and meaning over quantification.
This also echoes my own critique of the concept of “research uptake.” As I’ve argued before, the assumption that research should be linearly “taken up” by policymakers often ignores the complex, messy reality of how knowledge moves through systems. Sometimes it’s “side-taken” — used in ways the researchers didn’t anticipate — or “down-taken” — taken up by actors lower in the policymaking hierarchy who eventually shape policy indirectly.
Rethinking attribution
And yet, a recent project with the Alliance for Health Policy and Systems Research at the WHO has forced me to revisit this position.
The Alliance has been working for over 20 years to strengthen health systems research across the Global South: building capacity, investing in institutions, supporting individuals, promoting new methods, shaping discourse, and sustaining networks. As part of a review into how the Alliance understands impact, we’ve interviewed dozens of stakeholders, many of whom have been embedded in this space for decades: ministers, senior health system researchers and civil society leaders who have seen the field grow in their countries and sectors from the inside.
And what many of them tell us, with remarkable clarity and confidence, is this: the Alliance made a difference. A real difference. They can name health policies that were strengthened. Expert communities that became more resilient. Careers that blossomed into positions of power as a result of grants or connections provided or facilitated by the Alliance. Concepts developed or popularised by the Alliance that shifted the way governments approach health systems and public health more generally. Practical interventions that led to tangible improvements in health outcomes.
In their view, these changes would not have happened without the Alliance’s sustained support. The Alliance may not have written legislation or implemented reforms, but it enabled those who did. It created the conditions for change.
Listening to these accounts brought back the memory of the Mesa Verde on youth and rural development.
From a traditional evaluation standpoint, we might discount these claims — or treat them with caution. After all, they aren’t backed by hard evidence, control groups, counterfactuals, or statistical attribution models. But this was not an evaluation, and I was therefore not limited by evaluation standards or best practices. I pressed on with questions in search of nuance in the stories we heard. What struck me most was that these were not casual observers of changes in their health systems or research communities. They were informed insiders; the protagonists of the very stories they were telling us.
Their testimony is therefore not just a simple anecdote. It is lived, expert, reflective knowledge.
And it is legitimate. It carries political weight.
In parallel, we have been supporting an African think tank to better understand its funders’ views on impact and to inform how it documents and communicates that impact, as part of OTT’s Advisory programme for think tank leaders. Having interviewed several funders, we can see a pattern beginning to emerge.
Those funders that see themselves and their grantees more clearly as political agents (e.g. ideologically identifiable funders or political foundations) rely less on intricate frameworks and indicators and more on careful due diligence, the insights of political insiders, ongoing discussion and a long-term assessment of their grantees’ role in their context. By contrast, funders that see themselves as technocratic partners or research funders (e.g. bilateral and non-ideologically explicit funders) rely much more on frameworks and indicators — and invest in their development and upkeep!
The former are more comfortable with the anecdotes of insiders; the latter worry about methods.
And so here’s the shift I want to propose:
We need to move beyond methodological purism when it comes to assessing the impact of think tanks and other policy research organisations.
When a diverse, well-informed, and contextually embedded group of stakeholders — people who have lived the process — can say with confidence that an institution, programme, or network was instrumental to change, then isn’t that evidence of impact? Is that not attributable? Not in the narrow, positivist sense of randomised control trials and logframes, but in the broader, human, systems-aware sense that recognises how change actually happens in the real world.
In response to an early draft of this article, Aidan Muller made a bolder statement: “One source can sometimes be enough (news articles are written on the strength of a single source, though journalists will typically try to have it corroborated by at least another source).” A single well-informed, credible, reflective source? Why not?
This view aligns with evaluators like Tom Aston, who has written compellingly about how systems thinking and political economy should inform evaluation practice, not by demanding impossible causal clarity, but by helping us make sense of change through informed judgement, contribution analysis, and stakeholder sense-making.
It also reflects my argument for shifting how we assess the impact of think tanks altogether. Instead of narrowly seeking evidence of tangible influence, we should ask whether they have been useful. Have they provided timely, relevant, and actionable knowledge? Have they helped build the field? Created spaces for dialogue? Supported actors in making sense of their options? These dimensions of usefulness often matter more to policymakers and practitioners than measurable influence ever could.
We must learn to treat this kind of knowledge with the legitimacy it deserves, not as a last resort when other methods fail, but as a valid and rigorous form of evaluation in its own right. That means curating the right group of informants. That means tracing influence across time, space, and relationships. It can even involve triangulating stories – if letting go of methodological purism is hard. But most of all, it means trusting that people who have lived these systems understand how they’ve changed, and why.
This is particularly important in contexts with high information density — places or sectors where multiple actors, over time, have helped build a rich evidence ecosystem. In these contexts, the availability and circulation of diverse evidence sources increases the chances of better-informed policymaking, even when attribution is opaque. In such ecosystems, contribution is cumulative and layered, not linear or exclusive.
After 20 years in this field, this is perhaps my most important reflection: the impact of evidence-informed work may be difficult to measure, but it is not invisible. And those who have seen it unfold — those who have felt it shape their work, their institutions, their countries — can help us see it too.