
When I say that Impact Evaluation is history, I mean it. Some people will question this. After all, Impact Evaluation just became mainstream in the last decade, driven by great improvements in experimental design methods like randomized controlled trials (RCTs). So how can I say that it’s already a thing of the past? It’s not Impact Evaluation’s fault. The world changed.

Methodologies like RCTs came from medical science, where you can give patients a pill and assess its impact with randomized trials. However, development is not a space where one pill will work for everyone. In development, the patients change faster, the illness evolves faster, and the pill needs to keep pace with both the patients and the illness. That’s where Impact Management comes in.

What Is Impact Management?

New Philanthropy Capital’s 2017 Global Innovation in Measurement and Evaluation Report counts Impact Management as one of the top 7 innovations of 2017.

So what is Impact Management? Let me first explain what it is not. It’s not a one-time evaluation. It’s not collecting data for answering a limited set of questions. It’s not a separate activity from your program. It’s not just monitoring and evaluation.

It’s a way of making data-driven decisions at every step of your program. It’s about keeping a pulse on your program every day and finding new questions to answer, rather than just focusing on specific questions predetermined by your monitoring and evaluation team or funders.

The question that’s being asked more and more is, ‘How does evaluation feed into better management decisions?’ That’s a shift from measurement of impact, to measurement for impact.

Megan Campbell (Feedback Labs)

How Does Impact Management Work?

Impact Management uses the basic components of monitoring and evaluation, but with an outlook shift. It involves frequent data collection, regular reporting and monitoring of your data, and iteratively updating your program indicators and metrics as data comes in and the program changes.

Impact Management differs from Impact Assessment in that it promotes course correction on a daily basis. Organizations collect data on their programs as they conduct activities, analyze that information on a regular basis, and make changes to the program.

With an outlook that encourages frequent changes, much as a stock trader works, organizations gain the ability to A/B test their programs with real-time data and make decisions immediately, rather than waiting to compare and contrast two different surveys. They can test new approaches and make changes as data arrives on their servers, even at the end of the same day, rather than waiting for the official year-end review. Impact Management becomes a way of deciding how to execute a program daily, not just a vehicle for seeing long-term strategic changes through.
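To make the A/B-testing idea concrete, here is a minimal sketch of how an organization might compare two program variants as survey responses arrive. All field names and scores are hypothetical, invented for illustration; a real system would read records from a data collection platform rather than a hard-coded list.

```python
# Minimal sketch: A/B-testing two program variants on incoming survey data.
# The records and "outcome_score" field are hypothetical examples.
from statistics import mean

# Each record is one beneficiary response as it arrives from the field.
responses = [
    {"variant": "A", "outcome_score": 62},
    {"variant": "A", "outcome_score": 58},
    {"variant": "B", "outcome_score": 71},
    {"variant": "B", "outcome_score": 69},
    {"variant": "A", "outcome_score": 60},
    {"variant": "B", "outcome_score": 74},
]

def variant_summary(records):
    """Group responses by program variant and average the outcome score."""
    by_variant = {}
    for r in records:
        by_variant.setdefault(r["variant"], []).append(r["outcome_score"])
    return {v: round(mean(scores), 1) for v, scores in by_variant.items()}

print(variant_summary(responses))  # {'A': 60.0, 'B': 71.3}
```

Because the summary is recomputed from whatever data has arrived so far, the same comparison can be run at the end of every day instead of once at a year-end review.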

[Data collection] should be ongoing — it’s a value driver not a compliance requirement.

Tom Adams (Acumen)

In many ways, this is how decisions are made on Wall Street or Dalal Street in India. Analysts don’t wait until the end of the year to make investments by reviewing annual reports. They watch daily as the market fluctuates and strike as soon as they see new potential.

Impact Management works exactly the same way. You should act to increase your impact as soon as an opportunity arises, rather than waiting for a year-end external evaluation or approval.

How Can You Implement Impact Management?

To make Impact Management possible, switch from static data files to a flexible data system.

Today, most of your program officers and even your beneficiaries are armed with mini-computers in their pockets (read: smartphones). Leverage these to create a network of data ingestion devices, continuously tracking and measuring the impact of your programs. Use mobile data collection apps to add forms, deploy them to the field, and reach out not just to your field force but also your beneficiaries — not just at the end of the month or quarter, but as frequently as possible.

Then don’t let this data sit in Excel files. Use today’s technologies to create your own data management system, one that will link your beneficiaries, connect your programs, and answer queries. Have someone with an analytical bent look at this data regularly, or draw on machine power to analyze this data and generate meaningful insights or reports in real time.
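As a sketch of what "not letting the data sit in Excel files" might look like in practice, the snippet below turns raw form submissions into a simple daily monitoring report that flags villages needing follow-up. The field names, dates, and the 75% attendance threshold are all hypothetical assumptions for illustration.

```python
# Minimal sketch: a daily monitoring report built from raw form submissions.
# Field names, dates, and the 0.75 threshold are hypothetical examples.
from datetime import date

submissions = [
    {"date": date(2018, 3, 1), "village": "North", "attended_session": True},
    {"date": date(2018, 3, 1), "village": "South", "attended_session": False},
    {"date": date(2018, 3, 1), "village": "North", "attended_session": True},
    {"date": date(2018, 3, 1), "village": "South", "attended_session": True},
]

def daily_report(records, day):
    """Summarize one day's submissions per village and flag low attendance."""
    todays = [r for r in records if r["date"] == day]
    report = {}
    for village in {r["village"] for r in todays}:
        rows = [r for r in todays if r["village"] == village]
        rate = sum(r["attended_session"] for r in rows) / len(rows)
        report[village] = {
            "responses": len(rows),
            "attendance": rate,
            "flag": rate < 0.75,  # flag for same-day follow-up
        }
    return report

print(daily_report(submissions, date(2018, 3, 1)))
```

A report like this can be regenerated automatically every evening, so program officers review today's data at tomorrow's stand-up rather than at a quarterly meeting.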

We’re moving away from a static data world, where you work on datasets, and you write reports, to a dynamic data world where data is always being generated and created and it helps you do your job better.

Andrew Means (beyond.uptake)

Lastly, it’s crucial to tie this flexible data system back to your decisions. Make real-time data — rather than guesses or last year’s data — the basis of every program decision and the foundation of even weekly catch-ups. And don’t hesitate to test out new things. Data will tell you whether something worked or not.

Many of our partners are using our products to make Impact Management possible and track their programs in real time. They are creating and tweaking data collection forms, and monitoring incoming data in real time on their computer, in regular reports, or even on map-based dashboards. They are asking new questions about how their programs are doing and answering them with data.

If we really want to create the best development programs, we’ll have to think differently and use evidence not just once every month or year, but as we make crucial decisions every day. All backed by the tenets of Impact Management: test, fail, improve, repeat.


Resident Entrepreneur at Atlan. Believes in the potential of data and technology to solve many human problems. The harder the problem, the better.


  1. I applaud this post for its message that it’s time to shift outlooks from measurement of impact to measurement for impact. This is a mantra that can’t be said often enough or emphatically enough. Otherwise, what’s the point of all that time, money and effort?

    I will raise my hand, though, as one of the people who will question the farewell to impact evaluation. I am a die-hard fan of the evaluation profession, which dates back to the 1950s (not just the past decade) as a living and dynamic trans-disciplinary field that has evolved in its approaches and thinking as time went on. There are many tools in the evaluation toolkit–in addition to RCTs. Perhaps one of its finest pointed tools is ironically actually a soft tool–and can be called evaluative thinking. This tool is more akin to a mindset or outlook that suggests that data serve to provide a jumping-off point for reflective thinking and sense-making. Some examples of evaluative thinking include posing questions like: What constitutes success? What constitutes progress? What are the logical linkages between actions and outcomes? How will you recognize progress when you see it? Whose perspective(s) are you considering when you refer to progress or success? Are there multiple data collection strategies employed that point in the same direction? Who is benefitting the most and who is benefitting less? Are there any unintended consequences? What are the implications of the data for strategic actions? If progress is not happening, is it due to the way that a strategy is implemented, or is it perhaps the design and relevance of the strategy itself? And what are the systemic factors that create enabling environments for impact?
    These are the kinds of questions that are incorporated into the Impact Management Project, which is working to create shared fundamentals for talking about, measuring, and managing impact based on the consensus of hundreds of stakeholders across many wide-ranging disciplines. Evaluative thinking is aligned with impact measurement and management and is one of the best skill sets that evaluation professionals bring to the table–in addition to a rich array of methods, including RCTs but not limited to RCTs.

    I am very pleased with the creative focus of this post, too, and the notion of flexible data systems. Evaluators have been moving in this direction for some time, as exemplified by rich methodological approaches like developmental evaluation and innovative reflection approaches such as strategic debriefs. While these methods employ both quantitative and qualitative approaches, they share the hallmark of your focus on flexible data systems based on various methodologies.

    My final comment is a bit of a caveat about the promises of technology for generating timely, high-quality data and evidence of social impact. The Rockefeller Foundation’s Sept 1, 2017 post, Spotlight on Tech-Enabled M&E, concurs with the revolutionary new modes of data/information collection, analysis and sharing. The authors, who are experts in development evaluation (Bamberger, Olazabal and Hoffman), issue a caution, however, about the reasons behind the constrained uptake of tech-enabled M&E. They cite pragmatic barriers such as time, money, expertise and political will, as well as limits on technology’s ability to empower low-income and vulnerable populations, owing to the exclusion of the most marginalized voices and asymmetrical power relations and resources between data collectors and the individuals who provide the data. Certainly, technology has great potential for advancing impact management (and measurement); however, structural and contextual barriers remain and will require attention in order to tackle the world’s most critical problems. And the author and her team have the creative thought processes, visionary thinking, and diligence to tackle these kinds of challenges.

  2. Abdul Wali

    Thanks for the post; it is very useful. I would like to know the solution for areas where Internet access is very limited or IT is rarely used: how can they have real-time data for day-to-day decision making? Do you think there are ways to improve the paper-based data collection system in such areas? Also, what if there is no incentive for respondents/beneficiaries, so they don’t respond to your questions or surveys even if they have access to smartphones and the Internet? Is there any solution for this too?

  3. Don Townsend

    “A way of deciding how they should execute a program daily rather than only seeing strategic changes through.” Is this a recipe for promoting robotic ‘decisions’ over vocational wisdom? Daily tinkering might be great for stocking shelves in supermarkets, but it is just not possible in most projects committed to sustainable development.
    It overlooks the time lags required for changes to be made by stakeholders; it overlooks the consultation processes which raise confidence and acceptability of interventions and changes; it is blind to the leadership factor in adaptation and equitable change management, especially in sensing the uneven distribution of political power in all kinds of organisations.
    Sorry for those elitists who want to have their hands (or their clients’ hands) on the levers (or the pills): new data must pass through committee-style processes, just as new medicines must negotiate the complexities of immune systems.

  4. Juliet Capito

    This is a very beautiful piece, not only because it is written so well but because Impact Management is soooo relevant in this world where the social problems to solve are ever changing. It is the best way to have relevant solutions. Thank you.

  5. Pingback: Moving from “evaluation” to “impact management” | MERL Tech

  6. This document is very rich, even though I have trouble understanding English. I would therefore like to have this document in a French version, if possible.


  7. Hasmik Ghukasyan

    Thank you so much! A very informative, interesting and important article, easy to read, on such an important topic. Always a pleasure to follow your posts; thank you for sharing, and thank you for the great ideas!

  8. Pingback: #EvalTuesdayTip: New ways to continuously measure impact - Khulisa
