Interview with an M&E-to-programming convert

After three years of managing the M&E system for an INGO’s interventions in Guatemala, Alice was both promoted and transferred to the programming department at the INGO’s headquarters in New York City. As the new Manager of Impact at the international level, she was in charge of programmatic activities in three countries – but her background in M&E gave her a unique perspective on evidence-based implementation.


K:  To start off, how did you first step into M&E?

A: I was working in more of a fundraising and marketing role in my organization, and was hosting a donor trip in Guatemala. I already had field experience in Mexico and Nepal, so the country director let me go off with the team and shadow some WASH engagements. And I loved all of that raw goodness so much that I went back and started reading a ton about M&E and impact evaluation positions, but more on the qualitative side. And then my organization offered me an M&E position which was both qualitative and quantitative.

K: And how was that transition?

A: I’m a really sequential thinker and sequential learner.  So it was easy for me to step into that kind of thinking. My mentality was that our programming was always first – I needed to know all about the program.  And then you ask the question, how are we going to know if it worked or not?  And then work back from there.

K:  Your academic background is in Anthropology. How do you feel that informed the way that you did evaluation work?

A: I think Anthropology is like trained empathy.  You’re supposed to be able to really put yourself in the other person’s shoes and into their world, to be able to understand it.  So for example, with a questionnaire, we have to make sure to use the same terminology, vocab, and style of speaking to have that person truly understand what we are trying to ask.

During my degree I learned and researched a lot about organizations that did bad work and ended up unintentionally causing collateral damage in communities.  But anthropologists were the ones there doing the qualitative work that was picking up on all of those aspects which were overlooked.  I always thought that was really noble in a way – like a safety net to make sure there’s no harm, no foul. (1)

K: So now you’re in your M&E role and you’re in charge of presenting data.  Can you give me an example of when your findings made some sort of programmatic change?  What were the mechanisms of that?

A: My favorite example is about our scholarships program, which we were hoping would increase progression from elementary school to secondary school. The first step was to speak with all of the 6th grade students in our program schools. As long as they had maintained attendance of 75% or more, they received a scholarship. But we decided to give two kinds of messaging: to half of the groups, we said the scholarship was just for their first year of secondary school. To the other half, we promised scholarships for all of secondary school, as long as they maintained the required attendance. We were wondering if kids just needed support to make the jump into secondary school, and then they’d be able to do it on their own, or if it was really about the lack of resources for all three years of secondary. Because at the end of the day, that’s a huge ROI question.

In the schools which received the messaging that the scholarship was only for the first year, something like 15 to 20% fewer kids were enrolling compared to the other group. We had a team call the parents to ask why the student didn’t claim their scholarship, and across the board the answer was overwhelmingly “one year isn’t worth it.” So we flagged it, and as soon as we had enough confirmation on the qualitative side, even informally, we immediately changed the messaging. A large percentage of those kids ended up enrolling within a few days.
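To make the shape of that comparison concrete, here is a minimal sketch in Python of how an enrollment gap between two messaging arms could be checked for significance. All counts below are invented for illustration; only the rough size of the gap mirrors what Alice describes.

```python
# Toy sketch of the comparison: enrollment under "first year only" vs.
# "all of secondary" messaging. These counts are invented; only the
# rough 15-20% gap mirrors the interview.
from statsmodels.stats.proportion import proportions_ztest

enrolled = [120, 155]  # hypothetical enrollments: one-year arm, full-secondary arm
offered = [200, 200]   # hypothetical scholarship offers per arm

z_stat, p_value = proportions_ztest(count=enrolled, nobs=offered)
gap = enrolled[0] / offered[0] - enrolled[1] / offered[1]
print(f"enrollment gap: {gap:+.0%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```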

K: Wow. But that sort of quick turnaround from findings to programmatic change doesn’t happen in most organizations. What was it specifically about the organization or staff that allowed that to happen so fluidly?

A: These programmatic changes were all done through me in-country. The program manager and I shared the same supervisor. So I would just tell him, “it’s really obvious we should change something,” and he would say, “okay, you guys should change that.” Everyone on the team at that point was really okay with that kind of flexible working environment. And we were still super small, so it was very easy to go out and make the change the next day.

K: At that time you were involved in about 50 schools. And by the time of your transition to programming there were around 150. So the work and the personnel tripled over that period. Do you feel like that made a difference in your ability to directly influence programming the way you thought the data indicated?

A: I don’t think the growth in personnel did, no. But probably centralization – I stopped doing most of the analysis. At that point I was giving constant recommendations in a formal way to the headquarters in the U.S., and I no longer worked on the how with anyone in-country. For example, I would see in the data that having soap immediately present at handwashing stations was a huge predictor of whether or not a kid was washing their hands. So I would recommend that the focus of our WASH programs needed to be getting soap at the stations during class. But I would never know if those recommendations actually made it into the action plan in the WASH workshops with teachers. The measures we were actually using to answer “are we successful or not” still weren’t getting a light shined on them. And at the end of the day, if this is what we are measuring to tell us whether we did a good job or not, we need to talk about it more.
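For illustration, the analysis behind a finding like the soap one might resemble a logistic regression of observed handwashing on soap availability. The sketch below simulates toy data rather than using any real dataset, and the effect size is an assumption.

```python
# Hypothetical sketch: logistic regression of handwashing on soap presence.
# The data are simulated; the true effect size (1.5 log-odds) is invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_observations = 500
soap_present = rng.integers(0, 2, n_observations)      # 1 = soap at the station
# Assume soap raises the log-odds of washing by 1.5 (an illustrative figure).
prob_wash = 1 / (1 + np.exp(-(-1.0 + 1.5 * soap_present)))
washed_hands = rng.binomial(1, prob_wash)

X = sm.add_constant(soap_present.astype(float))
model = sm.Logit(washed_hands, X).fit(disp=False)
print(model.summary())
```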

K: So then you transitioned to a programming role at the headquarters where you’re managing the programming in three countries.  What was the view of M&E like from that higher level? 

A: It was always the most important piece of the puzzle. You have to have the M&E because you have to have the data to know. At the same time, I feel like the M&E people felt a lot of pressure to either cut corners or to say, “No, we can’t put another pilot in these schools, because it’s going to dilute the sample size so much that the results won’t matter.” They’d get an “are you sure?” Well yeah, because it’s math.

But for people who work in a very fast-paced environment with donors, who want to create impact the fastest way possible for as many people as possible – I think there’s an urgency there that M&E bears the brunt of.
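Her “because it’s math” point can be made concrete with a quick power calculation: splitting a fixed pool of roughly 150 schools across more pilots shrinks each arm, and statistical power drops fast. The outcome rates and per-arm counts below are assumptions, not figures from the interview.

```python
# Back-of-envelope sketch of the "dilute the sample" objection: ~150 schools
# split across more pilots means fewer schools per arm, and power falls.
# All figures here are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed difference worth detecting: 60% vs. 80% on some outcome measure.
effect = proportion_effectsize(0.60, 0.80)
analysis = NormalIndPower()

for label, n_per_arm in [("1 pilot", 75), ("2 pilots", 37), ("3 pilots", 25)]:
    power = analysis.power(effect_size=effect, nobs1=n_per_arm,
                           alpha=0.05, ratio=1.0)
    print(f"{label}: {n_per_arm} schools per arm -> power ~ {power:.2f}")
```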

K: Can you think of any catalysts that make it easier to use evidence for programmatic change?

A: Having people on hand, who are not on the programs team, who can quickly answer our data-based questions. You’re asking for very different skill sets if someone is supposed to be on the programs side and also analyze and interpret data. When it’s all at the in-country M&E manager’s fingertips and she’s just an email away, then I can take action right away. (2)

What makes the job at headquarters so hard is that I became a manager of managers. You become one more step removed from the actual field. It’s one thing, as a field manager, to train your staff directly and look for that change in the field. It’s another to add a whole other brain into the mix, which makes it more like a game of telephone.

K: You mentioned centralization before.  Do you think that is part of it? The more centralized a process is, the more evidence gets lost in translation?

A: Yes, definitely. Lost in translation, and people are less motivated to take the initiative in the field offices to make changes, because they feel like that’s no longer really their job. Instead of having that sense of “this information is coming to me first and I’m the one who is supposed to act on it immediately,” it sort of removes their agency.

K: What would you have done differently as an M&E manager now that you have a different perspective on the position?

A: I would have asked the programming team to, at some point, be able to produce a list of next steps they were going to take based on the results. There was no formalized process in place that required programs to show up in that sense. And I think that feeds into how we build trust with communities around the data. Because programming teams should be communicating that list of next steps and changes to those communities, so that communities feel like their feedback was valued.

K: What else can you tell us about the transition from M&E to programming?

A: I think a big risk you run is having someone in that programming seat who is too focused on whether or not the M&E is going to work. They shouldn’t worry about that – only about what programs are going to do. I had to fight that voice in my own head. I would look at certain things we were designing and think, “they’re never going to be able to turn this into an evaluation.” And at the end of the day I would have to just say, “oh well, that’s M&E’s problem.” Because when I wear the programs hat, and I’m thinking about a new program in that way, what’s important is what will work best. And that’s what I go with.


  1. For some great resources on evaluation anthropology, see the special NAPA bulletin on the topic.
  2. Alice’s insights on feedback loops and direct-to-programming M&E processes are mirrored in USAID’s call for Rapid Feedback models.