Making Evaluation Accessible

Program evaluation, and particularly impact evaluation, is an exciting and growing area within the field of international development.  Organizations are continually gathering new evidence that explains not only what works to alleviate poverty, but why it works.  Little by little, evaluation is helping us make better decisions that actually make sense in the context of poverty.

This blog aims to make evaluation — its methods, trends, and results — accessible to anyone who wants to learn how to make effective change.  New to the blog?  Start here.

Top 5 quotes you missed if you weren’t at the 2017 IEN Meeting

The 10th anniversary meeting of the Latin American Impact Evaluation Network was held in D.C. last week, so I jet-set north to learn from some of the brightest minds in the field of IE.  It was a fun group presenting some great papers, which you can now read on their website.  But you had to have been there to catch some of the best quotes from the three-day event.  Here are my top 5:

Continue reading

Refugee policy is not exempt from evidence-based decision-making.

Just because it’s harder doesn’t mean we shouldn’t do it.

There’s been a lot of noise around the topic of immigration lately, in response to a few executive orders (EOs) from the desk of President Trump.  It’s fair to say that the EOs were met with a lot of emotion from all points on the political spectrum.  Indeed, in many conversations, from those on Facebook to the White House press conferences, it was emotion, not evidence, that was driving the dialogue.

Why the lack of evidence?  

Read about refugees and rigorous evidence here

Interview with an M&E-to-programming convert

After three years of managing the M&E system for an INGO’s interventions in Guatemala, Alice was promoted and transferred to the programming department at the INGO’s headquarters in New York City.  As the new Manager of Impact at the international level, she was in charge of programmatic activities in three countries – but her background in M&E gave her a unique perspective on evidence-based implementation.

Read her interview

Sneaking Evidence into Policy-Making: the role of evaluator-turned-advisor

“Your job is to slip some common sense past the prime minister without her noticing.”

Rachel Glennerster, now Executive Director of J-PAL, remembers being tasked with the above burden while working for the UK Treasury during Margaret Thatcher’s term.  It’s a shared burden among policy advisors, and one the political environment of the moment can make easier or harder.  The more evaluators are called on to provide new proof for policy, the more we must learn to navigate the political world in a way that furthers the mission of evidence-based decision-making.

Keep reading

How do you say “charge the tablet” in K’iche’?: Evaluating tech* interventions in Mayan language communities

We just finished our first year piloting an educational intervention that uses tablets to give children access to over one hundred educational games and books.  At year’s end, my team and I were in charge of conducting a first-level qualitative evaluation to determine the program’s merits for expansion.  While the kids were the ones who received the tablets, we were particularly interested in their parents’ perceptions of the school’s program.

More on evaluating tech in a Mayan language…

How to be a crappy consultant, in five easy steps

I have just started a new position in Guatemala, and it involves getting to know an international organization with a huge range of evaluation needs.  I don’t anticipate feeling fully comfortable in my role for another 6 months.  It’s a permanent position (not a consultancy), but this limbo period has still brought to mind the most difficult consultant I’ve ever worked with.  I revisit that experience here in a humble attempt to make sure I don’t repeat the same mistakes.

Keep reading