The 10th anniversary meeting of the Latin American Impact Evaluation Network was held in D.C. last week, so I jet-set north to learn from some of the brightest minds in the field of IE. It was a fun group presenting some great papers, which you can read on their website now. But you had to have been there to catch some of the best quotes from the 3-day event. My top 5 here:
“Academics are a little infatuated with stars.”
-Daniel Ortega, Director of Impact Evaluation and Policy Learning at CAF
By ‘stars,’ he means the asterisks that indicate the statistical (in)significance of a finding in a regression output. Evaluators often ignore any coefficient that isn’t graced with a star. But as Daniel pointed out, we don’t always have the sample size or power necessary to reach statistical significance, and correlations still matter: the sizes and directions of coefficients can still suggest whether or not a project’s theory of change is on track. Plus, the very exercise of collecting data together with policy makers, and explaining what makes an evaluation rigorous, is worthwhile in itself.
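For anyone who hasn’t stared at a regression table lately, those stars just encode conventional p-value cutoffs. A minimal sketch of that convention (the 0.01/0.05/0.1 thresholds below are one common scheme; packages vary):

```python
def significance_stars(p_value: float) -> str:
    """Map a p-value to conventional regression-table stars."""
    if p_value < 0.01:
        return "***"
    if p_value < 0.05:
        return "**"
    if p_value < 0.1:
        return "*"
    return ""  # no star -- yet the coefficient may still be informative

# Daniel's point: an underpowered study might give you p = 0.15 and no star,
# but the coefficient's size and direction can still speak to the theory of change.
print(significance_stars(0.15))
```

The cutoffs are pure convention; nothing magical happens to a finding as it crosses 0.05.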
“Together, we can do policy evaluation on steroids.”
-Arianna Legovini, Head of DIME at the World Bank
Arianna invested considerable time in trying to expand the audience’s perception of impact evaluation. She gave a great example of an IE of an infrastructure intervention: expanding a public transport line to economically marginalized communities. Some people would do a diff-in-diff or other ‘usuals,’ but as Arianna said, this would show you the impact “in the absence of thinking.”
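For readers who haven’t run one, a diff-in-diff simply compares the before/after change in the connected communities against the same change in unconnected ones. A toy sketch, with entirely invented numbers standing in for an outcome like average income:

```python
# Illustrative diff-in-diff on made-up group averages (all numbers invented)
pre  = {"connected": 100.0, "not_connected": 90.0}   # before the transit line
post = {"connected": 130.0, "not_connected": 105.0}  # after the transit line

# DiD: change in the treated group minus change in the comparison group
did = (post["connected"] - pre["connected"]) \
    - (post["not_connected"] - pre["not_connected"])
print(did)  # 15.0 under these made-up numbers
```

The single number this produces is only credible if the two groups would otherwise have moved in parallel.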
A lot of things could mess that up: for example, wealthier families could displace poorer ones in communities that are suddenly connected. Why not first run an evaluation on another type of intervention that also connects communities to markets, to understand these mechanisms? Test the components of a larger system, then use the communities’ reactions to better understand the impacts of larger infrastructure projects.
Of course, this is just one example of creative problem solving for evaluation challenges. But Arianna’s call is to stop limiting ourselves to projects that we know we can evaluate, and start figuring out how to evaluate more types of projects.
“Nowadays it’s fashionable to have alternative facts.”
-Gonzalo Hernandez, Executive Secretary of CONEVAL (Mexico)
Gonzalo and CONEVAL work heavily with the Mexican government to evaluate its public policies and to change, drop, or expand them depending on the results. It’s incredible work that seems to have very promising results: Gonzalo maintains that the Mexican government effectively uses the research, and that the overall result is increased government transparency. But don’t be naïve, he cautioned. Governments sometimes need some convincing, particularly if results run contrary to their long-held political ideals (and campaigns!). His advice to young researchers is to start understanding how policy makers behave; these skills will be invaluable for influencing policy in the future.
Read more about Mexico’s state-led M&E here.
“The kids who benefited most from this program are short kids with educated mothers.”
-Berk Özler, Senior Economist in Development Research Group at the World Bank
This quote stuck out to me for several reasons. First, it’s such an eccentric finding. In Berk’s research, the “short kids” are those whose early-life nutrition stunted their growth. These heterogeneous effects not only provide insight into where the early childhood development (ECD) program was most effective, but also raise the question: why do educated mothers still have stunted kids? Apparently that particular distribution exists in Malawi, and the teacher training and parental support classes were most effective with that demographic on a few indicators of ECD.
Secondly, it may suggest that an ECD program combining teacher training and group parenting classes could be paired with adult education programs for a positive synergistic effect. Which brings us to our final quote:
“Complementaries are one of the sexiest things there is.”
-Miguel Urquiola, Professor of Economics and International Affairs at Columbia University
Admittedly, the part that made this quote so good was mostly the reaction on Felipe Barrera-Osorio’s face when he unexpectedly heard the word “sexy” used in the middle of a panel of economists.
A complementary is a program that acts like a catalyst: it might be good in itself, but its added value lies in the fact that it also boosts the impacts of a separate but related intervention. The mother’s literacy + ECD combo from quote #4 above is a good example of this. Basically, if you can find a program that works well and complements another intervention, you’ve hit the jackpot.
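In regression terms, a complementarity like this would show up as a positive interaction term between the two programs. A toy sketch with entirely made-up data and effect sizes (the `ecd`/`adult_ed` variables and their coefficients are illustrative, not from Berk’s study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ecd = rng.integers(0, 2, n)        # 1 if household got the ECD program (simulated)
adult_ed = rng.integers(0, 2, n)   # 1 if mother got adult education (simulated)

# Invented outcome: each program adds 1 point alone, plus 2 extra when combined
y = 1.0 * ecd + 1.0 * adult_ed + 2.0 * (ecd * adult_ed) + rng.normal(0, 1, n)

# OLS with an interaction term: intercept, ECD, adult ed, ECD x adult ed
X = np.column_stack([np.ones(n), ecd, adult_ed, ecd * adult_ed])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # the last coefficient estimates the complementarity
```

A positive, meaningful coefficient on the interaction term is exactly the “synergistic effect” from quote #4: the combined package delivers more than the sum of its parts.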
But this wasn’t even remotely Miguel’s most important contribution to the meeting. He gave a fascinating Keynote Address covering the topic of education and evaluation. Summing it up here won’t do it justice, but the argument is that two realizations are missing from evaluation research in education:
- Schools and principals measure teacher effectiveness not only by test scores but by a wide range of other things (multidimensional outputs). If a teacher is well-liked by the PTA, or if graduates of the school make more money than their peers from other schools, maybe test scores aren’t an effective way of advocating for better teachers…?
- Stakeholders in the education system can also change their behavior in response to changes in educational quality. For example, if parents know that their kids are suddenly getting a higher-quality education, educational efforts at home might decrease. It’s a good time to start measuring behavioral responses to education interventions.
Overall, the meeting was an impressive success — huge congrats to the organizers (listed on the first link above). Keep your eyes open for next year’s meeting, which as usual will be posted on the events page.