Imagine it’s your job to prevent crime and violence in Guerrero, the second most violent state in Mexico, with 46 murders per 100,000 inhabitants. What would your prevention program look like? What sort of information would you want access to before you designed your program?
Much like actual policy-makers, you would probably want to know a few things right away. Where is the violence occurring? What groups of people are committing the violence? Have any violence prevention approaches proven successful in the past?
In Mexico, where the homicide rate is rising again (19,000 homicides in 2014 versus under 9,000 in 2007), policy-makers generally know the answers to the first two questions. Research shows that 90 percent of the country’s homicides are concentrated in 465 of its 2,457 municipalities. Research also shows that young men with little education are the most frequent perpetrators and victims of violent crime. All of this suggests that resources should be devoted to reaching these young men in a relatively small number of municipalities.
Unfortunately, when it comes to the third question, policy-makers are in the dark. There is little evidence about which approaches work (and how they work) to prevent crime and violence among these particular groups in these areas.
This lack of evidence is why Juntos para la Prevencion de la Violencia (JPV), a five-year USAID program that seeks to generate and use evidence so more effective interventions can be replicated and scaled up, has designed the JPV Scale.
Through this tool, we aim to be more rigorous about what we call a “best practice” by categorizing interventions according to their use of high-quality evidence. While similar tools rate only empirical evidence, the JPV Scale is adapted to the Mexican context. It recognizes that policy-makers’ understanding of, and capacity to generate, evidence on crime prevention may be low, and that programs may have solid non-empirical evidence that deserves consideration. We are trying to be less empirically oriented, at least in the short run. We also assess implementation fidelity, which gives us more information for identifying interventions that can be replicated or scaled up in different contexts. Our goal is to identify interventions that seem promising because they have both a strong theoretical foundation and high-quality evaluations.
We want to establish a baseline on the use and quality of evidence in Mexico and make sure that, over time, implementers know how (and are willing) to produce better evidence about impact, so we can learn which types of interventions are bringing down violent behavior, crime, and homicide rates. To make this happen, JPV offers training to civil society organizations and public actors to build monitoring and evaluation (M&E) capacity in the specific areas identified by the tool. Through this technical assistance, we want them to feel more comfortable with, and better informed about, what evidence really is, how to create it, and how to use it.
In development fields like microfinance and access to education, there is plenty of evidence, mainly from top-quality randomized controlled trials, that tells you which intervention has the most significant positive effect. Not only that, you can compare costs across interventions (is it more cost-effective to give uniforms, deliver free lunches, counsel parents, or give children scholarships so they are more motivated to attend school?).
When it comes to crime and violence prevention, we can only hand-pick high-quality, rigorous evidence about what works and how. We hope that in the coming years we can improve access to high-quality statistics, generate better non-empirical and empirical evidence, and strengthen the use and analysis of that evidence so it becomes a constant element in policy-making.
Ursula Quijano is the monitoring and evaluation specialist for USAID’s Juntos para la Prevencion de la Violencia (JPV) program. She will be presenting on the JPV Scale at the 2016 American Evaluation Association Conference.