Democracy and Governance: Measuring for Success


I once commented to a caseflow management trainer that caseflow management is “as much art as science.” He replied that it is actually “much more art” than science. My work on rule of law projects has led me to the same conclusion. Democracy and governance (D&G) programs are dynamic. Each program begins with a clear plan and well-defined objectives, but many competing interests and priorities emerge during implementation. D&G programs aim for positive changes in organizational processes, structures, and people, but how those processes and structures are defined, and how those reforms are supported and implemented, are what really determine success.

I began my career as a musician, so I tend to draw on lessons I learned from music: listening carefully to all voices, staying in tune, allowing interpretation, and even improvising where appropriate. In my D&G work, I apply these lessons by constantly listening to all perspectives, reassessing whether the approach we have taken is the best one, and inspiring belief and confidence in the reform. Attending to all voices keeps us aware of every possible approach and ensures that everyone contributes to the reform process. A project must be flexible enough to respond to good ideas coming out of that collaboration, even ideas that were not contemplated in the original design.

In Indonesia, for example, where I am chief of party on USAID’s Changes for Justice Program (C4J), the impetus for reform came out of the Supreme Court’s desire to implement the new Freedom of Information Law through IT. This was not anticipated in the original design of C4J. That interest, along with many discussions among stakeholders at many levels of the judiciary, led to the development of a new automated case tracking system that grew from 4 pilot courts to 350 courts in under three years – using the courts’ own resources. This has opened the way to new reforms in areas such as case management, human resources, financial management, supervision, and training.

Based on my experiences, in addition to the donor’s standard indicators, a separate set of internal (or even personal), regularly assessed indicators might help us to be more reflective, creative, and responsive during our reform process. Such possible formative evaluation indicators at the level of our engagement with stakeholders might include:

  • How many leaders in key positions do we collaborate with regularly? Are there other key leaders, including new ones, with whom we should be collaborating?
  • Who are the key stakeholders “critical to” the success of the reform? And who are those “critical of” the reform? Are we engaging with each of them?
  • Have we developed a matrix of all reasonable alternative approaches to the reform? And have we assessed the costs and benefits of each of the alternative approaches?
  • Do our meetings and training programs include all voices? Are we ensuring an equal distribution of leaders from higher and lower offices? Of technical and support staff? Of males and females at each level of responsibility? And of multiple regions and ethnic groups?
  • How often do we assess our approved list of indicators against new risks and new opportunities?
  • What new ideas, proposals, or opportunities have we tried, or at least considered?
  • Are any of our efforts failing? And why? Is the failure because of the idea, or should we change our approach?
  • How often do we document, assess, and apply our lessons learned?

A formative evaluation approach is more challenging and time-consuming than simply defining a process and reporting on the standard indicators, but the investment can build a shared vision and a critical mass of support – the difference between merely implementing a project and sustaining a reform.

David Anderson serves as the Chemonics chief of party for USAID’s Changes for Justice Program in Indonesia.
