Development Evaluation Making Sense: Beyond the narrative (and the PowerPoint)
In today’s development space, there is no doubt that there are evaluations and “there are evaluations”!
Many evaluations make sense and many don’t.
It is all too familiar to read Terms of Reference for evaluations in developing countries written in such a way that they are simply “output” focused.
These “outputs” usually refer to an evaluation report (and perhaps a PowerPoint slide deck for good measure)! Added to this is the very short timeline given to complete these “quick and dirty” evaluations.
In fact, many of these end-of-project evaluations are treated as an appendage to the project rather than as integral to programme development, influencing policy formulation and stakeholder learning and development.
While reports are a very important output of an evaluation intervention, there is a need to go beyond the “outputs” and focus more on the outcomes, and even the impact, the evaluation can and should have on its key stakeholders. Of particular significance are the following questions:
- What utility will the evaluation have from design through to completion stage?
- Will there be real learning and development during and as a consequence of this evaluation?
- How will the evaluation process and findings be used to develop and/or improve on what already exists or what is to come?
These questions only scratch the surface of what should be a process that is largely about matching the development needs of key stakeholders with the provision of opportunities to gain new knowledge, insights and skills that make the evaluation design, implementation and findings “real”, “sensible” and “useful” to them.
Development evaluation is expected to go beyond “ticking boxes” against accountability check lists and focus on adding real value to the development process.
To achieve such a development end calls for ongoing engagement and reflection with key stakeholders about what works and what doesn’t. It also calls for understanding the “tide and tempo” of the development context and agenda for the beneficiaries and other key stakeholders.
In a chapter titled “Thinking Outside Evaluation’s Boxes”, the developmental evaluation guru Michael Patton shares some refreshing insights about his work in the Caribbean in the 1980s and the enormous value of adapting evaluation to the Caribbean way of doing things! Though the experiences referenced in the case are more than two decades old, his reflection on the need for evaluation to “make sense” is sobering to say the least.
In fact, much of my experience with programme development and evaluation work in the Caribbean since the mid-1990s mirrors the assertion Patton made in the innovative Caribbean case example he used:
“A lot of evaluation doesn’t make sense. A lot of evaluation is designed within tightly contained boundaries, narrowly prescribed parameters and mandated templates imposed by people who want it done and done their way, whether it makes sense.”
Moving beyond the widespread prescriptive notions of evaluation to allow for greater engagement, inquiry and reflection with stakeholders will go a long way towards making development evaluations make sense.