Updating the DAC Evaluation Criteria, Part 6: Concluding summary
After many spirited comments on the DAC criteria posts, I want to conclude by highlighting the main lines of argument that have emerged. One note: while I have focused the series on development, the arguments are also relevant in the humanitarian sector and at the interface between humanitarian work and development.
I argued that the DAC criteria should get due credit. As noted in Part 1 of this series, they have been useful in many ways. Most importantly, they drew our attention to important aspects that we need to consider whenever we evaluate in the development space.
The DAC criteria had another interesting influence. Conventional practice dictates that evaluations are framed by tailored evaluation questions determined by stakeholders and evaluators. But in development evaluation the reverse has been true, in large part because of the dictates of the DAC criteria.
This runs counter to conventional evaluation practice, and I consider it a good thing. I know this is a controversial stance to take, but political imperatives, ignorance or lazy evaluation design can prevent stakeholders and evaluators from engaging with what really matters when assessing development. And sadly, I believe we cannot trust that all evaluators, or even evaluation commissioners, will have the power or inclination to ensure consistently that negotiations around questions and criteria focus on such issues.
I cannot emphasise this issue enough: If we evaluate for development, there are issues on which we have to focus (see my blog post and article on this topic). We cannot each time leave the questions and criteria completely in the hands of stakeholders.
On the other hand, the DAC criteria conceptualisations and definitions are not sufficient to ensure that we are effectively evaluating contributions to development, as discussed in Part 2 of the series. They do not reflect all the really important issues to consider, especially when development is viewed through a complex adaptive systems lens. They do not compel us to consider critical aspects such as the need for coherence and synergy between policies, interventions, goals, activities and so on; the significance, rather than the relevance, of what is being done or achieved; the neutralising effect that negative impacts can have on positive achievements; the crucial linkage between impact and sustainability that is necessary to assess almost any intervention as a successful contribution to development; the need for responsiveness to co-evolving cultures and contexts, as well as for improvisation during implementation; the need to have an environment/ecosystems lens on everything - and more, as discussed in Part 5 of the series.
This matters a great deal more in the Global South, where all the low-income and lower-middle-income countries (LICs/LMICs) are located. Development challenges are much more pronounced in these countries, and this is where 'contributions to development' really matter. Development has to be transformative. Development trajectories at national or regional level must evolve from a relatively low base in almost every aspect of society. Much has to happen in sync, and positive trajectories must be sustained for a long time. This is even more challenging in the era we have now entered - one in which major disruptive forces as well as exciting new models are influencing development.
Even single interventions in the LICs and LMICs have the explicit or implicit intention to contribute to positive development trajectories in the short, medium or long term. Why, then, do we not evaluate more seriously for that, and try to resolve what some call the micro-macro paradox?
Evaluation criteria have to help us to make judgments that are sufficient and appropriate for this purpose. From this perspective, the existing set of DAC criteria have deficiencies that cannot be remedied through more nuanced definitions or better application. We have to have criteria that are sufficient, strategic and nuanced enough to enable us to relate our findings and summative judgments with confidence to successful contributions to development at national or regional level.
Three dilemmas
Three important dilemmas confront us:
First, we have limited resources for evaluation, and hence limitations in how many criteria we can deal with in any one evaluation. We have to be pragmatic about this without losing the quality and utility of evaluations for development.
Second, as I said earlier, conventional evaluation practice leaves stakeholders free to determine whatever evaluation questions and criteria they believe will be useful at a particular snapshot in time. Yet either our evaluation questions or our evaluation criteria have to force us to focus, every single time, on the crucial aspects that have to be assessed if we are serious about evaluating for development.
Third, we need to have a credible process in place (i) to determine whether a standard set of 'generic' criteria is desirable or not; and if so, (ii) to rethink the DAC criteria, or develop a completely new set. Much has changed in the 20 years since the DAC criteria became widely used. The 2030 Agenda concedes that 'development' is now necessary in all countries. The community of evaluators is now global. Geopolitical power has been shifting, and the Global South wants a strong, equal voice in everything, including in processes that will influence the global evaluation system. We still have to figure out what this means for any criteria reform process - or whether there should even be such a process.
I do not have ready solutions - except perhaps for dilemma 2. I now finally turn to that.

One organising framework for development evaluation criteria
In Parts 3-5 of this series I proposed two changes to our current practice in order to have an organising framework that will remind us what is important when we select evaluation criteria for any evaluation:
First, thoughtfully tailor the criteria for each evaluation. Negotiate for each evaluation a set of criteria suited to its specific purpose and context, but make the notion of contributions to positive development trajectories the core focus of every evaluation. Beyond stakeholder questions and criteria, also consider the inherent nature of development viewed from (i) a national or regional perspective, and (ii) a complex adaptive systems perspective. Then make a final decision about the evaluation questions and criteria.
Second, draw from each of three categories to tailor the criteria:
Category 1: Criteria determined by the characteristics of development noted above. They have a signalling function, as Osvaldo Feinstein pointed out in a comment on Part 5 of the series: they signal what is essential to attend to when evaluating development. The criteria in this category therefore have to be considered the core, non-negotiable set that should be widely applied, similar to the DAC criteria.
Category 2: Criteria determined by the intersection between the organisation or partnership's mandate, and societal, regional or global norms - a somewhat flexible set, as norms and priorities can change with time, context and stakeholder.
Category 3: Criteria determined by stakeholder interests - a completely flexible set as they will depend on stakeholder needs at a particular snapshot in time.
With slight adjustments in definitions and descriptions, most of the criteria can be applied across diverse types of evaluations and evaluands. Rubrics that make our values and yardsticks explicit will be critical. Some of the criteria will challenge our practices. We will also need to do much more to shift our evaluation practice away from our obsession with impact in isolation from everything else, and back towards evaluating much more smartly for appropriate design and implementation, and for pathways to success.
But the debates about Doing Evaluation Differently have started, and should gather momentum in 2018. For this, I want to thank two great colleagues: Caroline Heider, the World Bank's IEG Director, who initiated the discussion about our criteria with so much insight, and Indran Naidoo, Director of the Independent Evaluation Office of UNDP, who commissioned and inspired the work that led to this series of posts as part of his ongoing efforts to improve the quality of UNDP's evaluation function.