Dialogue evaluation

Many years ago, at the beginning of my evaluation career, I read about 'dialogue evaluation'. It is a simple mechanism that we could use more frequently to mediate between independent evaluation and self-evaluation - two distinct approaches that many see as incompatible. Dialogue evaluation (see below) can have real benefits in this context, yet it is seldom applied when independent evaluation teams descend on the countries or societal groups under evaluation - especially when the evaluation is at a strategic rather than micro ('community') level, and a 'parachute in, parachute out' evaluation team is used.

Dialogue Evaluation

I have not seen the term used to refer specifically to the systematic comparison of self-evaluation findings with the (emerging) findings of an independent evaluation, and to understanding the reasons for any differences. It is related to what is regarded as dialogue in evaluation, or 'dialogic evaluation', which refers to engagement and interaction between evaluators and stakeholders, ideally but not necessarily aimed at reaching some type of consensus (see here and here). Such engagement is usually labelled 'participatory evaluation', often considered anathema in independent evaluations.

It has become good practice to ask management or executive teams to produce a self-evaluation report aimed specifically at informing an independent evaluation. Evaluation teams commissioned to do independent evaluations tend to use such a report as one of many inputs into their process. They seldom engage in a systematic analysis followed by one or more systematic conversations - a dialogue that is detailed, respectful and of mutual benefit - about the emphases, issues and (emerging) findings that differ between the two evaluative exercises. Too often such engagement happens only at the end of a field visit, or at the end of the independent evaluation during a 'stakeholder workshop' or request for a 'management response'. By then it is usually too late to analyse each other's sources of evidence, the different interpretations of that evidence, and the root causes of such differences - or at least to develop mutual respect for the differences.

It will not compromise the independence of the evaluation to have such a dialogue with those who did the self-evaluation, and to let these conversations continue to shape data collection, final conclusions and/or recommendations.

When opportunities for mutual learning, respect for alternative evidence or interpretations of evidence, and in-depth understanding get lost, the quality and credibility of the evaluation suffer. A stronger focus on more systematic and analytical 'dialogue evaluation' in evaluation methodology guidelines can add significant value to the utility and credibility of independent evaluation processes.