Our Approach

Our approach to program evaluation and performance measurement, developed over almost three decades of practice, is grounded in five principles:

Look forward, not backward

Most program evaluations look backward at what happened in a program's past. Our approach is forward-looking: it aims to provide information useful to upcoming decisions about how best to use resources in designing and delivering programs and services. We do this by paying careful attention, during the evaluation design stage, to the program's current context and environment, and by sounding out decision-makers about their future information needs and upcoming decision cycles.

Choose and acknowledge positionality

Starting from the premise that no evaluator is ever completely neutral, our approach aims to surface potential biases and deal with them openly by deliberately choosing and acknowledging our position vis-à-vis the entity being evaluated (in sociological research, positionality¹). By choice, this is often the position of "critical friend"²: our experience has confirmed that evaluations conducted from this position are often more useful and effective in supporting program improvement.

Use the right tools, properly

As program evaluators and performance measurers, we have experience with a wide range of qualitative and quantitative tools, models, methods and techniques. Based on a judicious assessment of our clients' information needs, appetites, and capacities, we bring the right tools to each unique assignment. Moreover, we use these tools properly by adhering to professional competencies and standards in program evaluation, and by applying recognized standards of practice in qualitative and quantitative data collection and analysis. To this end, we use statistical analyses that are appropriate to the size and nature of the quantitative dataset, and remain within a qualitative paradigm when analysing qualitative data.

Rigour is a stance, not a recipe

We define rigour as ensuring that our evaluations (designs, tools, analyses) leave room for us to be surprised by the results. Whether qualitative or quantitative, simple or complex, an evaluation whose results are known beforehand, or where only one result is possible, does not pass this test. In our practice, every piece of the process is challenged from this stance.

Learning matters most

We believe that evaluations are worth doing only to the extent that they provide new insights about program performance that lead to improvement. Evaluations done solely for accountability or compliance are huge missed opportunities for learning. Happily, in our experience, everyone involved in designing, managing and delivering social programs is after the same thing: doing the best job possible. Learning is therefore always natural and welcome. Our consultants are mere vehicles and our reports are mere vessels: the most important result of any evaluation process is new knowledge, fully integrated into programming³.

¹ E.g., Pasquini, M.W. & Olaniyan, O. (2004). The researcher and the field assistant: A cross-cultural, cross-disciplinary viewing of positionality. Interdisciplinary Science Reviews, 29(1), 24-36.
² Rallis, S. & Rossman, G. (2000). Dialogue for learning: Evaluator as critical friend. New Directions for Evaluation, 86, 81-92.
³ I am grateful to the Aboriginal scholars I met in the course of an evaluation for the Social Sciences and Humanities Research Council, who showed me that real knowledge doesn't exist in reports or libraries. See "Head Knowledge is Hollow".