Post originally published as a collection of 13 evaluation cartoons on March 25, 2014. Updated with way more context on May 27, 2020.
When you search “What is Evaluation?” on Google, you get the kinds of responses you might expect.
- The Wikipedia page…if a topic is important enough it will have a Wikipedia page. And whatever you may think about Wikipedia, it’s one of Google’s favorite sites.
- Dictionary pages. We did ask for a definition, right?
- Evaluation association definitions.
- Large government and NGO definition pages.
They are somewhat formulaic. Usually the definition is tied directly to a prominent author or theorist. Or it’s sourced to a recent article which sourced a prominent author or theorist. Then the organization expands upon that definition.
But why are you asking? Is a direct answer really all that important?
If it is, well then, here is a good one.
What is Evaluation?
Evaluation is the process of determining the merit, worth and value of things, and evaluations are the products of that process.
Michael Scriven – Evaluation Thesaurus, Page 1
Now for the indirect answer, let’s dive deeper.
Evaluation can be hard to explain.
Back in 2014 the American Evaluation Association put together a task force with the purpose of defining evaluation.
The statement is meant to encourage dialogue — so based on comments and responses it will be revised periodically. Thus, your reactions and comments are encouraged (see comment section below). The Task Force was composed of both long-time evaluation professionals and AEA members newer to the profession. All have experience and expertise in communicating to others about evaluation. The task force included:
Michael Quinn Patton, Chair, Edith Asibey, Jara Dean-Coffey, Robin Kelley, Roger Miranda, Susan Parker, and Gwen Fariss Newman
The American Evaluation Association’s “What is Evaluation?”
The Canadian Evaluation Society took a similar route to arrive at their own.
(Through a reflective process, the CES Board of Directors, supported by a consultation of members, has crafted and adopted the following as the CES definition of evaluation. PDF version. Cheryl Poth, Mary Kay Lamarche, Alvin Yapp, Erin Sulla, and Cairine Chisamore also published Toward a Definition of Evaluation Within the Canadian Context: Who Knew This Would Be So Difficult? in the Canadian Journal of Program Evaluation, vol. 29, no. 3.)
The Canadian Evaluation Society’s “What is Evaluation?”
For most of us “What is Evaluation” is an open question. It evolves over time and adapts based on context. But I am not sure evaluators would have it any other way.
Research vs Evaluation
From this perspective, evaluation “is a contested term”, as “evaluators” use the term evaluation to describe an assessment, or investigation of a program whilst others simply understand evaluation as being synonymous with applied research.
The Evaluation Wikipedia Page (accessed May 27, 2020)
I’m pretty sure the Wikipedia evaluation page editor is trying to call us out.
But honestly, there are a lot of converted researchers in evaluation. And there are a lot of “evaluators” who are really just doing research.
I was a converted researcher. I saw the similarities in the methods and thought it was pretty much the same thing. But evaluation is not just about what you do.
The how, where, who, and why really matter in evaluation.
Evaluation is not…
Research: The purpose of research is to generate new knowledge, while evaluation is about making evaluative claims and judgments that can be used for decision making and action.
From FSG’s What is Evaluation, Really?
Figuring out the Type of Evaluation
Just like there is not one definition, there is not just one type of evaluation or one way to do an evaluation.
Evaluation can range from very simple service evaluations to complex evaluative research projects. Each service will require a different approach depending on the purpose of the evaluation; the evidence base, stage of development, and context of the service; and the resources and timescales for the evaluation.
Evaluations can focus on implementation and learning (formative evaluation), how a service works (process evaluation) and whether it has worked (outcome/summative evaluation) – or all of these aspects over the life cycle of a project.
A lot of organizations boil evaluation down to these three kinds of pursuits.
On BetterEvaluation, we use the word ‘evaluation’ in its broadest sense to refer to any systematic process to judge merit, worth or significance by combining evidence and values.
But there are all sorts of types of evaluation out there in the world.
Here are some thoughts on the topic by Emily Elsner, a current member of the freshspectrum Panel of Experts.
Michael Quinn Patton defines evaluation thus:
Evaluation involves making judgements about the merit, value, significance, credibility, and utility of whatever is being evaluated: for example, a program, a policy, a product, or the performance of a person or team.
Evaluation Facilitation
Evaluation seems to raise two assumptions in people: first, that there is an easy ‘off-the-shelf’ solution, and second, that evaluation is going to be critical and negative. The angle of judgement, and (as Patton elaborates in his book) the association of judgement with values, is a crucial aspect that can be forgotten. Yes, evaluation can be critical, but it can also provide strategic guidance, support decision-making, and more – all positive, useful things for projects and organisations.
Linked to this, if evaluation is to be supportive of projects and organisations, then it needs to be tailored to the project/organisation; otherwise it is just measuring for the sake of it, as Muller, in his book ‘The Tyranny of Metrics’, reminds us:
There are things that can be measured. There are things that are worth measuring. But what can be measured is not always worth measuring; what gets measured may have no relationship to what we really want to know […] The things that get measured may draw effort away from the things we really care about. And measurement may provide us with distorted knowledge – knowledge that seems solid but is actually deceptive.
The Tyranny of Metrics
Emily Elsner is an impact and evaluation consultant based in Zurich, Switzerland. She has spent the last few years working in the migration-entrepreneurship-social enterprise space, and is now independent, balanced between the social and environmental spheres.
Overcoming the inherent tension between the evaluator and the program being evaluated.
Who gets to say what works and what does not work? What does it all really mean? It’s definitely not hard to move back and forth between evaluation and philosophy.
Evaluation and evaluative work should be in service of equity.
The first principle of the Equitable Evaluation Framework
Focusing on the Right Outcomes
We are what we measure.
UNEG’s definition of evaluation further states that evaluation “should provide credible, useful evidence-based information that enables the timely incorporation of its findings, recommendations and lessons into the decision-making processes of the organizations and stakeholders.”
The United Nations Office of Drugs and Crime – What is Evaluation?
Connecting the Dots
Evaluative reasoning is the process of synthesizing the answers to lower- and mid-level questions into defensible judgements that directly answer the high-level questions. All evaluations require micro- and meso-level evaluative reasoning… not all require it at the macro level.
Jane Davidson’s UNICEF Methodological Brief on Evaluative Reasoning
Making Comparisons
In its simplest form, counterfactual impact evaluation (CIE) is a method of comparison which involves comparing the outcomes of interest of those having benefitted from a policy or programme (the “treated group”) with those of a group similar in all respects to the treatment group (the “comparison/control group”), the only difference being that the comparison/control group has not been exposed to the policy or programme.
EU Science Hub’s Counterfactual Impact Evaluation
Systematic Assessment
Effective program evaluation is a systematic way to improve and account for public health actions by involving procedures that are useful, feasible, ethical, and accurate. Several key documents guide program evaluation at the CDC.
The CDC Approach to Evaluation
At various times, policymakers, funding organizations, planners, program managers, taxpayers, or program clientele need to distinguish worthwhile social programs from ineffective ones, or perhaps launch new programs or revise existing ones so that the programs may achieve better outcomes. Informing and guiding the relevant stakeholders in their deliberations and decisions about such matters is the work of program evaluation.
From Rossi, Lipsey, and Henry’s book Evaluation, A Systematic Approach
Evaluation is the systematic assessment of the operation and/or the outcomes of a program or policy, compared to a set of explicit or implicit standards, as a means of contributing to the improvement of the program or policy.
From Carol Weiss’s 1998 book Evaluation.
Impact Assessments
Impact analysis is a component of the policy or programming cycle in public management, where it can play two roles:
- Ex ante impact analysis. This is part of the needs analysis and planning activity of the policy cycle. It involves doing a prospective analysis of what the impact of an intervention might be, so as to inform policymaking – the policymaker’s equivalent of business planning.
- Ex post impact assessment. This is part of the evaluation and management activity of the policy cycle. Broadly, evaluation aims to understand to what extent and how a policy intervention corrects the problem it was intended to address. Impact assessment focuses on the effects of the intervention, whereas evaluation is likely to cover a wider range of issues such as the appropriateness of the intervention design, the cost and efficiency of the intervention, its unintended effects and how to use the experience from this intervention to improve the design of future interventions.
OECD’s What is Impact Assessment
We need to accept the fact that what we are doing is measuring with the aim of reducing the uncertainty about the contribution made, not proving the contribution made.
John Mayne’s Addressing Attribution Through Contribution Analysis: Using Performance Measures Sensibly
Want more evaluation cartoons?
If you were brought here because of the evaluation cartoons, you can find lots more by checking out the following post: 111 Evaluation Cartoons for Presentations and Blog Posts. The post will also provide you with information on licensing terms and use.
Courtney says
I laughed too hard at a lot of these. Possibly out loud.
Chris Lysy says
Laughing too hard is encouraged. Thanks Courtney 🙂
Wendy Tackett says
“Figuring out the type of evaluation” hits all too close to home too often! I think, as a field of evaluators, we are doing a better job of educating people about the value of thinking about and planning for evaluation at the beginning of a project, but other fields (e.g., education, healthcare, nonprofits) need to systemically build it into their thinking as well 🙂
Chris Lysy says
Thanks Wendy 🙂
Fazeela Hoosen says
These cartoons are very interactive and a very creative way of explaining what evaluators do, what evaluations are, and the benefits of different types of evaluations. These cartoons will be extremely useful in academic courses as well as for explaining evaluation to stakeholders in a simple, non-threatening way…would it be possible for me to use these when explaining evaluations to stakeholders?…keep up the good work!
Chris Lysy says
Thank you Fazeela,
Please feel free to use the cartoons wherever and whenever 🙂
Francesca Wright says
Chris,
We humans love a story, an image, and a giggle. You offer all three.
It is generous of you to so freely share. I have your blog bookmarked and look forward to following your work and finding ways to incorporate cartoons into my communications.
Cesca
Chris Lysy says
How nice, thanks Cesca 🙂
Lisa Richardson says
Chris- I will be holding a meeting with a group of people about evaluation options for a project. The group knows little about evaluation and is much more familiar with research. These will come in handy!
Chris Lysy says
That’s great Lisa 🙂
Josh Penman says
I think the client’s perspective in “figuring out the type of evaluation” is spot on: from there, it’s the job of the evaluator to manage the interview well enough that they can put the evaluation type into Evaluation Jargon.
But they should be sure that that jargon is always translated back into easily understandable terms when they report to the client. . .
And maybe when we think of great translations for the jargon that clients will understand, maybe we should start using them more and more ourselves:) (As long as the questions of continuity of language and the in-group cohesion and sense of identity that are conferred by the jargon are dealt with. ) … I really want term-by-term evaluations of terms in Evaluation:)
Chris Lysy says
Thanks Josh.
Do you want to put together a list of evaluation Jargon? I could see that being a good base for a cartoon post.
Josh says
Making a list sounds good:) I’ll see if I get around to it:) Sounds like something to put on the evaluationwiki.org if and when we are able to resurrect it. . . but for now, I have to study for my next evaluation class!
KJ says
The use of cartoons disarms people and allows for the seriousness and complexity of the topic to be fully engaged. This was a fun way to discover that there is work to be done through effective communication.
Rebecca Muller says
Thank you for sharing your drawings. I would love to use a few of them in my visual representation of evaluation. It’s always good to find a humorous way to get the point across. 🙂
Chris Lysy says
Please feel free Rebecca 🙂
Eugenia says
Great stuff Chris, will surely use your cartoons for explaining evaluation! Please make more if you have time, I’d be most interested if you could make any cartoons on international evaluation and the challenges that can bring about, thanks!
Sally Cupitt says
HI there,
I have found a couple of brilliant Freshspectrum cartoons online – one above and one elsewhere – can I use them in an article I am writing about randomised controlled trials, please?
Thanks!
Chris Lysy says
Feel free to use them anytime Sally!
Maria Gutknecht-Gmeiner says
Hi Chris, these cartoons are not in the drawer. They are fresh and constantly used. With your permission I translated them into German some years ago. I know them by heart. The only one I did not know yet is the one with death looming. (Great!) They have enriched my and my participants’ experience in numerous workshops, seminars, conference presentations etc. You are the Gary Larson of evaluation. You should do a book, one of these books everyone must have, like a textbook – only it does not have so much text. “Evaluation according to Chris”, “what you always wanted to know about evaluation but did not dare to ask”… Seriously, your cartoons are really really great. They should be one of the pilars of evaluation teaching 🙂 BTW – I don’t really have this one favourite cartoon, because all of them are great. But with the social media hysteria, your cartoon about the million klicks on youtube comes to my mind quite frequently. This is speaking truth to all those mindless administrators, bureaucrats and “experts” (also the young “we know everything better because we have a instagram, facebook (you name it) acccount” …) – and increasingly it’s them who make everybody’s life so hard (and not only the ones officially “in power”). So keep on and find a way to make this evaluation cartoon “textbook”:)