Jane Davidson will be talking about causation as part of the BetterEvaluation/AEA coffee break webinar series. If you read this prior to Tuesday, May 21, 2013, 4:00 – 4:20 PM EDT, you can sign up for the free live webinar. If you get to this after that date, you can go here for an Open to the Public recorded version. Just search the page for “CBD141.”
Prior to the webinar I asked Jane if she wanted a cartoon. She said yes, so of course I created a whole set.
About the Illustrations
The expertise is Jane’s; the cartoons are my contribution. If you would like to learn more about causation, check out BetterEvaluation.
I have a few things going on this week, so I’ll pretty much be an absentee blogger. Please forgive my lack of prompt RTs and comment responses. If my awesome loyal followers could do me a favor and reply to any comments and RTs you notice, I would be much obliged.
Oh, one more thing: the best way to follow this blog is via my email list. Each time I post (about weekly) I send a short email. Also, I have an idea for a special giveaway just for my email followers and, as soon as I get the chance, I will make it available.
A few notes:
- If you like the post, write a comment and let me know.
- Share it with colleagues. Seeing people share my cartoons inspires me to create more.
- What do you have to say about causation? Let us know in the comments.
- Please feel free to use my cartoons in presentations, training materials, etc.
Defining Causation
First things first, I asked Jane to give us a quick definition (FYI: you can tell this is her response because of the “u” in “favourite”)…
My favourite definition of causation is Scriven’s from his Evaluation Thesaurus, 1991.
“Causation: The relation between mosquitoes and mosquito bites. Easily understood by both parties but never satisfactorily defined by philosophers and scientists.”
I don’t have a formal definition I use myself, but I usually explain it by saying that you can’t do outcome or impact evaluation unless you know that those things you are documenting as “outcomes” and “impacts” actually “came out of” or “were impacted by” the program/policy/project/etc.
If you don’t have any evidence they did, then all you are documenting are coincidences. And what’s the point of that?
Correlation is not causation
Jane’s key point: You can’t actually do an impact or outcome evaluation without causal inference.
Over-complicating causality
Jane’s key point: We really do seem to be massively overcomplicating this space by thinking it’s harder than it is. We make causal inferences every day in our real lives and the way we do it is often quite sound. So let’s build on that, find ways to do it more systematically, transparently, rigorously, defensibly.
Real gold standard
Jane’s key point: Causal inference – like evaluation – is more about reasoning than methods; the real Gold Standard is sound causal reasoning backed by whatever evidence, whatever mix of methods you need to make the case for the audience you are speaking to (and using the evidence you can cost-effectively get your hands on).
Non-experimental and qualitative causality
Jane’s key point: Causal inference CAN in fact be done with non-experimental and even with purely (or heavily) qualitative methods. Counterintuitive for those raised in the traditional social sciences, but it really is true. [It also means you are not off the causation hook if you do qualitative evaluation!]
Never 100% certain
Jane didn’t send me this specific point, but she has addressed it in the past (PDF), and I like posting at least five cartoons…
The causal link doesn’t have to be demonstrated to 100% certainty; we need to match the level of certainty to the decision-making context (not throw in the methodological kitchen sink).
Additions
What else can you add to the discussion?