I was listening to comedian Rod Quantock on the radio today talking about reviews. He said he never reads reviews because if he reads the good ones, then he’d have to read the bad ones too – and they can hurt.
Got me thinking about a couple of things: the similarity between reviews and evaluation, and the fine line between criticism and learning.
Of course there are differences between reviews and evaluations. Reviews are often done by one person, read by many; evaluations may be done by many, read by few. The effect is similar – encouraging others to attend a favourably-reviewed performance; encouraging others to employ (or re-employ) someone who receives favourable evaluations. Or continue with a project. The problem with many evaluations is that they are not used at all. Done to satisfy some policy, and then forgotten. Pity really, because good evaluations can really help with learning and improvement.
That’s the main reason why I tend to evaluate ‘on the run’ – using some processes to assess how things are going so that I can modify in the moment. It’s a bit late to discover things were going pear-shaped after the event. Oh, and if you’re not aware that things are going pear-shaped, you’re probably in trouble anyway.
Even when someone insists on a written survey for evaluations, I still use my own participatory processes at the end of a workshop because it gives me invaluable information, helps share the learning amongst the participants, reinforces what we’ve done (and learnt), is fun, and is a good way to bring an event to a close.
Here are my current favourite approaches – I usually use all of these and encourage people to self-select which one they want to do so that they can evaluate in a way that suits them. Each approach is posted on the wall on a flip chart so they are quite visible. I also get the small groups to report back in the order below. It always ends with a real buzz amongst the group.
1. ORID Report
Facts and figures – Reactions – Significance – Difference it has made