The larger roads in northern India have lane markings. They appear to be indicative, as is the direction of traffic. Cars and motor bikes jostle with rickshaws, trucks (so many trucks!), buses, bicycles, camels, donkeys, cows and dogs. And people. Crossing a road, on foot or in any vehicle is a case of trusting the process.
The only thing I saw that was scary was a drunk driver in charge of a B-double truck. It was on a back road; he weaved from one side of the road to the other, seemingly oblivious to his erratic trajectory.
I’m sure there are many accidents, yet I didn’t see any. And a lot of vehicles, which you’d expect to be a smorgasbord of dints, scratches, scrapes and bumps, are remarkably free of any marks. And the horns! How anyone can make sense of all that noise is beyond me, yet they seem to manage. It’s not like Melbourne, which in many ways is scarier – certainly faster and a lot more aggressive.
Which got me thinking about expectations. Everyone talks about the traffic in India yet it’s not possible to really know what it’s like without actually being in it. We only have our own field of experience to draw on and if that doesn’t include the erratic, and seemingly chaotic, way traffic is in India, then it’s hard to imagine, and it’s hard to know what to expect.
In retrospect, I can talk about what I experienced, what surprised me, what scared me, and I can make comparisons with other traffic experiences. If I’m asked in advance what my expectations are I can only talk in generalities: I think it will be different, more chaotic, busier than I’m used to. Do I expect to be scared? Maybe.
And then there’s that oft-asked question: were my expectations met? That’s always hard for me to answer. Maybe if I was clearer about my expectations I could answer yes or no. Yet I’ve already explained why I can’t be clearer about my expectations.
So it’s a nonsense question – to ask before an experience, and afterwards.
Better to focus on the reality of experiences rather than obscure and abstract expectations.
If you look at the top of my blog you’ll see this. Maybe one day I’ll write a post about why I call my business Beyond the Edge (in case you haven’t worked it out already.) And I do write a lot about facilitation. What’s with the evaluation though?
Here’s the back story.
I had been working with a friend, Peter Box, in East Gippsland. We had finished a two-day workshop and were driving back to Melbourne (where I was living at the time). He had to stop off to have a meeting with a woman who was heading up the evaluation team in the State Government Department of Primary Industries. I tagged along. She wanted him to sit in on some training that was being delivered where there were some difficulties. I can’t remember why, but she invited me too. So a few weeks later we went to a three-day evaluation workshop designed to equip the participants with the skills, knowledge and enthusiasm to develop evaluation plans and do formative evaluation to improve existing or subsequent projects. It was all gobbledygook to me. What we could do was advise on the delivery – moving away from a lecture style to a more hands-on, practical, participatory approach – facilitated learning, if you like.
One of the trainers was Jess Dart. She was completing her PhD on Most Significant Change and wanted a proofreader. I’d been a journalist and an editor in another life, and work was slow, so I agreed to proofread her PhD. I didn’t know much about evaluation or MSC, but was intrigued by a technique that relied on gathering stories and facilitated workshops. One thing led to another and I found myself working with Jess to deliver that same evaluation training. We worked well together – Jess bringing the content about evaluation and me bringing facilitation and learning processes. Through osmosis I suppose I gradually learnt the content too. Jess left to have babies and I found myself continuing to deliver this training. I also found myself delivering evaluation training to scientists in Vietnam, the Philippines and Fiji, and even doing some secondary impact evaluations for Oz-Aid. This is one of the ways that I got a foothold into the humanitarian aid sector – an area of work that still gives me great pleasure, challenge and opportunity.
I no longer work directly with evaluation. Occasionally I’m asked to help train people in how to collect evaluation information using facilitated approaches. If you’re interested you can read about it here. Our training was influenced strongly by the utilization-focused evaluation work of Michael Quinn Patton. This week Chris Corrigan posted links to developmental evaluation, created by the very same Michael Quinn Patton. His comment that “this is the first thing I have seen on evaluation that has got me excited about the connection between complexity, systems thinking and change” was enough to spark my interest.
I actually sat up and took a lot more notice when I read this slide in Michael Patton’s slide set about developmental evaluation. (Okay – it’s not an Insanely Great slide, but I really like the message.) Here are some of the key points that I gleaned from the Practitioner’s Guide to Developmental Evaluation.
Developmental evaluation supports real-time learning in complex and emergent situations. The evaluator is part of the team making real-time decisions based on feedback, rather than reporting to an external authority. It relies on collaboration and innovation.
Developmental evaluation isn’t always needed – if cause and effect relationships are clear, then traditional evaluation is good enough. However, in those situations where the environment is always changing, it’s difficult to plan or predict, it’s socially complex and innovation is called for, then maybe developmental evaluation is recommended.
Traditional evaluators may not be the best people to do developmental evaluation (and here’s where there are some interesting links with facilitation). Developmental evaluators need to have some facility with strategic thinking, pattern recognition, relationship building and leadership. And, community connectedness, curiosity, appreciativeness and a whole bunch of other skills that I’d list under the heading of facilitation skills.
And I love the answer to this question: How is developmental evaluation practiced?
Any way that works!
There is no prescribed methodology. Again, like facilitating. I do like this list of types of developmental evaluation interventions: asking questions, facilitating, sourcing or providing information, mapping and modeling, pausing, reminding and match-making.
The more I explore developmental evaluation the more links I see with facilitation. And as I reflect on the Noam Chomsky quote above, I’m reminded that maybe it’s also time for a new way of looking at facilitation – recognising that traditional forms of facilitation have helped us get this far, but now we need something different to deal with the complexities and challenges of the world we now find ourselves in.
I was thinking of removing the word evaluation from my header. Maybe I’ll leave it there for a little while longer.
The brief seemed fairly straightforward: design and facilitate a conference on evaluation of behaviour change. Oh, and make it different, groundbreaking even… unlike all the other conferences around. And could we also explore complexity? And could it be self-catered, by students? Could it be the sort of conference people talk about in the future? Not so straightforward.
So we designed Show Me The Change. At times it was hard to break away from convention, from expectations, to say no to keynote speakers and pre-prepared agendas. I’ve written before about how much harder it is to be disruptive than to be conventional.
Whether we disrupted the field of behaviour change evaluation remains to be seen. We know we gave it a good shake.
And to continue the conversations, Geoff Brown has done a great job in capturing the conference story. It’s an awesome multi-media summary and is on the web site for all to see. If you haven’t seen it yet, go here to check it out. Now that’d be the sort of conference I’d go to. And it just might be the sort of conference report that someone would read.
Regular readers will know that I’m involved in a national conference on evaluation of behaviour change called Show Me The Change. I’m excited about this conference for many reasons, but evaluation and behaviour change are not high up there. What? I’ve been known to dabble in evaluation. I’ve even been known to dabble in behaviour change. And they make fascinating areas of study. The reason I’m really excited about this conference is that it’s an opportunity to take part in a radical departure from traditional conferences, and to demonstrate how meaningful a conference experience can be while allowing the participants to do what they do so well – connect with each other, talk about their own experiences of evaluating behaviour change (successes and failures alike), and come away with renewed insight, inspiration and ideas.
I’m not one to get too excited about many conferences these days. That’s because many conferences follow a predictable pattern – ‘high profile’ keynote speakers, panels, Q & A, and workshops selected months in advance by a steering committee – and are too expensive to attend.
I remember a turning point for me and conferences. It was an international conference on community engagement, in Sydney or Brisbane – I can’t quite remember the specifics. I paid my own way, as is always the case when you have your own business. It wasn’t cheap: more than $1000 for three days, plus accommodation and travel. There were lots of speakers. So many that they were jam-packed three at a time into one-hour slots. You do the arithmetic. There was lots of very slick organisation. And lots of very boring PowerPoint. There was no engagement. I kid you not. The only real engagement happened in the breaks, and there were so many people spread over a very large, soulless venue that it was just about impossible to find the people you wanted to speak to. I also noticed a number of speakers who turned up for their session and then disappeared. So much for connection. But I can’t blame them really, because the form was not at all conducive to anything other than reinforcing traditional patterns of hierarchy and status.
Is it a risk to depart from this traditional approach? I guess it depends on your perspective. And seeing as Show Me The Change is about evaluation, I guess it depends how you measure success. So is success at conferences measured by the number of bums on seats? By profit? By the ‘VIPs’ it attracts? By the keynote speakers? Or by some less tangible measures? The ideas shared? The connections made? The collaborations that ensue?
What makes a successful conference for you?
Good question. I first heard about liminal space from Patti Digh and David Robinson of The Circle Project. Among other things, they work with corporate America to surface and tackle racism. Now if there’s one difficult subject for behaviour change, then racism has to be up there. They describe the work they do as “helping individuals, organizations, and communities create new patterns, new stories, new cultures.” Sounds like it could be another way of describing behaviour change.
So maybe there’s something useful in understanding liminality – or maybe not? You be the judge.
Liminal space is described (rather unhelpfully, I think) as the ‘space between’.
If you want to do some research yourself on this topic, here are a few options.
The Journal of International Political Anthropology, July 2009, (and you thought I was just a facilitator) is completely devoted to liminality. In the introduction, Liminality and Cultures of Change, the editors write “This issue is concerned with the concept of liminality, a major concept in cultural and social anthropology whose importance for the understanding of wider processes of social and political change has been understudied so far.”
And then there’s this article written by Charles La Shure titled ‘What is Liminality’.
Here’s the crux of what I understand about liminality and why it’s important to behaviour change (caveat – these are my own thoughts). If we’re serious about changing the way people act we have to consider that people act a certain way because of habit. I can only speak for myself, but I know that more information will not do the trick. In fact, Seth Godin wrote about a similar thing recently (albeit in relation to marketing). Here’s what he wrote about Too much data leads to not enough belief:
“Business plans with too much detail, books with too much proof, politicians with too much granularity… it seems as though more data is a good thing, because data proves the case. In my experience, data crowds out faith. And without faith, it’s hard to believe in the data enough to make a leap. Big mergers, big VC investments, big political movements, large congregations… they don’t usually turn out for a spreadsheet. The problem is this: no spreadsheet, no bibliography and no list of resources is sufficient proof to someone who chooses not to believe. The skeptic will always find a reason, even if it’s one the rest of us don’t think is a good one. Relying too much on proof distracts you from the real mission–which is emotional connection.”
Now I don’t profess to know what to replace yet more information with – although I do know that building relationships and emotional connection is part of the answer – but I do know that if I’m going to change my behaviour, be it in relation to the environment, health, safety or whatever, I have to first LET GO of what I’m currently doing. That can be hard. Even in the face of overwhelming evidence, my habits are, well, MY habits. I own them. They are a part of who I am. They are a part of my character. They make me feel safe (if not be safe), and they are predictable.
When you’re asking me to change a particular behaviour (even if it’s for my own good, or for the well-being of others, or even the planet) you’re asking me to let go of something familiar and take up something unfamiliar. That space between letting go and grabbing on to something new is called liminal space. You’re asking me to enter a space of unknowing, of uncertainty and of change. Is it any wonder I’m reluctant?
I’m more likely to enter liminal space if I think it’s OK, if I feel safe, and have some idea of what I’ll be grabbing onto. Think of it this way. If you were a trapeze artist, would you let go of the bar if there was no safety net and no-one on the other trapeze to catch you? Or if the trapeze is a bit of a stretch for you, think of monkey bars at the playground. Spend some time watching kids playing on them. There you can see liminal space in action. It’s not possible to make any progress on monkey bars unless you let go of one bar before grabbing hold of the next one. In fact, that’s probably an even better analogy for behavior change, because on the monkey bars, you usually hedge your bets – holding on to the previous bar with one hand while grabbing the next one with the other. Sooner or later though you STILL have to LET GO to progress.
So in behaviour change programs, what are you asking people to let go of and how are you supporting them in liminal space?
This article was originally written for the Show Me The Change blog. If you’re interested in behaviour change, complexity and the art of evaluation you might want to look at the Show Me The Change Conference in Melbourne, May 4 – 6. Invitation here.
Lake Superior University Banished Words List 2009
Web tools for qualitative data analysis from Michael Wesch at Kansas State University
2008 The Year in Review from the folk at JibJab
Les Posen on 2009 being the year when presenting well comes of age. I hope he’s right!
A short video that explains Net-Map, a networking and power mapping tool for communities developed by Eva Schiffer.
How does facilitating vary from other meeting roles – such as chairing, mediating, moderating and emceeing? While there may be some elements of these other roles in facilitation, the main difference is one of intention. Facilitators need to be clear about their intention – why they are facilitating at all and who for, why they are using this particular process or activity, and what experience they intend the group to have. This is very different to having pre-conceived outcomes and takes courage to let go of what you think the group may need to do and take your cues from the group itself.
Why then have a facilitator at all?
I think it’s about changing perspectives and giving people permission to go into unexplored territory. It can be hard for a leader of a group or organisation to signal that THIS meeting will be different, when everything remains the same. Participants may fall into habitual behaviour simply because ‘this is how it always is’. Bringing in a facilitator from outside the group or the organisation signals that THIS meeting is indeed different. For one thing, someone else is leading it.
So that’s a big responsibility for a facilitator – to use that ‘endowed power’ to be useful for the group.
Sometimes it’s simply a matter of setting the scene, asking a few questions and getting out of the way. Other times we need to shift perspectives or the frames through which people view their worlds. To be able to do this requires far more than the traditional facilitator toolkit of processes and activities. It requires an ability to see what might be helpful for the group – at this particular time – through a broad lens. Which means facilitators also need a kit-bag full of models and frameworks; to sometimes share as a way of illuminating particular behaviours and dilemmas the group might be exploring, or more often, to illuminate their own thinking and understanding of what the hell is going on!
And that old saying is ever so true: the map is not the territory. Like maps, frameworks and models can be useful guides but are no substitute for the real thing.
Here are some of my favourite models and frameworks.
Splash and Ripple
Developed by Plan:Net in Calgary, Canada, this model helps not-for-profits plan outcomes-based projects. You can read about it here in this pdf document Splash and Ripple: Planning and Managing for Results. I’ve found it useful in those situations where groups entangle themselves in planning language and arguments get in the way of any actual planning. It’s basically a metaphor based on inputs (people and rocks), activities (throwing rocks), outputs (splashes), outcomes (ripples) and impacts – what difference all those inputs, activities, outputs and outcomes actually make. There’s more to it than this, of course, but it’s useful to have this framework handy for those times when people talk about activities as outcomes. In many of the not-for-profit and government agencies that I work with, activity-based thinking is the norm and it takes a different lens for them to see their activities in a broader context.
Bennett’s Hierarchy
This is a program logic model, and one that I like using because it is people-centred. Bennett’s Hierarchy was developed by Claude Bennett of the USDA to help describe what was expected in agricultural extension programs where some sort of behaviour change was wanted. It is based on seven levels and describes a causal chain of events. It can be used for planning and for evaluating a program.
The Cynefin Framework
This framework explores the relationship between that which is simple, complicated, complex and chaotic – and importantly, how we might respond. A lot of my work is with groups who are operating in a complex environment. Often they will try to use tools from the simple and complicated domains to make sense of that which is complex. Using this framework can help groups explore other ways of making sense of the complex worlds in which they operate – and do so more effectively. The drawing of the framework used here is courtesy of Anecdote.
The Groan Zone
When a group is struggling the temptation is to jump in and help them. After all, isn’t that what facilitators are supposed to do? Make it easier for groups? Not always. From Sam Kaner’s Facilitator’s Guide to Participatory Decision-Making is the concept of the Groan Zone – a phase that a group needs to work through, sandwiched between divergent thinking and convergent thinking. Holding space for a group to struggle is some of the most important – and difficult – work a facilitator can do.
Regular readers will know that I attended this year’s Applied Improv Network (AIN) conference in Chicago in October. Using SurveyMonkey we asked participants to respond to a feedback survey so we could continue to improve on previous conferences. After all, that’s why you get feedback, isn’t it?
Today I spent the morning analysing the data. There’s some basic quantitative data, but it’s mostly qualitative – comments, reflections, likes and dislikes. I’ve put the analysis together in a slide show, which you can see here. I’d be interested in your comments as this is the first time I’ve presented evaluation results this way.
I was listening to comedian Rod Quantock on the radio today talking about reviews. He said he never reads reviews because if he reads the good ones, then he’d have to read the bad ones too – and they can hurt.
Of course there are differences between reviews and evaluations. Reviews are often done by one person, read by many; evaluations may be done by many, read by few. The effect is similar – encouraging others to attend a favourably-reviewed performance; encouraging others to employ (or re-employ) someone who receives favourable evaluations. Or continue with a project. The problem with many evaluations is that they are not used at all. Done to satisfy some policy, and then forgotten. Pity really, because good evaluations can really help with learning and improvement.
That’s the main reason why I tend to evaluate ‘on the run’ – using some processes to assess how things are going so I can modify in the moment. It’s a bit late to discover things were going pear-shaped after the event. Oh, and if you’re not aware that things are going pear-shaped, you’re probably in trouble anyway.
Even when someone insists on a written survey for evaluations, I still use my own participatory processes at the end of a workshop because it gives me invaluable information, helps share the learning amongst the participants, reinforces what we’ve done (and learnt), is fun, and is a good way to bring an event to a close.
Here are my current favourite approaches – I usually use all of these and encourage people to self-select which one they want to do so they can evaluate in a way that suits them. Each approach is posted on the wall on flip-chart paper so they are quite visible. I also get the small groups to report back in the order below. It always ends with a real buzz amongst the group.
1. ORID Report
Facts and figures – Reactions – Significance – Difference it’s made
2. Three Bears
Woah! Way too much!
Please, can we have some more?
3. Our Story Spine Story (hat tip to Kat Koppett)
Once upon a time…
But one day… (we came to this workshop/training etc.)
Because of that… (x n)
And ever since then…
And the moral of the story is…
PS: It’s really important to do the moral of the story – this usually captures the essence of the workshop in a single statement and invariably gets a satisfied cheer.
4. Visual Explorer
On a blank sheet of flip chart, Visual Explorer cards are used to tell the story of the event.