A while ago I listened to the audiobook version of Margaret Heffernan’s ‘Wilful Blindness’. Through the chapters of this thought-provoking book, Heffernan helps us explore the range of reasons why we choose to ignore what we would see to be true if we chose to look objectively.
Wikipedia tells us that the term ‘wilful blindness’ is “used in law to describe a situation in which a person seeks to avoid civil or criminal liability for a wrongful act by intentionally keeping himself or herself unaware of facts that would render him or her liable or implicated.”
Why on earth would this be relevant to learning and development professionals?
As we consider how we can best use data and evidence to inform our practice, I feel that we should admit that, on occasion, we choose to keep ourselves intentionally unaware of facts. In recent times, we’ve been encouraged to consider our unconscious biases. Perhaps now we should look more widely at some of the ways in which we are prone to use (or not use) the information that we gather during learning needs analysis, learning design, the delivery of interventions and the evaluation of our success.
I know that most of us strive to use best practice, and we would love to be able to create programmes that address the true needs of our organisations and those of our clients. However, I fear that we are still required to act pragmatically within the constraints of our organisational and societal learning cultures so that we can contribute as much as possible.
Learning needs analysis (LNA)
At the LNA stage we often receive information via line managers during the appraisal / performance review process and are asked to compile a ‘training calendar’.
At this point we will ask a range of questions to ensure that the needs are actually learning needs and not a different type of need, such as a process that needs to be reviewed. Sometimes, though, we might be pressured into providing a learning solution when a different solution would be more appropriate.
Sitting alongside our analytical self, our inner survivor may also come into play at this point and we may be influenced by a fear of doing ourselves out of a job by not having any training programmes to offer.
And so, we may choose to ignore some of the evidence that tells us that training is not the answer…
Instead we should be asking ourselves: What does the data tell us? What does best practice tell us? What do our hearts and emotions tell us? Where do we focus our attention and what informs our next action?
Learning design
At the point where we begin to design a learning intervention, we are often influenced by the resources available to support that intervention, including available time and financial investment. This may lead us to design something that fits the resources, rather than a piece of learning that will actually lead to behaviour change.
I am definitely open to correction on this one, but I do wonder to what extent a short online ‘training course’ with a multiple-choice test at the end will influence a learner to reconsider their discriminatory behaviour or the way in which they move and handle heavy items.
Again, we should be asking ourselves: What does the data tell us? What does best practice tell us? What do our hearts and emotions tell us? Where do we focus our attention and what informs our next action?
Delivery
I have spent many happy hours in a training room setting. In that setting I often see smiles and hear laughter. Sometimes I see folded arms and grimaces too! As a human being I know that I respond more positively to the smiles, and therefore I might be prone to act in ways that will precipitate these responses. Having said this, I also know that much of what we learn comes at times when we are uncomfortable and not necessarily happy and smiling.
In the training space, I have to be careful not to be too concerned with what the ‘happy sheets’ say at the end of the workshop and try to focus on the longer-term outcomes relating to learning and behaviour change. Sometimes this is easier said than done, especially when I am being measured solely on learner reaction. Which leads on to…
Evaluation
Donald Kirkpatrick’s four levels of evaluation (reaction, learning, behaviour and results) have helped us to bring measurement into the dialogue that we have with our organisations and clients about the success of a learning intervention.
I wonder how many of us can put our hands on our hearts and say that we always evaluate our learning programmes at every level. Do we really know what difference we are making to organisational success, or is this still a holy grail?
How can we persuade our organisations and clients that attendance numbers and high scores on the end-of-workshop evaluation form are only partial measures, and will not accurately predict the on-the-job performance of our learners?
Let us ask ourselves: What data is available? What does it tell us? Is it accurate? Is it valid? Is it reliable? Is it consistent? Is it robust? Is it reproducible in a range of settings?
What should we report back about the efficacy of what we have delivered?
Taking off the blinkers
I’m not sure that, as emotional beings, we will ever be able to be 100% objective.
Perhaps remembering to ask ourselves a few simple questions might help us to be a little more detached and therefore more impartial as we make important decisions. Here are a few questions that might help:
Am I seeing what is really there or am I interpreting what I observe so that I see what I hope is there? What would a video camera see?
What am I choosing not to see because it feels too uncomfortable?
To what extent would others in different parts of the organisation believe this to be true?
How much are my emotions influencing my decision-making at this point?
Am I feeling that my position / status, my sense of contribution or my feelings of being in control are under threat? What impact is this having on how I am viewing the information presented?
What would I do if all I was concerned about was the end result for the organisation?
What decision would I make if the learners were all that mattered?
I hope that these questions prove useful and I would love to hear about other ways you are maintaining distance and independence as you make learning-related decisions.
Interested in exploring this topic further? Read ‘Do we always need learning analytics – or is gut instinct a valid approach?’