Judging by the e-mails I received, my recent post about smile sheets clearly struck a chord – and a few raw nerves. Most reactions seemed to agree with my basic contention that end-of-class happy-sheets are abused almost universally. They are often interpreted to mean what they could never have measured, and are at best useful for scoring the food and the venue – and as a resource for promotional quotes. They have little to do with the quality of learning.
At the same time, nobody can contest that we need to formalise a feedback and review process in some way. I’d like to look at this in a little more detail, wearing my market researcher’s hat.
To be useful, any survey has to have clear objectives that are measurable, actionable, and meaningful. It has to be structured so as to maximise respondent engagement, minimise ambiguity and confusion, and allow for un-heroic interpretation of results. The environment in which you conduct the survey has to be conducive to the kind of thinking you are asking respondents to do, and the questions, their wording, sequencing, and scoring mechanisms must be designed to minimise bias; leading questions, in particular, should always be avoided.
Here are some thoughts:
Objectives: What do you really want to measure, and why? Work backwards from the decisions you need to make, to the information you need to support those decisions, to the questions you need to ask to get that information. If you need to make decisions about future caterers or menus, then do a meaningful study that will help you make the right decisions – don’t just ask a generic question like: “Rate the food from one to five.”
If you want to evaluate how well the course met learner expectations, in order to fine-tune either course content or marketing positioning, do an appropriate study that gets inside the issue enough to be useful. Don’t just ask learners “How well did the course meet your expectations?” and expect an aggregate score on a Likert scale to tell you anything actionable. Above all, avoid gratuitous questions that are nice to know but not actionable.
Structure: In any research, you have to respect respondents’ time. A blank questionnaire may contain more accurate responses than one filled out in haste! So make sure learners have the time to do the job you are asking of them. And make sure your questions are structured and sequenced to minimise confusion.
Don’t jump from a ten-point numeric scale to a five-stage semantic differential if you can help it. Ask questions that learners will understand and will see the purpose of. Always test any questionnaire on a sub-sample of people who have not been involved in its design, because your own insight into what you are looking for can blind you to weaknesses in your wording and structure.
Interpretation: The area where most nonsense creeps into a study is in the interpretation of the data. If you have structured the study well, interpretation should be easy and obvious. If not, huge conceptual leaps are needed to reach the conclusion you are looking for, and those leaps are loaded with bias and wishful thinking. Suppose, for example, that you have asked learners to rate the trainer’s presentation skills and everyone gives a low score. What are they talking about? Vocal projection? PowerPoint design? Body language or eye contact? Content or structure? Pace? Audience involvement or lack of it? Attitude? Since you have no idea what the score means, you will interpret it according to your own beliefs, prejudices, or defence mechanisms.
Another example: you have asked learners to rate how well the course met their expectations, and you get a good score. Does this mean it is a great course? Or does it mean they had low expectations? It is possible that their expectations were quite different from what the course objectives assumed they were.
Environment: If you start with the objectives of your study, you will probably find that one single questionnaire administered at the end of a course is quite inadequate. The environment at course close is simply not conducive to the kind of reflection or honesty that you may need. Relevance and effectiveness may be better tested a few weeks later, when learners have had to apply what they learned on the job. A survey of all learners may similarly not be necessary – you can get really good information by pulling a sample of learners from multiple sessions of the same course into a focus group for an hour or two, a month or so after their learning experience. If nothing else, such a session might reveal issues that you never thought of asking about.
Some aspects of learning effectiveness may be best tested in real time at the moment of learning, during a session where learners are trying to grasp what is being taught. As trainers, we should have the sensitivity and curiosity to be continually taking the pulse of our trainees and adjusting accordingly. Within the constraints of the schedule, good trainers do try to be wired into learner feelings and to react where possible and appropriate. But many courses are not designed for flexibility and are often conducted under such time pressure that “nurturing” is out of the question.
Perhaps we need to add another level to the Kirkpatrick model – level 0, measured at the point of interaction. But what tools would we use?
Original in TrainingZONE Parkin Space column of 4 Feb 2005