There is an unhealthy tradition in training that every course must end with participants filling in a questionnaire in which they express their feelings about their learning experience. We ask people scurrying for the door to take time to check boxes on multiple Likert scales and to scribble a few comments. Then we gather up the questionnaires and flip through them hoping desperately for a few feel-good moments of positive feedback, and finding good rationalizations for any negative scores.
Then, if the training organisation is vaguely professional, the questionnaire results get incorporated into a database where they are manipulated to produce trends and averages. Attention gets paid to those aspects that learners are clearly less happy with, and trainers with consistently good average scores acquire a sort of heroic status. Oh dear.
I can’t think of too many other fields of activity where we try to measure customer satisfaction immediately after the event each and every time. Does the waiter refuse to let you leave the restaurant till you have filled in the form? Do you get accosted by questionnaire-wielding cinema managers during the closing titles of a movie? Does your dentist insist that you check a few boxes before you are allowed to escape the chair?
I used to work with a training company that evaluated the competence of its trainers based on their smile-sheet score averaged across all questions and all participants in a particular session. Trainers would come in beaming with the news that they'd just run a 4.7, as if the number actually meant something (though of course it did -- their daily rate was pegged to their perceived ability to make learners happy, not to the impact that they had on the effectiveness of those learners once back on the job).
My background is in market research, so I know a thing or two about questionnaires and survey design. Smile-sheet questionnaires are often really badly designed, with ambiguous questions or counter-intuitive formats. The process is far from ideal, with tired respondents working in haste. And the environment is inherently invalid, with respondent-trainer relationships interfering with objective perceptions or honest responses. The data that comes in is badly flawed, and those flaws are magnified with every manipulation and interpretation. Garbage in, landfill out. Evaluating training quality by smile-sheets and then taking action based on that evaluation is naive and delusional.
Technical validity issues aside, we all know that smile-sheet scores are not a meaningful measure of training effectiveness. Nor do they provide any real insight into what, if anything, a participant has learned, or the impact that it might have on that person’s real-world work performance. They are at best a blurred snapshot of customer satisfaction at the time, nothing more. But we continue to use and revere them.
As trainers, we should have the sensitivity and curiosity to be continually taking the pulse of our trainees and adjusting accordingly. So why do we perpetuate the smile-sheet ritual? Is it because “management” wants some easy means of monitoring trainer performance? Or is it because learners have come to expect some kind of formal feedback process beyond the ability to speak up in class?
Whatever the motivation, we are simply not measuring the things that we are reporting on. There is a series of large conceptual leaps between what is actually measured (vague momentary happiness) and what we interpret as being measured (training effectiveness). The endless discussions about whether a 5-point scale is better than a 7-point scale, whether checking boxes is better than circling numbers, or whether semantic differentials (marvelous - - - awful) are better than numeric intervals (5 - - - 1), are all irrelevant if the core concept is inherently invalid.
If we really care about the quality of the learning service we are providing, we should put a little more effort into measuring and monitoring that quality. Quality is all about the extent to which we achieved the learning objectives, and that cannot be measured at the point of training. It has to be measured after the event, when the training is applied back on the job.
We should at least care enough to acknowledge that the smile-sheet is only a token gesture at quality control, and devise and implement systems or processes that are more incisive and significant. We pay lip service to Kirkpatrick’s four levels and wistfully wish it were easy to measure beyond level one. But that is rarely part of our brief, so we let it slide. How much easier would it be to do all of those ROI projections if we had at least some valid data to work with beyond the fact that we scored a smile-sheet 4.7 average in the courses we ran last year?
Original in TrainingZONE Parkin Space column of Jan 21, 2005
4 comments:
Some excellent thoughts on why the smile-sheet is not an effective, or even valid, reflection of training quality. However, you don't offer an alternative.
Therefore, I'll have to give you a 4.2~~~
Damn. There goes my bonus!
How many times have you left the restaurant and mailed in the comment card? I never did. There is a reason why we ask them to provide feedback at the end of the training event. Smile sheets are not the best way to measure training, but they are step #1 of any evaluation program. Dr. Kirkpatrick offered a four-step evaluation process. The smile sheet, or Reaction sheet, only tells you what it is supposed to measure (i.e. the environment). I agree with you that, if not done correctly, the smile sheet can be a waste of time. However, it is a pretty good tool for measuring a few things. I know some instructors take them too seriously when asking for a pay raise.
Mat
Does anyone have presentation material for the class assignment below?
You are to present, "How I will reinforce/support developmental training when the participants are back on the job."