Monday, June 13, 2005

A trainer's output is performance, not courses

In Kirkpatrick terms, most “learning professionals” don’t like looking beyond level two. Because it is generally considered too difficult to evaluate training once learners leave the classroom, they have a good excuse for setting themselves much more easily achieved objectives.

A couple of weeks ago, in a reputable training forum, I heard a trainer proudly declaring that his learning objectives were always cast in such a way that they would be fully achieved by the end of a course. I was appalled, and said so. My reservations about competency-based learning management aside, surely instructional designers and trainers seek to impact the performance of learners once they return to the real world? If that is true, instructional designers and trainers have a responsibility for the ultimate on-the-job effectiveness of their handiwork. And, if they have a responsibility, they should set objectives to know whether or not they are meeting that responsibility.

I was stunned by the number of people who came back at me as if I were mad, declaring adamantly (and somewhat self-righteously) that their responsibility for learner performance stopped at the door of the classroom or at the closing of the e-learning browser window. What happened after training was not their problem. If instructional designers as a group have such a limited perception of their importance to the business, then whoever is training them is doing a lousy job.

The question "what things will participants be able to do at the end of the session that they were not able to do pre-session" is a common cop-out that does not produce learning objectives. Rather, it produces a reverse-engineered set of often grotesquely constrained learning promises that have as much substance as those of a used-car salesman.

I am increasingly wary of any learning objective whose time horizon is only the end of a course. By the end of any learning experience, learners may have acquired some new competence. But within months that competence should have grown, been applied, and be achieving business results. That scope should characterise our learning objectives. To be satisfied with taking responsibility only for that which is directly within our domain, important as it is, ignores the fact that learning has a greater business purpose.

Setting objectives that do not look beyond the point where a learner hands in his/her smile-sheet is abdicating responsibility for the effectiveness of training. If course-end is the training performance horizon, then post-course drop-offs in recall or ability become acceptable to trainers as "not my problem."

You could argue that as long as the trainer has created an acceptable change between pre-training ability and post-training ability, then he or she has performed successfully.

But learning is not only about acquiring temporary knowledge and skills. Motivation is an important part of what a trainer should be doing. At the very least, we should design our courses and conduct them in such a way as to motivate people to want to learn, to want to keep learning, and to want to apply what they are learning once they leave the training environment. If that motivation is absent, we have not done our job.

Admittedly, there are many factors beyond the control of individual trainers that impact transferability of learning to the workplace. Management and systemic obstacles all affect the way people grow in their jobs. But trainers can’t simply ignore these. We should try to exert influence even if we have no overt responsibility. And if those obstacles are immovable, we should design our training to accommodate them and to equip learners to deal with them.

Simply saying "not my job" is an inadequate response in someone who has taken on the task of improving the performance of fellow employees. Trainers and instructional designers do not have total responsibility for a learner’s performance on the job, but I think they should share some of that responsibility. If their salaries, bonuses, or careers depended on it, we’d see much more effective learning experiences being developed.

4 comments:

Bill Bruck said...

Absolutely. While, as you say, trainers and IDs are not totally responsible for improved performance, our responsibilities cannot stop at the door. It's fair for our customers to ask what post-training reinforcement and coaching are built into our instructional design.

I think it's part of the paradigm shift we need to continue to promote - training as a process over time, not training as a one-time event. This is an example of the law of unintended consequences - as two-day ILT moved to two-hour WBT, we started thinking of training as a time-limited event far too often.

-bb

Bill Bruck (Q2Learning)
mailto:bbruck@q2learning.com
Collaborative Learning Blog http://q2learning.blogs.com
Join our CoP at http://cop.collabhost.com

jacqui fogarty said...

I cannot agree with you more. As a student studying training, we are only taught to make sure that the learners in our training courses meet all of our learning objectives, and that's it. This has always been a problem for me, because I do not see the point in training someone without making sure that they apply the information. If they do not apply it, the training was just a waste of time for myself, for the participants, and for the organisation.

Pam Marinko said...

Agreed. My question is, should designers include the post-work activities, the 10-day and 6-week follow-up exercises, and a schedule for feedback sessions as part of the original program? Typically I see these offered as an "add-on" by most vendors. If they should be included, then how should that be reflected in the pricing of the program creation? Thanks for the insight.

-pm

Pär said...

My 2 cents on this, having worked with training in a large company for many years: I see it as almost impossible for a teacher to 'get hold' of the students after they leave the training facilities. In fact, management have complained and seen it as advertising rather than as follow-up. On the other hand, I see it as the training department's responsibility to follow up on levels 3 and 4. In fact, that must be the strategy you build the training around. As for actually doing the post-measurement, that could be done by managers with support from the training department. I do this by showing the correspondence between the needs analysis (or JTA..) and a level 2 evaluation, plus a meeting (or correspondence) with the students' managers. The needs analysis should show where to look for improvements and changes in behaviour.
Regards
Pär Ljunggren (Learntech)