Thursday, August 25, 2005

Evaluation professionals are undervalued

The evaluation of training is too important to be left to trainers.

Unless certification is involved, the quality assurance processes applied to formal learning initiatives in most organizations are rudimentary at best, whether at the individual intervention level, at the strategic enterprise level, or at any point in between.

That does not mean that the quality of training is poor, just that we have no real data to support our feeling that we are doing a good job. Training departments are usually stretched thin, and don’t have the time or resources to do a “proper” quality assurance job at either the course level or at the aggregate departmental level.

Informal evaluation is done by trainers, most of whom know whether an activity is “going well” or not. Formal feedback is gathered at course end, and emphasizes learner reactions, likes and dislikes. And, if the activity involves testing, there are always the scores to look at.

These are all important elements in assessing the quality of a learning experience, and they provide valuable feedback to a trainer. But they are not enough, not by a long shot.

When evaluation is left in the hands of trainers and instructional designers, its focus is too micro, too inward-looking. The purpose of training is to improve organizational performance by improving the performance of individuals and teams. Learning evaluation should serve that purpose, as a quality assurance tool. To do that, evaluation has to be pan-curricular, and it must adopt a higher-level perspective.

This “helicopter view” is hard to achieve if the responsibility for designing and implementing evaluation is too course-specific. Yet who has the time, or the mandate, to step back from a busy course development or training schedule and get strategic?

Only the largest firms have dedicated evaluation resources who know what they are doing and have the credibility to influence policy. And even those resources are being imperiled by the inroads the LMS is making.

Does it matter? There are several reasons why it does.

Implementing a regimen that elevates the strategic importance of evaluation (across all levels) and puts it on a more professional footing will do two vital things. It will significantly improve the effectiveness and efficiency of all learning activities, and it will save a tremendous amount of unnecessary, unhelpful, or redundant work.

My fear is that with the advent of LMS-based evaluation and record-keeping, the information we have about the quality of our learning activities is becoming more narrowly focused, and its usefulness is becoming further diluted. Just as LMS functionality tends to constrain the nature of our design of instruction, it constrains the nature of our inquiry into its impact.

I’d like to see more training departments creating evaluation units and staffing them with a trained expert or two who can help get past the simplistic "smile-sheet & ROI" approach and start building systems that put the important issues on the dashboards of individual trainers, instructional designers, and senior learning managers.

Some LMS tools claim to be able to do just that. But, as with all tools, without a trained and committed hand to guide them, they simply don’t get used.

Just as most of us never use more than 5% of the potential of our spreadsheet software, we never realize the potential of these emerging tools. Automation was supposed to help us do things better; in reality, it often makes us complacent. We dumb down our expectations, dumb down our evaluations, and ultimately dumb down the business impact of our training endeavors.

The “hot career” of the past five years was Instructional Systems Design. I’d like to see companies valuing Learning Evaluation professionals as highly. They can contribute substantially to the quality of training, and to the business results that it achieves.


Original in TrainingZONE Parkin Space column of 19 August 2005
