Tuesday, December 16, 2003

Should training ever be mandatory?

There is no absolute on this issue. Like all things in training and education, "it depends" on a host of factors.

The fact that some people vociferously criticize the training they were mandated to attend cannot be taken as valid criticism of the training itself. The majority of people probably go into mandated training with some resistance if they are not pre-sold on the goals and benefits. Those most resistant will usually find fault with the training -- the trainer, the content, the context, and the learning design will all come under fire. Others will come out of the experience enlightened and appreciative.

Maybe our inherent dislike for being forced to participate in training is a throwback to our school days. How many of us appreciated mandatory attendance of classes at school? How many of us ran down the teacher, the content, the relevance of the learning, just to rationalize our desire to be elsewhere? Where schooling is not mandatory, a nation declines. The same may be true of training in a company. A corporation has goals, the achievement of which requires certain competencies. Each employee cannot be expected to automatically align his/her personal training needs with those of the company, and there are bound to be fundamental disconnects.

This is particularly true in times of change, and it may well be that those individuals most resistant to change are the ones least likely to participate in required training voluntarily.

Thursday, December 04, 2003

Sample size and statistical significance

How many people do you need to survey to get a significant result?

For statistical significance (in statistics, "significant" has a very specific meaning), you need to use a valid sample size. You also need to use a valid methodology for selecting who goes into your sample.

As a rough rule of thumb, your sample should be about 10% of your universe, but not smaller than 30 and not greater than 350. If you are doing multivariate analysis, the sample should be ten times the number of variables you are testing.

If you want to be more pedantic, you should define what confidence level you want and what margin of error is acceptable to you. A confidence level of 95% and an error margin of 5% tell you that your result will be within 5% of the true answer 95% of the time you run the survey. So if you ran the survey 100 times, about 95 of those runs would return a result within 5% of the truth.
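That interpretation is easy to check with a quick simulation sketch. Everything concrete here is an assumption chosen for illustration: a true proportion of 0.5, a sample size of 384 (roughly what a 5% margin at 95% confidence calls for in a large universe), 1000 repeated surveys, and a fixed random seed.

```python
import random

random.seed(0)    # fixed seed so the sketch is reproducible

TRUE_P = 0.5      # assumed true proportion in the population
N = 384           # assumed sample size (approx. 5% margin, 95% confidence)
TRIALS = 1000     # number of repeated "surveys"

within = 0
for _ in range(TRIALS):
    # Draw one survey: N independent yes/no answers.
    hits = sum(random.random() < TRUE_P for _ in range(N))
    estimate = hits / N
    # Did this run land within 5 percentage points of the truth?
    if abs(estimate - TRUE_P) <= 0.05:
        within += 1

print(within / TRIALS)  # should come out close to 0.95
```

Roughly 95% of the simulated runs land within the 5% margin, which is exactly what the confidence level promises.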

The correct sample size is a function of those three elements--your universe (how many people make up the group whose behavior you are trying to represent), your desired error margin, and your preferred confidence level. It's a simple formula (well, not so simple). For most purposes, I'd go for a 10% error margin at 95% confidence. For varying numbers of learners in your universe, here are the ideal sample sizes (the first at a 10% error margin, the second at 5%):

50 in the universe, sample 33 or 44
100 in the universe, sample 49 or 80
200 in the universe, sample 65 or 132
500 in the universe, sample 81 or 217
1000 in the universe, sample 88 or 278
and so it goes till you find that an ideal sample for a 10% error margin hardly moves above 96 no matter how big the universe (it's about 384 for 5%).
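The numbers above can be reproduced with the standard sample-size calculation plus a finite-population correction. Here is a sketch in Python; the post doesn't spell out the constants, so note that z = 1.96 (for 95% confidence) and p = 0.5 (the most conservative guess for the proportion being estimated) are assumptions:

```python
def sample_size(universe, margin=0.10, z=1.96, p=0.5):
    """Ideal sample size for a finite universe.

    z = 1.96 assumes a 95% confidence level; p = 0.5 is the most
    conservative proportion estimate. Both are assumptions, not
    stated in the post.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2     # infinite-universe size
    return round(n0 / (1 + (n0 - 1) / universe))  # finite-population correction

# Reproduce the table: universe, sample at 10% margin, sample at 5% margin.
for universe in (50, 100, 200, 500, 1000):
    print(universe, sample_size(universe, 0.10), sample_size(universe, 0.05))
```

The first term, n0, is the ceiling the corrected size approaches as the universe grows, which is why the ideal sample stops moving once the universe gets large.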