Saturday, September 16, 2006

I apologise for taking a few months’ hiatus from Parkin’s Lot without sticking a “Gone Fishing” sign on the blog.
I’d like to thank all of those people who were concerned enough at my prolonged absence to e-mail me and ask if I was still breathing, and for all the kind comments from those who missed my posts.
I’ve been busy road-showing the learning evaluation strategy approach and doing a lot of “Marketing 2.0” work (or should that be 3.0?), while simultaneously changing lifestyles, companies, and countries. And no, George, I was not renditioned by the agency for my occasional caustic comments about management in corporate America.
I am also now in a land where access to the web is erratic and absurdly expensive – how does $300 a month for 300kbps “broadband” access sound? You can get it a little cheaper, but then you have to live with the system shutting you down once you hit your total bandwidth usage cap of 3GB a month. Welcome to the world of state telecom monopoly, where through ignorance, greed, indifference, or complacency the government gets to handicap the growth of the economy and education by effectively denying it access to what elsewhere has become a knowledge utility.
(OK, so it’s not a total monopoly – I can get 100kbps on my “3G” mobile phone and pay a mere $1 for every 3MB transferred either way.)
But I have found a way forward at last, so will shortly be back to my usual unpredictable posting regime.
Tuesday, April 04, 2006
Learning Evaluation: useless without a strategy
In an interview this week with the UK's TrainingZONE, Martyn Sloman of the CIPD (Chartered Institute of Personnel and Development) made the astonishing assertion that it is not necessary to evaluate training. His statement that “If you’re properly aligned to the business needs and the organization recognizes the value of the training and development there shouldn’t be any need to be obsessed with the figures after the event” expresses a sentiment we’d all like to share, but it is not a realistic one.
In order to know that you are continually aligned and to get the organization to clearly perceive your value, you do have to measure the outcomes of your activities on an ongoing basis. Even if you are aligned and if the organization recognizes your value, you still need to keep track of that status in order to be able to adjust your performance and keep those perceptions unshakable.
Now it is true that not a lot of companies bother with evaluation much beyond the smile-sheet level. The American Society for Training and Development (ASTD) does a ‘state of the industry’ study every year.
In 2004, 74 per cent of US companies evaluated training at Kirkpatrick’s level one (reaction); 31 per cent at level two (learning); 14 per cent at level three (behavior); and only 8 per cent at level four (results).
In my experience, evaluation as organizations generally practice it is not worth the time and effort that goes into it. It is typically poorly conceived and executed: it measures the wrong things in the wrong way, is subject to significant errors of interpretation, rarely produces actionable or meaningful information, and is hardly ever adequately communicated to decision-makers.
On that basis alone, I’d agree that evaluation, as it is normally carried out, should simply be terminated. OK, keep scaled-back smile sheets as an ego stroke for classroom trainers, but ditch the rest.
Unfortunately, companies have reporting requirements, and, like other departments, training departments are under pressure to produce data demonstrating that the money being spent has some kind of positive payback. You can’t do that with smile sheets. Nor can you do it with the in-vogue, assumption-laden ROI approach, which, to anyone who knows anything about statistics, is specious at best and fatally flawed at worst.
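To see just how assumption-laden, consider the arithmetic. The standard ROI calculation divides net programme benefits by programme costs, but the benefit figure rests on soft estimates, above all the fraction of any performance improvement you attribute to the training. Here is a minimal sketch, using entirely invented numbers, of how far that one assumption can swing the result:

    # ROI sensitivity sketch -- every figure here is hypothetical.
    program_cost = 100_000      # fully loaded cost of the training programme
    performance_gain = 400_000  # estimated value of the observed improvement

    # The "isolation" factor (the share of the gain attributed to training)
    # is usually an educated guess. Watch what it does to the ROI number.
    for attribution in (0.2, 0.4, 0.6, 0.8):
        net_benefit = performance_gain * attribution - program_cost
        roi = net_benefit / program_cost * 100
        print(f"attribute {attribution:.0%} of the gain -> ROI = {roi:.0f}%")

    # Identical data yields anything from -20% to +220% ROI, depending
    # purely on the attribution assumption.

The same programme can be reported as a money pit or a triumph without changing a single piece of underlying data, which is precisely why a statistician will not take the number seriously.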
As companies start to accept the changing nature of organizational learning, with an increasing emphasis on the informal, the challenge of evaluation just gets greater. Measuring the impact of formal training interventions is tricky enough, but with informal learning it’s hard to get a grasp of both the impact and the cost, to say nothing of the difficulty of knowing who was even involved.
In putting together evaluation strategies, you have to look at the “desired business impact” end of learning and work back down to the activities that are supposed to cumulatively result in that impact. Typically, trainers seek to do evaluation the other way around: measure as much as you can at the training activity end, then cut and run, leaving someone else to cobble together some aggregate ROI number a year later if it is really demanded.
This leads to the systematic, unnecessary collection of huge amounts of irrelevant data, with massive disconnects between what is measured and what needs to be measured. The hidden costs of badly done evaluation are enormous. Trainers and ISDs are simply not qualified to evaluate training, nor should it be part of their responsibility.
Depending on the training priorities of a company or department, I will usually advocate a strategy that minimizes conventional in-class, all-learners, questionnaire-based data collection, and makes use of well-established market research methods instead.
Use samples instead of evaluating everyone, measure bigger collective impacts instead of evaluating every course individually, and use your results to diagnose problems worthy of more detailed investigation. To do this, because you are stepping out of the classroom and into the business, you need to expand the mandate of the training department. That means you need top-level commitment to your evaluation strategy.
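To give a feel for the sampling arithmetic (a generic survey-sampling sketch, not a formula from any particular evaluation methodology), the sample needed to estimate, say, the proportion of learners applying a skill on the job stays small even as the learner population grows:

    import math

    # Sample size for estimating a proportion, with finite-population
    # correction. Margin of error and confidence level are illustrative.
    def sample_size(population, margin=0.10, z=1.96, p=0.5):
        n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    for learners in (100, 500, 5000):
        print(f"{learners} learners -> sample of {sample_size(learners)}")
    # 100 -> 50, 500 -> 81, 5000 -> 95

In other words, a well-chosen sample of under a hundred people can stand in for thousands, which is what makes stepping out of the classroom affordable.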
You can, for example, get very actionable information about at-work application from a couple of dozen learners in focus groups, rather than having hundreds fill out Likert-laden questionnaires. You can use mystery shopping techniques to confirm actual implementation of, say, customer service skills.
For a very convincing demonstration of business impact you can simply “data-mine” information that has already been collected. For instance, in evaluating selling-skills training, you can use small samples of salespeople to compare before-and-after sales performance, and to compare changes in performance between a control group and a group that has been trained.
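As a minimal sketch of that comparison (the numbers are invented, and the choice of a simple two-sample t-test on per-person changes is my illustration, not a prescription): work with each salesperson’s before-to-after change so that pre-existing differences wash out, then test whether the trained group’s average uplift exceeds the control group’s.

    from statistics import mean
    from scipy.stats import ttest_ind  # assumes SciPy is available

    # Hypothetical quarterly sales (in $000s) pulled from existing records.
    trained_before = [210, 195, 240, 205, 220, 230]
    trained_after  = [245, 230, 265, 225, 250, 270]
    control_before = [215, 200, 235, 210, 225, 228]
    control_after  = [220, 205, 238, 212, 230, 226]

    trained_delta = [a - b for a, b in zip(trained_after, trained_before)]
    control_delta = [a - b for a, b in zip(control_after, control_before)]

    print(f"mean uplift: trained {mean(trained_delta):.1f}, "
          f"control {mean(control_delta):.1f}")
    t, p = ttest_ind(trained_delta, control_delta)
    print(f"t = {t:.2f}, p = {p:.3f}")  # a small p suggests a real effect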
Arguing the business case for training is like making a case in court: you assemble the evidence and present it in such a way that it makes a very convincing argument.
You don’t need to produce a smoking gun; a single ROI number, by contrast, can easily be discredited. For those C-level reports in which you justify your ongoing existence, you can combine the approaches above with an array of key performance indicators that reflect, in whole or in part, the impact of the work of your department.
Present your evidence graphically in a dashboard, so that everything is on one page. If you set up your systems to update the dashboard monthly or quarterly so that trends can be established, evaluation can rapidly become an essential ingredient of senior management decision-making.
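A bare-bones sketch of such a one-page dashboard, using matplotlib with made-up KPI names and figures (your own indicators and reporting tools would of course differ):

    import matplotlib.pyplot as plt

    quarters = ["Q1", "Q2", "Q3", "Q4"]  # hypothetical reporting periods
    kpis = {  # invented KPI series, for illustration only
        "Skill application rate (%)": [42, 51, 58, 63],
        "Sales uplift vs control (%)": [3, 7, 9, 11],
        "Customer satisfaction index": [71, 73, 76, 78],
        "Time to competence (days)": [38, 34, 31, 29],
    }

    # One page, one panel per KPI, so trends are visible at a glance.
    fig, axes = plt.subplots(2, 2, figsize=(10, 6))
    for ax, (name, values) in zip(axes.flat, kpis.items()):
        ax.plot(quarters, values, marker="o")
        ax.set_title(name, fontsize=10)
    fig.suptitle("Learning impact dashboard")
    fig.tight_layout()
    fig.savefig("dashboard.png")  # attach to the quarterly report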
Far from being unnecessary, evaluation is strategically vital to the ongoing health and success of any training endeavor.
Tuesday, March 14, 2006
Why Trainers Need Selling Skills
Everything we do in business involves collaboration, problem solving, and negotiation, and you can’t do any of those without understanding the perspective of your counterparts and helping them all to get on the same page. Establishing a common perception of, and agreement on, the needs, constraints, and solutions is what vision building is all about. It is also a central skill in selling and in training.
Many skills, and knowledge itself, are depreciating assets – whatever I know today, and much of what I can do today, is likely to be irrelevant tomorrow. That doesn’t mean these things are not worth learning, because we all make a living in the present.
But there are certain core skills that serve us well throughout our lives because they are valid, irrespective of context. Those skills include many of the so-called soft skills that senior management dislikes spending money on because they are so hard to pin to a particular project: communication, analytical thinking, problem solving, decision making, leadership and selling. These are the skills that have the broadest impact and longest payback period in any organization, and for any individual. Selling skills in particular should be an enterprise-wide requirement.
Before I became one myself, I used to think that salespeople were about the lowest form of life in the enterprise pond. And there is a reason why so many salespeople are abhorred by their prospective customers – they do not understand their role and have not learned the skills needed to fulfill it.
One of the best ways to learn is to teach. I am forever grateful that my first manager, decades ago, was perceptive enough to make me put my money where my arrogant newly-graduated mouth was. If you are such a marketing know-it-all, he said, you can put together a training program to get all of your more experienced colleagues up to par.
That is when I started learning how little of real value I actually had in my head, and discovered how complex the real world can be. We were in the business of providing long-term market research and consultancy. The conventional wisdom in the company was that the more you knew about marketing and the markets, the better able you were to sell, so training had been focused on developing that knowledge. But the business results were mixed. Level Four results – business impact – are easy to gauge in sales training: if you are not signing contracts, your training has failed.
After accompanying a number of people on sales calls, it slowly dawned on me that the people closing deals had an intimate understanding not of the markets in which their prospects operated but of the prospects themselves. They sought to understand the people and their concerns and motivations, as well as the needs of their companies, and were able to comfortably hold penetrating conversations with them about those issues.
And often it helped to not know much about the particular market, because then the quest for understanding was real. Those who “knew it all” were less successful – they were show-and-tell salespeople, intent on impressing the client with their expertise, and focused on talking them into a buying decision. The best salespeople intuitively used a customer-centric process, had an unquenchable interest in learning from their clients, and sought to craft solutions that would work to mutual advantage.
That there are communication skill processes that can be defined, taught, and applied irrespective of context was a revelation to me at the time. Soon afterward I discovered commercial sales training packages that did a good job of helping people internalize and habituate those processes, and I have been a selling skills advocate ever since.
It is baffling to me that, in most companies, selling skills training is considered to be relevant only to sales people. Other employees may not be selling products, but they are selling ideas every day. In those companies where I have implemented programs for non-salespeople (for example project managers, creative teams, ISD people, or IT staff), the impact on their ability to achieve their own objectives while delighting their internal clients has been immediate. But often these programs have to be positioned with care, because most people do not see themselves as needing to learn how to sell.
Trainers (other than sales trainers) tend to be the last to want to develop their own selling skills. There is an almost visceral aversion to the very notion of trainers selling their services. This is based on the perception that selling is all about arm-twisting and pushing product.
But consultative selling skills are a long way from the techniques employed by sleazy used car salesmen and over-eager LMS vendors. If trainers, and training departments, were better skilled in dealing with their clients we’d see a lot less order-taking, more effectively conceived interventions, and a better class of service being provided. This would build the respect, credibility, and perceived ROI of the training organization. In turn, the role of the trainer as consultant would be reinforced. That is an upward spiral that can only be good for any organization.