Thursday, December 08, 2005
There is no doubt in my mind that the activity of corporate learning will be around for a long time, but the role of the corporate trainer in that activity is becoming increasingly unimportant. This may not be as apparent in the UK as it is in the US, but since most American organizational evolutions (good and bad) eventually find their way across the pond, the potential demise of the corporate trainer is worth taking seriously just about everywhere.
What evidence do I have to support the contention that many trainers’ careers may be in jeopardy? First, I look at my own personal experience of corporate training over the past three decades, in which I have worked as an outside consultant to dozens of big companies in Europe, the US and around the world. I have seen the scope and scale of training activities of an awful lot of corporations, and I know how rapidly those empires are shrinking. The days of “corporate universities”, residential training facilities, and extensive training support services started slipping toward the end of the 1980s, and the decline has accelerated ever since.
As more and more training is outsourced, in-house trainers are becoming vendor managers. At the same time, the attitude of large companies toward the development of their employees has turned from nurturing to dismissive, if not outright abusive. Once the notion of “human capital” took hold, and employees mutated from people to units of production, it was inevitable that the usually inappropriate concept of ROI would creep in as a simplistic gauge of training’s worth. When the CLO position materialized and training finally got a seat at the table, the Pareto optimization of training followed rapidly, urged on by the false promise of e-learning economies. The marginalization of the in-house trainer is a natural result.
I recall a time, not so long ago, when a company in trouble would seek to re-train its employees rather than fire them. That sense of responsibility, albeit paternalistic, is rare today, with employees seen more as perishable resources than as long-term investments. Satisfying shareholders’ quarterly lust for results undermines the commitment to investment not only in the business, but in its people. Recently, the CEO of General Motors was ardently assuring employees that the company had no plans to go bankrupt and that management would pull the company through. Soon after, just a few weeks before Christmas, he announced that in order to make good on that promise 30,000 people would be fired. No thought of retraining there.
While I am aware that one man’s perception is hardly a body of evidence, I have seen these concerns echoed among many of the people considered to be thought leaders in the learning field. There are currently two relevant growing discussions. The first, and by far the most extensive, centers on the future of learning and how to make sure that, as learning becomes less formal and devolves to individual employees, the learning needs of the corporation do not get subsumed by the often conflicting personal needs of the individual. The second discussion is about what, if anything, trainers can do to evolve and stay relevant. Neither of these discussions assumes a future role for corporate trainers, or training departments, as we know them today. It is alarming that as corporations allegedly place increasing emphasis on human performance as a critical success factor, one of their traditional drivers of that performance – trainers – is dying on the vine.
Finally, studies done recently by Ambient Insight (referenced by David Grebow, one of my co-authors in the Learning Circuits blog) examined the role of the corporate trainer and tried to extrapolate into the future. I am inherently skeptical about most studies, since way too many of them are inexpertly designed and executed, and even the best are sometimes badly interpreted. But this one resonates with my own observations, so of course I give it the benefit of the doubt! In a nutshell, the study says that corporate trainers have been in numerical decline for several years. As companies cut overhead, trainers are moving from influential positions within the organization to less influential external vendor positions.
If the study is correct, the number of corporate trainers in the US is predicted to drop from 75,000 last year to 45,000 in 2008, and to just 20,000 by 2012. When an occupation is set to lose three quarters of its members in only six years, current incumbents should be a little concerned. Some of those who leave will become employees of, or contractors to, outsourcing operations; those who stay will become vendor procurement managers.
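For what it’s worth, the arithmetic behind “three quarters” checks out. A quick back-of-envelope calculation, using only the figures quoted from the study above (the variable names are mine):

```python
# Back-of-envelope check of the projected decline in US in-house trainers,
# using the figures quoted from the Ambient Insight study above.
base = 75_000      # trainers "last year"
by_2008 = 45_000   # projected for 2008
by_2012 = 20_000   # projected for 2012

total_loss = (base - by_2012) / base
print(f"Projected total loss by 2012: {total_loss:.0%}")  # roughly three quarters
```

A 73% fall is close enough to three quarters for the point to stand.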
While trainers may be comfortable with that, does it mean that learners will have to make do with more and more of their formal training coming in generic mass-market product form? And what does it do to the idea of training as an important strategic driver of company performance?
Perhaps the semi-consensus exists because those commenting, online and off, tend to be established veterans with battle-tested competence in the field. When you know that you are good at what you do, and have been doing it long enough to know you are not deluding yourself, you tend to look askance at outside bureaucracies that profess to be able to pass judgment on your worth. But you also tend to feel an undercurrent of frustration that many in your field, and most of those outside of it, have no idea what value you contribute and have no basis for making that evaluation. Enter certification.
The problem with certificates is that they are invariably pitched at the baseline, and seek to verify that their holder has an adequate grasp of essential fundamentals, at whatever level. Why, for example, would I put an inordinate amount of time and money into getting an NVQ4 (UK) when, at the end of the day, all I have is a piece of paper that verifies that I have done, to a certain standard, some of the things that any experienced trainer should be able to do? Certification says nothing about quality or richness of experience and does not measure or reflect all the fuzzy hard-to-quantify characteristics that distinguish a ‘seasoned professional’ from a rank beginner. It’s a great ‘elevator’ for those relatively new to the field, of course. And while neither experience nor certification guarantees quality, certification is seen by the risk-averse to be less open to interpretation.
Increasingly, employers and clients use certification as an expedient filter or differentiator in their selection process. Those recruiting trainers without having themselves much ability to tell Chateau Margaux from Beaujolais Nouveau find a certificate indispensable. There’s an irony in this, which is echoed in other fields: once certification becomes a requirement, employers can deny themselves access to best-of-breed performers whose time constraints (or egos) have prevented them from leaping through the requisite credentialing hoops. The more widespread and credible a particular certification becomes, the more pressure there is on trainers to acquire the relevant pieces of paper. The only route open to holdouts like me is to “get with the program” and trust that the cluster of credentials that one opts for will have lasting value.
Which is where the other side of professionalism becomes so important. If you have to become certified, would it not be a good idea to have a certification process that results in something of inherent value, rather than a token piece of paper that is useful only as a checkmark in a recruiting box? Though I have looked, I have yet to find such a program. The reason appears to be that there is no body of training professionals – none – that has the advancement of the profession at heart. Even if such an objective appears somewhere in their charter, it is not manifested in their behavior. It may be too much to hope for that any professional body could be anything more than a committee-hampered bureaucracy destined to put all of its efforts into resisting change and preserving the status quo. Dynamism and forward thinking are not characteristics that one associates with such organizations.
Yet here we are, at what I believe is a crossroads for the corporate training ‘profession’, facing diminishing relevance and possible extinction within a decade, without any organized way forward. What are entities such as ITOL, BLA or ASTD doing to help guide companies toward more effective learning strategies? More importantly, what are they doing to help trainers adapt to the chaotic future in which we have to thrive? Certifying that members know how to dress appropriately, design courses, and make presentations is hardly adequate (OK, I know that’s a gross oversimplification). A professional body should be taking a much more strategic view of the learning outputs sought by companies and the changing cultures and climates in which trainers operate. I don’t see that reflected in member education priorities, certification requirements, marketing activities for the profession, or topics under discussion at their various conferences. It pains me that we who are so committed to needs analyses, objective setting, process design, and continuous improvement accept such lackluster, myopic thinking from those who claim to represent us.
I know of many who have abandoned the “training” label altogether because they feel constrained by the limited perceptions others have of trainers. I tend to describe my own role in terms much more specific to each project, for the same reasons: performance improvement facilitator, for example, or organizational developer (without the capitals), or learning strategist. But these labels are themselves a little grotesque – I would far rather call myself a “trainer”, and would if the term connoted more than the narrow and old-fashioned concept that the profession has become trapped in.
It will take an in-touch, dynamic, and courageous professional body to change both the perception and the reality of what training is, and can be. Do we put 40,000 volts through one of the existing bodies and transform it into something useful, do we create yet another new body, or is it a case of everyone for themselves, certificates in hand?
Original in TrainingZONE Parkin Space column of 2 Dec 2005
Thursday, December 01, 2005
The debate over whether trainers and instructional designers are really professionals raises its head from time to time, and while some see it as irrelevant semantics, many get rather passionate about the subject.
To some, if you make your living from it and you are pretty good at what you do, you can wear the label of professional with pride. To them, professionalism is a state of mind, an attitude to achieving results, quality and customer satisfaction that raises one above the hacks, charlatans and well-meaning-but-inept people that so often infiltrate the field.
To the purists, a profession involves lengthy academic education, proven expertise in practice, and formal accreditation by an acknowledged association of your peers. It may also involve being licensed via some formal, non-trivial process, adherence to a set of standards, behaviors and ethics, and a commitment to a continuous education process that keeps your license current. Typically, a profession has a body of peers that oversees the interests and the reputation of its members. When you tell a doctor, a lawyer, or an accountant that you are a professional, this is what they expect to be behind your assertion.
True, trainer certification is available from various vendors, but passing such exams is hardly a guarantee of any breadth or depth of competence. I know many people who call themselves trainers or instructional designers who are extensively certified but incompetent, and many who are outstanding in their roles but have no formal qualification behind them. Most people in the field fall somewhere in the middle.
Getting a certification may help you get a job interview, if that is one of the filters employers use to short-list applicants. There is certainly no harm in taking a certification program such as the CTP or CeLP or the CPLP that is provided by the American Society for Training and Development, particularly if you are relatively new to the field. But I am not a great believer in the value of formal certification processes, largely because those that I have seen (or, in moments of weakness, have been involved in creating) are trivial – commercial opportunism thinly disguised as rigorous training and evaluation.
So, by the empirical standards of the purists, I fail the professionalism test. But (dammit) I am a professional – I have the experience, knowledge, reputation, competence, body of work, attitudes and integrity that collectively make me very comfortable with that label. The key question, however, is this: if the field in which you operate is not a profession, how can you call yourself a professional?
I don’t think that there is much question that training is not (yet) a profession, simply because it does not have the formal underpinnings of other professions. Training industry bodies, where they exist, do not fulfill the same role as, say, the General Medical Council or the Legal Bar. For that matter, there is no industry association, at least not one with an authoritative purview that even approaches those in the medical or legal fields. Training associations are more akin to trade associations, providing primarily the ability to network and in turn exploiting their internal market to sell publications, courses and conferences.
There is much apathetic complaining about training associations treating training as an occupation or vocation and failing to elevate the field as a profession. But it is the members themselves, the trainers, instructional designers, and managers who should determine how their representative body behaves, instead of complaining impotently about their association as though it were an independent entity. (There are parallels here with the way trainers view the senior management of their companies – we yearn for “a seat at the table” without ever expecting to have to make that happen ourselves.)
We need to stand up and make a little noise. If we insist on certification, we need a really “professional” certification process, involving education, training, experience, referrals, rigorous testing of knowledge and performance, managed by a truly dynamic and credible training association. In addition to being expensive, it would be elitist and exclusionary, both of which are politically incorrect, but that does not seem to faze doctors or accountants. I would be willing to get involved in creating something like that, and in promoting it.
Until that happens, you can keep your token certifications. I’m happy to be a self-satisfied self-certified professional.
Tuesday, November 15, 2005
There have never been so many people on the cusp of retirement, and financial advisors are circling them like sharks around a sardine run. The normally staid and sensible financial advertising imagery is giving way to flower-painted VW microbuses, long hair and lava lamps, richly underpinned by the evocative music of the ’70s. We are urged to believe that the person in the dark suit who wants up to 4% of our liquid assets annually in return for helping us to buy a stairway to heaven is just a grown-up hippie at heart, man, who can really relate to our values.
The training challenge is significant if the advisors’ behavior is to synch with the marketing message. Does the repositioning taught in class actually make it through to discussions with potential clients?
The nature of the business makes it very difficult for managers to observe and objectively evaluate those reporting to them. Training effectiveness can be inferred from actual sales results, prospect conversion rates, and before-and-after data mining studies. But if advisors are losing a lot of potential converts, such empirical data do not help to diagnose where (if at all) the training may have been deficient. So how can you be sure that the training is actually leading to on-the-job application?
One solution is to “mystery shop” for financial advice. Using mystery shopping to test customer interaction skills is a widespread approach in industries ranging from cars to cosmetics to coffee. The concept is simple: send someone in to do business and have them report back on the behavior encountered. It is typically used to target and remediate poor behavior in specific sales/service individuals, or to check up on the quality of management of establishments such as restaurants and retail outlets. It is rarely used to evaluate the effectiveness of training. That’s because mystery shopping tends to fall under the control of sales, marketing, or customer service departments who simply assume that training has done its job, and see implementation as a matter of personal choice or supervisor diligence. A direct link between individual on-the-job competence and training is rarely made by such departments, and training departments are reluctant to raise it.
For a wide range of customer contact skills at Level Three, mystery shopping can tell us a great deal about the strengths and weaknesses of our training and its impact on behavior. It gives us unbiased feedback, from the perspective of a customer, on how well desired behavior patterns or skills are adopted. It can also tell us a lot about the environmental and systemic obstacles to application of the learning.
Yet trainers don’t often use it as part of their continuous improvement process. There are several reasons, cost perhaps chief among them. But the cost need not be prohibitive, because you are looking for a diagnostic indication, not a statistically valid sample. You may only visit three or four dozen individuals, the same number you might pull into focus groups. And you may be able to get your sponsoring department to help with the costs.
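On that sample-size point, a rough normal-approximation calculation shows why three or four dozen visits can only ever be diagnostic. This is an illustrative sketch of my own; the function name and the worst-case p=0.5 assumption are mine, not from any study:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% normal-approximation margin of error for an observed proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# At the three-to-four-dozen sample sizes discussed above:
for n in (36, 48):
    print(f"n={n}: about ±{margin_of_error(n):.0%}")
```

With error bars in the region of ±15 percentage points, such a sample proves nothing statistically, but it is plenty to flag where the training message is clearly not getting through.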
But when the time comes for organizational cut-backs, mystery shopping at Level Three might buy trainers a stairway to heaven; relying on Level One smile sheets may just be a highway to hell.
It seems to me that these days the hard-working trainer gets maligned by just about everyone, including fellow trainers, not for doing a bad job but for not attaining a Renaissance Person status to which few other corporate functions aspire.
I may be more guilty than most of doling out the criticism. I constantly berate those in the training profession for not continuously evolving at the pace of their environment, for being complacently stuck in outmoded paradigms, for defining their roles too narrowly, or for jumping from ineffective low-tech ruts into high-tech ruts that may be equally ineffective.
But when I look at what I am asking of training professionals, I can’t think of any other corporate field in which the desired changes are so broad and so deep:
- Comprehend, master, and stay ahead of strategic implications in emerging technologies.
- Anticipate the direction and performance needs of corporations whose strategic and tactical navigation is in accelerating flux.
- Understand, relate to, and accommodate a wave of digitally savvy employees whose world view, technological competencies, instincts and modes of operation are radically different from those of the established employee base.
- Customize your service to the individual, reducing your operating costs at the same time.
- Continually improve effectiveness, cutting time to market, time to competence, and time away from task.
- Demolish or at least plasticize your formal processes, making them more flexible, more adaptable, and more workflow-snug.
- Develop skills and competencies with constantly evolving tools (personal, group, and enterprise) that span administration, web authoring, testing, evaluation, presentations, databases, scheduling, collaboration, networking, globalization, project management, and communication (broadcast, podcast, mail shot, synchronous, peer-to-peer, mobile).
I could go on, but you get the idea. What about those in other corporate functions? It is true that innovation is called for everywhere, and the impact of e-business touches the goals and processes of all people throughout the organization.
Marketing and sales have gone through significant revolutions in many aspects of their work. Obviously, IT people have different systems to deal with. Customer service people deal online with customers who bought online. Administrative departments have to integrate their operations online with those of suppliers and business partners. Strategy groups are (hopefully) building new visions for the future of the organization. Legal people are rethinking contracts, intellectual property issues, and management of privacy. HR systems are becoming real-time, and more recruiting is done through the internet. And financial people are transacting online.
But I doubt that as individuals there is anyone who has a broader front of continuous change thrust upon them than those with a training responsibility. Nor is there anyone whose fundamental personal operating processes are challenged so deeply. Marketing people may disagree, and trainers have a great deal to learn from them about making non-linear change happen rapidly, and about understanding and responding to individual customer needs. But most marketing people specialize in one aspect of the process – training people are expected to be competent across the spectrum.
In most companies, it is not unusual for individual training people to have to set strategic direction, conceive, architect, build, deploy, administer, test, and review everything that they do, with a little help from those in IT. Instructional designers are supposed to help, of course, but too often their only pedagogical qualification is some fluency in Macromedia’s tools – and their ability to provide broad strategic input is limited.
In an environment such as this, there is a tendency to withdraw, redefine our responsibilities within narrow confines, and hope that all that external change will eventually settle down. But it won’t. In a technology sense, and in a workplace culture sense, trainers have to get out more. We have to foster the curiosity and find the time to become more au fait with what is going on in the world of applied technology, e-collaboration, workflow learning, and those aspects of corporate strategy that hinge on knowledge and skills. Trainers need more training themselves, not in task-specific skills but in the environment in which they are going to have to operate.
It is ironic that this “development” part of T&D is the hardest to get budget approval for, yet it is fundamental to the future success of everything we do.
Original in TrainingZONE Parkin Space column of 28 Oct 2005
Wednesday, October 26, 2005
Now, I understand the general idea that if people don’t enjoy the training, they are less likely to give it high smile-sheet ratings, or recommend it to a colleague. But are they less likely to learn if the process is not liberally peppered with “fun” experiences?
In theory, we learn best when we are relaxed and in harmony with our learning environment. A good trainer can set the tone and help create the most appropriate atmosphere. Ice-breakers contribute to that state, and are particularly helpful for skeptical learners, those uncomfortable in learning situations, and inexperienced trainers.
But training does not have to be entertaining.
E-learning’s rise has brought this issue to the fore. The constant admonition from instructional designers that e-learning has to be punctuated every couple of minutes with “interactivity” is one of the saddest mantras of our time. It’s like the American notion that food cannot be palatable unless you smother it with ketchup. If you are working with training that is bland and dry, by all means bring on the sauce. But would it not be better to make the training itself more engaging in the first place?
The distinction between engagement and interactivity is crucial, and it’s one that many instructional designers – and those who commission the development work – do not appear to understand. Engagement is intense mental absorption; interactivity is often just busyness or sugar-coating. It is vitally important that learners be engaged. Interactivity, entertainment, and fun can contribute to cognitive engagement. They can equally well distract from it.
There are possibly three causes for the boom in gratuitous fun in learning: managers want motivational experiences disguised as training; instructional designers lack the skills or imagination to architect inherently engaging learning experiences; and trainers seek high smile-sheet scores and possibly a release from their own boredom.
Managers: Too often, when managers commission training, what they really want is a motivation activity. I can't tell you how often, for instance, I have heard the whine: "We can't make people take that course online, because we'd lose out on the motivation, energizing, bonding, or social interaction that we get from the classroom course."
My response is to challenge the learning objectives: if the primary desired output of the "event" is all that warm fuzzy stuff, then why pretend to be running a training course? It is far more cost effective to structure a motivation session that achieves those goals, and does not come out of the training budget. (Through my focus on sales and marketing, I have designed and run many such events, some of them wild or lavish, and all of them very effective, but I have never called them "training" events). Naturally, if the self-discovery or bonding is germane to the learning (in, say, leadership training or some interpersonal skills training) then it may be relevant and appropriate.
But too often the kind of feel-good factor that is called for in training is (once again) training being abused as a surrogate for good management.
Instructional designers: Many learning experiences are atrociously conceived. The designer has to work with unrealistic timelines, limited subject matter knowledge, poorly specified learning objectives, and myopic supervision, as well as limitations or constraints in the way the training is to be implemented.
The expedient approach often taken is a linear exposition of content, made less dull by the frequent insertion of bits of fun. “We’ll do an icebreaker at the start, a game here, bring out the Lego over there, run a video here, chuck in some role-playing and a bit of team competition intermittently, and make the tests like Who Wants to be a Millionaire. ‘Triple Bottom Line Accounting for HR Professionals’ will never have been so much fun! Who cares if a week later the only thing they remember is the egg-dropping contest?”
Instructional designers need to look beyond the smile sheet for their inspiration, and companies need to stop hiring as ISDs people whose only pedagogical qualification is a mechanical competence in Macromedia's tools.
Trainers: From a trainer's perspective, "having fun" can be an acceptable way to leaven the densest of subject matter, so long as it does not distract from the learning goals. And, because in too many companies the smile sheet ratings are the primary indicator of how good a trainer is at his/her job, "fun" is welcome. It is often true that a learning experience becomes less engaging for the trainer the more engaging it is for the learner. If you are running the same course over and over again, the entertaining components serve to keep you from putting yourself to sleep.
From a learner's perspective, those who do not really want to participate in the learning experience in the first place may be distracted, if not seduced, by the injection of "fun" components; those who really want to learn ask themselves why they are wasting time on irrelevant padding.
Fun can be very constructive. Well-designed entertaining experiences that are relevant and fully integrated into the learning process can work as powerful illustrations of concepts or living analogies. If the fun is designed as an effective instructional process, contributing to the achievement of specific learning objectives, I'll opt for fun over dull any day. But if it is gratuitous, I won't waste my time, or that of learners, by indulging in it.
Original in TrainingZONE Parkin Space column of 21 Oct 2005
Tuesday, October 18, 2005
Why do the views of these Newmils matter? They see the world, and interact with it, differently from earlier generations. Today’s US K-12 pupils were born into digital technologies. They intuitively integrate things like computers, the web, instant messaging, cell phones, and e-mail into their daily lives.
I’ve not seen more recent data, but way back in 2002 children aged 13-17 were already spending more time accessing digital media than they did watching television. For the US, that was a huge milestone. (Of course, TV is a digital technology these days, so there goes another useful trend line.) By 2002, every classroom in every public school had internet access, and nine out of ten 5–17-year-olds used computers. According to their parents, more than a third of children ages 2-5 went on-line, up from only 6% in 2000. By now it is safe to say that the internet and other digital technologies are ubiquitous for the youth of America.
The study asked this digitally savvy group what they would like to see invented that would help kids learn in the future. Then the authors of the study consolidated the results and came up with a description of the vision that school-goers have for the future.
I’ll quote the report verbatim:
Every student would use a small, handheld wireless computer that is voice activated. The computer would offer high-speed access to a kid-friendly Internet, populated with websites that are safe, designed specifically for use by students, with no pop-up ads. Using this device, students would complete most of their in-school work and homework, as well as take online classes both at school and at home. Students would use the small computer to play mathematics-learning games and read interactive e-textbooks. In completing their schoolwork, students would work closely and routinely with an intelligent digital tutor, and tap a knowledge utility to obtain factual answers to questions they pose. In their history studies, students could participate in 3-D virtual reality-based historic re-enactments.
Now to me this sounds bland and unimaginative, almost status quo, not at all the kind of creative or exciting thing that kids should be coming up with. It sounds like something a committee of adults would produce one evening over tea and biscuits. And indeed, that is effectively what seems to have happened. All of the spontaneous innovative ideas produced by 160,000 kids were filtered, sanitized, and compromised by a committee of "analysts" who stripped all of the freshness out of them and boiled them down into this pedestrian, adult, government departmental interpretation of a child’s vision.
The authors admit to throwing out all but 8,000 responses, and then only looking at those that met their pre-determined criteria of "meaningfulness". Beyond the summary, they do provide a few actual examples of real responses to illustrate their conclusions and in those carefully selected comments lie some clues to the gold that was not mined.
There were numerous requests for pocket-sized multi-functional computers linked wirelessly to the web, pre-loaded with text books. The next evolution of the iPod Nano perhaps, or an extension of the iTunes-web-enabled mobile phone.
The kids in this study talked about wanting automated learning, straight to the mind, using teaching-hats or smart helmets, cable connections in the head, or wireless chips implanted in the brain. I remember as a child wishing that I could put on headphones before going to sleep, flip a switch on a cassette player, and wake up the next day with all of my schoolwork already remembered. It seems that is an enduring desire in the species, only the technology gets updated. Learning is like losing weight and getting fit: we all want the end result, but the process we have to go through is so unpleasant that many of us will avoid it if we can.
While this report appears to be written by those who only see what they want to see and can only understand that which fits their current frame of reference, I may be wrong. If the responses are really as broadly unimaginative as they are represented to be, maybe that’s just further evidence of the challenge faced by technology developers: even kids can’t visualize what they can’t conceive; they really do not know what they need or want until they see it.
I hope that the original responses to this survey have not been discarded and will be made available for other analysts. You can download a copy here.
Thursday, October 13, 2005
What we are seeing is simply further implosion of the dedicated e-learning technology industry. The more oligopolistic this market becomes, the more generic it becomes, and the less able it is to sustain the pretense of any meaningful differential advantage. As open source systems undermine it from below (particularly in the academic arena) and ERP systems make it redundant from above (particularly in the corporate arena), the less relevant this relatively small software market segment becomes.
Hence the increasing investor wariness. Soon to be followed by more enthusiastic uptake of what I have been advocating for ten years now – educators, trainers, and especially learners will start to focus less on the means and more on the end, invoking whatever technologies happen to be mainstream to facilitate whatever learning experience is most appropriate to them. IT departments will find it easier to wrest away from training departments the decisions about enabling technologies, and learning information flows will move out of their relatively proprietary niche and finally become fully integrated with the rest of the corporate nervous system.
The openness and dynamism of the web will finally be allowed to permeate the thinking of the learning establishment, and Model-T e-learning will succumb to a flood of performance-driven innovation.
Friday, September 30, 2005
Can traditional learning experiences be broken down and served up in nano-chunks without losing effectiveness? Or might they actually be enhanced? Unlike music, where the song is the focus, learning is all about impact on performance – the ‘course’ itself doesn’t matter. If we are to take custom-tailoring of learning experiences to heart, then the more granular our solutions, the more accurately we can fit each learner’s individual needs.
Smallness is one of the key characteristics of the content of Internet 2.0 and, in turn, of what is becoming known as “e-learning 2.0”. Big chunks of solidified content are simply not easy to look into or combine. With sandstone blocks you can build pyramids; with sand you can build anything from beaches to hourglasses.
Perhaps one of the greatest shortcomings of the early web and the applications that have to date been enabled by it is that it has primarily been about recording, organizing, and making accessible "stuff" from the past. There's great value in that. But the necessary emphasis on the here-and-now, and on the future, is rapidly coming into focus. It's not content, or even context, but process that gets us where we are going. We are not what we have done, but what we are trying to do. All of the diverse components of e-learning 2.0 aim to fill the vast white spaces between the legacy content piles with dynamic processes that can instantly mine the relevant diamonds from those piles, and pull them together in real-time into something unique and immediately useful. Some of those processes are lively discussion, synergistic collaboration, spontaneous project work, and nano-customization. Just as micro-transactions are starting to make e-commerce really different from traditional commerce, micro-learning experiences will make e-learning 2.0 really different from Model-T e-learning. Or that’s the theory.
Smallness is increasingly important in all data flows, and learning is simply another kind of dataflow. If learning is water, old-school SCORM learning objects are ice-cubes: uniform, predictable and transportable, but they melt and lose their usefulness rapidly. What we really need to do is vaporize the water. Knowledge vapor is simply learning liberated, in its smallest possible components – unlike learning objects, if you do not contain it, it disseminates itself far and wide, except where the circumstances are created to condense it back into liquid, or ice, again. The best technology available right now for vaporizing and liberating learning and for finding, filtering, and condensing it as needed, is the human mind. Of course lots of cool web technologies are emerging to help facilitate that process (e-mail was probably the first; blogs and wikis are getting there, as are communities of practice; mobile broadband helps). But technology is not the real key to the success of e-learning 2.0. What connects all our small pieces of learning is not a technology, but our humanity.
As far as jargon goes, I must confess to having major objections to the term "2.0" as applied to the web and to e-learning. It suggests a formal release of a new beta-tested version under some kind of planned production process. That's not the way things work any more. The evolution of technology and our ability and willingness to use it creatively has outpaced our ability to manage it, at least in any traditional sense of the word "manage." Trying to "manage" internet-enabled progress is increasingly delusional, like King Canute trying to command the tide not to come in. Agility, opportunism, plasticity, and instinct need to replace outmoded notions of structure, hierarchy, and traditional planning and financial controls, not to mention the sacred cow of centralized "managed" corporate learning.
But since (largely thanks to stock markets, tax authorities, and standardized accounting practices) our current economies rest on those rotting bureaucratic timbers, the bigger corporations may not be able to change, to take the right actions, in time to stay relevant and save themselves.
Google rose from nothing to become the new Microsoft so rapidly it looked like a conjuring trick. China makes 80% of everything sold by Wal-Mart, the world’s largest retailer, a transformation that took less than five years. Before the ink was dry on most American e-learning IPOs, Indian companies were eating their lunch. It may be only months before the next new megacorp bursts from the web and makes even Google look tired and outdated. What role will training play in the success, or failure, of those endeavors? Can you train an organization to be agile, instinctive, anticipatory, and adaptive? Or can we merely work to remove the containment of knowledge and facilitate the vaporization of learning?
Recently so many people have been asking me to review their questionnaires and surveys that I thought I’d update a document I first created several years ago, which sets out some essential best practices for creating good questionnaires. While written for training evaluation, the guide is applicable to surveys of any kind.
1. Ask: “Why are we doing this?”
- What do we need to know?
- Why do we need to know it?
- What do we hope to do when we find out?
- What are the objectives of the survey?
2. Ask: “What are we measuring?”
In training evaluation, what you measure can be influenced by the learning objectives of the course or curriculum you are measuring:
- Knowledge or skills acquired
- Behaviour back on the job
- Business results
- Perceptions of any of the above
Your questions, and possibly your survey methods, will differ accordingly.
3. Be aware of respondent limitations.
- Where possible, pilot your questionnaire with a sub-group of your target audience.
- The complexity of your questionnaire and its language should take into account the age, education, competence, culture, and language abilities of respondents.
4. Guarantee anonymity or confidentiality.
- Confidentiality lets you follow up with non-responders, and match pre- and post studies.
- Confidentiality must be guaranteed within a stated policy.
- Anonymity prevents you from doing follow-ups or pre-post studies.
5. Select a data collection method that is appropriate.
Consider the speed and timing of your study, the complexity and nature of what you are measuring, and the willingness of respondents to make time for you. Options:
- E-mail – fast, inexpensive, not anonymous, requires all respondents have e-mail.
- Telephone – time consuming, not anonymous, may require skill, has to be short.
- Face-to-face interview – slow, expensive, requires skill, best for small samples, qualitative studies.
- Web-based – fast, inexpensive (if you use services like Zoomerang), can be anonymous, best for large surveys.
6. Write a compelling cover note.
Where appropriate introduce your questionnaire with a brief but compelling cover note that clarifies:
- The purpose of study and why it is worth giving time to.
- The sponsor or authority behind it.
- Why you value the respondent’s input.
- The confidentiality or anonymity of the study.
- The deadline for completion.
- How to get clarification if necessary.
- A personal “thank you” for participating.
- The signature or e-mail signature of the survey manager (or, ideally, of the sponsor).
- If sending an e-mail, have it come from someone in authority who will be recognised, use a strong subject line that cannot easily be ignored, and time it to arrive early in the week.
7. Explain how to return responses.
If not obvious, make it clear how and by when responses must be returned.
8. Put a heading on the questionnaire.
State simply what the purpose is, what the study is about, and who is running it.
9. Keep it short.
- State how long completion should take, and make sure your estimate is accurate.
- Make questionnaires as brief as possible within the time and attention constraints of your respondents (personal interviews can go longer than self-completion studies).
- Avoid asking questions that deviate from your survey purpose.
- Avoid nice-to-know questions that will not lead to actionable data.
10. Use logical structure.
- Group questions by topic.
- Grouping questions by type can get boring and cause respondents to skim through.
- Number every question.
- Where possible, in web-based surveys put all questions on one screen, or allow respondents to skip ahead and backtrack.
11. Start with engaging questions.
Many questionnaires are abandoned after the respondent answers the first few questions.
- Try to make the first questions non-intimidating, easy, and engaging, to pull the respondent into the body of the piece.
- Try to start with an open question that calls for a very short answer, and ties in to the purpose of the questionnaire.
12. Explain what to do.
Provide simple instructions, if not obvious, on how to complete a section or how to answer questions (circle the number, put a check mark in the box, click the button etc.)
13. Use simple language.
- Avoid buzz words and acronyms.
- Use simple sentences to avoid ambiguity or confusion.
- If necessary, provide definitions and context for a question.
14. Place important questions at the beginning.
- If a question requires thought or should not be hurried, put it at the beginning. Respondents often rush through later questions.
- Leave non-critical or off-topic questions, such as demographics, to the end.
15. Select scales for responses.
- Keep response options simple.
- Use scales that provide useable granularity.
- Make response options meaningful to respondents.
- Make it obvious if open-ended responses should be brief or substantial by using an appropriate answer-box size.
16. Fine-tune questions and answer options.
- Keep response options consistent where possible - don’t use a 5-point scale in one question and a 7-point in the next unless absolutely necessary; don’t put negative options on the left in one question and on the right in another.
- Be precise and specific – avoid words that have fuzzy meanings (“rarely” or “often” or “recently”).
- Do not overlap response options (use 11-20 and 21-30, not 10-20 and 20-30).
- If you use a continuum scale with numbered answer options, anchor both ends of the scale with a clear concept (instead of asking “on a scale of 1 to 5, how good is it?” with bare options 1-2-3-4-5, label the ends: 1 = very bad … 5 = very good).
- Use scales that are centred – don’t have one “bad” answer option and four shades of “good”.
- Don’t force respondents into either/or answers if a neutral position is possible.
- Allow for “not applicable” or “don’t know” responses.
- Edit and proofread to make sure that answer choices flow naturally from the question.
17. Avoid leading or ambiguous questions.
- Don’t sequence your questions to lead respondents to answer in a certain way.
- Avoid questions that contain too much detail or may force respondents to answer “yes” to one part while wanting to answer “no” to another (e.g. “How confident do you feel singing and dancing?”).
- Minimise bias by piloting your questionnaire before it goes live.
18. Use open-ended questions with care.
- Open responses are difficult to consolidate, so use them sparingly.
- They often provide really useful data, so don’t avoid them completely.
- Doing a pilot or running a focus group before rolling out a survey can provide useful insight for creating more structured closed questions.
- Provide at least one open question so respondents can express what is important to them.
19. Thank the respondent.
- Thank the respondent once again. Reiterate why you value the input.
- If you intend to feed back results, emphasize when and how they can expect to get them.
- If you have offered an incentive, specify what the respondent has to do to claim or be eligible for it.
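Several of the mechanical rules above (consistent scale lengths in point 16, non-overlapping numeric bands) can be checked automatically before a questionnaire goes out. A minimal sketch in Python, using invented question structures rather than any real survey tool:

```python
# Minimal sketch: machine-checkable rules from the guide above.
# Question and band definitions are invented examples, not a real survey API.

def ranges_overlap(bands):
    """True if numeric answer bands overlap (use 11-20 and 21-30, not 10-20 and 20-30)."""
    ordered = sorted(bands)
    return any(hi >= next_lo for (_, hi), (next_lo, _) in zip(ordered, ordered[1:]))

def scales_consistent(questions):
    """True if every scaled question uses the same number of points (rule 16)."""
    sizes = {len(q["options"]) for q in questions if q.get("options")}
    return len(sizes) <= 1

age_bands_bad = [(10, 20), (20, 30), (30, 40)]   # 20 and 30 each appear twice
age_bands_ok = [(11, 20), (21, 30), (31, 40)]

survey = [
    {"text": "How useful was the course?",
     "options": ["very bad", "bad", "neutral", "good", "very good"]},
    {"text": "How clear were the materials?",
     "options": ["very unclear", "unclear", "neutral", "clear", "very clear"]},
]

print(ranges_overlap(age_bands_bad))   # True  - overlapping bands
print(ranges_overlap(age_bands_ok))    # False
print(scales_consistent(survey))       # True  - both questions use 5-point scales
```

Checks like these obviously cannot catch leading or ambiguous wording; only a pilot (point 17) can do that.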
In the early days of web-based learning, before the oppressive influence of standardized Learning Management Systems, before SCORM took the spark out of ISD, in amongst the prevailing stand-alone conversions from CD-ROM there was a lot of interesting experimental design going on. Much of that innovative learning design centered on using the web for what it did best – connecting people with people to share experience.
When I started an e-learning company in 1998 to provide a project management curriculum online, I built community into the design, rather than simply replicating the tried-and-true classroom versions of the courses. While the content of the course was delivered in a relatively traditional way, it was structured to have learners collaborate with each other, creating their own supplementary content. Every learner had a SME mentor, who was available by e-mail throughout the course. The pool of SMEs hosted online chat sessions around the clock, covering topics related to course content, where learners could exchange ideas, trade war stories, and get clarification on issues. Those sessions were all logged, scrubbed of proper names, and stored in a searchable online library. After tens of thousands of learners, that organically growing repository of community experience was a fabulous resource. The community was so valued by learners that many subscribed to it after completing their courses, so that they could continue to engage with each other and access the content.
That booming company was acquired by a traditional learning business that had no time for such esoteric notions, and stripped the courses back to computer-pumps-it-at-you mode. They saw e-learning as a way to cut costs even at the expense of dumbing down learning effectiveness, and providing human interaction was considered counterproductive. Whether it was the result of ignorance, tunnel vision, technology standardization, or accountants getting the numbers wrong, such was the fate of most e-learning around the beginning of this century.
But the prosumer market is still with us in other fields, and it is stronger than ever. Blogging is the obvious example, and it is still growing so fast that the statistics are out of date the moment they are published. Along with creating blogs, publishing your own photographs on the web has taken off, aided by free photo-hosting services like Flickr. Communities such as Del.icio.us, which are specifically designed for sharing information and links of mutual interest, are booming. And services like 43Things and Backpack, which help you tie all of these together, are starting to take off.
In training, we are seeing prosumer concepts like informal learning, workflow learning, and collaborative learning coming into vogue. These are still considered by mainstream learning professionals as interesting but impractical, largely because they are hard to conceptualize and harder still to manage. Yet there are so many reasons why we should be dedicating at least some of our resources to experimenting with them. One reason is that there are lots of technologies out there that, with a little imagination, could be used to make collaborative learning more practical. Another reason, the most important, is that people have demonstrated time and again that they like to interact with others, and that they find creating their own content motivating and compelling.
In the computer games industry, the conventional wisdom is that online games that have been massively successful have all allowed players to substantially influence their environment and leave their mark. Much as in the real world of clubs and associations, loyalty is sealed if the participant has invested time, energy, and creativity in building a presence that others can interact with and appreciate.
It’s time we stopped treating e-learners like members of an anonymous audience in a darkened theatre, and started inviting them all up on stage.
Original in TrainingZONE Parkin Space column of 26 August 2005
Thursday, August 25, 2005
Unless certification is involved, the quality assurance processes applied to formal learning initiatives in most organizations, at the individual intervention level, at the strategic enterprise level, and at all points in between, are rudimentary at best.
That does not mean that the quality of training is poor, just that we have no real data to support our feeling that we are doing a good job. Training departments are usually stretched thin, and don’t have the time or resources to do a “proper” quality assurance job at either the course level or at the aggregate departmental level.
Informal evaluation is done by trainers, most of whom know whether an activity is “going well” or not. Formal feedback is carried out at course end, and emphasizes learner reactions, likes and dislikes. And, if the activity involves testing, there are always the scores to look at.
These are all important elements in assessing the quality of a learning experience, and they provide valuable feedback to a trainer. But they are not enough, not by a long shot.
When left in the hands of trainers and instructional designers, the focus of evaluation is too micro, too inward looking. The purpose of training is to improve organizational performance through improving the performance of individuals and teams. Learning evaluation should serve that purpose, as a quality assurance tool. To do that, evaluation has to be pan-curricular, and must adopt a higher level perspective.
This “helicopter view” is hard to achieve if the responsibility for designing and implementing evaluation is too course-specific. Yet who has the time, or the mandate, to step back from a busy course development or training schedule and get strategic?
Only the largest firms have dedicated evaluation resources who know what they are doing and have the credibility to influence policy. And even those resources are becoming imperiled by the inroads that the LMS is making.
Does it matter? There are several reasons why it does.
Implementing a regimen that elevates the strategic importance of evaluation (across all levels) and places it on a more professional level will do two vital things. It will improve significantly the effectiveness and efficiency of all learning activities; and it will save a tremendous amount of unnecessary, un-useful, or redundant work.
My fear is that with the advent of LMS-based evaluation and record-keeping, the information we have about the quality of our learning activities is becoming more narrowly focused, and its usefulness is becoming further diluted. Just as LMS functionality tends to constrain the nature of our design of instruction, it constrains the nature of our inquiry into its impact.
I’d like to see more training departments creating evaluation units and staffing them with a trained expert or two who can help get past the simplistic "smile-sheet & ROI" approach and start building systems that put the important issues on the dashboards of individual trainers, instructional designers, and senior learning managers.
Some LMS tools claim to be able to do just that. But, as with all tools, without a trained and committed hand to guide them, they simply don’t get used.
Just as most of us never use more than 5% of the potential of our spreadsheet software, so the potential of these emerging tools goes unrealized. Automation was supposed to help us do things better; in reality, it often makes us complacent. We dumb down our expectations, dumb down our evaluations, and ultimately dumb down the business impact of our training endeavors.
The “hot career” of the past five years was Instructional Systems Design. I’d like to see companies valuing Learning Evaluation professionals as highly. They can contribute substantially to the quality of training, and to the business results that it achieves.
Original in TrainingZONE Parkin Space column of 19 August 2005
Thursday, August 11, 2005
The response tells you unambiguously about the level of satisfaction of the learner, and any clarification offered tells you about the issues that really matter to that learner. That’s more than is called for at Level 1, especially if you have done a good job of testing your training intervention before rolling it out live.
It’s not always possible to reduce things to one question, but I see it as a starting point in the negotiation. I tend to be somewhat dismissive of Level 1 evaluations. That is not because they serve no purpose (they are vital), but because they attract way too much attention at the expense of business impact studies, and because they are often poorly designed and inaccurately interpreted.
Every training intervention needs some kind of feedback loop, to make sure that – within the context of the learning objectives – it is relevant, appropriately designed, and competently executed.
At Level 1 the intention is not to measure if, or to what extent, learning took place (that’s Level 2); nor is it intended to examine the learner’s ability to transfer the skills or knowledge from the classroom to the workplace (Level 3); nor does it attempt to judge the ultimate impact of the learning on the business (Level 4). Level 1 of Kirkpatrick’s now somewhat dated “four levels” is intended simply to gauge learner satisfaction.
Typically, we measure Level 1 with a smile sheet, a dozen Likert-scaled questions about various aspects of the experience. At the end of the list we’ll put a catch-all question, inviting any other comments. I won’t repeat the reasons why the end-of-course environment in which such questions are answered is not conducive to clear, reasoned responses. But the very design of such questionnaires is ‘leading’ and produces data of questionable validity, even in a calm and unhurried environment.
Far too many of the smile sheets that I see put words or ideas into the mouths of learners. We prompt for feedback on the instructor's style, on the facilities and food, on the clarity of slides. The net effect is to suggest to respondents (and to those interpreting the responses) that these things are all equally important, and that nothing beyond the things asked about has much relevance. By not prompting respondents, you are more likely to surface the issues that, for them, really burn. Open questions are not as simple to tabulate, but they give you an awful lot to chew on.
Now the one-question approach does not necessarily give you all the data that you need to continuously fine-tune your training experience – but neither does the typical smile sheet. Trainers need to understand that sound analytical evaluations often require multi-stage studies. Your end-of-course feedback may indicate a problem area, but will not tell you specifically what the problem is. A follow-on survey, by questionnaire, by informal conversation, or by my preferred means of a brief focus group, will tell you a great deal more than you could possibly find out under end-of-course conditions.
The typical smile sheet is a lazy and ineffective approach to evaluating learner satisfaction. It may give you a warm and comfortable feeling about your course or your performance as a trainer, or it may raise a few alarm flags. But the data that it produces is not always actionable, is rarely valid, and often misses the important issues.
In market research, or any statistical field for that matter, there are two important errors that good research tries to mitigate. Known as Type One and Type Two Errors, they measure the likelihood of seeing something that is not there and the likelihood of missing something important that is there. I have never heard anyone address these error types in their interpretation of Level 1 results.
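To see why this matters for smile sheets, consider a small simulation (my own illustration, with invented numbers): two classes share an identical underlying satisfaction distribution, yet random sampling alone regularly makes one class average look meaningfully better than the other. Every such apparent "finding" is a Type One error.

```python
# Illustration (invented numbers): Type One errors in smile-sheet averages.
# Two classes with IDENTICAL true satisfaction; we count how often sampling
# noise alone makes one look better than the other by half a point or more.
import random

random.seed(42)
POINTS = [1, 2, 3, 4, 5]     # 5-point satisfaction scale
WEIGHTS = [1, 2, 4, 6, 3]    # the same true response distribution for both classes
N_LEARNERS = 20              # a typical class size
N_TRIALS = 10_000

def class_average():
    """Average smile-sheet score for one simulated class."""
    return sum(random.choices(POINTS, WEIGHTS, k=N_LEARNERS)) / N_LEARNERS

false_alarms = sum(
    abs(class_average() - class_average()) >= 0.5 for _ in range(N_TRIALS)
)
print(f"Apparent 'difference' in {false_alarms / N_TRIALS:.0%} of trials")
```

With these assumed numbers the gap appears in well over a tenth of trials, which is exactly the kind of phantom signal a trainer might act on.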
We see in our smile-sheet results what we want to see, and react to those things that we regard as relevant. If we are so smug in our knowledge that we know what is going on anyway, why do we bother with token smile sheets at all?
Original in TrainingZONE Parkin Space column of 5 August 2005
Tuesday, August 02, 2005
We struggle for hours (often for days or weeks) to come up with the recipe for “learning ROI.” The formula itself is simple, but the machinations by which we adjust and tweak the data that go into that formula are anything but simple. Putting a monetary value on training’s impact on business is fraught with estimation, negotiation, and assumption – and putting a monetary value on the cost of learning is often even less precise.
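For reference, the simple formula in question, worked through with invented figures; the hard part, as argued here, is defending the benefit and cost numbers that go into it:

```python
# The standard training ROI formula, with invented example figures.
program_cost = 120_000       # delivery, materials, learner time (assumed)
monetary_benefit = 180_000   # estimated value of performance improvement (assumed)

roi_percent = (monetary_benefit - program_cost) / program_cost * 100
print(f"ROI = {roi_percent:.0f}%")   # ROI = 50%
```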
Yet when was the last time you saw an ROI figure presented as anything other than an unqualified absolute? If you tried for statistical accuracy and said something like, “this project will produce 90% of the desired ROI, 95% of the time with a 4% error margin,” you’d be thrown out of the boardroom. You simply can’t use real statistics on an accountant, because the average bean-counter can’t tell a Kolmogorov-Smirnov from an Absolut-on-the-rocks. Don’t tell us the truth; just give us numbers that conform to our unrealistic way of measuring the business.
We spend way too much time trying to placate financial people by contorting our world to fit their frame of reference, and we allow them to judge and often condemn our endeavors according to criteria that are irrelevant or inappropriate. Perhaps there is some comfort in knowing that the problem is not unique to training. In a couple of decades in marketing, I have seen plenty of good brands ruined by ill-conceived financial policies, usually to the long-term detriment of the company as a whole.
But you don’t need to be a statistician or an accountant to make a strong business case based on logic and deduction, and there is no need to be pressured into using the preferred descriptive framework of a book-keeper. The pursuit of the measurement of ROI in training is often a red herring that distracts from the qualitative impacts that our work has on the performance of the business. ROI is typically not the best measure of that, and, after making all of the heroic assumptions and allocations needed to arrive at it, that magic ROI figure may well be a false indicator of impact.
Unfortunately, the indicators that are useful and reasonably accurate are often hard to convert to financial data, so they do not get taken seriously. And, compounding the problem, training managers themselves often ignore these indicators because they are not captured at the course level. Our focus too often is on the quality of courses rather than on the quality of our contribution to the business in total.
We need to widen the focus. While learner satisfaction, test results, and average cost of butts-on-seats are useful metrics, it is only after our learners have returned to work that we can begin to see how effective the learning experience really was. What are some of the indicators that let us know how we are doing? Many of them are produced already, often by the financial people themselves, and tracking them over time gives good insights into where we are doing well and where we might need to pay more attention.
Some of those metrics include:
- Training costs per employee.
- Enrolment rates and attendance rates.
- Delivery modes, plans against actuals.
- Percentage of target group that is “compliant”.
- Time from eligibility to compliance, or to proficiency.
- Percentage of workforce trained in particular skill areas.
- Learning time as percentage of job tenure.
- Availability, penetration, and usage rates of help systems.
- Skill gap analyses tracked over time.
- Productivity (such as, for example, number of new clients per 100 pitches).
- Attrition rates.
There are many, many more. Metrics such as these let us put on the manager’s dashboard indicators of performance in areas such as operational performance, compliance, efficiency, effectiveness, and workforce proficiency, as well as harder to capture dimensions such as motivation and readiness for change. Training departments need to think “outside the course” and come up with ways to derive the right indicators in a way that is inexpensive and unobtrusive.
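A couple of the indicators above can be derived in a few lines from records most organizations already keep. A sketch with invented field names and data:

```python
# Sketch: deriving two dashboard metrics from hypothetical HR/training records.
# Field names and figures are invented for illustration.
from datetime import date

employees = [
    {"id": 1, "trained": ["negotiation"],
     "eligible": date(2005, 1, 10), "compliant": date(2005, 3, 1)},
    {"id": 2, "trained": [],
     "eligible": date(2005, 2, 1), "compliant": None},
    {"id": 3, "trained": ["negotiation"],
     "eligible": date(2005, 1, 5), "compliant": date(2005, 2, 4)},
]

# Percentage of workforce trained in a particular skill area
trained_pct = 100 * sum("negotiation" in e["trained"] for e in employees) / len(employees)

# Average time from eligibility to compliance, ignoring the not-yet-compliant
lags = [(e["compliant"] - e["eligible"]).days for e in employees if e["compliant"]]
avg_lag = sum(lags) / len(lags)

print(f"{trained_pct:.0f}% trained; average {avg_lag:.0f} days to compliance")
# 67% trained; average 40 days to compliance
```

Tracked over time, even crude numbers like these reveal trends that no end-of-course smile sheet can.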
One of my frequent recommendations is that training departments learn something about data collection from their marketing colleagues, and set up the ability to run surveys and focus groups, to investigate learner satisfaction, customer attitudes, job impacts, and manager perceptions. This skill is often absent in training departments, which is a pity because these methods can produce great insights and save money and time. If you build this capacity into your training organization, getting a read on Levels 3 and 4 can become as much a part of your evaluation regimen as gathering smile sheets. You don’t have to interrogate the universe if you can pick a small sample. And you can produce real data and real trends that go down very well in the boardroom.
Original in TrainingZONE Parkin Space column of 22 July 2005
Tuesday, July 26, 2005
Think in terms of bricks, rooms, and houses. Bricks can be interchanged without affecting the harmony of a house design. Rooms cannot. The smaller your learning objects become, the easier it is to slip them in to other uses without creating any major disruption, but the less “meaningful” they are. The larger the objects become, the less re-usable they get, because they become more context-rich. But you get to a point where the size of the object is large enough to be self-fulfilling and truly meaningful, usually at the level of a house, or whole course.
Which is why, in practice, most learning objects are no smaller than a course. A course is not very re-usable, though you may fit it into different curricula, in the same way that universities fit different courses into different degree programs.
You don’t hear much about learning objects and sharable content objects any more, at least not in mainstream training circles. But there is still an enormous effort going on among learning technologists to make this idea more workable. Are these efforts rather like trying to build a better steam engine long after the internal combustion engine has gained popularity?
The original concept of sharable content objects (in a training context) assumed that learning is primarily content-driven, and that learning content can be decomposed into smaller and smaller components that still retain some inherent independent pedagogical value. If your objects are designed to “click” together, like a child’s construction toy, you can create lots of different learning experiences by clicking together the appropriate objects from your repository of already-created content. The argument went that while the initial cost of building a decomposable course might be higher than building a stand-alone course, the ultimate savings derived from reusing content would be significant.
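The click-together idea can be caricatured in a few lines of code: a repository of small, tagged content objects assembled into different courses, with the Maslow lesson reused. The structures here are invented for illustration, not real SCORM packaging:

```python
# Caricature of reusable learning objects: small tagged content units
# composed into different courses. Invented structures, not real SCORM.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContentObject:
    object_id: str
    topic: str
    minutes: int

repository = {
    "maslow": ContentObject("maslow", "Maslow's hierarchy of needs", 15),
    "herzberg": ContentObject("herzberg", "Herzberg's two-factor theory", 15),
    "freud": ContentObject("freud", "Freud's psychodynamics", 20),
}

def assemble(course_name, object_ids):
    """Click objects from the repository together into a course."""
    objs = [repository[oid] for oid in object_ids]
    return {"course": course_name, "objects": objs,
            "minutes": sum(o.minutes for o in objs)}

motivation = assemble("Managing Motivation", ["maslow", "herzberg"])
psychology = assemble("Applied Psychology", ["maslow", "freud"])  # "maslow" reused

print(motivation["minutes"], psychology["minutes"])   # 30 35
```

The sketch also exposes the flaw discussed below: nothing guarantees that the reused Maslow object matches the voice, style, or pedagogy of its new neighbors.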
The historical limitations of learning objects (particularly as manifested in early SCORM) have resulted in dire canned e-learning course design, or have led instructional designers to simply wrap a SCORM interface around whole courses. Today, the “object” is rarely smaller than an entire course. Effectively, interoperability of courses – the ability to run a course on any conformant LMS – has been the primary benefit of SCORM, rather than the reusability of smaller content objects.
The notion of reusable learning objects is in many ways counter to the web ideal of dynamic personalized content, the ISD ideal of internally consistent look, feel and sound to learning flows, and the pedagogical ideal of performance-objective driven custom-built learning experiences.
Re-usability implies that a learning object developed for use in one context can be simply plugged into another context. It also suggests that such an object ages slowly, staying relevant long enough to be reused often enough to justify its reusability. This is more true in some fields than in others. So, for example, you develop an object (say a lesson describing Maslow’s Theory) as part of a management course on motivation. Later, you develop a course on applied psychology and you don’t need to re-create the Maslow lesson because you can simply lift it from the earlier course and drop it into the later course. As the theory itself never changes, the object can be expected to have a long life -- so long as its content is devoid of context, which is typically pretty mercurial.
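The click-together idea can be caricatured in a few lines of code. This is purely illustrative: the repository, the object names, and the `compose_course` helper are all invented for the sketch, and a real sharable content object is an XML-manifested package, not a dictionary entry.

```python
# A toy repository of reusable learning objects, keyed by id.
# (Illustrative only -- real SCORM packages are XML manifests, not dicts.)
repository = {
    "maslow-101": "Lesson: Maslow's hierarchy of needs",
    "herzberg-101": "Lesson: Herzberg's two-factor theory",
    "quiz-motivation": "Quiz: motivation theories",
}

def compose_course(title, object_ids, repo):
    """Assemble a course by 'clicking together' objects from the repository."""
    missing = [oid for oid in object_ids if oid not in repo]
    if missing:
        raise KeyError(f"Objects not in repository: {missing}")
    return {"title": title, "lessons": [repo[oid] for oid in object_ids]}

# The same Maslow object is reused, unchanged, in two different courses.
motivation = compose_course(
    "Motivating Your Team",
    ["maslow-101", "herzberg-101", "quiz-motivation"],
    repository,
)
psychology = compose_course("Applied Psychology", ["maslow-101"], repository)
```

The sketch also shows where the idea strains: the Maslow object reuses cleanly only because it carries no context from the motivation course into the psychology course.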
We have all taken short cuts in the classroom, pulling slides from one session and using them in another, even if they have different layouts or color schemes. That may work better in a classroom setting than online, because there is continuity in the primary medium: the trainer. But online, without the mediation of a live instructor, not only can you end up with all sorts of disconnects in terms of media (different voice, graphic style, look, fonts, pace, interactivity), you are also likely to get disconnects in terms of pedagogical approach. This is avoidable, but only by imposing content standards that are dull and uniform, so every course ends up looking, sounding, and running like every other.
Perhaps the most important obstacle to the success of sharable content objects is the fact that learning is not primarily about content, or about courses. Those who glibly pronounce that “content is king” really irritate me, because it is patently untrue. While content is obviously essential, context and process are more important to learning. But that’s a rant for another day.
Despite all of the reservations and difficulties, the idea of reusability has enduring appeal. But today, as collaborative community-based learning starts to take shape, perhaps we should be re-thinking what a learning object might be. Instead of looking at a learning object as a chunk of easily-connectable content residing on a server, maybe we should be looking at an easily-connectable person residing on a network...
Original in TrainingZONE Parkin Space column of 8 July 2005
I always find it a little disturbing when I come across yet another major corporation that does not have a defined learning evaluation strategy, or that has a strategy which everyone ignores. In fact, I’d say that nine out of 10 companies that I have worked with do not take learning evaluation seriously enough to have formalized policies and procedures in place at a strategic level.
A learning evaluation strategy sets out what the high-level goals of evaluation are, and defines the approaches that a corporation will take to make sure that those goals are attained. Without an evaluation strategy, measuring the impact and effectiveness of training becomes something decided in isolation on an ad-hoc, course-by-course basis.
Without an overall strategy to conform to, instructional designers may decide how and when to measure impact and what the nature of those measures will be, and will use definitions and methodologies that vary from course to course and curriculum to curriculum. When you try to aggregate that evaluation data to get a decent picture of the impact of your overall learning activity, you find that you are adding apples to oranges.
Most companies have done a fairly good job of standardizing Level One evaluations of learner attitudes (smile sheets). Most also collect basic data that let them track activity such as numbers of learners or course days. But these are merely activity-based measures, not performance-based measures. They tell you little about the quality or impact of the training being provided.
Once you start to look at Level Two and up, each course tends to run its own evaluation procedures. In the absence of strategic guidelines or policies, those evaluation procedures can be token, invalid, meaningless, or inadequate – if they exist at all. Even a good instructional designer with a good grasp of evaluation practice may structure measures that, while superb within the context of the specific course, are impossible to integrate into a broader evaluation picture. And the more individual courses require post-training impact measurement, the more irritating it becomes for learners and their managers.
There are many approaches to measuring the impact of a company’s investment in learning that go beyond course-level evaluation. In fact, for the bigger issues, the individual course or individual learner are the least efficient points of measurement. You may decide, for example, that surveying or observing a sample of learners is more efficient than trying to monitor them all; you may decide that a few focus groups give you more actionable feedback than individual tests or questionnaires; you may choose to survey customer attitudes to measure the impact of customer service training, rather than asking supervisors their opinions; or you may opt to select a few quantifiable data points such as sales, number of complaints, production output per person, or staff turnover as key indicators of training success. Your strategy would set out, in broad-brush terms, which of these approaches would be used.
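The sample-versus-census point above can be sketched numerically. A minimal example, with entirely invented figures, of estimating a population-wide post-training score from a sample of 100 learners rather than surveying all 2,000:

```python
import math
import random

def sample_estimate(population, sample_size, seed=0):
    """Estimate a population mean from a random sample,
    with a rough 95% margin of error."""
    random.seed(seed)
    sample = random.sample(population, sample_size)
    mean = sum(sample) / sample_size
    variance = sum((x - mean) ** 2 for x in sample) / (sample_size - 1)
    margin = 1.96 * math.sqrt(variance / sample_size)
    return mean, margin

# Invented data: post-training assessment scores for 2,000 learners.
random.seed(42)
population = [random.gauss(70, 10) for _ in range(2000)]

# Surveying 100 learners instead of all 2,000:
mean, margin = sample_estimate(population, 100)
```

With a sample of 100, the margin of error on the estimated mean comes out around plus or minus two points, which is usually precise enough for a boardroom trend line at a twentieth of the survey cost.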
A learning evaluation strategy is not enough, of course. You have to make sure that all of those involved in training design, implementation, and analysis understand the strategy and are able to implement it in their day-to-day work. I have found that the best tool you can give people is a company-specific “learning evaluation bible” that not only lays out the bigger picture, but also provides common definitions, norms, standards, baselines, and guidelines for developing and applying measurement instruments, and for interpreting the resulting data. (I used to call this a Learning Evaluation Guide, but the acronym was an open invitation for way too many jokes).
This document should be a practical guide, rich in examples and templates, that makes it easy for everyone to conform, at a course, curriculum, or community level. The last thing the bible should be is a four-binder bureaucratic manual that looks like it was produced by an EU subcommittee. Rather it should be more like a set of practical job aids, or the guides that are published to help learner drivers prepare for their test.
Without an evaluation strategy, we are left floundering every time someone asks what our return on investment (ROI) on training is. I agree that calculating ROI is problematic, especially at the individual course level, and is often unnecessary. But if you are spending millions on completely revamping your sales-force training curriculum, you’d better be thinking about how to measure ROI, and build those measurement requirements in up front. You would not invest hundreds of thousands in an LMS unless you were convinced that the investment would bring an acceptable return, and you naturally will want to measure how well the investment performs against plan.
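The underlying ROI arithmetic is trivial; the hard part, as the column notes, is isolating the benefit figure. A hedged sketch, with invented numbers:

```python
def training_roi(benefit, cost):
    """Return ROI as a fraction: net benefit over cost."""
    return (benefit - cost) / cost

# Invented example: a $2m sales-curriculum revamp credited (after the
# hard work of isolating other factors) with $2.6m in additional margin
# over the measurement period.
roi = training_roi(benefit=2_600_000, cost=2_000_000)
percent = roi * 100  # a 30% return
```

The formula is the easy line; an evaluation strategy exists precisely so that the `benefit` input is defined and collected consistently rather than improvised per course.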
At a departmental level, an evaluation strategy helps us to answer that awkward ROI question, not with defensive rationalizations, but with coherent, consistent evidence that supports the contention that our training investment is indeed achieving its desired results.
Original in TrainingZONE Parkin Space column of 8 July 2005
The recent admission that 40 million customers of all the major credit card companies have had their data hacked is the latest in a mounting wave of failures to secure customer privacy. In the past few months alone, the confidential information of tens of millions of people has been “let go” by household-name companies in banking, finance, insurance, education, and retailing.
The criminals get the bad press. But these are corporate outrages, because in all cases they could have been avoided had the companies to whom customers entrust their data not been so inept, or, worse, indifferent, about their responsibilities. And they continue to get away with it because their customers themselves are ignorant or indifferent. Or, in the case of credit and debit cards, customers have no choice – if all card companies are equally bad, and all continue to have the same data intermediaries in common, you either accept the risks or try to live plastic-free.
What does this have to do with training?
First, those who are providing training in any subject, either in-house or as a vendor, online or in class, need to review their own procedures and policies for securing the personal data of their learners.
Whether they are paying customers or not, with the help of learning management systems we are gathering more and more intimate details about each learner. Those details need to be secured, not just from outside hackers but from any internal management and training staff who do not explicitly have a right to access. And learners need to know that they are secure.
You have a significant moral and motivational obligation to guard the privacy of your learners. Once you post your privacy statement, you are legally bound to adhere to it. When learners think that potentially every keystroke, decision, response, and test result can be tracked, they get nervous. When they think that their managers have access to those details, they worry. And if they think that their data may become publicly available, they may rebel.
There are many simple things you can do to make learners more comfortable. Among them:
- Tell them what information is collected and what it is used for. Tell them who can access it and under what circumstances. Tell them how it is secured both in databases and in transit.
- In online courses, let learners pick their own user ID and password, rather than automatically allocating them their name or e-mail address. Unless it contravenes a learning objective, give them an option in chat rooms or threaded discussions to use anonymity.
- Overtly make use of encryption where learners have to provide any personal information.
- Destroy learner data as soon as it is no longer needed. Archive needed learner data offline or securely behind firewalls. Lock up your back-up tapes.
- And let learners view their own learner record at any time, so they can see what data is actually available to authorized parties.
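On the password point above, a minimal sketch of storing learner passwords as salted hashes rather than plain text, using only the Python standard library. The parameter choices here are illustrative, not a security recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted hash for storage; never store the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=100_000):
    """Re-derive and compare in constant time."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

# Learner picks their own secret; only salt and digest are stored.
salt, digest = hash_password("learner-chosen-secret")
```

The point for a training system is the same as for any system holding personal data: even if the learner database leaks, the passwords themselves are not in it.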
The second aspect to privacy is this: there is an urgent need for training in security and privacy for all personnel in any business that accepts and uses customer information. That’s just about every business. (In my view, such training should be government mandated, but then I’m a vendor).
Most security breaches are not the result of hi-tech brilliance on the part of the thieves, but of human weakness on the part of company employees. All employees need to understand the risks and learn the operating habits that mitigate them. They need to appreciate that technology is not in itself an adequate protection, and must be trained to develop the “street smarts” that will help them avoid the common behavior pitfalls so often exploited by villains.
Managers need to get their heads around the policies and procedures that will protect their customers, and must regard these with as much earnestness as those that protect their company.
Privacy and data security are not an IT-only responsibility, nor are they issues that you can deal with after the fact. Training has an important role to play. Get it right at the planning stage and you will be fine. Get it wrong, and you could be in big trouble.
Original in TrainingZONE Parkin Space column of 24 June 2005
Tuesday, June 14, 2005
For several years now, Training Departments have been transfixed by the evolving internet in the same way that dinosaurs were probably awe-struck by the approaching comet. So what does the future hold? I’m happy to report that learning will thrive, but trainers will have to merge back into operational roles. Oh, and Training Departments are dead, at least as we know them. As are Learning Management Systems and any other relics of centralized distribution of learning. Learning that is informal, collaborative, contextual, real-time, and peer-generated, will be the mode of tomorrow.
It seems counter-intuitive that military types whose culture is defined by command and control hierarchies would advocate devolution of learning to the swab on the deck-plates or the grunt in the foxhole, but that was the gist of what was being said. Admittedly, it was not being said by the JAG look-alikes or their entourages, but by the civilian gurus who write their white papers for them. And devolution of learning does not necessarily mean relinquishing control – in fact there are some very scary big-brother systems being deployed that (allegedly) will tell anyone with access pretty much what any individual sailor anywhere in the world had for breakfast last Tuesday and, to five decimal places, what his or her competency rating is on any given skill. It is hard to reconcile what they are saying with what they are doing, until you realize that, because these systems are so vast, they take a long time to build and deploy. So at any point in time the military are rolling out systems and policies that have long since been abandoned for something new – which may not see the light of day for a decade.
I was mainly interested in hearing what folks like Jay Cross, Clark Aldrich, Harvey Singh and Ben Watson had to say about workflow learning, collaboration, and simulations. However, in amongst their sessions was a real eye-opener from a VP at IBM. IBM used to be a blue-suit red-tie operation as monolithic as a bank, but it has been doing a lot of shape-shifting in recent years. These days any organization that is unwilling or unable to do that is unlikely to be around very long. It’s Darwinian – those who can adapt most readily are most likely to survive in times of rapid change. IBM’s consulting wing, adrenalised a couple of years ago by its acquisition of PricewaterhouseCoopers Consulting, is doing what big consulting firms rarely do – they are advocating unique solutions that they don’t already have parked in a truck around the corner.
Here’s a quick version of the IBM line on “embedded” or workflow learning:
The most profound shift that will take place in training over the next three years is a movement away from traditional, formal, course-based learning (classroom or online) and towards clever integration into the workflow of learning-enabling tools like Instant Messaging and informal collaboration processes. As we move learning from its “separate service” role to a more integrated coal-face role, one of the biggest obstacles is the political question of who owns it. The other is the need for a deeply rooted culture of collaboration throughout the organization.
A simple example of workflow learning in action: Tom in Finance gets an urgent request to authorize foreign travel funds for an executive. He learned how to do that in a training course last year, but has never needed to do it in practice, so he’s lost. The help system, typically, doesn’t help. The FAQ gives no guidance either. So he sends out a broadcast Instant Message to a small group of SMEs and experienced practitioners asking for help. So far this is not a lot different from “prairie dogging” – popping your head up above your cube divider and yelling “Does anyone know how to…” But here is where it gets interesting. Jill, an experienced practitioner in another city, responds to the message. She remotely takes control of Tom’s computer and talks to him as he watches her go through the steps on his screen. She identifies that the help system, the FAQ, and possibly the original training are inadequate, and updates the FAQ in wiki-like fashion. Then she identifies a group of Tom’s peers who might benefit from knowing what Tom now knows, and sends them an announcement of a ten-minute webinar for later that week. During the webinar, she records the session, and saves it to the system where those who could not make it, or those who may encounter the problem in the future, can easily find and watch it. Then she notifies those responsible for basic training, and those responsible for the help system, that they might need to pay attention to the issue. Tom, in the meantime, evaluates the help he has received, and his ratings and comments get added to Jill’s profile for reference by future aid-seekers, and her management.
The technology is not complex, or even expensive. Most people have it on their computers already. Aspects of this are widely used already in e-commerce and e-customer support. Individuals already learn this way intuitively. What is hard is achieving the mindset and the culture that allows and encourages this to happen across an organization.
There is nothing revolutionary in the IBM vision. If you have followed those who advocate informal learning and collaborative learning (and indeed many of my own rants), you will realize that the ideas are not new. But, for me, the amazing thing was to hear them coming from IBM. If Big Blue is advocating this approach, and is actively setting about trying to get it to work in its clients’ cultures as well as its own, then there is something serious going on. Workflow learning has moved from the drawing board to the boardroom. They say that in theory there is no difference between theory and practice, but in practice there is. IBM is taking its theories on the road, and, in practice, is being taken seriously.
Original in TrainingZONE Parkin Space column of 10 June 2005
Monday, June 13, 2005
Accenture recently surveyed several hundred employees who qualified as “approaching retirement” to find out what, if anything, their employers were going to do about retaining their knowledge before they left. (I personally found the sample demographics a little alarming: since when are people between the ages of 40 and 50 approaching retirement, unless of course they are in US government jobs?)
It seems that companies are doing very little to capture that knowledge. Could it be that companies simply don’t value what their “ageing” employees contribute? There’s plenty of anecdotal evidence to support that view. If faced with recruiting or promoting either an inexperienced 25-year-old MBA or a 50-year-old veteran, the MBA is always the odds-on favourite – even in a country where overt ageism is illegal.
What opportunities are offered to imminently retiring employees to pass on their wisdom? According to Accenture, one company in four makes no effort whatsoever to capture the workplace knowledge of retirees, and a further 16% of companies expect retirees to have an informal chat with colleagues before leaving. That’s more than 40% of companies that have no formal processes for retaining expertise.
When you think of the money and time that has been put into training and developing the expertise that is apparently seen as disposable, you have to wonder how serious companies are about the value they place on “human capital”. After all, what is human capital if not expertise? If it is disposable, it has no significant value. And if the expertise of your most experienced people has no significant value, why on earth are you wasting your time bandying about training ROI calculations? At the end of the day, the return on all that investment, in Kirkpatrick Level Four terms, is assumed to be not worth the trouble of securing. Or so your accountants will tell you.
Now, you can look at the other side of the coin and say that more than half of all companies do make an effort to hang onto that expertise. In fact 20% of companies claim to put their retirees through a knowledge transfer process that lasts several months. But I suspect that this is only in exceptional cases for particularly high-ranking employees.
It may be that the failure to make an effort to preserve workplace knowledge is not because such knowledge is undervalued but because the extent of the problem is not realised. In that case, how do we get senior management to sit up and take notice? And, once they do appreciate the scale of the impending problem, what can be done to move awareness to effective action? Perhaps this responsibility falls more squarely on the shoulders of HR management and human capital strategists, rather than on trainers. Or perhaps it is the responsibility of those charged with knowledge management.
Knowledge management (KM) has had a bit of a rough ride over the past 10 years, having made many of the same mistakes that e-learning made. The initial focus was on building technology-based tools to extract, retain, and retrieve what was in the heads of employees, and on explicit knowledge rather than tacit knowledge. Nowadays tacit knowledge is recognised as being more relevant to performance, even if it is harder to capture. KM people are working more with informal approaches such as story-telling to capture this less tangibly expressed expertise.
The problem is that the less formal our processes become – in both training and knowledge capture – the less easy they are to sell to the corporate bean-counters. And the less tangible our activities are to those who like dealing with hard numbers, the less value they are assumed to have.
I sometimes think that accountants are the biggest obstacles to progress in corporate learning and knowledge management. They love structure and hierarchy and abhor ambiguity and fuzziness. If they can’t measure it, it has no value. How 20th century! We have all run more than our share of the mandatory “Finance for non-financial managers” courses. Perhaps it is time to lobby for some mandatory “knowledge for non-knowledge managers” courses…