Tuesday, July 26, 2005

The Learning Object Paradox

Parkin's Learning Object Paradox (PLOP) states: "The more reusable a learning object becomes, the less usable it is." This is because the usability of a learning object varies in direct proportion to its size, while its reusability varies in inverse proportion to its size.

Think in terms of bricks, rooms, and houses. Bricks can be interchanged without affecting the harmony of a house design. Rooms cannot. The smaller your learning objects become, the easier it is to slip them into other uses without creating any major disruption, but the less “meaningful” they are. The larger the objects become, the less reusable they get, because they become more context-rich. Eventually, though, you reach a point where the object is large enough to be self-contained and truly meaningful, usually at the level of a house, or a whole course.

Which is why, in practice, most learning objects are no smaller than a course. A course is not very reusable, though you may fit it into different curricula, in the same way that universities fit different courses into different degree programs.

You don’t hear much about learning objects and sharable content objects any more, at least not in mainstream training circles. But there is still an enormous effort going on among learning technologists to make this idea more workable. Are these efforts rather like trying to build a better steam engine long after the internal combustion engine has gained popularity?

The original concept of sharable content objects (in a training context) assumed that learning is primarily content-driven, and that learning content can be decomposed into smaller and smaller components that still retain some inherent independent pedagogical value. If your objects are designed to “click” together, like a child’s construction toy, you can create lots of different learning experiences by clicking together the appropriate objects from your repository of already-created content. The argument went that while the initial cost of building a decomposable course might be higher than building a stand-alone course, the ultimate savings derived from reusing content would be significant.
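
To make the “click together” idea concrete, here is a purely illustrative TypeScript sketch. Everything in it is hypothetical (the repository, the metadata fields, and the assembleCourse function are invented for illustration, not drawn from any real standard), but it shows the intended economics: once objects carry enough descriptive metadata, a new “course” is just a selection and sequence of existing objects.

```typescript
// Purely illustrative sketch of the "click-together" idea: content objects
// carry metadata, and a course is assembled by selecting and sequencing them.

interface LearningObject {
  id: string;
  title: string;
  objective: string;        // the independent pedagogical value the object claims to carry
  durationMinutes: number;
  keywords: string[];
}

// The repository of already-created content (all entries invented).
const repository: LearningObject[] = [
  { id: "lo-101", title: "Maslow's hierarchy of needs", objective: "describe the hierarchy", durationMinutes: 15, keywords: ["motivation", "psychology"] },
  { id: "lo-102", title: "Herzberg's two-factor theory", objective: "contrast hygiene factors and motivators", durationMinutes: 20, keywords: ["motivation"] },
  { id: "lo-205", title: "Running a performance review", objective: "conduct a review meeting", durationMinutes: 30, keywords: ["management"] },
];

// "Clicking together" a course is just selecting and ordering objects
// from the repository until a time budget is filled.
function assembleCourse(keyword: string, maxMinutes: number): LearningObject[] {
  const picked: LearningObject[] = [];
  let total = 0;
  for (const lo of repository.filter(o => o.keywords.includes(keyword))) {
    if (total + lo.durationMinutes > maxMinutes) break;
    picked.push(lo);
    total += lo.durationMinutes;
  }
  return picked;
}

console.log(assembleCourse("motivation", 40).map(lo => lo.title));
```

The hard part, as the rest of this piece argues, is not the selection logic; it is ensuring that each object still carries independent pedagogical value once it is lifted out of its original context.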

The historical limitations of learning objects (particularly as manifested in early SCORM) have resulted in dire canned e-learning course design, or have led instructional designers to simply wrap a SCORM interface around whole courses. Today, the “object” is rarely smaller than an entire course. Effectively, interoperability of courses – the ability to run a course on any conformant LMS – has been the primary benefit of SCORM, rather than the reusability of smaller content objects.
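
To see how thin that wrapper usually is, here is a minimal sketch of the SCORM 1.2 run-time calls a course launch page typically makes to a conformant LMS. The LMSInitialize, LMSSetValue, LMSCommit, and LMSFinish calls and the cmi.core.lesson_status element come from the SCORM 1.2 run-time specification; the surrounding structure, including the findApi helper, is an illustrative assumption.

```typescript
// Minimal sketch of a whole-course "SCORM wrapper" using SCORM 1.2 run-time calls.
// The LMS exposes an API object on the launching window or one of its parent frames.

interface Scorm12Api {
  LMSInitialize(arg: string): string;   // SCORM requires the argument to be ""
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
}

// Walk up the frame hierarchy to find the API object the LMS provides
// (simplified; a production wrapper would also check window.opener).
function findApi(win: any): Scorm12Api | null {
  let w = win;
  for (let i = 0; i < 10 && w; i++) {
    if (w.API) return w.API as Scorm12Api;
    if (w === w.parent) break;
    w = w.parent;
  }
  return null;
}

const api = findApi(window);
if (api) {
  api.LMSInitialize("");
  // The whole course runs as one opaque object; all the LMS ever hears
  // about is a completion status (and perhaps a score).
  api.LMSSetValue("cmi.core.lesson_status", "completed");
  api.LMSCommit("");
  api.LMSFinish("");
}
```

Notice how little the LMS learns: the entire course is one opaque object, and interoperability, not reusability, is what the wrapper delivers.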

The notion of reusable learning objects is in many ways counter to the web ideal of dynamic, personalized content, the ISD ideal of an internally consistent look, feel, and sound across a learning flow, and the pedagogical ideal of custom-built, performance-objective-driven learning experiences.

Reusability implies that a learning object developed for use in one context can simply be plugged into another context. It also suggests that such an object ages slowly, staying relevant long enough to be reused often enough to justify the cost of making it reusable. This is more true in some fields than in others. So, for example, you develop an object (say, a lesson describing Maslow’s hierarchy of needs) as part of a management course on motivation. Later, you develop a course on applied psychology, and you don’t need to re-create the Maslow lesson because you can simply lift it from the earlier course and drop it into the later one. As the theory itself never changes, the object can be expected to have a long life, so long as its content is kept devoid of context, which tends to be mercurial.

We have all taken shortcuts in the classroom, pulling slides from one session and using them in another, even if they have different layouts or color schemes. That may work better in a classroom setting than online, because there is continuity in the primary medium: the trainer. But online, without the mediation of a live instructor, not only can you end up with all sorts of disconnects in terms of media (different voice, graphic style, look, fonts, pace, interactivity), you are also likely to get disconnects in terms of pedagogical approach. This is avoidable, but only by imposing content standards so rigid and uniform that every course ends up looking, sounding, and running like every other.

Perhaps the most important obstacle to the success of sharable content objects is the fact that learning is not primarily about content, or about courses. Those who glibly pronounce that “content is king” really irritate me, because it is patently untrue. While content is obviously essential, context and process are more important to learning. But that’s a rant for another day.

Despite all of the reservations and difficulties, the idea of reusability has enduring appeal. But today, as collaborative community-based learning starts to take shape, perhaps we should be re-thinking what a learning object might be. Instead of looking at a learning object as a chunk of easily-connectable content residing on a server, maybe we should be looking at an easily-connectable person residing on a network...


Original in TrainingZONE Parkin Space column of 8 July 2005

Learning evaluation strategies take the mess out of measurement

Why do we spend so much money, and a substantial amount of time, on training? What is the point? Do we just do it because we have always done it, or because everyone else is doing it? Is it because we just like our people to be smarter? Or are we expecting that training will somehow help our company perform better? If it is the latter, how do we know what impact we are having?

I always find it a little disturbing when I come across yet another major corporation that does not have a defined learning evaluation strategy, or that has a strategy which everyone ignores. In fact, I’d say that nine out of 10 companies that I have worked with do not take learning evaluation seriously enough to have formalized policies and procedures in place at a strategic level.

A learning evaluation strategy sets out what the high-level goals of evaluation are, and defines the approaches that a corporation will take to make sure that those goals are attained. Without an evaluation strategy, measuring the impact and effectiveness of training becomes something decided in isolation on an ad-hoc, course-by-course basis.

Without an overall strategy to conform to, instructional designers may decide how and when to measure impact and what the nature of those measures will be, and will use definitions and methodologies that vary from course to course and curriculum to curriculum. When you try to aggregate that evaluation data to get a decent picture of the impact of your overall learning activity, you find that you are adding apples to oranges.

Most companies have done a fairly good job of standardizing Level One evaluations of learner attitudes (smile sheets). Most also collect basic data that let them track activity such as numbers of learners or course days. But these are merely activity-based measures, not performance-based measures. They tell you little about the quality or impact of the training being provided.

Once you start to look at Level Two and up, each course tends to run its own evaluation procedures. In the absence of strategic guidelines or policies, those evaluation procedures can be token, invalid, meaningless, or inadequate – if they exist at all. Even a good instructional designer with a good grasp of evaluation practice may structure measures that, while superb within the context of the specific course, are impossible to integrate into a broader evaluation picture. And the more individual courses require post-training impact measurement, the more irritating it becomes for learners and their managers.

There are many approaches to measuring the impact of a company’s investment in learning that go beyond course-level evaluation. In fact, for the bigger issues, the individual course or the individual learner is the least efficient point of measurement. You may decide, for example, that surveying or observing a sample of learners is more efficient than trying to monitor them all; you may decide that a few focus groups give you more actionable feedback than individual tests or questionnaires; you may choose to survey customer attitudes to measure the impact of customer service training, rather than asking supervisors for their opinions; or you may opt to select a few quantifiable data points such as sales, number of complaints, production output per person, or staff turnover as key indicators of training success. Your strategy would set out, in broad-brush terms, which of these approaches would be used.
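
As a toy illustration of the last approach, here is a hypothetical TypeScript sketch that tracks a single quantifiable indicator, monthly sales per rep, before and after a training rollout. All figures and names are invented; the point is only that a strategy which defines the indicator and its baseline up front makes this comparison possible at all.

```typescript
// Hypothetical sketch: comparing one quantifiable indicator before and after
// a training rollout. All numbers and names are invented for illustration.

const monthlySalesPerRep = {
  before: [41_000, 38_500, 44_200, 40_100],  // four months pre-training (baseline)
  after:  [45_300, 47_800, 46_900, 48_200],  // four months post-training
};

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

const baseline = mean(monthlySalesPerRep.before);
const postTraining = mean(monthlySalesPerRep.after);
const upliftPercent = ((postTraining - baseline) / baseline) * 100;

console.log(`Baseline: ${baseline.toFixed(0)}, post-training: ${postTraining.toFixed(0)}`);
console.log(`Indicator uplift: ${upliftPercent.toFixed(1)}%`);
// An uplift is only suggestive: other factors (seasonality, pricing, market
// changes) need to be controlled for, or at least acknowledged.
```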

A learning evaluation strategy is not enough, of course. You have to make sure that all of those involved in training design, implementation, and analysis understand the strategy and are able to implement it in their day-to-day work. I have found that the best tool you can give people is a company-specific “learning evaluation bible” that not only lays out the bigger picture, but also provides common definitions, norms, standards, baselines, and guidelines for developing and applying measurement instruments, and for interpreting the resulting data. (I used to call this a Learning Evaluation Guide, but the acronym was an open invitation for way too many jokes).

This document should be a practical guide, rich in examples and templates, that makes it easy for everyone to conform, at a course, curriculum, or community level. The last thing the bible should be is a four-binder bureaucratic manual that looks like it was produced by an EU subcommittee. Rather it should be more like a set of practical job aids, or the guides that are published to help learner drivers prepare for their test.

Without an evaluation strategy, we are left floundering every time someone asks what our return on investment (ROI) on training is. I agree that calculating ROI is problematic, especially at the individual course level, and is often unnecessary. But if you are spending millions on completely revamping your sales-force training curriculum, you’d better be thinking about how to measure ROI, and build those measurement requirements in up front. You would not invest hundreds of thousands in an LMS unless you were convinced that the investment would bring an acceptable return, and you naturally will want to measure how well the investment performs against plan.
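
For the sales-force example, the ROI arithmetic itself is simple; the hard part is attributing the benefit, which is exactly why the measurement requirements have to be designed in up front. A minimal sketch, with every figure invented for illustration:

```typescript
// Hypothetical ROI sketch; every figure below is invented for illustration.
const programmeCost = 2_000_000;        // total cost of revamping the curriculum
const attributableBenefit = 2_600_000;  // benefit attributed to the training, e.g. margin on incremental sales

// Classic ROI expression: net benefit over cost, expressed as a percentage.
const roiPercent = ((attributableBenefit - programmeCost) / programmeCost) * 100;

console.log(`ROI: ${roiPercent.toFixed(0)}%`);  // => ROI: 30%
```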

At a departmental level, an evaluation strategy helps us to answer that awkward ROI question, not with defensive rationalizations, but with coherent, consistent evidence that supports the contention that our training investment is indeed achieving its desired results.


Original in TrainingZONE Parkin Space column of 8 July 2005

Privacy is every trainer's business

Some years ago I spoke on the topic of managing privacy in e-learning at a large learning conference. Only three people showed up. Two of them were expecting a session on how to keep distractions away from employees trying to e-learn in a busy office environment. I’m not sure that the awareness or concern of trainers has been raised much since then, but it needs to be.

The recent admission that 40 million customers of all the major credit card companies have had their data hacked is the latest in a mounting wave of failures to secure customer privacy. In the past few months alone, the confidential information of tens of millions of people has been “let go” by household-name companies in banking, finance, insurance, education, and retailing.

The criminals get the bad press. But these are corporate outrages, because in all cases they could have been avoided had the companies to whom customers entrust their data not been so inept about, or worse, indifferent to, their responsibilities. And they continue to get away with it because their customers themselves are ignorant or indifferent. Or, in the case of credit and debit cards, customers have no choice – if all card companies are equally bad, and all continue to have the same data intermediaries in common, you either accept the risks or try to live plastic-free.

What does this have to do with training?

First, those who are providing training in any subject, either in-house or as a vendor, online or in class, need to review their own procedures and policies for securing the personal data of their learners.

Whether they are paying customers or not, with the help of learning management systems we are gathering more and more intimate details about each learner. Those details need to be secured, not just from outside hackers but from any internal management and training staff who do not explicitly have a right to access. And learners need to know that they are secure.

You have a significant moral and motivational obligation to guard the privacy of your learners. Once you post your privacy statement, you are legally bound to adhere to it. When learners think that potentially every keystroke, decision, response, and test result can be tracked, they get nervous. When they think that their managers have access to those details, they worry. And if they think that their data may become publicly available, they may rebel.

There are many simple things you can do to make learners more comfortable. Among them:
  • Post prominently a well-formulated privacy policy. For each course, restate this policy and provide any specific elaborations or differences for that course. Have each learner accept that policy as part of course registration.
  • Tell them what information is collected and what it is used for. Tell them who can access it and under what circumstances. Tell them how it is secured both in databases and in transit.
  • If you change your privacy policy, let every current and past learner know about the change and its implications.
  • In online courses, let learners pick their own user ID and password, rather than automatically allocating them their name or e-mail address. Unless it contravenes a learning objective, give them the option to remain anonymous in chat rooms or threaded discussions.
  • If you use cookies, use one-time, self-terminating session cookies so you are not placing trackable cookies on their PC (see the sketch after this list).
  • Make overt use of encryption wherever learners have to provide any personal information.
  • Destroy learner data as soon as it is no longer needed. Archive needed learner data offline or securely behind firewalls. Lock up your back-up tapes.
  • And let learners view their own learner record at any time, so they can see what data is actually available to authorized parties.
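
Two of these points, the self-terminating session cookie and the overt use of encryption, translate directly into configuration. Here is a minimal sketch assuming a Node/Express setup; the framework, route, and cookie name are illustrative assumptions, not a prescription.

```typescript
// Illustrative sketch only: an assumed Express app issuing a session-only cookie.
import express from "express";
import { randomUUID } from "crypto";

const app = express();

app.post("/course/login", (req, res) => {
  const sessionId = randomUUID(); // opaque ID, not the learner's name or e-mail address

  // No maxAge or expires option: the cookie lives only for the browser session,
  // so no trackable cookie is left behind on the learner's PC.
  res.cookie("session", sessionId, {
    httpOnly: true,    // not readable by page scripts
    secure: true,      // only ever sent over HTTPS, i.e. encrypted in transit
    sameSite: "strict",
  });

  res.send("Registered for course");
});

// Assumes the app is served over HTTPS (for example, behind a TLS-terminating proxy).
app.listen(3000);
```

The essential points are the things that are absent: no expiry date on the cookie, no personal data inside it, and nothing sent in the clear.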

The second aspect to privacy is this: there is an urgent need for training in security and privacy for all personnel in any business that accepts and uses customer information. That’s just about every business. (In my view, such training should be government mandated, but then I’m a vendor).

Most security breaches are not the result of hi-tech brilliance on the part of the thieves, but of human weakness on the part of company employees. All employees need to understand the risks and learn the operating habits that mitigate them. They need to appreciate that technology is not in itself an adequate protection, and must be trained to develop the “street smarts” that will help them avoid the common behavior pitfalls so often exploited by villains.

Managers need to get their heads around the policies and procedures that will protect their customers, and must regard these with as much earnestness as those that protect their company.

Privacy and data security are not an IT-only responsibility, nor are they issues that you can deal with after the fact. Training has an important role to play. Get it right at the planning stage and you will be fine. Get it wrong, and you could be in big trouble.


Original in TrainingZONE Parkin Space column of 24 June 2005