Friday, September 30, 2005

Learning from vaporware: how small is the future?

Small is the new big. Nanotechnology is the next big thing. Micro-payment transactions proliferate. From the business-card-sized iPod Nano to the 100-Minute Bible, everything is being reduced to smaller or faster objects of consumption. This should be familiar to trainers, who for years have been under pressure to reduce course length, cut times to competency, and walk on water. But some things can’t get smaller: you can shrink a Wurlitzer jukebox to something you could slip in your wallet, but you can’t make the songs it plays any shorter.

Can traditional learning experiences be broken down and served up in nano-chunks without losing effectiveness? Or might they actually be enhanced? Unlike music, where the song is the focus, learning is all about impact on performance – the ‘course’ itself doesn’t matter. If we are to take custom-tailoring of learning experiences to heart, then the more granular our solutions, the more accurately we can fit each learner’s individual needs.

Smallness is one of the key characteristics of the content of Web 2.0 and, in turn, of what is becoming known as “e-learning 2.0”. Big, solidified chunks of content are simply not easy to search within or combine. With sandstone blocks you can build pyramids; with sand you can build anything from beaches to hourglasses.

Perhaps the greatest shortcoming of the early web, and of the applications it has enabled so far, is that it has primarily been about recording, organizing, and making accessible "stuff" from the past. There's great value in that. But the necessary emphasis on the here-and-now, and on the future, is rapidly coming into focus. It's not content, or even context, but process that gets us where we are going. We are not what we have done, but what we are trying to do. All of the diverse components of e-learning 2.0 aim to fill the vast white spaces between the legacy content piles with dynamic processes that can instantly mine the relevant diamonds from those piles and pull them together, in real time, into something unique and immediately useful. Some of those processes are lively discussion, synergistic collaboration, spontaneous project work, and nano-customization. Just as micro-transactions are starting to make e-commerce really different from traditional commerce, micro-learning experiences will make e-learning 2.0 really different from Model-T e-learning. Or that’s the theory.

Smallness is increasingly important in all data flows, and learning is simply another kind of data flow. If learning is water, old-school SCORM learning objects are ice cubes: uniform, predictable, and transportable, but they melt and lose their usefulness rapidly. What we really need to do is vaporize the water. Knowledge vapor is simply learning liberated, in its smallest possible components – unlike learning objects, if you do not contain it, it disseminates itself far and wide, except where the circumstances are created to condense it back into liquid, or ice, again. The best technology available right now for vaporizing and liberating learning, and for finding, filtering, and condensing it as needed, is the human mind. Of course lots of cool web technologies are emerging to help facilitate that process (e-mail was probably the first; blogs and wikis are getting there, as are communities of practice; mobile broadband helps). But technology is not the real key to the success of e-learning 2.0. What connects all our small pieces of learning is not a technology, but our humanity.

As far as jargon goes, I must confess to having major objections to the term "2.0" as applied to the web and to e-learning. It suggests a formal release of a new beta-tested version under some kind of planned production process. That's not the way things work any more. The evolution of technology and our ability and willingness to use it creatively have outpaced our ability to manage it, at least in any traditional sense of the word "manage." Trying to "manage" internet-enabled progress is increasingly delusional, like King Canute trying to command the tide not to come in. Agility, opportunism, plasticity, and instinct need to replace outmoded notions of structure, hierarchy, and traditional planning and financial controls, not to mention the sacred cow of centralized "managed" corporate learning.

But since (largely thanks to stock markets, tax authorities, and standardized accounting practices) our current economies rest on those rotting bureaucratic timbers, the bigger corporations may not be able to change course, and take the right actions, in time to stay relevant and save themselves.

Google rose from nothing to become the new Microsoft so rapidly it looked like a conjuring trick. China makes 80% of everything sold by Wal-Mart, the world’s largest retailer, a transformation that took less than five years. Before the ink was dry on most American e-learning IPOs, Indian companies were eating their lunch. It may be only months before the next new megacorp bursts from the web and makes even Google look tired and outdated. What role will training play in the success, or failure, of those endeavors? Can you train an organization to be agile, instinctive, anticipatory, and adaptive? Or can we merely work to remove the containment of knowledge and facilitate the vaporization of learning?

Best practices in questionnaire design

Recently, so many people have been asking me to review their questionnaires and surveys that I thought I’d update a document I first created several years ago, setting out some essential best practices for creating good questionnaires. While written for training evaluation, the guide applies to surveys of any kind.

1. Ask: “Why are we doing this?”

  • What do we need to know?
  • Why do we need to know it?
  • What do we hope to do when we find out?
  • What are the objectives of the survey?

2. Ask: “What are we measuring?”
In training evaluation, what you measure can be influenced by the learning objectives of the course or curriculum you are evaluating:

  • Knowledge
  • Skills
  • Attitudes
  • Intentions
  • Behaviours
  • Performance
  • Perceptions of any of the above

Your questions, and possibly your survey methods, will differ accordingly.

3. Be aware of respondent limitations.

  • Where possible, pilot your questionnaire with a sub-group of your target audience.
  • The complexity of your questionnaire and its language should take into account the age, education, competence, culture, and language abilities of respondents.

4. Guarantee anonymity or confidentiality.

  • Confidentiality lets you follow up with non-responders and match pre- and post-studies.
  • Confidentiality must be guaranteed within a stated policy.
  • Anonymity prevents you from doing follow-ups or pre- and post-studies.

5. Select a data collection method that is appropriate.
Consider the speed and timing of your study, the complexity and nature of what you are measuring, and the willingness of respondents to make time for you. Options:

  • E-mail – fast, inexpensive, not anonymous, requires that all respondents have e-mail.
  • Telephone – time-consuming, not anonymous, may require skill, has to be short.
  • Face-to-face interview – slow, expensive, requires skill, best for small samples, qualitative studies.
  • Web-based – fast, inexpensive (if you use services like Zoomerang), can be anonymous, best for large surveys.

6. Write a compelling cover note.
Where appropriate, introduce your questionnaire with a brief but compelling cover note that clarifies:

  • The purpose of the study and why it is worth giving time to.
  • The sponsor or authority behind it.
  • Why you value the respondent’s input.
  • The confidentiality or anonymity of the study.
  • The deadline for completion.
  • How to get clarification if necessary.
  • A personal “thank you” for participating.
  • The signature or e-mail signature of the survey manager (or, ideally, of the sponsor).
  • If sending an e-mail, have it come from someone in authority who will be recognised, use a strong subject line that cannot easily be ignored, and time it to arrive early in the week.

7. Explain how to return responses.
If not obvious, make it clear how and by when responses must be returned.

8. Put a heading on the questionnaire.
State simply what the purpose is, what the study is about, and who is running it.

9. Keep it short.

  • State how long completion should take, and make sure it takes no longer.
  • Make questionnaires as brief as possible within the time and attention constraints of your respondents (personal interviews can go longer than self-completion studies).
  • Avoid asking questions that deviate from your survey purpose.
  • Avoid nice-to-know questions that will not lead to actionable data.

10. Use logical structure.

  • Group questions by topic.
  • Grouping questions by type can get boring and cause respondents to skim through.
  • Number every question.
  • Where possible, in web-based surveys put all questions on one screen, or allow respondents to skip ahead and backtrack.

11. Start with engaging questions.
Many questionnaires are abandoned after the respondent answers the first few questions.

  • Try to make the first questions non-intimidating, easy, and engaging, to pull the respondent into the body of the piece.
  • Try to start with an open question that calls for a very short answer, and ties in to the purpose of the questionnaire.

12. Explain what to do.
Provide simple instructions, if not obvious, on how to complete a section or how to answer questions (circle the number, put a check mark in the box, click the button, etc.).

13. Use simple language.

  • Avoid buzz words and acronyms.
  • Use simple sentences to avoid ambiguity or confusion.
  • If necessary, provide definitions and context for a question.

14. Place important questions at the beginning.

  • If a question requires thought or should not be hurried, put it at the beginning. Respondents often rush through later questions.
  • Leave non-critical or off-topic questions, such as demographics, to the end.

15. Select scales for responses.

  • Keep response options simple.
  • Use scales that provide useable granularity.
  • Make response options meaningful to respondents.
  • Make it obvious if open-ended responses should be brief or substantial by using an appropriate answer-box size.

16. Fine-tune questions and answer options.

  • Keep response options consistent where possible – don’t use a 5-point scale in one question and a 7-point scale in the next unless absolutely necessary; don’t put negative options on the left in one question and on the right in another.
  • Be precise and specific – avoid words that have fuzzy meanings (“rarely” or “often” or “recently”).
  • Do not overlap response options (use 11-20 and 21-30, not 10-20 and 20-30); see the sketch after this list.
  • If you use a numbered continuum scale for answer options, anchor the top and bottom of the scale with a clear concept (instead of “On a scale of 1 to 5, how good is it? 1 - 2 - 3 - 4 - 5”, use “1 = very bad, 2, 3, 4, 5 = very good”).
  • Use scales that are centred – don’t have one “bad” answer option and four shades of “good”.
  • Don’t force respondents into either/or answers if a neutral position is possible.
  • Allow for “not applicable” or “don’t know” responses.
  • Edit and proofread to make sure that answer choices flow naturally from the question.
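
An aside, not part of the original checklist: if your answer options are defined in code anyway, for instance in a web-based survey, a couple of these rules are mechanical enough to verify automatically. The short Python sketch below is purely illustrative; the scale labels, age bands, and the bands_overlap helper are my own assumptions rather than anything prescribed by the guide.

    # Hypothetical sketch: sanity-checking answer options for a web-based survey.
    # All names and values are illustrative assumptions, not taken from the guide.

    # A labelled 1-5 scale with a clear concept at each end, plus an opt-out option.
    SATISFACTION_SCALE = {
        1: "Very bad",
        2: "Bad",
        3: "Adequate",
        4: "Good",
        5: "Very good",
    }
    OPT_OUT = "Not applicable / don't know"

    def bands_overlap(bands):
        """Return True if any numeric answer bands overlap (e.g. 10-20 and 20-30)."""
        ordered = sorted(bands)  # sort by lower bound
        return any(prev_hi >= lo for (_, prev_hi), (lo, _) in zip(ordered, ordered[1:]))

    # 11-20 / 21-30 style bands: no overlap, every value fits exactly one option.
    assert not bands_overlap([(11, 20), (21, 30), (31, 40)])

    # 10-20 / 20-30 style bands: a respondent aged exactly 20 fits two options.
    assert bands_overlap([(10, 20), (20, 30), (30, 40)])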

17. Avoid leading or ambiguous questions.

  • Don’t sequence your questions to lead respondents to answer in a certain way.
  • Avoid questions that contain too much detail or may force respondents to answer “yes” to one part while wanting to answer “no” to another (e.g. “How confident do you feel singing and dancing?”).
  • Minimise bias by piloting your questionnaire before it goes live.

18. Use open-ended questions with care.

  • Open responses are difficult to consolidate, so use them sparingly.
  • They often provide really useful data, so don’t avoid them completely.
  • Doing a pilot or running a focus group before rolling out a survey can provide useful insight for creating more structured closed questions.
  • Provide at least one open question so respondents can express what is important to them.

19. Thank the respondent.

  • Thank the respondent once again. Reiterate why you value the input.
  • If you intend to feed back results, emphasize when and how they can expect to get them.
  • If you have offered an incentive, specify what the respondent has to do to claim or be eligible for it.

Original in TrainingZONE Parkin Space column of 2 September 2003

Learner-created learning

I have always been convinced that one key to future success in web-based learning lies in the notion of the prosumer. Prosumers produce what they consume, and it seemed to me back in the mid-1990s that creating their own content was something that people really enjoyed doing. I saw this in the fact that e-mail was the most-used application of the Internet. I saw it in the fact that AOL members spent more online time in chat rooms than in any other activity. I saw it in the enormous popularity of massively multiplayer online role-playing games, where, though the environment is provided for them, players create their own unique characters and pursue their own adventures. And I saw it in the early rise to prominence of special-interest online communities such as Parent Soup where, much like trdev today, members created content for each other, with some editorial guidance.

In the early days of web-based learning, before the oppressive influence of standardized Learning Management Systems, before SCORM took the spark out of ISD, in amongst the prevailing stand-alone conversions from CD-ROM there was a lot of interesting experimental design going on. Much of that innovative learning design centered on using the web for what it did best – connecting people with people to share experience.

When I started an e-learning company in 1998 to provide a project management curriculum online, I built community into the design, rather than simply replicating the tried-and-true classroom versions of the courses. While the content of the course was delivered in a relatively traditional way, it was structured to have learners collaborate with each other, creating their own supplementary content. Every learner had an SME mentor, who was available by e-mail throughout the course. The pool of SMEs hosted online chat sessions around the clock, covering topics related to course content, where learners could exchange ideas, trade war stories, and get clarification on issues. Those sessions were all logged, scrubbed of proper names, and stored in a searchable online library. After tens of thousands of learners, that organically growing repository of community experience was a fabulous resource. The community was so valued by learners that many subscribed to it after completing their courses, so that they could continue to engage with each other and access the content.

That booming company was acquired by a traditional learning business that had no time for such esoteric notions, and stripped the courses back to computer-pumps-it-at-you mode. They saw e-learning as a way to cut costs even at the expense of dumbing down learning effectiveness, and providing human interaction was considered counterproductive. Whether it was the result of ignorance, tunnel vision, technology standardization, or accountants getting the numbers wrong, such was the fate of most e-learning around the beginning of this century.

But the prosumer market is still with us in other fields, and it is stronger than ever. Blogging is the obvious example, and it is still growing so fast that the statistics are out of date the moment they are published. Along with creating blogs, publishing your own photographs on the web has taken off, aided by free photo-hosting services like Flickr. Communities such as Del.icio.us, which are specifically designed for sharing information and links of mutual interest, are booming. And services like 43Things and Backpack, which help you tie all of these together, are starting to take off.

In training, we are seeing prosumer concepts like informal learning, workflow learning, and collaborative learning coming into vogue. These are still regarded by mainstream learning professionals as interesting but impractical, largely because they are hard to conceptualize and harder still to manage. Yet there are so many reasons why we should be dedicating at least some of our resources to experimenting with them. One reason is that there are lots of technologies out there that, with a little imagination, could be used to make collaborative learning more practical. Another reason, and the most important, is that people have demonstrated time and again that they like to interact with others, and that they find creating their own content motivating and compelling.

In the computer games industry, the conventional wisdom is that the massively successful online games have all allowed players to substantially influence their environment and leave their mark. Much as in the real world of clubs and associations, loyalty is sealed if the participant has invested time, energy, and creativity in building a presence that others can interact with and appreciate.

It’s time we stopped treating e-learners like members of an anonymous audience in a darkened theatre, and started inviting them all up on stage.


Original in TrainingZONE Parkin Space column of 26 August 2005