Friday, November 21, 2003

Course licensing

If you are licensing courses, you should find that the vendor is flexible about meeting your needs. Typically, you buy a license for X number of enrollments (an enrollment being one learner in one course), and that license carries a time limit as well as a maximum-enrollment cap. So, for example, you would license a course or package of courses for one year for a maximum of 400 enrollments. The effective cost per enrollment drops as your volume rises. Usually, you buy the license in advance, and it is a use-it-or-lose-it agreement.

You can also license on a pay-per-use basis, paying each time a new enrollment takes place. Expect a fixed cost per enrollment that is substantially higher than under a volume license. You would only opt for pay-per-use if you had very few learners -- the per-enrollment fees are relatively expensive, and both you and the vendor incur more administrative load.
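
To see where the break-even falls, here is a quick back-of-the-envelope comparison in Python. The prices are invented purely for illustration -- plug in whatever your vendor actually quotes.

    # Hypothetical license cost comparison -- substitute your vendor's quotes.
    VOLUME_LICENSE_COST = 20000.0   # one-year volume license, paid up front
    VOLUME_LICENSE_CAP = 400        # maximum enrollments under the license
    PAY_PER_USE_RATE = 95.0         # per-enrollment rate, pay-as-you-go

    def volume_cost_per_enrollment(actual_enrollments):
        # Use-it-or-lose-it: the license cost is fixed, so the effective
        # per-enrollment cost climbs when you enroll fewer learners than the cap.
        used = min(actual_enrollments, VOLUME_LICENSE_CAP)
        return VOLUME_LICENSE_COST / used

    for n in (50, 100, 200, 400):
        print(f"{n:3d} enrollments: volume ${volume_cost_per_enrollment(n):7.2f}"
              f" vs pay-per-use ${PAY_PER_USE_RATE:7.2f} per enrollment")

With these made-up numbers, pay-per-use wins below roughly 210 enrollments and the volume license wins above that -- which is exactly why you need a realistic usage estimate before you negotiate.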

There are many in-between arrangements you may be able to negotiate. A common one is to buy a small volume license with an agreement that when you exceed its ceiling you can roll into a second discounted license, or into pay-per-use at a discounted rate. Also, if you are interested in a collection of courses, you can buy different volume licenses for different courses depending on your expected usage, and opt for pay-per-use on those courses that are likely to be used by small numbers.

The key is to go into a negotiation knowing what your usage is likely to be, and to get your vendor to work with you on building a package that fits. Off-the-shelf courses should not have inflexible off-the-shelf license terms :-)

Thursday, November 13, 2003

RoboDemo v. Camtasia

RoboDemo is certainly more powerful (feature-filled) than Camtasia, so it is harder to learn. But it is a great product. If you are planning on serving your Flash over the Web, then RoboDemo is probably a better bet -- not just because it runs easily with your LMS. The screen motion capture system in RoboDemo is fundamentally different from Camtasia's. Camtasia records the entire capture area for each frame, whereas RoboDemo captures only what has changed in each frame (and frequently this is just cursor location). So RoboDemo's files are considerably smaller than Camtasia's. I have also found Camtasia's post-capture editing facilities to be rather crude.
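
The size difference is easy to see in a toy model. Neither product actually encodes frames this simply -- the Python sketch below just contrasts the two strategies, storing every frame whole versus storing only the changed region:

    # Toy model: full-frame capture vs. delta capture.
    FRAME_W, FRAME_H = 1024, 768       # capture area in pixels
    BYTES_PER_PIXEL = 3                # uncompressed RGB, for simplicity

    def full_frame_bytes(num_frames):
        # Camtasia-style: store the whole capture area for every frame.
        return num_frames * FRAME_W * FRAME_H * BYTES_PER_PIXEL

    def delta_bytes(changed_regions):
        # RoboDemo-style: store only the rectangle that changed per frame,
        # frequently just the area around the cursor.
        return sum(w * h * BYTES_PER_PIXEL for w, h in changed_regions)

    frames = 300                       # ten seconds at 30 frames per second
    cursor_only = [(32, 32)] * frames  # assume only a cursor-sized region changes

    print("full frames:", full_frame_bytes(frames) // 1024, "KB")
    print("deltas only:", delta_bytes(cursor_only) // 1024, "KB")

When most frames differ only at the cursor, the delta approach stores roughly three orders of magnitude less raw data. Real-world compression narrows that gap, but the advantage survives it.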

RoboDemo will probably do what you need, and a lot more, and it will be relatively quick to learn. Buy it before Macromedia assimilates it and doubles its price.

Tuesday, November 11, 2003

Graphics

When you need graphics or pictures in designing training, do you buy or make your own?

We typically do both: we buy royalty-free stock photos and take some photos ourselves, or commission someone to take them where necessary, depending on the need. We also create our own graphics or outsource to a contract graphic artist. Typically we sketch out what is required and have a designer create the finished work. We always outsource video work.

I use Illustrator and Photoshop a lot, because there is not much that you can't do with them, and they work well together. For animated material, a combination of Premiere, Camtasia, Flash, and RoboDemo seems to cover most in-house needs. Learning curves are not too steep on the Adobe tools if you are not trying to reach maestro level. For that we outsource, often offshore.

I can also recommend one of the best services on the Web -- Elance. Elance is like a marketplace for freelancers, many of them in Asia. You post a project description and a budget, and people bid on it within minutes. You don't pay till you are happy with the deliverable, and for a limited graphics project your credit card rarely takes more than a $50 hit. We have often posted a project online (e.g. turn these sketches into Flash animations, yesterday!) and had it executed perfectly in only a few hours. BTW, you can get anything from a simple drawing to a complete Website, from programming help to a complete software solution. Larger organizations could learn a huge amount about customer service and responsiveness from the individuals we have dealt with through Elance. Elance.com is the kind of networking that the Web was made for.

For stock photos, we usually start with Corbis or StockMarketPhoto. The "completely free" graphics sites usually don't have much that appeals, and (IMHO) much of the free stuff does your end-product no favors in credibility terms. It's often the online equivalent of those irritating ant-people (were they called Screen Beans?) that PowerPoint abusers used to stick everywhere a few years ago.

For self-shot photos, I normally use a Nikon 950, though I have contractors who use both higher-end and lower-end cameras. I recommend shooting at the highest resolution available, then cropping and scaling the resolution down in Photoshop. Use 72dpi for Web photos, and in outputting JPEGs and GIFs try to keep file sizes under 20-30kb -- though if the need warrants, you can go up to 200kb or more. We try to keep the total graphic load on a page under 80kb, which still loads reasonably fast on a slow dial-up connection. If your target audience is all on broadband, you have a lot more leeway. Photoshop lets you play with image optimization, and tells you the file size and load time while you are tweaking.
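
If you want to script that crop-then-compress workflow instead of doing it by hand, the Pillow imaging library for Python will do it in a few lines. The file names and the 30kb budget below are placeholders, and the quality loop merely stands in for Photoshop's optimization preview, re-saving until the file fits:

    # Scale a high-resolution photo for the Web and compress it to a size budget.
    # Requires Pillow (pip install Pillow); file names are placeholders.
    import io
    from PIL import Image

    TARGET_BYTES = 30 * 1024               # aim for under ~30kb
    MAX_WIDTH = 400                        # typical inline Web-photo width

    img = Image.open("original.jpg")       # shot at the camera's top resolution
    img.thumbnail((MAX_WIDTH, MAX_WIDTH))  # scale down, preserving aspect ratio

    for quality in range(85, 20, -5):      # step JPEG quality down until it fits
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        if buf.tell() <= TARGET_BYTES:
            break

    with open("web.jpg", "wb") as out:
        out.write(buf.getvalue())

    # Rough load time on a 56k modem (~5kb/s of effective throughput).
    print(f"{buf.tell() // 1024}kb, about {buf.tell() / 5000:.0f}s on dial-up")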

Monday, November 03, 2003

Critique of E-learning Consortium's research methodology

This study is full of holes, and the industry deserves better. Others have challenged the research design, pointing out that the sample is self-selected. But it is not just in sample selection that the original study about drop-out rates was flawed. The questionnaire design itself was flawed to the point where no analysis could produce meaningful results. When the researcher posted his original solicitation for participation in the first study on the trdev list, I responded in some detail with my concerns (relevant sections are at the end of this message **).

Research that is not "statistically valid" in any pure sense is very common in the commercial world, and is often used where time, budget, and study goals do not make "pure" research a sensible option. For each situation there is an acceptable research instrument that will best satisfy the need. That's fine, and usually those commissioning the work are aware of the limitations. But I share many people's concern that we are seeing more and more of what is effectively "pop-quiz" material being misrepresented to a wider audience as serious research with meaningful results. There is enough confusion and ambiguity in the e-learning world as it is, without such studies adding to the mythology.

** Extracts from my response of 6/3/03:

From the questionnaire, it seems that little useful information can come from the data being asked for. I have objected in the past to "research" data that is meaningless because of its lack of specificity, the inability to identify and cross-tabulate causal data, or the dubious design of the sample. This study is no different. E-learning is not defined for respondents, yet respondents are required to answer questions about it, without themselves being asked to define what they mean. True, later in the questionnaire respondents get to list their own e-learning experiences, but the ambiguity and breadth of the list provided is astonishing -- here it is:

* CD-Rom
* Live instructors with online materials
* Facilitated online instruction
* Self-paced online without facilitation
* Real time virtual classroom
* Video
* Other

The nature of the list aside, there is no way that data analysts will be able to meaningfully connect reasons for abandoning courses (selected from an earlier list) with even the broad type of experience selected from this list, because it is all "select all that apply". Far better to have asked respondents to pick one instance of abandonment and to have asked questions about that. That might have produced some useful numbers to help us all do a better job.
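
To make the structural problem concrete, here (in Python, with invented data -- only the shape matters) is what one respondent's record looks like under the "select all that apply" design:

    # Why "select all that apply" answers cannot be cross-tabulated.
    # Invented respondent record -- the structure is the point, not the values.
    respondent = {
        "experiences": {"CD-ROM", "Self-paced online", "Video"},
        "abandon_reasons": {"no time", "poor course design"},
    }

    # The only cross-tab available pairs every experience with every reason,
    # manufacturing associations the respondent never actually reported:
    for experience in sorted(respondent["experiences"]):
        for reason in sorted(respondent["abandon_reasons"]):
            print(f"{experience} abandoned because of '{reason}'?")

    # Was it the CD-ROM that was poorly designed, or the video? The data
    # cannot say. Anchoring the questions to ONE abandoned course would have
    # tied each reason to a single course type.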

All that can possibly come out of this is "X% of e-learners abandoned at least one e-learning course at some time" and -- as a completely separate data point that cannot be mapped back to the drop-out figure -- "the most common reasons cited for abandoning are...". Garbage in, garbage out. But I am sure that the data will be spun in all sorts of fascinating ways for presentations and press releases, and some numbers will find their way into training mythology. The aggregated answers to the following will probably get a lot of play, too:

* My estimate of the average rate of completion for e-learning in my organization is (select a percentage range).

What does this question mean? Rate of completion could be the percentage of people who never drop out, the percentage of courses completed, the percentage of people scheduled to take courses who have already finished them, or the percent of total company training hours that have so far switched to e-learning... And, more importantly, what qualification or data does the average respondent have to substantiate his/her estimate?

In the past, such studies have sought to determine the percentage of COURSES that are abandoned, where this seems to be seeking the percentage of LEARNERS who have ever abandoned a course. I would imagine that to be 100% (that video aerobic dance program that I started five years ago really didn't get me hooked -- yes, video is classified by the study as e-learning). And if it is learners, not courses, that are the focus of the study, should we not be trying to find out more about them than a few basic demographics?