Monday, November 03, 2003

Critique of E-learning Consortium's research methodology

This study is full of holes, and the industry deserves better. Others have challenged the research design, pointing out that the sample is self-selected. But it is not just in sample selection that the original study about drop-out rates was flawed. The questionnaire design itself was flawed to the point where no analysis could produce meaningful results. When the researcher posted his original solicitation for participation in the first study on the trdev list, I responded in some detail with my concerns (relevant sections are at the end of this message **).

Research that is not "statistically valid" in any pure sense is very common in the commercial world, and is often used where time, budget, and study goals do not make "pure" research a sensible option. For each situation there is an acceptable research instrument that will best satisfy the need. That's fine, and usually those commissioning the work are aware of any limitations. But I share many people's concern that we are seeing more and more of what is effectively "pop-quiz" material being misrepresented to a wider audience as serious research with meaningful results. There is enough confusion and ambiguity in the e-learning world as it is, without such studies adding to the mythology.

** Extracts from my response of 6/3/03:

From the questionnaire, it seems that little useful insight can come from the information being asked for. I have objected in the past to "research" data that is meaningless because of its lack of specificity, the inability to identify and cross-tabulate causal data, or the dubious design of the sample. This study is no different. E-learning is not defined for respondents, yet respondents are required to answer questions about it, without themselves being asked to define what they mean. True, later in the questionnaire respondents get to list their own e-learning experiences, but the ambiguity and breadth of the list provided is astonishing - here it is:

* CD-Rom
* Live instructors with online materials
* Facilitated online instruction
* Self-paced online without facilitation
* Real time virtual classroom
* Video
* Other

The nature of the list aside, there is no way that data analysts will be able to meaningfully connect reasons for abandoning courses (selected from an earlier list) with even the broad type of experience selected from this list, because everything is "select all that apply". Far better to have asked respondents to pick one instance of abandonment and answer questions about that. That might have produced some useful numbers to help us all do a better job.
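Purely to illustrate the cross-tabulation problem, here is a small Python sketch with invented checkbox data (nothing here comes from the actual questionnaire): each respondent returns two independent sets of ticks, and nothing records which reason went with which format, so the only honest outputs are two separate, unlinked frequency counts.

    # Hypothetical "select all that apply" responses -- invented for illustration
    from collections import Counter

    responses = [
        {"formats": {"CD-ROM", "Self-paced online"}, "reasons": {"No time", "Poor content"}},
        {"formats": {"Video"}, "reasons": {"No time"}},
        {"formats": {"Virtual classroom", "CD-ROM"}, "reasons": {"Technical problems"}},
    ]

    # All the analysts can honestly report are two unlinked marginal counts...
    format_counts = Counter(f for r in responses for f in r["formats"])
    reason_counts = Counter(x for r in responses for x in r["reasons"])
    print(format_counts)  # how often each format was ticked
    print(reason_counts)  # how often each reason was ticked

    # ...because for the first respondent there is no way to tell whether
    # "Poor content" applied to the CD-ROM course, the self-paced course, or both.
    # Asking about ONE specific abandoned course would have preserved that link.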

All that can possibly come out of this is "X% of e-learners abandoned at least one e-learning course at some time" and -- as a completely separate data point that cannot be mapped back to the drop-out figure -- "the most common reasons cited for abandoning are...". Garbage in, garbage out. But I am sure that the data will be spun in all sorts of fascinating ways for presentations and press releases, and some numbers will find their way into training mythology. The aggregated answers to the following will probably get a lot of play, too:

* My estimate of the average rate of completion for e-learning in my organization is (select a percentage range).

What does this question mean? "Rate of completion" could be the percentage of people who never drop out, the percentage of courses completed, the percentage of people scheduled to take courses who have already finished them, or the percentage of total company training hours that have so far switched to e-learning... And, more importantly, what qualification or data does the average respondent have to substantiate his/her estimate?
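To make the ambiguity concrete, here is a hypothetical worked example (the figures are invented, not taken from any study): the same organization's records yield very different "completion rates" depending on which definition the respondent happens to have in mind.

    # Invented figures for one organization -- purely illustrative
    learners = 100           # people enrolled in at least one e-learning course
    enrolments = 400         # total course enrolments
    completed = 300          # enrolments actually finished
    learners_who_quit = 60   # people who abandoned at least one course

    by_course = completed / enrolments                      # 75% of courses completed
    by_learner = (learners - learners_who_quit) / learners  # 40% of learners never dropped out

    print(f"Completion rate by course:  {by_course:.0%}")
    print(f"Completion rate by learner: {by_learner:.0%}")
    # Two respondents looking at the same records could legitimately tick
    # percentage bands 35 points apart on the questionnaire.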

In the past, such studies have sought to determine the percentage of COURSES that are abandoned, whereas this one seems to be seeking the percentage of LEARNERS who have ever abandoned a course. I would imagine that to be 100% (that video aerobic dance program that I started five years ago really didn't get me hooked -- yes, video is classified by the study as e-learning). And if it is learners, not courses, that are the focus of the study, should we not be trying to find out more about them than a few basic demographics?
