Tuesday, August 02, 2005

Meaningful metrics beyond ROI

There is a common misconception in business that, because they work with them all the time, financial people understand numbers. They like to reduce everything to money – what did it cost or what did it make? They insist on dealing in certainties and absolutes, where every column balances to the penny. But the real world does not work like that. The real world is characterized by imperfections, probabilities, and approximations. It runs on inference, deduction, and implication, not on absolute irrefutable hard-wiring. Yet we are constantly asked to measure and report on this fuzzy multi-dimensional world we live in as if it were a cartoon or comic book, reducing all of its complexity and ambiguity to hard financial “data.”

We struggle for hours (often for days or weeks) to come up with the recipe for “learning ROI.” The formula itself is simple, but the machinations by which we adjust and tweak the data that go into that formula are anything but simple. Putting a monetary value on training’s impact on business is fraught with estimation, negotiation, and assumption – and putting a monetary value on the cost of learning is often even less precise.
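To make the arithmetic concrete, here is a minimal sketch (in Python, with entirely invented figures) of that simple formula, ROI = (benefit − cost) / cost, and of how much the answer swings depending on how much of a performance gain you choose to attribute to training:

```python
# ROI = (net benefit - cost) / cost, usually expressed as a percentage.
# All figures below are invented, purely to illustrate the arithmetic.

def roi(benefit_attributed_to_training, total_cost_of_learning):
    """Classic ROI formula: net benefit over cost."""
    return (benefit_attributed_to_training - total_cost_of_learning) / total_cost_of_learning

cost = 100_000          # estimated cost of the programme (itself an approximation)
total_gain = 500_000    # estimated performance improvement, in currency

# The "benefit" depends on how much of that improvement we credit to training.
for attribution in (0.2, 0.4, 0.6):      # assumed share of the gain due to training
    benefit = total_gain * attribution
    print(f"attribution {attribution:.0%}: ROI = {roi(benefit, cost):.0%}")

# attribution 20%: ROI = 0%
# attribution 40%: ROI = 100%
# attribution 60%: ROI = 200%
```

The formula never changes; the assumptions feeding it do, and they are where all the negotiation happens.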

Yet when was the last time you saw an ROI figure presented as anything other than an unqualified absolute? If you tried for statistical accuracy and said something like, “this project will produce 90% of the desired ROI, 95% of the time with a 4% error margin,” you’d be thrown out of the boardroom. You simply can’t use real statistics on an accountant, because the average bean-counter can’t tell a Kolmogorov-Smirnov from an Absolut-on-the-rocks. Don’t tell us the truth; just give us numbers that conform to our unrealistic way of measuring the business.

We spend way too much time trying to placate financial people by contorting our world to fit their frame of reference, and we allow them to judge and often condemn our endeavors according to criteria that are irrelevant or inappropriate. Perhaps there is some comfort in knowing that the problem is not unique to training. In a couple of decades in marketing, I have seen plenty of good brands ruined by ill-conceived financial policies, usually to the long-term detriment of the company as a whole.

But you don’t need to be a statistician or an accountant to make a strong business case based on logic and deduction, and there is no need to be pressured into using the preferred descriptive framework of a book-keeper. The pursuit of the measurement of ROI in training is often a red herring that distracts from the qualitative impacts that our work has on the performance of the business. ROI is typically not the best measure of that, and, after making all of the heroic assumptions and allocations needed to arrive at it, that magic ROI figure may well be a false indicator of impact.

Unfortunately, the indicators that are useful and reasonably accurate are often hard to convert to financial data, so they do not get taken seriously. And, compounding the problem, training managers themselves often ignore these indicators because they are not captured at the course level. Our focus too often is on the quality of courses rather than on the quality of our contribution to the business in total.

We need to widen the focus. While learner satisfaction, test results, and average cost of butts-on-seats are useful metrics, it is only after our learners have returned to work that we can begin to see how effective the learning experience really was. What are some of the indicators that let us know how we are doing? Many of them are produced already, often by the financial people themselves, and tracking them over time gives good insights into where we are doing well and where we might need to pay more attention.

Some of those metrics include:

  • Training costs per employee.
  • Enrolment rates and attendance rates.
  • Delivery modes, plans against actuals.
  • Percentage of target group that is “compliant”.
  • Time from eligibility to compliance, or to proficiency.
  • Percentage of workforce trained in particular skill areas.
  • Learning time as percentage of job tenure.
  • Availability, penetration, and usage rates of help systems.
  • Skill gap analyses tracked over time.
  • Productivity (for example, number of new clients per 100 pitches).
  • Attrition rates.

There are many, many more. Metrics such as these let us put on the manager’s dashboard indicators of performance in areas such as operational performance, compliance, efficiency, effectiveness, and workforce proficiency, as well as harder-to-capture dimensions such as motivation and readiness for change. Training departments need to think “outside the course” and come up with ways to derive the right indicators in a way that is inexpensive and unobtrusive.
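As a concrete illustration, here is a minimal sketch (in Python, using an invented record layout and invented figures) of how a few of the metrics above, such as percentage of the target group that is compliant, time from eligibility to compliance, and training cost per employee, might be derived from routine administrative data without any extra data collection:

```python
from datetime import date
from statistics import median

# Hypothetical training records: (employee_id, eligible_on, compliant_on, training_cost)
records = [
    ("e01", date(2005, 1, 10), date(2005, 2, 1),  350.0),
    ("e02", date(2005, 1, 10), None,              0.0),    # not yet compliant
    ("e03", date(2005, 3, 5),  date(2005, 3, 20), 410.0),
    ("e04", date(2005, 2, 14), date(2005, 4, 2),  275.0),
]

headcount = len(records)

# Percentage of the target group that is "compliant"
compliant = [r for r in records if r[2] is not None]
pct_compliant = len(compliant) / headcount

# Time from eligibility to compliance
days_to_compliance = [(done - eligible).days for _, eligible, done, _ in compliant]
median_days = median(days_to_compliance)

# Training cost per employee
cost_per_employee = sum(cost for *_, cost in records) / headcount

print(f"Compliant: {pct_compliant:.0%}")
print(f"Median days from eligibility to compliance: {median_days}")
print(f"Training cost per employee: {cost_per_employee:.2f}")
```

None of this requires new systems; it is mostly a matter of pulling together data the organization already keeps and tracking it over time.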

One of my frequent recommendations is that training departments learn something about data collection from their marketing colleagues, and set up the ability to run surveys and focus groups to investigate learner satisfaction, customer attitudes, job impacts, and manager perceptions. This skill is often absent in training departments, which is a pity, because these methods can produce great insights and save money and time. If you build this capacity into your training organization, getting a read on Levels 3 and 4 can become as much a part of your evaluation regimen as gathering smile sheets. You don’t have to interrogate the universe if you can pick a small sample. And you can produce real data and real trends that go down very well in the boardroom.
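For what it’s worth, here is a rough sketch (in Python, with invented survey numbers) of the kind of sample-based estimate this implies: a follow-up survey of a modest sample of learners, reported as a proportion with a stated margin of error rather than as an unqualified absolute.

```python
from math import sqrt

# Hypothetical follow-up survey: 120 learners sampled, 84 report applying the skill on the job.
sample_size = 120
positive = 84

p_hat = positive / sample_size                       # sample proportion
std_err = sqrt(p_hat * (1 - p_hat) / sample_size)    # standard error of a proportion
margin = 1.96 * std_err                              # ~95% confidence margin (normal approximation)

print(f"Estimated job impact: {p_hat:.0%} ± {margin:.0%} (95% confidence)")
# Estimated job impact: 70% ± 8% (95% confidence)
```

A small, well-chosen sample like this is cheap to run, and the resulting trend line is far more informative than yet another unqualified ROI figure.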

Original in TrainingZONE Parkin Space column of 22 July 2005

3 comments:

Anonymous said...

Hi Godfrey,

Super article - monetized ROI is just one dimension of something much larger.

Your list of different metrics is intriguing. Not sure what you are getting at with "learning time as percentage of job tenure."

Also, for "usage rates of help systems" - do we want it high (our help is really popular, but unfortunately our system requires a lot of help because it's so poorly designed) or low (because users are knowledgeable and don't need help, or because they need help and the help system isn't giving them the answers they need, so they must contact the expert in the next cubicle)?

My company is figuring out the best kind of dashboard for our clients, and these are exactly the issues we're considering. Nice job!

Matt Adlai-Gail
Chief Innovation Officer
EduNeering, Inc.
Princeton, NJ

Godfrey Parkin said...

Hi Matt,

Thanks for your comments.

With "learning time as percentage of job tenure" I am trying to get at two things: how much training it takes to get someone to a 'real' promotable state; and how sustained our training involvement is as people become more senior. Most companies 'front-load' their training for new recruits or new appointees, and virtually ignore those who have been around a long time.

As for "usage rates of help systems" - that can indicate a number of things, as you suggest (accessibility and usefulness of support tools; deficiencies in formal training; redundancy of support tools, etc.) - but it's an indication only. It simply raises a flag, saying 'we need to investigate this further.' In training, we tend to want to do one-stage data collection. In the real world, the first stage often only identifies the questions, rather than the answers!

jackvinson said...

Interesting stuff, Godfrey. Two comments:

One. I heard a guy from IBM last year suggest that when ROI gets into the discussion, the project is doomed. Projects that management wants will always be approved. Projects that management doesn't want will be rejected.

Two. The difficulty with ROI for these kinds of efforts (training, learning, etc.) is that we traditionally don't know how to connect them to the real bottom line of the company. How will X increase throughput? How will it reduce operating expenses or investment? What happens to these metrics if we don't do X?

- Jack
http://blog.jackvinson.com