Gauging the Impact of Learning

The Talent Development Optimization Council (TDOC) releases information about its promising Impact Optimization Model.

The Talent Development Optimization Council (TDOC), led by Kent Barnett, CEO of Performitiv, is beginning to roll out its Impact Optimization Model (IOM), which aims to be “a comprehensive and practical approach to address the issues of impact, business results, ROI and optimization for programs.”

They have published a white paper that outlines the framework for the IOM (it’s free if you provide your information) and a podcast interview with Kent Barnett by Conrad Gottfredson of The Five Moments of Need.

The TDOC is an all-star panel, including Dr. Gary Becker, Dr. Nick Bontis, Dr. Jac Fitz-enz, Fred Reichheld, Jack and Patti Phillips, and Matt Barney.

The goal of the model is “to help professionals create an automated, repeatable process to tell their story of impact, by demonstrating value, while identifying improvement opportunities in their programs.” There are four components to IOM:

  1. Evangelizing the Story from Learning to Impact
  2. Connecting Learning to Business Results
  3. Linking Learning’s Value to the Business
  4. Continuous Improvement

Overall, the model looks quite good. It draws from business models outside the learning field, such as Total Quality Management (TQM), Six Sigma, and Net Promoter Score (NPS).

The white paper outlines a five-step process for moving from descriptive analytics to predictive and, ultimately, prescriptive analytics.

  1. Upgrade the traditional learning evaluation process to gather the right data.
  2. Identify measurable business outcome indicators (BOIs) tied to the desired outcomes of each strategic program.
  3. Gather and process the data to ensure the programs are operating within acceptable impact ranges (a minimal sketch of this check follows the list).
  4. Analyze the prescriptive data to find ways to improve impact.
  5. Take action on the findings of your analysis and monitor the link between business results and your original performance goals.
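
To make step 3 concrete, here is a minimal sketch of an impact-range check. Everything in it is illustrative: the BOI names, the ranges, and the `BOIReading` structure are my assumptions, not something defined in the white paper.

```python
from dataclasses import dataclass

@dataclass
class BOIReading:
    """One measurement of a business outcome indicator (BOI)."""
    name: str
    value: float
    low: float   # lower bound of the acceptable impact range
    high: float  # upper bound of the acceptable impact range

def flag_out_of_range(readings: list[BOIReading]) -> list[str]:
    """Return the names of BOIs that fall outside their acceptable range."""
    return [r.name for r in readings if not (r.low <= r.value <= r.high)]

# Hypothetical sales-enablement program tracked on two indicators.
readings = [
    BOIReading("win_rate_pct", value=22.0, low=25.0, high=40.0),
    BOIReading("ramp_time_days", value=48.0, low=30.0, high=60.0),
]
print(flag_out_of_range(readings))  # ['win_rate_pct'] -> this program needs attention
```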

The white paper goes through these steps in some detail. Overall, the model stands upon measures and models that are standard business practice for most organizations. This is a great step forward for Learning and Development in its effort to show its value to the business.

However, a significant, and disappointing, amount of the data and analytics still rests upon opinion, self-reporting and feelings. The authors of the report are clearly aware of this, as they offer a weak defense of having done so:

Some people may complain that is self-reported data, but so is NPS, and this is much more insightful. NPS does not tell you if learning was applied, what business result was impacted or if it created value.

Impact Optimization Model, p.10

The “NPS uses self-reported data” argument is a strawman defense. NPS (Net Promoter Score) is a measure intended to report a group’s perception and opinion of a company, program or product; by definition it is based upon self-reported attitudes. Most of the measures where the IOM leans on opinion and perception to generate evaluation data could easily be replaced by behavioral observation measures.

For example, the authors suggest that “Are you a more effective manager after this program?” is a valid measurement of impact. First, research clearly shows that we are terribly bad at assessing our own performance; second, there is no assurance that the manager and the L&D team share the same definition of effectiveness. On the other hand, if we have crafted the learning intervention well, we should understand the behavioral changes we were trying to create. If that is the case, then there should be measurable behavior changes that will demonstrate impact.

This last concern may already be in the process of being addressed by the Council. In a new post on LinkedIn, Barnett explains that they have identified eight metrics that will support the IOM. They look great, but the details are still being worked out. If the TDOC can incorporate more financial, business-result, and behavioral data and steer away from opinion-based, self-reported and perceptual data, they will have a model that could indeed change the game for L&D as a strategic function within organizations.

PLEASE SHARE YOUR THOUGHTS IN COMMENTS BELOW

Do you think this model will change anything? Will it drive greater support for data collection in learning? Motivate more collaboration between the business units and L&D?

Feature Photo by Ryan McGuire provided by Gratisography.

Net Promoter Score

One metric that you should be borrowing from your social media marketing colleagues is Net Promoter Score (NPS). NPS is a broad measurement of an audience’s view of an organization or product. In business it is used to gauge the reputation of a company’s brand or how loyal customers are to a product, both things it would be great to understand about your community or your learning and development offerings.

NPS is calculated from the answers to the question, 

“How likely is it that you would recommend [product/brand] to a friend or colleague?”

Respondents answer on an 11-point scale, from 0 (not at all likely) to 10 (extremely likely).

Scores are gathered and totaled, with answers between 0-6 labeled detractors, 7-8 labeled passives and 9-10 labeled promoters. Subtract the percentage of detractors from the percentage of promoters and the result is your NPS. It reflects the overall satisfaction with, in this case, your community. The raw number (in the example, -2.7) is not that valuable, but by watching the movement of your score over time you can effectively gauge the relative health of your community.
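
As a concrete illustration of the arithmetic, here is a short Python sketch. The category cut-offs are standard NPS practice; the sample responses are invented to reproduce the -2.7 figure used in the example.

```python
def net_promoter_score(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 37 invented responses: 12 promoters, 12 passives, 13 detractors.
scores = [10]*9 + [9]*3 + [8]*7 + [7]*5 + [6]*5 + [5]*4 + [3]*3 + [1]*1
print(round(net_promoter_score(scores), 1))  # -2.7
```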

You can also look at the number of promoters versus the number of detractors for additional insight. Because of the behaviors of promoters, passives, and detractors, NPS has been shown to be predictive of business growth in three areas. Because these measures are rooted in behavior, they can be indicative of engagement and loyalty, and thus applied to our communities.

Higher Margins and Spend

At first, this may not seem applicable, but let’s parse it out a bit. If you do have a revenue line (i.e., membership fees or products/courses that your members purchase), promoters are less price-sensitive while detractors are price-conscious. Promoters will be more likely to buy more and less likely to complain about the membership fee.

Higher Retention Rate

Detractors defect from an organization or brand at a higher rate than promoters. As they leave, membership will drop and interest in participating in learning activities will suffer.

Greater Word of Mouth

Promoters account for most referrals, which leads to more new members and greater retention of current members. It’s the promoters who convert Linkers and Lurkers to Learners in our communities. Detractors are responsible for negative word of mouth, which can lead to members leaving the community and to overall negativity.

But What Does it Mean?

In my example above, we found an NPS of -2.7. So what does that mean? In and of itself, it honestly doesn’t mean a thing. NPS is not a fixed measurement; -2.7 could be good or bad, depending on what your NPS has been in the past.

If our NPS has traditionally been in the +10 to +15 range, then this -2.7 indicates that something negative happened in the last period. If the NPS rebounds to its traditional level, then the event came and went and your members have let it pass. If, however, it remains at the lower level, you have a more systemic issue to deal with.

If, on the other hand, your NPS has been negative for a long time, a -2.7 NPS could indicate that your efforts to improve your community and engage your members and customers are working.
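
Since the raw score only means something relative to your own history, one simple way to operationalize this is to compare each period’s NPS against a trailing baseline. Here is a hypothetical sketch; the window and threshold are arbitrary choices you would tune to your own data.

```python
def nps_trend_alerts(history: list[float], window: int = 4,
                     threshold: float = 5.0) -> list[int]:
    """Flag the indexes of periods whose NPS falls more than `threshold`
    points below the average of the preceding `window` periods."""
    alerts = []
    for i in range(window, len(history)):
        baseline = sum(history[i - window:i]) / window
        if history[i] < baseline - threshold:
            alerts.append(i)
    return alerts

# Quarterly NPS holding in the +10 to +15 range, then a drop to -2.7.
quarterly_nps = [11.0, 13.5, 12.0, 14.0, -2.7]
print(nps_trend_alerts(quarterly_nps))  # [4] -> the latest quarter warrants a look
```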

Net Promoter Score is not a silver bullet metric, but it should be in your list of key metrics. It can clearly alert you to whether your members and customers are happy or not. It can help you understand if new engagement initiatives are working. It can sound the alert that something is not sitting right with your members and customers.
