Learning Analytics are about to Go Big League

Will new SEC regulations regarding disclosure of human capital investments have an impact on learning and development?

On August 9, the US Securities and Exchange Commission (SEC) adopted new rule amendments which, among other things, now require all publicly traded corporations to report on human capital metrics and objectives “as far as they are material to the success of the business.”

The interesting aspect of the new regulations is that the disclosures are not prescriptive and do not follow a framework. Rather, the regulations take a “principles-based approach” to what must be disclosed. What this means is that a company is required to disclose any human capital measures that reflect how human capital affects the organization’s ability to generate profit or enable growth. This approach acknowledges that appropriate measurements may vary based on any number of variables, including type of business, regionality, seasonality, etc.

These regulations have been officially under discussion for well over a year in the US. But the International Organization for Standardization (ISO) issued voluntary standards in 2018, which companies have been utilizing. While I haven’t yet seen the metrics the SEC suggests as part of the amendments to Item 101(c), the ISO guidelines prescribe 10 human capital measures, of which two are L&D related: the percentage of employees who have completed training on compliance and ethics, and development and training costs. In an article in CLO Magazine in April of this year, Dave Vance argues that these regulatory changes are the catalyst that will ignite greater and greater demand for, and scrutiny of, learning metrics and reporting.

While the SEC’s principles-based approach leaves much room for interpretation, especially in early implementation, I think Vance is right that it will lead to an expectation of a culture of measurement in workplace learning, and the standard will be financial impact on the business. Sound familiar?

But this time, Wall Street will be watching. That’s the Big League. Are you ready to play ball?


Do you think these regulations will change anything? Will they drive greater support for data collection in learning? Motivate more collaboration between the business units and L&D?

Gauging the Impact of Learning

The Talent Development Optimization Council (TDOC) releases information about its promising Impact Optimization Model.

The Talent Development Optimization Council (TDOC), led by Kent Barnett, CEO of Performitiv, is beginning to roll out its Impact Optimization Model (IOM), which aims to be “a comprehensive and practical approach to address the issues of impact, business results, ROI and optimization for programs.”

They have published a white paper that outlines the framework for the IOM (it’s free if you provide your information) and a podcast interview with Kent Barnett by Conrad Gottfredson of The Five Moments of Need.

The TDOC is an all-star panel, including Dr. Gary Becker, Dr. Nick Bontis, Dr. Jac Fitz-enz, Fred Reichheld, Jack and Patti Phillips, and Matt Barney.

The goal of the model is “to help professionals create an automated, repeatable process to tell their story of impact, by demonstrating value, while identifying improvement opportunities in their programs.” There are four components to IOM:

  1. Evangelizing the Story from Learning to Impact
  2. Connecting Learning to Business Results
  3. Linking Learning’s Value to the Business
  4. Continuous Improvement

Overall, the model looks quite good. It draws from non-learning-focused business models like Total Quality Management (TQM), Six Sigma, Net Promoter Score (NPS), and others.

The white paper outlines a five-step process for moving from descriptive analytics to predictive to prescriptive:

  1. Upgrade the traditional learning evaluation process to gather the right data.
  2. Identify measurable business outcome indicators (BOIs) tied to the desired outcomes of each strategic program.
  3. Gather and process the data to ensure the programs are operating within acceptable impact ranges.
  4. Analyze the prescriptive data to find ways to improve impact.
  5. Take action on the findings of your analysis and monitor the link from business results back to your original performance goals.
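As a rough illustration of the third step, here is a minimal Python sketch, in the spirit of the TQM/Six Sigma control limits the model draws on, that flags a business outcome indicator drifting outside an acceptable impact range. The program metric and all numbers are hypothetical, not taken from the white paper.

```python
# Hedged sketch: flag a business outcome indicator (BOI) that falls
# outside an acceptable impact range, using simple control limits.
# The BOI here (% of reps hitting quota) and its values are invented.
import statistics

def outside_range(history, latest, n_sigmas=2):
    """True if the latest BOI reading is more than n_sigmas
    standard deviations away from the historical mean."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return abs(latest - mean) > n_sigmas * sd

boi_history = [72, 75, 71, 74, 73, 76]   # past quarters
print(outside_range(boi_history, 74))    # False: within range
print(outside_range(boi_history, 55))    # True: investigate the program
```

The design choice worth noting is that the range is derived from the program’s own history, so “acceptable” adapts to each program rather than being a fixed industry number.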

The white paper goes through these steps in some detail. Overall, the model stands upon measures and models that are standard business practice for most organizations. This is a great step forward for Learning and Development in its effort to show its value to the business.

However, a significant, and disappointing, amount of the data and analytics still rests upon opinion, self-reporting, and feelings. The authors of the report are clearly aware of this, as they provide a weak defense of having done so:

Some people may complain that is self-reported data, but so is NPS, and this is much more insightful. NPS does not tell you if learning was applied, what business result was impacted or if it created value.

Impact Optimization Model, p.10

The “NPS uses self-reported data” argument is a strawman defense. NPS (Net Promoter Score) is a measure intended to report a group’s perception and opinion of a company, program, or product. By definition it is based upon self-reported attitudes. Most of the measures for which the IOM leans upon opinion and perception to generate evaluation data could easily be replaced by behavioral observation measures.

For example, the authors suggest that “Are you a more effective manager after this program?” is a valid measurement of impact. But 1) research clearly shows that we are terribly bad at self-assessing our own performance, and 2) there is no assurance that the manager and the L&D team share the same definition of effectiveness. On the other hand, if we have crafted the learning intervention well, we should understand the behavioral changes we were trying to create. If this is the case, then there should be measurable behavior changes that will demonstrate impact.

This last concern may already be in the process of being dealt with by the Council. In a new post on LinkedIn, Barnett explains they have identified 8 metrics that will support the IOM. They look great, but the details are still being worked out. If the TDOC can incorporate more financial, business-result, and behavioral data and steer away from opinion-based, self-reported, and perceptual data, they will have a model that could indeed change the game for L&D as a strategic function within organizations.



Feature Photo by Ryan McGuire provided by Gratisography.

Retooling for the Future

Connie Malamed does a nice job of defending us Learning and Development professionals in her blog post, Retooling Instructional Design: 7 Ways Learning Professionals are Preparing for the Future. There has been a massive wave of change that has often left us subject to criticism that we’ve fallen behind or become obsolete. Connie points out that many of us have been working at changing our methods, approaches, and tools in order to prepare for new ways of doing what we do.

She provides a list of 7 ways learning professionals have been working to meet the demands of the modern workplace that is evolving quickly.

  1. Acceptance of Evidence-based Strategies
  2. Focus on Human-centered Design
  3. Adopting UX Techniques
  4. Use of Agile Models
  5. Creating Learning Journeys
  6. Applying Learning Analytics
  7. Designing for Newer Technologies

I wholeheartedly agree with Connie that these seven trends are at the core of what learning professionals will be doing now and in the future. I do feel she slightly missed the mark on #6 and #7. And I would add a #8 to the list.

Applying Learning Analytics

While she does indicate we are making more data-driven decisions, she only mentions “the value of learning analytics for continuous improvement.” While this is true, it’s not a huge change from what we’ve always done in evaluating the effectiveness of programs. Big data is enabling faster, more responsive analysis, but that’s not the game changer when it comes to Learning Analytics.

The real power of Learning Analytics comes in our ability to use data to:

  • make predictions of what is needed and what will work,
  • combine learning data with business data to determine the true business value of learning activities, and
  • use data in real time to provide truly personalized learning experiences in the flow of work.

These are the game-changing promises of Learning Analytics that will enable us to get in-sync with our business unit colleagues and finally demonstrate our real value to the organization.
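To make the second of those promises concrete, here is a deliberately simplified Python sketch of joining learning-completion records with business data. Every identifier and figure (employee IDs, completion flags, quarterly sales) is invented for illustration, and a raw difference in averages shows correlation, not proof of causation; it is a starting point for the conversation with the business, not a finished analysis.

```python
# Hypothetical sketch: combine learning data with business data to
# estimate the business value of a training program. All names and
# numbers below are invented for illustration.

completions = {"e1": True, "e2": False, "e3": True, "e4": False}
quarterly_sales = {"e1": 120_000, "e2": 90_000, "e3": 135_000, "e4": 95_000}

def avg(values):
    return sum(values) / len(values)

trained = [quarterly_sales[e] for e, done in completions.items() if done]
untrained = [quarterly_sales[e] for e, done in completions.items() if not done]

# Difference in average quarterly sales between completers and non-completers
print(avg(trained) - avg(untrained))  # 35000.0
```

In practice the learning records would come from an LMS or xAPI store and the sales figures from a business system, but the join-then-compare shape stays the same.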

Designing for Newer Technologies

Here I feel like Connie oversimplified by limiting her discussion to the impact that virtual reality (VR), augmented reality (AR), and conversational interfaces (I’m guessing she is referring to chatbots and other tools that take advantage of voice recognition and natural language analysis) are having on our work.

She is right that learning professionals are leveraging the latest technologies. I’d even argue that this is a trend we’ve been at the forefront of for decades dating back to pre-internet days.

She correctly points out that we have an awareness that “a new tool will not magically solve a performance problem.” Yet we fall for the bright shiny new toy as quickly as others. There are all kinds of new technologies emerging (artificial intelligence, machine learning, xAPI, geo-presence, sensors and other internet of things devices, image recognition, pattern recognition, robotics, and more) and the “old” technologies are still viable (ink on paper remains a great, cost-effective delivery mechanism for learning), depending on the solution needed.

Designing for Newer Technologies really points to the necessity to determine which technology:

  • resonates with our learners (regardless of whether it is a “learning” technology or not),
  • can deliver the best learning experience for the given need, and
  • does so in a cost effective manner.

Be Marketers of Learning

Connie does touch on a bit of this trend when she discusses using personas and conducting learning campaigns. But I believe it should be called out separately: one, because numerous learning professionals and organizations are starting to do this, and two, because I believe it is vital to our successful transition into our future state.

We need to be champions of individual and organizational learning: the evangelists of a new learner-centered, lifelong culture of learning that is supported by senior leadership and frontline managers. The learning journeys that Connie discusses need to be supported with well-articulated marketing campaigns.

Like our Marketing colleagues, we need to have an intimate knowledge of who our audience is. Who are the thought leaders? Who are the saboteurs? Who are the influencers? Who are the champions of change? What social networks already exist? Can we leverage them to help, or will they resist?

Finally, we need to target managers and provide them with meta-learning tools, and with evidence that those tools are working, in order to lead them to a conversion experience about learning.



Net Promoter Score

One metric that you should be borrowing from your social media marketing colleagues is Net Promoter Score (NPS).  NPS is a broad measurement of an audience’s view of an organization or product.  In business it is used to gauge the reputation of a company’s brand or how loyal customers are to a product.  Both would be great to understand about your community or your learning and development offerings.

NPS is calculated from the answers to the question, 

“how likely is it that you would recommend [product/brand] to a friend or colleague?”

An 11-point Likert scale is used to enable customers to rate their satisfaction with their experience.

Scores are gathered and tallied, with answers between 0-6 labeled detractors, 7-8 labeled passives, and 9-10 labeled promoters.  If you subtract the percentage of detractors from the percentage of promoters, the result is your NPS.  It reflects the overall satisfaction with, in this case, your community.  The raw number (in the example, -2.7) is not that valuable.  But by watching the movement of your score over time you can effectively gauge the relative health of your community.

You can also look at the number of promoters vs. the number of detractors for insights.  Because of the behaviors of promoters, passives, and detractors, NPS has been shown to be predictive of business growth in three areas. Because these areas are behavioral, they can be indicative of engagement and loyalty and thus applied to our communities.

Higher Margins and Spend

At first, this may not seem applicable, but let’s parse it out a bit.  If you do have a revenue line (i.e., membership fees or products/courses that your members purchase), promoters are less price-sensitive while detractors are price-conscious.  Promoters will be more likely to buy more and less likely to complain about the membership fee.

Higher Retention Rate

Detractors defect from an organization or brand at a higher rate than promoters.  Thus membership will drop. Interest in participating in learning activities will suffer.

Greater Word of Mouth

Promoters account for most referrals.  This will lead to more new members and greater retention of current members.  It’s the promoters who convert Linkers and Lurkers into Learners in our communities.  Detractors are responsible for negative word of mouth, which can lead to members leaving the community and overall negativity.

But What Does it Mean?

In my example above, we found an NPS of -2.7.  So what does that mean? In and of itself it honestly doesn’t mean a thing.  NPS is not a fixed measurement. -2.7 could be good or bad, depending on what your NPS has been in the past. 

If our NPS has traditionally been in the +10 to +15 range, then this -2.7 indicates that something negative happened in the last period.  If the NPS rebounds to its traditional level, then the event came and went and your members have let it pass.  If, however, it remains at the lower level, you have a more systemic issue to deal with.

If, on the other hand, your NPS has been negative for a long time, a -2.7 NPS could indicate that your efforts to improve your community and engage your members and customers are working.

Net Promoter Score is not a silver bullet metric, but it should be in your list of key metrics. It can clearly alert you to whether your members and customers are happy or not. It can help you understand if new engagement initiatives are working. It can sound the alert that something is not sitting right with your members and customers.



Ask Why before Measuring

Recently I had the following scenario put forward to me:

If someone asked you: “How do I get started thinking about and moving towards a measurement-based approach to training?” How would you answer them?


My first question to them would be “Why?” “Why are you feeling the need to move towards a measurement-based approach to training?”

To be honest, it’s a trick question.  Seeking to have a measurement-based approach to training is wrongly focused.  Training (and all learning) should be focused on improving specific and measurable organizational and/or individual performance.  Measurement (and data collection, metrics, and learning analytics) is in service of a performance-based, data-driven approach to training and learning.

However, there is value in the question of why a measurement-based approach to training is desired.  The answers, some worse than others, will reveal how much work is to be done.

Nice Try, But Try Again

The really bad answers will be along the lines of:

  • My supervisor asked me to look into it
  • Every other department has great looking graphs and charts
  • At DevLearn/ATD-ICE/HR Tech Conference everyone was talking about measurement and metrics
  • xAPI enables us to measure all kinds of things

The problem with these answers should be obvious.  Taking on any initiative to “keep up with the Joneses,” or to carry out an unjustified task handed down by a supervisor, is a waste of time and money.  Without a purpose, you’re simply taking stabs in the dark hoping to land on something valuable.  My response to these answers is “don’t waste your time.”

Go back to the drawing board and determine whether there is a business need to drive your exploration of measurement, or forget about the initiative completely.  With no real rationale, it’s doomed to failure.

That’s Better, but…

There is a set of “middle ground” answers that are headed in the right direction.  But they still fall short of answering the question “Why?”

  • The stakeholders for our new sales training want to measure the success of our solution by sales metrics (percentage of deals closed in first three months, overall increase of sales per representative)
  • We know we aren’t capturing Kirkpatrick Level 3 and 4 data and have to figure out how to do that quickly.
  • Finance won’t accept our budget requests without an explanation of how we are going to determine if each program is meeting its operational and financial goals.
  • We keep reading industry reports that say senior leadership of most organizations feel they aren’t getting adequate data from learning and development.  We approached our senior management and found out they feel this way as well.  But we don’t know what data we need to gather to satisfy them.

At this level of understanding, the push for measurement is coming from agents external to L&D – various stakeholders, industry thought leadership, organizational gatekeepers (finance/IT), and others.  These answers still reflect a reactive stance regarding how we report on our initiatives.  Input from these external agents is important.  But it should be input, not strategic direction.  We need to synthesize this input and build a coherent and achievable strategy for individual projects and for learning as a whole.

This may seem a little overblown, but generally, it’s not.  We are just at the beginning of the transformation of L&D to being evidence-based and data-driven.  Most of us don’t yet understand the nuances of performance measurement tied to business objectives.

Purposeful Strategy

Asking “why?” to this point has served to identify 1) a lack of true business goals, 2) a scattered, unfocused approach to data collection and measurement, and 3) the potential resources and roadblocks to performance-based, data-driven reporting and decision making.

But the real “why?” (or whys) gets at the heart of the purpose of each initiative and the desired change in organizational and/or individual performance.  Once this purpose is fully understood, a preliminary learning strategy and supporting measurement strategy can be developed.  Data collection, measurement, metrics, performance evaluation, and learning analytics are all in support of the overall learning strategy. They are the means to an end, not an end in themselves.

It Comes Down to the Data

With all of this said, I still haven’t answered the original question, which, rewritten to address my initial concern, would be:

If someone asked you: “How do I get started thinking about and moving towards a performance-based, data-driven decision and reporting system to support learning here at XYZ Corporation?” How would you answer them?

In general, I advocate choosing one or two new projects that are small-to-medium in scale to serve as a pilot and/or guinea pig.  One, if something goes haywire, it will have less of an impact. Two, you’ll be able to cycle through it faster – validating your new approach more quickly – so you can replicate your success rapidly.

On a more specific level, my response would be: Do you have a Data Strategy?

Have you worked through the following steps?

  1. Figure out who are the stakeholders in your project’s success. What role will they want to play in the project? Who is ultimately responsible for achieving the desired performance change?
  2. Make sure that the requested intervention has a clearly identifiable expected impact upon the business. If the stakeholders can’t define the benefit the change will have on the business, how will you create appropriate learning or performance management experiences?
  3. Understand how the stakeholders for this intervention expect to determine the project’s success or failure. What is the measure of success? Some may be qualitative.  Some may be quantitative.
  4. With your stakeholders, determine how each measure of success should be measured and set a SMART goal. If the measure of success is “increased sales” will it be measured by region? individual? company wide? Will it be recorded in units? currency? signed contracts?  Will the goal be an increase? a raw number?  a percentage over last period? 
  5. Determine the specific data that is needed to complete each measurement. What is it? Where in the process would it occur? Is it quantitative or qualitative? How might you measure it? Does the measure of success have component data points that must be collected separately and then calculated together? This should be done as an “ideal world” exercise.  Don’t worry about technologies, policies, collection methodologies, etc. at this point. What data would you need to provide the best information possible?
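A hypothetical sketch of what the output of steps 4 and 5 might look like for one project. Every measure, goal, and data point below is invented for illustration; the point is the shape of the plan: each measure of success decomposed into a SMART goal, its unit and granularity, and the component data points that must be collected.

```python
# Illustrative measurement plan: each measure of success broken down
# into the data needed to compute it. All names and values are
# hypothetical, not drawn from any real project.

measurement_plan = {
    "increased sales": {
        "smart_goal": "+10% sales per representative within 2 quarters",
        "unit": "currency",
        "granularity": "individual",
        "type": "quantitative",
        "data_points": ["signed contracts per rep", "contract value", "quarter"],
    },
    "manager effectiveness": {
        "smart_goal": "raise direct-report engagement score 5 points in 6 months",
        "unit": "survey score",
        "granularity": "team",
        "type": "qualitative",
        "data_points": ["engagement survey responses", "survey date"],
    },
}

for measure, spec in measurement_plan.items():
    print(measure, "->", spec["smart_goal"])
```

Holding the plan in a structure like this, rather than in prose, makes it easy to hand to stakeholders for sign-off and to check later that every agreed data point is actually being collected.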

At this point, if not before, I’d stop and let them know that while there is much more beyond this, I’m guessing their head is already reeling.  My point is, it all has to begin with the business purpose behind the initiative, which then needs to be analyzed down to the data points required.

The five steps I’ve outlined above are required if you want to establish valid measurements that meet the business objective(s) of the initiative.  You will have powerful stakeholder buy-in and a foundation for valid and accepted reporting.  You’ll have the basis for ROI figures that are supported by senior management.  You will also be on your way to becoming a trusted business partner.

Determining the data you need to collect before you begin the design phase of your project is crucial; otherwise your design may leave out critical moments that are needed to generate the correct data.

SECRET: You Already Know How to Do This

You are doing stakeholder analysis already.  You’re already talking to your stakeholders when doing a needs analysis; you’ll just add a few questions about their dreams and aspirations (and ask them to quantify them if they haven’t already).  You already do task analysis of the process to be taught; you’ll add a few columns to your task analysis table for information regarding the related data.  You already know how to set SMART goals.

What do you think? Is this doable? Do you agree that sweating the data is worth it?  What would you change?  Why?  Please add to the conversation in the comment section below.

Start with Assessment

Beyond the employment interview, how often do we assess employees as they move through their time with the organization?

Stephen Kelner of Spencer Stuart discusses the need for assessment early in the development process for leaders in his blog post, Why Effective Executive Development Begins with Assessment.  I think his insights into the role of assessment in executive development are just as applicable to the rest of the workforce and should be part of any L&D analytics and assessment plan.

In a nutshell, Kelner’s argument can be boiled down to a basic statement: ‘how can you arrive if you don’t know where you’re starting from?’  He says that executives crave feedback on their performance: “Rich, specific feedback tied to the demands of current and future roles enables leaders to improve and develop important new capabilities.”

Current, event-based leadership training and coaching have effectiveness ratings similar to the rest of training.  Kelner says that widely accepted research on executive training shows only 25% of participants make any behavioral changes at all.  “Some categories of executive training have been found to have zero impact.”

A major factor in this poor performance is that what guidance/training is provided is done with little understanding of the individual contexts the leader is in, what he/she needs to meet the challenges of a specific role, and how change is impacting all of that.

Without a thorough assessment/analysis of these factors, there is little chance of impacting the behavior of executives and thus their ability to perform better.

Kelner provides four guidelines for assessing executives prior to initiating a development plan:

  • Measure the capabilities that are central to effective executive leadership – for the future as well as present leadership demands
  • Consider the relevant leadership context – including environmental changes, and remembering that current performance is not the same as future potential
  • Evaluate potential with a development lens – but be sure to ask “potential for what?” and “is the leader motivated to develop in that direction?”
  • Embrace multiple methods and perspectives for precision

He goes into each of these points in good detail, but I won’t repeat it here.

I agree with Kelner’s closing statement that, “Assessment provides the starting benchmark and identifies the best opportunities for a leader to grow and change, and thus enables the growth of a company’s leadership overall.”

But throughout organizational learning efforts, there is very little effort made to understand where individuals are starting from.  Sure, in our instructional design models some attention is given to understanding context and doing gap analysis of current and desired future states.  But in practice, little information is gathered – pre-design or built into the current project.  We talk about Kirkpatrick and other post-instruction assessments, but how often do we assess “what’s next?” or “what else does this employee need?”

Ideally, we’d be developing a portfolio, or helping the employee develop one, which would provide an understanding of what she/he needs to advance her/his career.

Assessment should be ongoing as a part of the organization’s performance management effort and a robust learning culture.  Unfortunately, often, neither of these is the case.

Your turn.  What do you think?  Do we assess learners pre-training?  What types of assessments do you use, or do you think are useful, in benchmarking where learners are starting from?

Feature photo by SportSuburban via Flickr.  Creative Commons Attribution License 2.0.

Leading the Target: Challenges for 2017

Having grown up in a hunting family, Bill Brandon’s article in Learning Solutions Magazine entitled Leading the Target: Challenges for 2017 caught my attention.  His forecast for 2017 is on the mark.

When you are seeking to hit a moving target with a rifle or bow and arrow, the first thing you are taught is that you must “lead the target.”  If you aim for where the target is when you fire, your shot will be woefully behind the target when it gets there.

If there were ever a moving target in Learning and Development, what we will need to accomplish in 2017 is one.  Brandon points out that while the challenges we face are not new to us, the coming year will be focused on 4 key topics:

  • Learning Analytics
  • Bring Your Own Device and Security
  • Authoring Tools
  • Accessibility

They are being enabled and/or changed by a number of innovations in concepts and technologies:

  • xAPI and cmi5
  • The “realities”: virtual, augmented, mixed
  • Cloud uses for learning
  • Multimodal learning
  • Mobile microlearning
  • Spaced learning
  • Personas

And for good measure, he throws in the fact that the Gig Economy will likely be a disrupting factor in how we go about doing all of this.

I agree with Brandon that none of this is new.  But I also agree that all of it coming together is going to make 2017 a difficult year to hit this moving target.

2017 is going to be all about agility.

Take a look at your personal learning network now.

  • Are you in touch with the thought leaders in your areas of specialization?
  • Do you know how the concepts and technologies mentioned above will impact your work?
  • What do you need to learn to be better prepared?

Don’t look at what is being done today.  Lead your target so that you’ll be ready to do what needs to be done tomorrow.

2017 Learning Trends: Custom, Adaptive, Anywhere, and Impactful

Jeff Carpenter, CEO of Caveo Learning, emphasizes the move to measuring performance improvement rather than training results in his post 2017 Learning Trends: Custom, Adaptive, Anywhere, and Impactful on Caveo’s blog.  The trends outlined in his post are based on feedback from over 100 CLOs and other learning leaders.

  1. Deliver Learning on the Learner’s Terms
  2. Create “impact,” Not Merely “Learning”
  3. The Rise of Adaptive Learning
  4. Curation of Customization
  5. Learning Experience Gets Customized
  6. Interactive Video Pushes into the Mainstream
  7. Measure Twice, Train Once

None of the trends, in and of themselves, is shocking.  Although the prediction that gamification has had its time in the spotlight and will now become just another tool in the L&D toolkit is sure to stir some controversy.

But the seven predicted trends are what the profession has been working on for some time.  We’re on the right path.   Individualized, anytime/anywhere, performance-focused learning has been the brass ring we’ve been reaching for for a long time.  In 2017 we will have a chance to grab it.

The underlying trend in both Carpenter’s post yesterday and Josh Bersin’s post of Bersin by Deloitte’s predictions last week is urgency.  This may be L&D’s only chance to grab that brass ring.  There may not be another time around the carousel if we miss this opportunity.  Our organizations have been patient as we’ve dealt with massive changes in the workforce and emerging technologies, but it is time for us to step up and deliver.

Fortunately, there are reasons to believe we can succeed in 2017 and radically change how we deliver value to the organization. I was glad to see Caveo mention xAPI.  I believe it is going to have a greater impact in 2017 than most people are aware of today.

The challenges are real, and we will have to stretch ourselves once again.  But grabbing the brass ring is possible.  Get ready……here it comes……

(photo by camknows on Flickr.  Creative Commons — Attribution-NonCommercial-ShareAlike 2.0 Generic — CC BY-NC-SA 2.0 licence)

Bersin by Deloitte: 2017 – the Year Everything Goes Digital

Bersin by Deloitte has just released its Predictions for 2017: Everything Is Becoming Digital report.  Amongst the 11 predictions for HR there are specific predictions for Learning and Development.

2017 will be a tipping point for L&D and we will see dozens of world-class “digital learning” solutions all over the world.

Overall, the report says that, due to rapid and non-stop changes in technology and in the expectations of the business, Learning and Development has fallen behind and will continue to struggle in 2017.  The challenges L&D faces will come in the context of a massive HR transformation to a performance-oriented, employee-responsive digital culture that can address the impact of the future of work.

They predict that (comments in blue are mine):

  • Real-time feedback and analytics will explode in maturity.  While L&D isn’t mentioned specifically in this prediction, it does discuss HR as a whole.  With the advent of xAPI and cmi5 to better report today’s ongoing, often social, learning experiences, we should be able to make a remarkable move forward in this area.
  • 2017 will see the launch of new and highly restructured LMSs.  This will be refreshing for companies in a position to adopt new LMSs, as current LMSs aren’t built to deal with the challenges that social and blended learning ecosystems create.  In addition, a myriad of new social tools are being launched and improved as well.  Add xAPI to the mix and L&D technology and its impact will begin a radical change in 2017.
  • Video and self-directed learning will become the dominant factors in organizational learning moving forward.  The shift to a micro-learning, “always-on” model of learning will accelerate in 2017.  Social learning and learning-ecosystem design will also contribute to a transformation of the work of Instructional Designers and Facilitators.
  • L&D functions will need to be rethought and restructured.  Bersin by Deloitte points out that while L&D is aware of this change and is driving the discussion, two-thirds of all corporate learning organizations are structured with a centralized training function based upon old models.  L&D personnel and competencies need to be built into the business units to become more responsive.  New competencies, some relatively alien to many L&D professionals, will be incorporated into existing jobs and new roles will be created.
  • L&D needs to take the lead on driving a culture change in organizations to an always-on learning focus.  They task us with showing managers that they need to drive learning because it is tied to the success of the business.  We need to tie learning inextricably to performance, leadership skills, and organizational success and become marketing stars.
  • “2017 will be a tipping point for L&D and we will see dozens of world-class “digital learning” solutions all over the world.”  2017 will make heroes out of agile learning leaders who can drive innovative and creative solutions that change the course of organizational learning.

What do you think?  Is Bersin by Deloitte on the mark in their predictions?  Is your L&D organization ready to lead this massive change?  Are you ready?

You can download the report on a complimentary basis for a limited time.

Towards Analytics Literacy


In his post, Analytics Literacy is a Major Limiter of Ed Tech Growth, Michael Feldstein argues that there is a lack of basic literacy in the Ed Tech and Learning and Development communities.  He points out that analytics is as much about storytelling and sensemaking as it is about data.  We intuitively create stories about the data we see, it is the core of hypothesis creation.  Through repetitive and progressive testing of hypotheses, we come to trust the story that algorithms tell.  This builds analytic literacy.

Feldstein provides several excellent examples of the role of analytic literacy in evaluating student performance based upon logins to software, weather forecasts of precipitation, and the recent US Presidential election.  When we lack the literacy to get the story right, we start to distrust the data or the analytics, not our literacy.

If we, as a culture, lack the basic literacy to have clear intuitions about what “a 70% chance” means, then how likely is it that we won’t have shocks that cause us to distrust our learning analytics because we didn’t understand their assumptions and limitations?
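Feldstein’s “70% chance” point can be made concrete with a quick simulation (my sketch, not from his post): a well-calibrated 70% forecast means that over many such forecasts the predicted event happens about 70% of the time, so the roughly 30% of dry days are expected, not evidence the forecast was wrong.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

# Simulate 10,000 days on which a calibrated forecaster says "70% chance of rain".
trials = 10_000
rainy_days = sum(random.random() < 0.70 for _ in range(trials))
observed_rate = rainy_days / trials

# Roughly 70% of those days turn out rainy; the remaining ~30% of dry days
# are part of what the forecast predicted, not a failure of the forecast.
print(f"rain on {observed_rate:.1%} of forecast days")
```

Analytic literacy, in this sense, is the habit of judging a probabilistic claim against many outcomes rather than any single one.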

He uses the medical community’s move to scientific methodologies a century ago as an example of the transformation that the learning community now needs to undertake regarding analytics and performance.  But he also points out that we need to avoid placing all our analytical trust in various technology tools.

Using a personal story about a medical issue, he describes how dependence upon various diagnostic tools failed to reveal the cause of the back pain he was experiencing.  It took a doctor touching his back and feeling the benign fatty tumor that was present for him to be correctly diagnosed.

He finishes by concluding that the “training of learning and development professionals needs to make a radical change to transform our teaching culture into one of learning science and data science literacy,” all while not losing the intuition and observation skills that have propelled our field to date.

I’m not sure that I agree with Feldstein that incorporating learning analytics into our profession will take a radical transformation.  Fifteen to twenty years ago, in the early days of elearning, there were predictions that the advent of online, digital tools would decimate the L&D profession, which was supposedly so rooted in brick-and-mortar, face-to-face training delivery that it wouldn’t be able to adapt.

Well, time has shown the world what we already knew about ourselves: we will adapt in whatever way we need to achieve our goal of helping people learn.  In the case of elearning, not only did we adapt, we thrived.  We turned the change to our advantage and improved learning across the board.

Analytics, statistics, and Big Data are a bit of a foreign language for most L&D pros, but they fit well with our trained skills of needs analysis and evaluation.  We cherish any information that will enable us to design better learning experiences.  There will be an additional benefit: we will not only be able to demonstrate our link to the success of the businesses we work within, but we will also resonate more closely with the management culture of our organizations.  Our evidence of success will look like their evidence of success.  Learning analytics holds the promise of finally making us peers among peers in our organizations.  That may well be the best carrot for L&D professionals taking on this new challenge.