xAPI Cohort – TorranceLearning

I’ve signed up for the Fall 2018 xAPI Cohort sponsored by TorranceLearning.  It’ll be my 2nd Cohort as an active participant, assuming I find a team to join.  In Fall 2017 I was a “lurker,” meaning I wasn’t on a project team, but because I had registered for the Cohort I was able to attend any of the weekly sessions and to dip in and out of the project teams’ work in Slack.  Yes, this is a planned role in the xAPI Cohort!

The Spring 2017 Cohort had 40 or so active participants (if my recollection is right).  The last report is that well over 400 have signed up for the Fall 2018 Cohort!  But don’t let that intimidate you.  That’s 400 people you can learn from by joining, forming, or following one or more projects!

And it’s all FREE!  The xAPI Cohort is an exploratory, experience-based learning community at its best.  Project teams form after the first weekly session and report out on their progress each week so that everyone learns from all of the projects.

In Spring 2017 I was on a small but dogged team that set out to explore different ways to use learning analytics and data visualization with xAPI data to provide learning insights.  To be honest, we failed miserably to meet the original goals of the group.  But fortunately, the primary goal of the xAPI Cohort is truly “learn something – together.”

I know I learned more applicable information about data collection, privacy, control, and governance, as well as how webhooks and APIs work and, oh yeah, how xAPI statements are well constructed (and how they can be poorly structured), than I likely would have in a traditional academic course.  Team Analytics met with and overcame a number of obstacles and, in the end, had a long list of “lessons learned” that we were able to share with the community.  Here is our report to the Cohort.

If you are interested in moving your knowledge and skills regarding xAPI forward, consider joining me starting September 7, 2018 and let’s learn together!

Learning by Obstacle: the xAPI Cohort

For the past three months, I participated in the Spring 2017 xAPI Cohort.  The Cohort is a hands-on project learning experience for people who are learning about xAPI and those who are looking to move the xAPI standard forward.  Participants form teams around projects they wish to work on together to get a deeper understanding of how xAPI works, the possibilities it creates, and, by my experience this spring, the obstacles it has yet to overcome.

My team’s learning in the cohort came from confronting obstacle after obstacle and looking for the lesson to be drawn from each roadblock.  Our final product was a set of lessons learned and a list of recommendations for L&D practitioners about working with Big Data and a few for the xAPI powers that be.

I signed on as a member of Team Analytics.  The initial idea behind the team was to explore the possibilities that data visualizations have for reporting results of L&D learning experiences.  But that requires one big assumption – having access to a large, quality dataset composed of xAPI statements.

After 2 weeks of administrative stuff and team formation, Team Analytics spent the next 6 weeks of the cohort trying to find a dataset we could use.  Issues of governance, ownership, privacy, irrelevance, and control of data kept us at bay.

We finally decided to use the xAPI statements being generated by the Cohort’s activities in Slack.   8 weeks into a 12-week course, we were ready to rock-and-roll.

Or so we thought.

It turns out the xAPI statements, while valid, were not well formed; the outgoing webhooks Slack provides generate minimal data; and converting that data to xAPI statements is manual programming work.
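To give a feel for that manual work, here is a minimal sketch of the kind of mapping involved.  It assumes a payload shaped roughly like Slack’s legacy outgoing webhooks; the verb and activity IDs are illustrative choices, not the ones our team actually settled on.

```python
import json

# A minimal sketch of the manual mapping work described above, assuming a
# payload shaped roughly like Slack's legacy outgoing webhooks. The verb and
# activity IDs below are illustrative choices, not what Team Analytics used.
def slack_event_to_xapi(payload: dict) -> dict:
    """Convert one Slack webhook payload into a single xAPI statement."""
    return {
        "actor": {
            "objectType": "Agent",
            "name": payload["user_name"],
            # Slack does not give us an email, so identify the actor by account.
            "account": {"homePage": "https://slack.com", "name": payload["user_id"]},
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/commented",
            "display": {"en-US": "commented"},
        },
        "object": {
            "objectType": "Activity",
            "id": f"https://slack.com/channels/{payload['channel_id']}",
            "definition": {"name": {"en-US": f"#{payload['channel_name']}"}},
        },
        "result": {"response": payload["text"]},
    }

if __name__ == "__main__":
    sample = {
        "user_id": "U123456", "user_name": "jane.doe",
        "channel_id": "C987654", "channel_name": "team-analytics",
        "text": "Has anyone found a usable dataset yet?",
    }
    print(json.dumps(slack_event_to_xapi(sample), indent=2))
```

Every field that Slack doesn’t hand you (job role, session context, and so on) has to be invented, inferred, or dropped, which is where the “not well formed” statements came from.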

Thanks to Will Hoyt from Yet Analytics and Matt Kliewer from Torrance Learning, we were able to figure this out and reconfigure the xAPI statements and actually do a bit of the work we initially thought we were going to be doing. But more importantly, because of the project-based, unscripted approach of the Cohort, we discovered issues we needed to learn about to overcome each obstacle.

The Spring 2017 xAPI Cohort was a tremendous learning experience.  If it sounds interesting, learn more about the Cohort and register for the Fall 2017 cohort at http://www.torrancelearning.com/xapi-cohort/
Here are the “Lessons Learned” from our experiences in the xAPI Cohort:

  • Ownership and Control of Data
    • Determine who owns the data and/or who controls the business data you wish to use.
    • Protocols and approval processes are in place to protect the quality of existing data and control its use.
    • Clearing these hurdles requires stakeholder partnership, upper leadership buy-in, and clear planning of how data will be used.
    • All of which takes time.
  • Privacy and Accessibility
    • An EULA (End User License Agreement) or employment agreement may dictate the usage of data.
    • Does your company have a data usage policy?
    • Access to data may be limited or controlled, or the data may be off limits entirely.
  • Accessibility of Data
    • If a tool is not natively programmed with code to trigger the creation of xAPI statements or in the cases where xAPI statements are a limited subset of all activities, you’ll be limited to the data the tool provides via a Webhook and/or APIs.
  • Accuracy & Usability
    • The manual scripting process is not standardized, so errors can be introduced
    • Poor data planning can lead to useless data
  • Resources Required
    • Programmer competent in writing API scripts and xAPI statements.
    • Time availability of said programmer
  • Data Mining vs Learning Analytics –
    • Data for data’s sake only creates noise that can overwhelm your efforts to clarify the impact of learning upon business results.  Collecting data without knowing why you are collecting it is a waste of time and resources – especially with the work required to implement xAPI.
  • Visualizations –
    • Poorly chosen visual components such as size, color, and positioning can render the best visualization useless by making it illegible.

This Year’s Best LRS is….Hold on….

Recently, Craig Weiss published a ranking of Learning Record Stores (LRSs) on his blog – see LRS Rankings (Learning Record Store).  When I first saw this, I was excited that xAPI was getting such great coverage.  Craig’s rankings of eLearning technologies are well respected, and his paying attention to LRSs is a great sign for the xAPI movement.

Unfortunately, Weiss missed the mark here.

I wish I could say that this error was along the lines of Steve Harvey’s Miss Universe mistake in 2016 or the Academy Awards’ flub in naming the Best Picture winner this year.

But Weiss’s errors in his assumptions for his rankings have to do with a fundamental misunderstanding of what an LRS is required to be under the ADL specification and a confusion between what the xAPI data specification delivers and what it might enable other tools to deliver.

LRSs are required to:

  • receive and store xAPI statements that comply with the xAPI technical specification
  • reject statements that are not compliant
  • treat statements received as immutable – i.e., they cannot be deleted
  • provide access to the data for other LRSs or third-party systems that request it

That’s it.  LRSs are about verifying xAPI-formatted data, storing it, and sharing it.  Recently, ADL released a tool with which LRS vendors can verify that their LRS is conformant to the ADL specification: https://lrstest.adlnet.gov/
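To make the “sharing” part concrete, here is a minimal sketch of pulling statements back out of an LRS over the spec’s Statements Resource.  The endpoint URL and credentials are placeholders for whatever your LRS vendor provides.

```python
import requests

# A minimal sketch of the "sharing" half of an LRS's job: pulling statements
# back out via the spec's Statements Resource. The endpoint URL and
# credentials below are placeholders, not a real LRS.
LRS_ENDPOINT = "https://lrs.example.com/xapi"   # hypothetical
AUTH = ("lrs_key", "lrs_secret")                # hypothetical Basic auth pair

response = requests.get(
    f"{LRS_ENDPOINT}/statements",
    params={"limit": 10},                        # most LRSs page their results
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=AUTH,
)
response.raise_for_status()

for statement in response.json()["statements"]:
    actor = statement["actor"].get("name", "unknown actor")
    verb = statement["verb"].get("display", {}).get("en-US", statement["verb"]["id"])
    print(f"{actor} {verb} {statement['object'].get('id')}")
```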

A “by specification” LRS will likely never be seen by an end user – learners, instructors, data analysts.  Some LRS vendors are building basic “back door” analytics interfaces, but those are not required by the specification.  (More on analytics below.)

Weiss outlines several “Keys to Remember” when analyzing LRSs (and, by extension, xAPI).  The first is:

There are some standards in terms of what data is captured that is seen in most LRSs. These include top influencers, xAPI, statement activity, most popular content, search extraction, data visualization (but all over the place from that standpoint), and connectors.

LRSs only accept, store, and share valid xAPI statements.  xAPI statements, depending upon how they are constructed, may contain the data needed to determine top influencers, most popular content, and search extraction information.  xAPI data, like any data, can be used for data visualization.  But none of these things are inherent in xAPI statements; xAPI provides access to new data to be analyzed (more on this below).  As to connectors, yes, xAPI and LRSs utilize standard API protocols for connectors to other systems.  This is not unique to xAPI – which is a powerful aspect of the specification.

Weiss’s second and third keys are true.  Some LMS vendors are incorporating LRSs into their architecture, which has advantages and disadvantages.  This post by Rustici Software outlines the various configurations that can occur with an LRS.

He also seems confused that some of the LRSs have both an open source and a commercial version.  xAPI is an open source standard.  The “instructions” for building an LRS are open source, and anyone is welcome to build their own.  In reality, however, building your own LRS is not easy (there are 1,300 tests to pass for an LRS to be determined conformant by ADL).  So naturally, many customers, and LMS vendors, will be more interested in buying an already-built, conformant LRS from a vendor than taking on the cost and effort of building their own.

Weiss then says that vendors have forgotten one of the “premises of LRSs”:

The premise (besides what it can do and its benefits) was that each learner has this data record and it captures everything (which it does), BUT and here is the kicker, if the learner leaves the company, school, etc., they take their data record with them.

This is not and has never been a premise of the LRS or the xAPI specification.  What he is confusing here is an aspirational goal that might be achieved if xAPI is widely implemented.  The role of the LRS in this aspirational vision is as a vessel that holds xAPI statements and can transfer them to any other LRS with no worry about data configuration.  Thus, ideally, a student’s learning activity, once recorded in xAPI statements, technically becomes portable – dependent upon either the student or their new school or employer having an LRS it can be transferred to.  These technologies are being developed.  There are issues of data ownership and privacy that will impact this vision, but those are not relevant to the current discussion.

He then makes an accusation that is totally false.

Some vendors though have changed the premise of the data record transference.  How?

  • They delete the record if the person leaves (regardless if they quit, fire, bolt, go the route of the school angle above, etc.)

As I stated above, a key feature of xAPI and LRSs is that statements are considered immutable once created.  They cannot be deleted per the specification.  If an error has been made in the creation of a statement or statements, a second “voiding” statement can be generated to negate the first.  But this is a laborious procedure and, as far as I know, is generally used only to negate test statements so that they won’t appear in any analytics.  Even in the case of test statements, I’ve been advised it is easier to simply create a new LRS and start over than to try to void all of the incorrect statements.  Weiss’s claim is a straw man against the aspirational goal of lifelong learning records; vendors are not deleting valid statements willy-nilly.
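For the curious, here is roughly what voiding looks like in practice, as a minimal sketch.  The endpoint, credentials, and statement ID are placeholders, and one voiding statement is needed per bad statement, which is exactly why the process is so laborious.

```python
import requests

# A minimal sketch of how a single statement is "voided" rather than deleted.
# The LRS endpoint, credentials, and the target statement ID are placeholders.
LRS_ENDPOINT = "https://lrs.example.com/xapi"   # hypothetical
AUTH = ("lrs_key", "lrs_secret")                # hypothetical

voiding_statement = {
    "actor": {"mbox": "mailto:admin@example.com", "name": "LRS Admin"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/voided",
        "display": {"en-US": "voided"},
    },
    # The object is a reference to the statement being negated, not an activity.
    "object": {
        "objectType": "StatementRef",
        "id": "9e13cefd-53d3-4eac-b5ed-2cf6693903bb",  # ID of the bad statement
    },
}

resp = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=voiding_statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=AUTH,
)
resp.raise_for_status()
# The original statement remains stored; it is simply excluded from reporting.
```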

Finally, the main basis on which Weiss founds his rankings is the learning analytics interfaces that some of the LRS vendors have opted to add to their LRS offerings.  This add-on interface is not part of ADL specification conformance.  Why are the vendors doing it, then?

  1. Value.  As has been discussed, building an LRS is free to anyone who wants to take on the task, so there is little value-add in building one – at least not enough to entirely support a reselling business model.
  2. Visibility.  A free-standing LRS, if it were physical, is basically a box sitting there, running quietly, holding data.  I don’t want the job of trying to sell that to someone!  So there needs to be at least some minimal interface that shows the data is there and safe.
  3. Demonstration.  At the same time, until very recently, no one has been paying attention to learning analytics (except many of these vendors, who have been at the heart of the xAPI movement), so demoing the learning analytics capability of xAPI has value in pushing both xAPI and the LRS.

Some of the vendors are clearly aiming at participating in the business intelligence and analytics marketplace, and it shows in their “dashboards.”  Others are simply interested in providing some basic “control” information to show that the LRS is healthy and operating, and expect the data to be drawn out into bigger, beefier BI and analytics tools.

The irony of all of this is that if I had to rank the LRSs he has listed, I’d probably have the same top four.  But Weiss’s assumptions underpinning his selections misrepresent the role of LRSs and ultimately do a disservice to the aspirations of the xAPI standard.

What do you think?  Should LRSs be ranked?  On what criteria?  Please share your thoughts below in the Comments section.

 

Feature image by Ryan McGuire provided by Gratisography.

xAPI Resource Center Update

The xAPI Resource Center has been updated with new resources covering new developments  with the cmi5 profile (in particular a comparison chart with SCORM by the cmi5 workgroup), the new effort to revise the role of profiles to be more informative and clearer for adaptation by authoring tool vendors, and various other use cases, scenarios and descriptions of xAPI.

Please let me know what you think of this resource center.  One commenter warmed my heart with her comment:

…And although people in the tech industry are extremely smart, they sometimes have difficulty explaining certain things in a non-technical way. This website really helped me to understand what an xAPI is and what it can do. Thank you so much!

That’s exactly why I’ve created it.  To help L&D practitioners understand what xAPI can do and the opportunities it offers.  Glad to know it’s working!

xAPI Resource Center Update

I’ve added 10 new resources to the xAPI Resource Center, including a subsection on Talking to Your Techies on the Statements page.  Your IT contacts will be amongst your most important stakeholders in an implementation of xAPI.  The resources I’ve included are written to be a bridge between non-technical L&D folks and the technical professionals who will have to endorse projects like this in order for them to move forward.  These resources should get them to the point of feeling like they know what xAPI is and of being able to decide whether they are ready to dig into the technical side of the spec for you.

Several new resources regarding the cmi5 profile for content update that section, including the launch of the SCORM Cloud Testing Utility.  The remainder are various items I think fit the criteria for inclusion in the Resource Center.

I’m working on two other Resource Centers that I hope to launch this spring.  Watch for opportunities to help me with those, as I have a couple of “Work Out Loud” activities that I’ll be seeking input on.

As always, your thoughts on xAPI or suggestions for resources I should include in the Resource Center are welcome in the comments section below.

cmi5 in SCORM Cloud

Last week, Rustici Software launched cmi5 support in their SCORM Cloud utility.  While this isn’t the most scintillating news, it is a major step.  The SCORM Cloud implementation and support provide vendors and content developers with a place to test cmi5 launchable activities.  The ability to test in an environment like this is vital to assuring that cmi5 and xAPI have been applied correctly in new tools and in new functionality added to existing tools.  For commercial vendors, this testing is essential.

Setting up a SCORM Cloud account is easy.  Check out details on xAPI on SCORM Cloud.  Initial use of SCORM Cloud is free.  The free version is great for individual testing and small implementations.

What is cmi5?

cmi5 is a profile that sits on top of the xAPI specification and helps control content in the xAPI ecosystem.  It allows content to be loaded to LMSs, but doesn’t require an LMS.  Many people short cut the explanation by saying it’s the SCORM replacement.  But that really limits the understanding of what it is.

Yes, cmi5 has all the capabilities that SCORM has to launch content in an LMS.  But it goes well beyond what SCORM has been capable of delivering.

ADL developed cmi5 with the following goals:
  • Interoperability – not only can cmi5 conformant content be launched in an LMS, but it can be launched by various tools as long as they have been programmed to accept cmi5 data (see the launch sketch after this list).
  • Extensibility – because it sits on top of xAPI, cmi5 extends the capability to collect data on learning experiences outside of the LMS and, through xAPI extensions, provides extensive detail on the results and context of the activities within the course.
  • Mobile Support – cmi5 content can be accessed via mobile devices
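To illustrate the interoperability point, here is a rough sketch of the cmi5 launch handshake from the content’s point of view.  The parameter names (endpoint, fetch, actor, registration, activityId) reflect my reading of the cmi5 specification; the URL and values themselves are invented for the example.

```python
import json
from urllib.parse import urlparse, parse_qs

# A rough sketch of the cmi5 launch handshake: whatever launches the content
# (an LMS or another tool) hands it these query parameters. The URL and all
# values below are made up for illustration.
launch_url = (
    "https://content.example.com/course/index.html"
    "?endpoint=https%3A%2F%2Flrs.example.com%2Fxapi"
    "&fetch=https%3A%2F%2Flms.example.com%2Ffetch%2Fabc123"
    "&actor=%7B%22account%22%3A%7B%22homePage%22%3A%22https%3A%2F%2Flms.example.com%22%2C%22name%22%3A%22learner-42%22%7D%7D"
    "&registration=c51f2b3e-7d52-4c9a-9a66-0d0f5e2b1c11"
    "&activityId=https%3A%2F%2Fexample.com%2Fcourses%2Fintro-xapi"
)

params = {k: v[0] for k, v in parse_qs(urlparse(launch_url).query).items()}
actor = json.loads(params["actor"])   # the launching system identifies the learner

print("Send statements to:", params["endpoint"])
print("Get auth token from:", params["fetch"])
print("Learner account:", actor["account"]["name"])
print("Registration:", params["registration"])
print("Course activity:", params["activityId"])
```

Whatever launches the activity, LMS or not, supplies the same handful of parameters, which is a big part of what makes cmi5 content portable.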

The ADL cmi5 work group is developing a document which goes into detail regarding what cmi5 can do versus SCORM.  You can view their working document here.

A major benefit of cmi5 is that most of the attributes are content-specific.  The xAPI statements carry all of the information about the content with the content; SCORM content depended on the LMS to keep it organized.  (cmi5 content is self-aware.)  What this means is that the content doesn’t have to sit in the same place as the LMS.  In our cloud-based, distributed content world, this is huge.

With Rustici’s adding cmi5 to SCORM Cloud, we should see more and accelerated development of authoring tools that support the creation of rich xAPI/cmi5 content.

To learn more about cmi5, go to the xAPI Resource Center.

What do you think?  Have you explored cmi5 and/or xAPI?  What are your thoughts on cmi5?  Please share your thoughts by replying in the comments below.

xAPI as lingua franca

As I’ve come to understand the xAPI standard for learning experience data interoperability I’ve found it interesting that many people misunderstand what exactly xAPI is and is not.

  • xAPI is not an instructional design methodology, although it will impact the ability of instructional designers to do their jobs better.
  • xAPI does not analyze or evaluate learning experiences, although it enables the creation of metrics and analytical tools that L&D has not had to date.
  • xAPI does not replace the LMS, although it enables learning done on any platform to be tracked and evaluated.

In my mind, it can be explained as two things:

  1. it is a technical standard to enable the creation of data about learning experiences
  2. it enables a common language(s), a lingua franca, to talk about that data

I’ll talk about #1 in a future post.  In this post, I’ll address #2 and why it’s important.

Wikipedia provides the following definition of a lingua franca:

A lingua franca (/ˌlɪŋɡwə ˈfræŋkə/),[1] also known as a bridge language, common language, trade language or vehicular language, is a language or dialect systematically (as opposed to occasionally, or casually) used to make communication possible between people who do not share a native language or dialect, particularly when it is a third language that is distinct from both native languages.[2]

via Lingua franca – Wikipedia

The key to this definition as applied to xAPI is the phrase “systematically used to make communication possible between people who do not share a native language or dialect”.

A lingua franca answers some of the key questions raised by skeptics of xAPI.

“Why do we need a standard like xAPI when various vendors are addressing or can address the analytics within their own system?”

Actually, there is no need at all for a lingua franca if you are going to work with tools all created by one vendor who has applied a common methodology across all their tools.  But in the BYOD (Bring Your Own Device), self-directed learning reality of today’s workplace, the ability to merge data from various systems and devices is facilitated by a common set of descriptors.  To begin watching a video, did you “start”, “initiate”, “play”, “begin”, “hit go”?  What verb tense would you use – play, played, playing?  In Big Data, these things matter and can be the difference between being able to build valid analytics or not.  (FYI: the xAPI video community prescribes “played” for having started watching a video.  All verbs in xAPI are in the past tense.)
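As a concrete illustration, here is a minimal sketch of what a “played” statement might look like.  The verb and activity-type IRIs reflect my reading of the xAPI video profile; the learner and video are invented for the example.

```python
import json
from datetime import datetime, timezone

# A minimal sketch of the "played" convention mentioned above. The verb and
# activity-type IRIs reflect my reading of the xAPI video profile; the learner
# and video IDs are purely illustrative.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Sample Learner"},
    "verb": {
        "id": "https://w3id.org/xapi/video/verbs/played",
        "display": {"en-US": "played"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/videos/onboarding-intro",
        "definition": {
            "name": {"en-US": "Onboarding Introduction"},
            "type": "https://w3id.org/xapi/video/activity-type/video",
        },
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))
```

Because every tool that follows the same profile uses the same verb IRI, statements from a video player, an LMS, and a mobile app can be merged and counted together.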

The same consideration goes for the format used to express the data.  If some data comes to you in HTML5, some in XML, and the rest in various other formats, each applied differently by each vendor, your chances of ever cleaning it up on an ongoing basis in order to do regular reporting are very slim.

An agreed upon set of vocabulary that is systematically applied enables data from multiple systems to be merged and analyzed quickly and accurately.  Ultimately, if well implemented widely, xAPI will enable industry-wide learning analytics.

“Why is it necessary to purchase a Learning Records Store in order to use xAPI data?”

There are open source LRSs that can be used for free.  Vendors can build LRSs as standalone products or as part of their tools (i.e., LMSs).  LRSs are built to assure that any data residing in the LRS is in the form of valid xAPI statements.  If the preferred vocabulary for a learning experience has been used, the data extracted from an LRS for analysis will have a very high level of validity.  Validity is a major issue with Big Data; the xAPI LRS addresses it.

Data can be exported from an LRS to any data storage or analytics tool being used, although many of the commercial LRSs available have analytics tools built in “out of the box”.

“How can a standard determine a singular vocabulary for all learning experiences?”

xAPI does not prescribe a single vocabulary; that course of action was dropped at the end of 2015 because it was seen as too restrictive.  Instead, the specification enables various communities of practice to establish lists of vocabulary appropriate for reporting data in their domains.  These vocabularies are listed by ADL and the Tin Can Registry as recommended vocabularies.  Users of xAPI are highly encouraged to

  1. use already established vocabulary whenever possible
  2. join or start a community of practice in creating domain specific vocabulary
  3. as a last resort, create their own vocabulary and share it via ADL/Tin Can Registry.

It is through this collaborative process that an appropriate, systematically applied vocabulary will be established.

The xAPI standard establishes a structure for the data and parameters for various components enabling flexibility for necessary variations from domain to domain or device to device.  This balance is the power behind xAPI.

“How will xAPI enable non-Learning data to be used in our analysis?”

With a common vocabulary established, data from non-xAPI systems can be easily mapped and connecting APIs can be written.  Many of the major business systems like Salesforce, Slack, and HRMS systems already have export APIs established.  xAPI can match up to these systems easily to create xAPI statements from their data and store them in the LRS.  Thus only one connector is needed for each external tool.

A final benefit of the xAPI standard is that it is expressed in JSON using human-readable language.  Built on a common linguistic structure, it is understandable to non-technical practitioners of learning and development.

Establishing a common way of speaking about learning experiences, our lingua franca, will provide benefits to individual L&D departments, the organizations we serve, and the industries we are part of.

An xAPI Resource Center for L&D Professionals

 

http://neweelearning.com/xapi-resource-center

Over the past several months I’ve been learning about the xAPI standard for learning experience data interoperability that is gaining traction and is poised to replace SCORM.  This Resource Center is a result of my studies, conversations, and reflections on this exciting advancement for Learning and Development.

My xAPI journey began on September 13 when I saw a Twitter post for something called “xAPI Camp,” which was being held that Friday at Lurie Children’s Hospital here in Chicago.  Having no plans, I checked out the link.  The price (free) was right, so I sent off a hopeful request for a seat.  What a great happenstance.  The projects that were presented blew my mind.  All because of a standard based on the basic sentence “I did this”?!?!?

Back in the late 1990s, my boss at Universal Learning Technologies, Barb Ross, was on one of the workgroups developing the IMS (then version 0.4) standard for interoperable content cartridges, and she involved me in her review of the early specification.  I sat in that room at Lurie’s thinking, “They’ve finally figured out how to do what Barb and I were wanting way back then.”

Since then, I’ve thrown myself into understanding this new specification.  I’ve attended the xAPI Camp at DevLearn in Las Vegas (where I ended up winning WatershedLRS’s xAPIgo challenge).  I’m completing HT2Labs’ Learning xAPI MOOC (both the technical and non-technical tracks).  I’ve even had the opportunity to have lunch with Aaron Silvers to learn from him directly.  I’ll be participating in TorranceLearning’s Spring xAPI Cohort beginning on February 9.

Of course,  I’ve also combed the web and curated what I’ve found.  This Resource Center is the product of that curation.   These pages are living documents.   I’ll be adding and deleting resources.  Please provide your feedback via the thumbs up/thumbs down poll associated with each item.  Let me know what you’d like to know more about via the comments at the bottom of each page or directly to me on Twitter, the Contact page here, or email me directly if you have my email.

I see this Resource Center as the first step in an effort to help the everyday L&D professional understand the power and potential of xAPI to drive true learning analytics that cover a far broader swathe of learning experiences than we’ve dreamed possible in the past.  If implemented correctly, xAPI will enable us to analyze targeted behaviors, to create learning experiences that effect the desired changes, and to measure whether we have met the organizational goals we set.

So click on the image or link at the top of this post and start your journey in xAPI!


I have created a Twitter List of people and organizations that tweet about xAPI.  Please follow it.  If there is someone or an organization that tweets regularly about xAPI, please send me your suggestions (tweet me directly, use the Contact page here at new eelearning, or email me if you have my email).


 

PLEASE SHARE YOUR THOUGHTS IN COMMENTS BELOW

Is there something you don’t understand about xAPI?  Questions about something said in one of the above resources? General thoughts on these resources?   Add a comment below.

If you have any ideas on resources you feel should be on this page or in this Resource Center, feel free to use the comment section below or contact me via the Contact page here at new eelearning.

xAPI Data Talks! Page Layout May Influence Interaction

In her post, Supporting Social Learning Through Page Design, on HT2Labs’ blog, Janet Laane-Effron talks about analysis she and her colleagues did on two of HT2Labs’ MOOCs.

The question is:

How can page design best support social learning?

The test:

Janet and her colleagues placed the comments section in one of their MOOCs below the content it was related to.  In the other, they placed the comments section next to the content.

The result:

The two MOOCs had statistically the same number of total comments once moderators and other HT2Labs folks were removed from the data.  However, when they looked at whether the comments were original comments or replies to comments,  the MOOC with the comments section next to the content came out as the clear winner for interaction.  (The assumption here was that replies to a comment reflected interaction between participants.)

While Janet states in her post that this finding is not conclusive and that there are other issues around UI and general layout for responsive design, it definitely suggests that there is more to consider about the positioning of the comments section in relation to the content.

The xAPI win:

The only reason Janet and her colleagues were able to do this analysis was that the MOOCs were created in Curatr, which creates xAPI statements.  In the xAPI statements for comments, original comments and replies to those comments are recorded with different verbs, which can be sorted on.  In addition, the MOOC facilitators and other HT2Labs admins can be removed easily by sorting on the actors and the roles they have in the course.
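Here is a rough sketch of that kind of sort.  To be clear, this is not HT2Labs’ actual analysis; the verb names and the moderator list are assumptions for the example.

```python
# A rough sketch of sorting comment vs. reply statements and dropping
# moderators. The verb names and moderator list are assumptions, not
# HT2Labs' actual analysis.
MODERATORS = {"mailto:moderator@example.com", "mailto:facilitator@example.com"}

def summarize_comments(statements: list[dict]) -> dict:
    """Count original comments vs replies, ignoring moderator activity."""
    counts = {"comments": 0, "replies": 0}
    for s in statements:
        if s["actor"].get("mbox") in MODERATORS:
            continue  # drop facilitators/admins, as the study did
        verb = s["verb"]["id"].rsplit("/", 1)[-1]
        if verb == "commented":
            counts["comments"] += 1
        elif verb == "replied":
            counts["replies"] += 1
    return counts

sample = [
    {"actor": {"mbox": "mailto:learner1@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/commented"}},
    {"actor": {"mbox": "mailto:learner2@example.com"},
     "verb": {"id": "http://example.com/verbs/replied"}},
    {"actor": {"mbox": "mailto:moderator@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/commented"}},
]
print(summarize_comments(sample))   # {'comments': 1, 'replies': 1}
```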

Without xAPI, none of this data would have been created.  Sure, you could manually go in and create a data set by viewing each comment section and notating the comments in a spreadsheet.  But that would take far too long.

With xAPI, it would be very simple to expand this study to 10 or 100 MOOCs – if they are all set up in authoring systems that comply with xAPI.

Usage data on our learning designs can be at our fingertips with xAPI.

xAPI Approaches ‘The Chasm’

Before Thanksgiving, I attended the eLearning Guild’s DevLearn 2016 conference and the xAPI Camp that was held the day before.  One of my primary goals was to add to my knowledge and understanding of xAPI and to get a feel for the innovative products that are already implementing it.

Three days of great conversations, a dozen presentations, and an equal number of demos with the vendors left me excited about the prospect of xAPI and the impact it should have on Learning and Development over the next 5-10 years. (A special thanks to Watershed for their xAPIGo game that made learning fun and provided a tremendous example of the power of xAPI.)

But in my conversations with the several dozen vendors and other professionals who are part of the xAPI community who were at DevLearn, I began to come to the conclusion that xAPI is fast approaching “The Chasm.”

Diffusion of Innovation Theory

In the 1960s, Everett Rogers developed the Diffusion of Innovations Theory, which describes the different classifications of people when deciding to adopt a new product or idea.  These five groups (innovators, early adopters, early majority, late majority, and laggards) communicate and adopt innovation in a rather rigid sequence, depending on the prior group for the assurances they need to jump on the bandwagon.  Each group has a responsibility to “sell” the following group on the innovation.  Because each group has very different values regarding the technologies they use, the communication between the groups can be challenging.  The most difficult transition is between the Early Adopters and the Early Majority.

Crossing the Chasm

In his 1991 book Crossing the Chasm, Geoffrey Moore defines this transition as “Crossing the Chasm.”  It is at this point that the success or failure of an innovation will most likely occur.  To successfully move from an idea adored and championed by the innovators and early adopters to a marketplace leader viewed as the new status quo, an innovation must meet a number of challenges.

A single company launching an innovative product finds crossing the chasm a massive challenge.  In the case of an industry standard like xAPI, there are scores of different companies, organizations, and individuals with varied interests and competing models for success, working both in collaboration and in opposition to each other, to move adoption of the specification forward.

The Early Majority doesn’t like ambiguity.  They want things to work the way they are supposed to.  They have very little tolerance for innovation they don’t understand.  The “what’s in it for me” mindset must be heeded.

Is xAPI ready?

Moore points out that early attention to preparing to cross the Chasm during the innovation and early adoption stages eases the crossing.  Here xAPI is in good shape.  The community of individuals and organizations that has built up around xAPI is robust, passionate, and open.  Finding the right way to incorporate the Early Majority into the community without alienating them, while remaining a focus of passion for the Innovators and Early Adopters, will be the key.

A cautionary message is necessary around the conceptualization of the product positioning, the whole product, and the marketing strategy.  In my experience, the overall messaging coming from those who were at DevLearn was too technically focused.  Valid statements generated in compliance with xAPI are truly things of beauty if you know anything about coding.  But the continual “and this is what the statement looks like” will be a barrier to L&D Directors, Line of Business Managers, and the Executive Suite.  We need to create a message of business solutions and better learning outcomes.

Another obvious challenge is going to be not overwhelming Early Majority citizens with more new data than they are ready to receive.  If you think in terms of the 70:20:10 model, we could be expanding learning nine-fold as we implement solutions to reach informal and social learning.  As we build xAPI into our learning designs, the amount of data that can be generated is astronomical.  L&D folks are not currently equipped to absorb all this data effectively.  To help cross the Chasm, we need to:

  • Model implementation strategies that throttle back the amount of data thrown at them for analysis, so they can adapt to the future of big data,
  • Advocate education around big data and learning analytics,
  • Provide analytics tools that not only crunch the data, but also teach the operator about what they are doing.

Overwhelming the Early Majority is a guaranteed way to get them to start shutting down.

Other challenges are easier, but still need to be attended to:

  • Providing tools and guidance for moving SCORM-based materials to xAPI will be vital.
  • A clear understanding of what tools create xAPI statements, what a Learning Record Store is and is not, and simple but powerful analytics tools will ease adoption.
  • Proof cases that demonstrate the abilities of xAPI-conformant experiences, business results that can be displayed because of xAPI data, and ease of implementation will ease the minds of the Early Majority.
  • Pricing models need to be tested and adapted to meet the expectations of L&D, business partners, and IT.

All of these are in process already; again, a successful output of the xAPI community.  The activities that DISC (the Data Interoperability Standards Consortium) has on the roadmap for 2017 address many of the challenges that will be faced in crossing the Chasm.  But there is a tremendous amount of work that needs to be done to assure a safe crossing.

If successful, xAPI will dramatically change the nature of Learning and Development and its role in the organization.  We will be able to measure our work with a rigor and accuracy we’ve only dreamed of to date.

(Photo by Blake Richard Verdoon provided by unsplash.com)