When Good Social Networks Go Bad

Not all social networks are healthy. What is the impact of unhealthy ones?

Harold Jarche has a great blog post about trust and social networks entitled, “when trust is lost.” In it, he points to how social networks in China erupted when Dr. Li Wenliang — who had identified the coronavirus and been reprimanded by the Chinese government for going public about it — died from the virus. The government had tried to tamp down the news by withholding information and silencing whistle-blowers.

But once the news of the virus was out and information began to be shared, the world community rallied and seems to be containing the virus’s spread.

Jarche makes the point that trust is vital for social networks to thrive, and that when trust is present, networked learning increases it further. I agree with Harold on this.

I think the real linchpin is the information allowed into the system and the social validity that information can achieve. Unfortunately, we can see a negative version of that playing out here in America. If you can subvert, or at least cast doubt on, information — say, claiming it was Ukraine, not Russia, that tampered with US elections — and enough of the social network will repeat that claim as true, you can subvert the network effect that normally hones away falsehoods and leaves the truth standing.

Social networks honed away the falsehoods China was building to hide the epidemic, but it is also clear that the cover-up came close to succeeding and might have, were it not for Dr. Li’s death. Unfortunately, social networks can also give credence to falsehoods and erode trust.

Social networks enable knowledge-sharing but don’t guarantee that the knowledge shared is truthful. Healthy social networks — those with authentic, service-oriented leaders, that welcome dissent and the questioning of current knowledge, and that are open to change — will tend to weed out falsehoods, build trust in the network and its members, and produce knowledge that can be trusted, at least to the point where other networks test it and revise it.

But there are social networks whose leaders are self-serving; whose members fail to question “known” knowledge, whether out of convenience or by coercion; and that resist changing what they hold to be true. These networks seldom issue information that is actually true; instead, they use propaganda, diffusion, and bluster to cover the fact that they haven’t vetted the information against other sources. They put on a charade of network behavior that merely seems to generate truth.

Unhealthy social networks aren’t necessarily negative or nefarious. There are plenty of well-meaning groups whose leaders prefer to lead from authority rather than with authenticity, whose members cling to the “truths” they know, and who reject any dissent from group norms. These groups, too, generate erroneous “truths.”

Jarche’s model holds well under the assumption of healthy social networks. What it doesn’t address is what happens when the system has been corrupted and unhealthy social networks begin to change the equation. The challenge is how we can stem the influence of unhealthy social networks without trampling on their members’ right to believe what they believe.


Do you think these regulations will change anything? Will they drive greater support for data collection in learning? Motivate more collaboration between the business units and L&D?

Gauging the Impact of Learning

The Talent Development Optimization Council (TDOC) releases information about its promising Impact Optimization Model.

The Talent Development Optimization Council (TDOC), led by Kent Barnett, CEO of Performitiv, is beginning to roll out its Impact Optimization Model (IOM), which aims to be “a comprehensive and practical approach to address the issues of impact, business results, ROI and optimization for programs.”

They have published a white paper that outlines the framework for the IOM (it’s free if you provide your information) and a podcast interview with Kent Barnett by Conrad Gottfredson of The Five Moments of Need.

The TDOC is an all-star panel, including Dr. Gary Becker, Dr. Nick Bontis, Dr. Jac Fitz-enz, Fred Reichheld, Jack and Patti Phillips, and Matt Barney.

The goal of the model is “to help professionals create an automated, repeatable process to tell their story of impact, by demonstrating value, while identifying improvement opportunities in their programs.” There are four components to IOM:

  1. Evangelizing the Story from Learning to Impact
  2. Connecting Learning to Business Results
  3. Linking Learning’s Value to the Business
  4. Continuous Improvement

Overall, the model looks quite good. It draws from non-learning-focused business models like Total Quality Management (TQM), Six Sigma, Net Promoter Score (NPS), and others.

The white paper outlines a five-step process for moving from descriptive analytics to predictive and, ultimately, prescriptive analytics:

  1. Upgrade the traditional learning evaluation process to gather the right data.
  2. Identify measurable business outcome indicators (BOIs) tied to the desired outcomes of each strategic program.
  3. Gather and process the data to ensure the programs are operating within acceptable impact ranges.
  4. Analyze the prescriptive data to find ways to improve impact.
  5. Take action on the findings of your analysis and monitor the link from your original performance goals to business results.
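To make step 3 concrete, here is a minimal, hypothetical sketch of checking whether programs operate within acceptable impact ranges. The BOI names and ranges are invented for illustration — they are not from the white paper:

```python
def out_of_range_bois(observed, acceptable_ranges):
    """Return the BOIs whose observed value falls outside its (low, high) range."""
    flagged = {}
    for name, value in observed.items():
        low, high = acceptable_ranges[name]
        if not (low <= value <= high):
            flagged[name] = value
    return flagged

# Invented example data: observed BOI values for one strategic program
observed = {"sales_per_rep": 41_000, "onboarding_days": 46, "error_rate": 0.02}
ranges = {
    "sales_per_rep": (45_000, 70_000),
    "onboarding_days": (20, 45),
    "error_rate": (0.0, 0.05),
}

# Flags sales_per_rep and onboarding_days; error_rate is within range
print(out_of_range_bois(observed, ranges))
```

In practice, these checks would run against data pulled from business systems rather than hand-entered dictionaries, which is exactly the kind of behavioral, non-opinion data the model should favor.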

The white paper goes through these steps in some detail. Overall, the model stands upon measures and models that are standard business practice for most organizations. This is a great step forward for Learning and Development in its effort to show its value to the business.

However, there is a significant, and disappointing, amount of data and analytics that still rests upon opinion, self-reporting and feelings. The authors of the report are clearly aware of this as they provide a weak defense of having done so:

Some people may complain that is self-reported data, but so is NPS, and this is much more insightful. NPS does not tell you if learning was applied, what business result was impacted or if it created value.

Impact Optimization Model, p.10

The “NPS uses self-reported data” argument is a straw man. NPS (Net Promoter Score) is a measure intended to report a group’s perception of, and opinion about, a company, program, or product; by definition it is based upon self-reported attitudes. Most of the measures where the IOM leans on opinion and perception to generate evaluation data could easily be replaced by behavioral observation measures.
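As a reminder of how thin that self-reported signal is, NPS is just simple arithmetic over 0–10 “how likely are you to recommend us” ratings — a minimal sketch:

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only
    toward the total. NPS = %promoters - %detractors, from -100 to 100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 2 passives, 2 detractors out of 8 responses -> 25.0
print(net_promoter_score([10, 9, 9, 10, 8, 7, 3, 6]))
```

Nothing in that calculation tells you whether learning was applied or what business result moved — it only summarizes attitudes, which is precisely the limitation the IOM should be engineering around.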

For example, the authors suggest that “Are you a more effective manager after this program?” is a valid measurement of impact. But (1) research clearly shows that we are terribly bad at assessing our own performance, and (2) there is no assurance that the manager and the L&D team share the same definition of effectiveness. On the other hand, if we have crafted the learning intervention well, we should understand the behavioral changes we were trying to create. If that is the case, then there should be measurable behavior changes that will demonstrate impact.

The Council may already be addressing this last concern. In a new post on LinkedIn, Barnett explains that they have identified 8 metrics to support the IOM. They look great, though the details are still being worked out. If the TDOC can incorporate more financial, business-result, and behavioral data and steer away from opinion-based, self-reported, and perceptual data, they will have a model that could indeed change the game for L&D as a strategic function within organizations.



Feature Photo by Ryan McGuire provided by Gratisography.

living in the past

I’ve started a somewhat daunting task. I am in the process of adding my posts from eelearning — my first blog — to the new eelearning.

Unfortunately, I can’t import them via WordPress because I can’t remember my username and password from 10 years ago. Nor can I produce my bill for the domain name. So WordPress won’t give me access to my old account. Understandable.

Knowing that old, inactive blogs eventually get taken down by their hosts, I’ve decided to do the next best thing: cut and paste. Fortunately, posts can be backdated, so I can post them “on the date” they originally appeared on eelearning. I’m including the comments from the original posts in the body of the posts here.

I know that such activity could bring action by the copyright holder. But since he is me, I think I’m safe from his wrath.

