The Hidden Risk: The Feeder Pool Problem

June 8th, 2011

As if the challenges of finding and attracting qualified talent and dealing with the Boomer retirement tsunami weren’t bad enough, companies are now facing an even bigger and more imminent threat – the problem in the middle.

One of my clients made this not-so-pleasant discovery recently. They were confident they had covered the front-end challenge and were attracting and training new talent pretty well. And they were aware of critical knowledge in the heads of people soon to retire and were taking steps to capture and transfer that knowledge. What they weren’t addressing, in fact didn’t know about, was the problem in the middle. When they put pen to paper (or numbers into a spreadsheet) and began to look at the middle tier – the feeder pool for critical senior-level roles – they were astounded to discover they had a major gap.

The potential repercussions of this talent gap are enormous. These are the people and jobs that make up the most critical tier of the organization, the people and jobs that are responsible for the organization’s growth and, ultimately, its survival.

And this is not a gap you can easily fill. You can’t hire it from the outside because so much of the most important knowledge is proprietary. And you can’t grow it quickly. You can’t take a good engineer with a good base of knowledge, put them into a senior role, and expect them to magically move from competent to expert. You get to expert by doing, by applying knowledge in different contexts, by making mistakes and learning from them.

In the old days we had time to grow our experts. We had time to provide them with the opportunities they needed to develop. We threw them into learning environments and they learned.  But somewhere along the line, perhaps in the challenging economic environment of the last few years, we stopped doing that. At the same time, change sped up, complexity increased and in the blink of an eye, we got further and further behind.

So how do we solve this problem?  Admittedly, it’s not an easy one. In fact, solving this problem may be one of the key challenges of modern times.

First step, identify the organization’s most critical skills.  What must you do well in order to remain competitive? What are your core competencies? What separates you from the competition?  Take a look at this not only in today’s terms, but also in tomorrow’s terms. How are things changing and what new skills and competencies must you have in order to compete tomorrow?

Second, identify gaps and risk areas. Which areas are supply-and-demand challenged? In other words, where are you competing for talent or at risk of losing talent?

Third, prioritize. There is only so much time and money, so it is essential you prioritize opportunities. Identify criteria that will help you determine where you’re most at risk and where you’ll get the biggest payoff. Look for synergies between projects.
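
The prioritization step lends itself to a simple weighted scoring matrix. Here is a minimal sketch in Python; the roles, criteria and weights are hypothetical placeholders, not recommendations – substitute your own critical roles and scoring criteria:

```python
# A minimal sketch of a risk/payoff prioritization matrix.
# All roles, criteria, and weights below are hypothetical.

CRITERIA_WEIGHTS = {
    "risk_of_loss": 0.4,      # how likely are we to lose this capability?
    "business_impact": 0.4,   # how much does the business depend on it?
    "time_to_develop": 0.2,   # how long to grow a replacement internally?
}

# Scores on a 1-5 scale for each candidate role (invented data).
roles = {
    "Senior Process Engineer": {"risk_of_loss": 5, "business_impact": 5, "time_to_develop": 4},
    "Plant Operations Lead":   {"risk_of_loss": 3, "business_impact": 4, "time_to_develop": 3},
    "QA Specialist":           {"risk_of_loss": 2, "business_impact": 3, "time_to_develop": 2},
}

def priority_score(scores):
    """Weighted sum of criterion scores; higher means address first."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

ranked = sorted(roles, key=lambda r: priority_score(roles[r]), reverse=True)
for role in ranked:
    print(f"{role}: {priority_score(roles[role]):.1f}")
```

The point of the exercise isn’t the arithmetic; it’s forcing an explicit, comparable conversation about where the risk and payoff really are.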

Fourth, rethink the job/role.  If you have a gap or supply/demand inequity, how might you redefine the role or the ideal candidate to improve the odds of filling the gap?

Fifth, and perhaps the most critical (and also the most difficult), how can you compress the time it takes to move people from novice to competent and from competent to expert? Research suggests that small movements in compressing the competency curve result in big savings and even bigger impact.
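
To see why even modest compression matters, consider a toy calculation. All figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# A toy illustration of the value of compressing the competency curve.
# Every figure here is hypothetical -- plug in your own numbers.

roles_in_feeder_pool = 50          # people being developed for senior roles
years_to_expert = 10               # typical time from competent to expert
loaded_cost_per_year = 150_000     # fully loaded cost per person per year (USD)
productivity_gap = 0.30            # expert output forgone while people develop

# Cost of the "not yet expert" gap across the whole pool:
baseline_gap_cost = (roles_in_feeder_pool * years_to_expert
                     * loaded_cost_per_year * productivity_gap)

# Compress the curve by 20% (10 years down to 8 years):
compressed_gap_cost = (roles_in_feeder_pool * (years_to_expert * 0.8)
                       * loaded_cost_per_year * productivity_gap)

savings = baseline_gap_cost - compressed_gap_cost
print(f"Estimated savings from a 20% compression: ${savings:,.0f}")
```

Even with these made-up numbers, a 20% compression across a fifty-person feeder pool is worth millions; the real point is that the savings scale with the size of the pool and the length of the curve.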

We’ve covered a lot in this post and only scratched the surface. These are big challenges with important payoff. Stay tuned for more on this subject and please share your ideas.

Job Complexity and Knowledge Needs

May 23rd, 2011
Quick presentation on identifying work complexity and its impact on staffing, training and knowledge management
More to come….

The Knowledge Funnel

August 6th, 2010

A helpful way to think about knowledge in an organization is using the metaphor of a funnel.  At the highest level, you have a wide variety of disparate ideas, concepts and tacit knowledge that have not yet been made explicit. There is a lot of good intelligence and know-how in this collection, but it isn’t very useful to anyone other than those who hold the knowledge in their heads.

With some expert help (say by someone on the Knowledge CATS team), you draw out this tacit knowledge and make sense of it. You package it into models or heuristics, and in so doing you move it from tacit to explicit, and now other people can use it. At the Models & Heuristics stage, the knowledge or know-how is in guideline form.  In other words, it is principles and rules-of-thumb that the expert uses to make decisions quickly and efficiently.  Typically, cues or signals are also included at this stage, so it’s not just the heuristic or rule-of-thumb, but also knowledge about when to use it.

The bottom of the funnel is Systems and Procedures. This is where knowledge is distilled into uber-explicit procedures: Step 1, do this; Step 2, do that. The ultimate path from this point forward is from procedures to systemization, or into code.
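
To make the stages concrete, here is a minimal sketch of the same know-how expressed first as a heuristic with cues (the middle of the funnel) and then as a step-by-step procedure (the bottom). The maintenance domain, rules and thresholds are invented for illustration:

```python
# Hypothetical maintenance example: the same know-how expressed as
# (a) a heuristic with cues, and (b) a fully proceduralized checklist.

# (a) Models & Heuristics stage: a rule-of-thumb plus the cues that
# tell you when it applies. Judgment is still required.
heuristic = {
    "rule": "If vibration rises sharply after a restart, suspect bearing wear first.",
    "cues": ["sharp vibration increase", "recent restart", "audible whine"],
}

def heuristic_applies(observations):
    """The heuristic fires only when enough of its cues are present."""
    matches = sum(cue in observations for cue in heuristic["cues"])
    return matches >= 2  # a rule-of-thumb threshold, not a hard rule

# (b) Systems & Procedures stage: explicit, ordered steps. No judgment
# required, but every step must be kept current as equipment, people,
# and contexts change.
procedure = [
    "Step 1: Shut down the pump and lock out power.",
    "Step 2: Measure bearing temperature and vibration.",
    "Step 3: If vibration exceeds spec, replace the bearing.",
]

print(heuristic_applies(["sharp vibration increase", "recent restart"]))  # True
```

Notice where the maintenance burden sits: the heuristic stays stable as long as the underlying physics does, while every line of the procedure is a hostage to changing equipment and context.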

There is increased usability and cost savings as knowledge moves down the funnel.  But there are also risks. Once you capture in-depth knowledge and put it into procedures, you now have to keep those procedures up to date. Models and heuristics don’t require regular updates, but procedures do, because so many of the variables (equipment, people involved, contexts, etc.) change frequently.

Secondly, once proceduralized or systematized, the know-how can be easily copied.  As my friend Adrian Davis says – once systematized, it can be commoditized!  Which means it no longer offers you any sort of competitive advantage.

Bottom line – not all knowledge should be pushed down into the Systems & Procedures part of the funnel. We’ll go into how to determine which knowledge should be, and when you should stop at Models & Heuristics, in a subsequent post.

The Problem with Procedures

July 11th, 2010

It’s not that I’m against procedures.  No, in fact, I’m a big fan.  I started my professional career in 1980 as a technical writer, so in some ways procedures have helped make me what I am today.  🙂  The problem is that we often expect them to do too much. We expect procedures to be “enough”.  We mistakenly believe we can break down the elements of our complex world into parts and pieces and neatly assemble them into step-by-step instructions.

And we can’t.  Although there are places where procedures and checklists make sense (recent studies speak to how checklists help hospital personnel, pilots and others in critical roles reduce errors and perform more effectively), we must also be aware of the limitations of procedures.  The reality is that much of what those pilots, doctors, and nurses deal with on a day-to-day basis cannot be proceduralized.

One of the primary problems with procedures is they assume you can know and realistically cover every eventuality.  In the late 1980s I worked with expert systems and decision support teams.  My job was to work with the SMEs and capture the rolodex of if/then configurations. “If this happens under these conditions, do this…”

Although I loved the work, I could see the flaw back then – that we could never really cover all the bases, that we would constantly be “chasing our tails” so to speak.  Which I think explains in part why those technologies were never very successful.
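
The if/then capture described above can be sketched as a simple rule table. The conditions and actions here are hypothetical, but the structure shows why complete coverage was never achievable:

```python
# A minimal sketch of the kind of if/then rule capture used in
# 1980s-era expert systems. Conditions and actions are hypothetical.
from itertools import product

rules = [
    ({"pressure": "high", "temperature": "high"}, "shut down and vent"),
    ({"pressure": "high", "temperature": "normal"}, "reduce feed rate"),
    ({"pressure": "normal", "temperature": "high"}, "increase cooling"),
]

def recommend(situation):
    """Return the action of the first rule whose conditions all match."""
    for conditions, action in rules:
        if all(situation.get(k) == v for k, v in conditions.items()):
            return action
    return None  # the "chasing our tails" problem: an uncovered situation

# Why coverage is hopeless: even 10 binary conditions give 2**10 = 1024
# combinations, and real conditions are neither binary nor independent.
print(len(list(product(["high", "normal"], repeat=10))))  # 1024
```

Every situation the SMEs hadn’t anticipated falls through to `None` – which is exactly the gap those systems could never close.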

Procedures often lack context.  They assume that things can be done the same way in every situation. Those of us who recognized the importance of context and attempted to include it ran into two problems – an unwieldy, confusing mess of a document, and a continual need to add new contextual elements.

Another major problem with procedures is they quickly get out-of-date.  And once they are out-of-date they are beyond worthless and into the realm of dangerous. Greg Jamieson and Chris Miller (Exploring the Culture of Procedures, 2000) studied four petrochemical refineries in the US and Canada to see how they managed their procedures.  In none of the four cases did the workers ever completely trust the procedural guides and checklists, because they never knew how up-to-date the guides were.

What did they do instead?  They learned workarounds.  They used their experience to adapt, just as we would expect in complex domains.  And it is exactly here, in this state of adaptation, that we should focus our energy (but I’ll save that for another post).

Staying with the problems of procedures for now, perhaps the biggest problem with procedures is that they can lull people into a passive mindset of just following the steps and not really thinking about what they’re doing.  They stop trying to understand the situation or look for different ways to solve the problem.  In a complex environment, this mindlessness is exactly the opposite of what you want.  You want people to be continually mindful and aware, proactively anticipatory.  You want them to look for clues and signals, and continually develop richer mental models.

Procedures do a lot of good things.  They help ensure consistency, help beginners learn the ropes, help more seasoned professionals remember the essential steps.  They are an important and necessary element of our cognitive support toolbox.  But they are only one tool in the toolbox.  We’ll talk next time about some of the other tools.

Lessons Learned from BP Disaster – What to change and improve?

June 4th, 2010

As I reflect on the BP disaster and how it will change the way not only BP does business but other companies as well, I worry that the response will be overly “thing” versus “people” focused. What I mean by that is we will be quick to address the problem with more rules, procedures, checks and controls.  We’ll provide more information, more tools and more technology.

Information pushers will be quick to say “we just need more information.” Process gurus will want to re-engineer the process.  Technologists will suggest building bigger and more complex systems.  BP CEO Tony Hayward has noted on more than one occasion that BP lacked the “tools” to deal with situations like this.

It’s not that I’m against adding a few more checks and balances or that I don’t appreciate the value of information and technology.  It’s just that we seem to be overlooking a really important element here – namely people!

At every step of the way – prior to the blowout, in the midst of it, and in the clean-up efforts – decision making (by humans) stands front and center.  There was no shortage of information about BP’s safety record, so one could argue that it’s not about adding more measures but ensuring the people looking at those measures are paying attention to the right things and acting when they need to.

The information technology that supports a deep water drilling rig is mind-boggling. 3D and now 4D seismic surveys, MWD (measurement while drilling) tools, and data transfer at rates unheard of.  I’m not an expert here, but I’m guessing the issue is not one of information or technology, but rather one of making sense of it and being able to use it to make the right decisions.

For my money, I vote to focus on better enabling human beings to make sense of data and make better decisions. From a lessons learned perspective, let’s look at critical decisions throughout the debacle and try to decipher what went wrong.  Let’s look at signals missed or misinterpreted, what people chose to pay attention to and what they filtered out and why.  Let’s look at cultural and communication variables and how these influenced decisions.

We may ultimately end up at the same place, with more rules, more information and more tools, but at least we will ensure that these “things” have some connection to the people that use them.

Are we really getting the payoff from KM that we expected?

June 3rd, 2010

There have been two interesting topics of late in the KM communities on LinkedIn – one, a request for evidence of the payoff of KM initiatives, the second, a discussion about what happened to all the CKO (Chief Knowledge Officer) jobs that were once prominent in organizations.

I think these two items are related and both point to the failure of Knowledge Management to achieve the return we expected.  I, like most of the other people in the community, can provide stories about how my KM initiatives had an impact, even show real dollar savings, but when it comes down to it, I find myself wondering, “But did they really have an impact?”  Did the extraction of this SME’s knowledge or the rich dialogue in the community really have the payoff we’d expected?  Did any of these initiatives fundamentally improve people’s ability to respond? Sadly I think the answer is no.

This is not to say that our efforts were in vain; they certainly were not.  The clue, I think, is that they were not enough.  As good as I am at eliciting and packaging knowledge from subject matter experts, this by itself is not enough.  It is in the use of this knowledge in decision making that we find the real value.

Perhaps the reason we aren’t seeing the great ROI stories or aren’t finding the CKO jobs is because we’ve inadvertently stopped short of achieving what we might have. Isn’t the goal not only to arm people with the knowledge and information they need to do their work, but also the critical thinking and decision making skills to know how to effectively use the knowledge and information?

Perhaps if we were to now focus an equal amount of time on the thinking and decision-making side, we’d see the kind of value-add we need from KM. As I say this, I am concerned that we may need people with a different skill set as we transition into this second phase.  Maybe that speaks as well to the reason we aren’t seeing the KM initiatives or jobs today.  Likewise, we need a longer time horizon and the opportunity to use and grow the knowledge in real time.  We need sponsors and managers who understand that it’s not about adding more documents or discussion or data, it’s about providing exactly what people need (and no more) when they need it and equipping them with the skills and intuition to make sense of it, pick out what they need, ignore the rest, and quickly and effectively respond to the challenge in front of them.

Where did judgment and decision making fall down in rig fiasco?

May 13th, 2010

Senior BP and Transocean executives told lawmakers Wednesday that discrepancies in key pressure tests on the afternoon of the explosion should have raised alarms, according to a Wall St. Journal article, “Red Flags Were Ignored by Doomed Rig.” The discrepancies should “lead to a conclusion that there was something happening in the well bore that shouldn’t be happening,” Transocean CEO Steven Newman told the House Energy and Commerce Committee.

So, where did things go wrong?  Was it a single bad decision or several errors in judgment?  Was it an issue of lack of knowledge, or of purposefully ignoring signals that might have led the team down a different path?  Was it simply a high-level cost-vs-risk call (still a judgment and decision-making error)?

At first we thought the problem might have been that adequate tests weren’t done, but now it appears it might be a matter of interpretation of tests. It might also be an issue of poor cost-risk analysis on the part of senior managers. Let’s review. The following sequence of events is taken from the above-referenced Wall St. Journal article.

At 8 p.m., Halliburton workers poured cement into the pipe to fill in the space between the outside of the pipe and the rock. The cement was laced with nitrogen to help seal out gas. Two hours later, workers started pumping in the heavy mud, which was supposed to push the cement down and out of the pipe; it would also serve to keep anything from flowing upward.

By 12:35 a.m. there was cement around the pipe. If all had gone well, the only pathway for oil, gas or fluids to move to the surface was through the steel pipe. Right at the end, Halliburton poured a cement plug to seal the very bottom of the well.

Before the well could be called complete, several pressure tests had to be run. The first occurred the afternoon of April 20, and all indications were that the well and cement were working as expected, company records show. But while all this was going on, according to a memo from BP to congressional investigators, hydrocarbons were entering the well. Most likely, these were a combination of natural gas and condensate, a petroleum liquid.

At 5 p.m., workers ran another key test called a negative pressure test, used to determine if the well had been properly cemented. The pressure in the well was lowered to see if gas could enter. Test results were at best “inconclusive” and at worst “not satisfactory,” Mr. Dupree, the BP Senior Vice President for the Gulf of Mexico, said, according to Mr. Waxman. It appeared the cement job hadn’t sealed off the well and a gaseous mixture was leaking into it.

A second test was run. Mr. Dupree said its results could indicate that natural gas was building up inside the well, according to Mr. Waxman.

At 8 p.m., less than two hours before the blast, BP officials decided that additional tests “justified ending the test and proceeding,” Mr. Waxman said, attributing this information to a communication from a BP lawyer. The congressman said information reviewed by his committee “describes an internal debate between Transocean and BP personnel about how to proceed.”

One course would have been to try to shore up the cement. As the cement contractor, Halliburton could have shot a hole through the pipe and squeezed more cement in between the pipe and the rock. A new section of pipe would then have had to be installed to replace the pierced piece, industry officials explain. This would have taken a week to 10 days, says one industry veteran. Between the cost of hiring the rig and the subcontractors, this maneuver could have cost BP $5 million to $10 million, according to industry estimates.

This extra work, however, wasn’t pursued. Instead, BP forged ahead. Workers began to remove the mud. A log provided by investigators shows significant volumes of the heavy fluid coming out between 8:10 p.m. and 8:30 p.m.

But as workers took out the heavy mud and lighter seawater flowed in, gas began to rise. It still isn’t clear if the gas came up the pipe or came up the outside of the pipe and then entered the pipe around the seafloor.

As gas flowed up 5,000 feet of pipe from the sea floor to the surface, it got warmer and expanded, pushing drilling mud and seawater ahead of it. The blowout had begun, setting off the fire that sank the Horizon.

So, what’s your “read” at this point?  Where did things fall down?  What were the “signals” that might have helped the team know what was going on?  Was there a problem with interpretation of the tests?  Should there have been more or different tests?  Should there have been more input from those on the front line?  Was it a breakdown in communication between contracting groups?

We’ll continue to use the BP story to try to better understand knowledge and judgment issues.  After all, that’s really what knowledge transfer is all about – identifying the kind of critical knowledge that needs to be transferred in order for people to be able to make important decisions like those mentioned above.

Signals, instincts and other tacit knowledge that might have prevented the big spill

May 9th, 2010

I find myself thinking a lot about the BP Horizon rig explosion and what, if anything, might have been done to prevent it.  The tendency is to blame it on faulty equipment or processes or unpredictable flukes of nature (“No way to predict it, no way to prevent it”), but I don’t buy that. It’s not that I’m eager to blame a human or several humans, but in my experience it always comes down to people.

As much as I’m interested in where and how things broke down in this scenario, I’m also interested in the comparison between this catastrophe and others like it and the countless times when a blowout didn’t occur.  In the same way that I look to humans in the above scenario, I would look to them again in this one.

In an interesting twist, I started reading Gary Klein’s latest book Streetlights and Shadows just after April 20.  In Chapter 2: A Passion for Procedures, he tells the story of The Bubble, an eerily similar scenario on an oil rig, but in this case one that had a happy ending.

The manager of an offshore oil drilling rig was awakened at 1:30 am by a telephone call reporting a blocked pipe. A bubble of natural gas had gotten trapped and was rising through the pipe.  This posed a risk to operations. Fortunately, the company had a standard procedure for these cases – inject heavy mud into the pipe to counterbalance the pressure. The night crew was ready and waiting for him to give the order. But he didn’t give it.  Something didn’t feel right. He had a gut instinct. So he got dressed and helicoptered over to the rig. By that time it was daylight. They searched for ignition sources, leaks, anything that might pose a problem for the procedure.  And then they found it. The relatively small amount of natural gas at a depth of 4500 meters was under 15,000 psi of pressure. As it rose, and the pressure diminished, it would expand to 100,000 cubic feet by the time it reached sea level. It would flow through the pipes and processors in the oil rig until it finally reached the separator.  But 100,000 cubic feet was far too great for the limited capacity of the separator, which would undoubtedly explode, blowing up the rig and killing everyone on board. The manager’s intuition avoided a disaster.

So what was different between this scenario and the BP one on April 20?  Was it a case of missing knowledge – the ability to pick up on tiny, obscure signals and to know what to do once you got the signal?  Was it one mistake – not double-checking the seals, for example, or the blowout preventer equipment – or was it a series of mistakes?  If procedures were followed, is it the fault of the procedures?  Does that simply mean you need better procedures and more checks and balances?

Would BP and the entire Gulf Coast have been better served if the manager and workers on the rig could “read between the procedures” and instinctively know when to do something different?  And had the know-how to actually accomplish the task?


Strategic Knowledge Transfer

May 5th, 2010

In order to get maximum return on your knowledge transfer investment, it is essential that KT initiatives tie directly to larger organizational strategies and initiatives. There should be linkage in two ways: 1) knowledge determined as critical is directly tied to strategic objectives and 2) knowledge transfer itself is strategically structured utilizing a variety of tools, approaches and media that synergistically work together to achieve strategic objectives.

The most sustainable approach is an organic one supported by a variety of organic processes and an overarching strategically-focused methodology. By organic, I mean one that evolves naturally. Knowledge Transfer becomes a natural end result of not only training and more formal KT processes but of day-to-day work and communication. In other words, it becomes a part of the culture and happens naturally. In order to “get there”, specific cultural elements must be identified and behavioral aspects included in methods and processes. At the end of the day, knowledge transfer is about people and without proper attention to these people aspects, it is likely to fail.

Knowledge Transfer must also be “owned” and managed just like any other critical strategy. Results should be measured and participants held accountable for meeting goals and objectives.

Critical Questions to Help You Get Started

It is important to ask the right questions at each step along the way. As you start to formulate your strategy and what you will need, consider these questions.

  • What methods do we have for identifying the knowledge that’s most important to our organization’s survival and growth?
  • Once we identify the critical knowledge, how will we identify and evaluate where we are most at risk? Where is the knowledge? Whose head is it in? What’s the risk of their leaving, or of us losing access to that knowledge?
  • Once we identify the critical knowledge and where it resides, how will we identify the best means of capturing and transferring it?
  • What are our options and how do we evaluate options and choose the one(s) that are right for us?
  • Once we decide on an option, what resources should we use to implement? Should we try to do it internally or hire outside consultants?
  • How do we ensure all KT initiatives are working synergistically toward a common goal?
  • How will we measure results?

Knowledge Transfer Options and Strategies

May 5th, 2010

The need for effective ways to transfer knowledge in short time frames is critical. Although there are a variety of options and strategies to choose from, decisions around “which to employ when” can often be challenging.

There are many things to consider when choosing the right vehicle or combination of vehicles, and the complexity of the challenge continues to grow.

Understanding Your Options

In order to map out the best way forward, let’s look at a typical situation and the different ways organizations are handling it today.

John Smith is a 35-year veteran and key employee who is scheduled to retire in 9 months. Much of what he knows or knows how to do has not been made explicit; it resides inside his head. There are others in the organization with bits and pieces of John’s knowledge, but no one with the complete picture or depth of knowledge that John has. You have limited resources and a ticking clock. What do you do?

There are several ways companies typically deal with an impending knowledge loss like this one.

• Do nothing and hope for the best (hope is not a strategy)
• Hire the SME back as a consultant (rarely cost effective and only prolongs the problem)
• Hire someone new who has the knowledge (good in theory, but outsiders typically lack the organizational knowledge important to getting work done and are unlikely to be able to perform at the same level – at least not quickly)
• Have the SME “write down everything they know about x and y” (unstructured and unfacilitated ‘knowledge dumps’ rarely result in quality information)
• Have the SME work directly with a potential replacement and others on the team in a mentoring capacity (a great concept, but only effective if structured and focused and if it provides critical learning experiences; it is also time intensive)
• Restructure the job, giving various tasks to others in the organization (still doesn’t solve the problem of knowledge transfer)
• Redesign the work to make the SME’s job primarily one of Knowledge Transfer (great in concept, but suffers from the same problems as #5)
• Capture critical knowledge via facilitated interviews (one of the better solutions, but it comes with its own complexity and challenges). More on this in Part III – Capturing Critical Knowledge Via Facilitated Interviews.

Identify which knowledge is most critical

Any of these approaches (except maybe #1) is a step in the right direction, if for no other reason than that it means the organization has at least recognized the problem. Recognition is an important first step, but it too is more complicated than it appears.

Recognition is the first step. Step 2 is a clear understanding of the problem, which involves being able to identify the knowledge in your organization that’s most critical. The reality is you can’t, nor should you try to, transfer knowledge from every seasoned employee who’s about to retire.

Not all knowledge is of equal value to your organization. In order to get strategic value and maximum return on your investment, you must begin by clearly identifying what’s most important. Details on how to do this are included in Part II of this series: Identifying and Prioritizing Critical Knowledge.