

Overhaul 17a: Undeniable Truth of Overhaul 22

Introduction

Crew errors during overhaul are ubiquitous and inevitable, as I have argued in previous posts. High performance is always expected of warship crews, but preventing errors isn’t realistic in ship overhaul. What is realistic is error management: keeping errors small when they occur (error trapping) and learning rapidly to reduce repeat problems.

This is the first of two posts describing Undeniable Truth 22: Learning rapidly is a superpower. Learning rapidly in any context is important, but few environments punish slow or ineffective learning from experience more ruthlessly than ship overhaul. This post begins with the bold statement that leaders of ships in overhaul should expect errors. It continues with the observation that problems and errors are valuable. They are the path to more profound insight into system defects just below the surface of “they pushed the wrong button.” The section that follows, Learning Requires Unlearning, emphasizes that ineffective learning follows from unquestioned assumptions and practices that feel right, but don’t map well to overhaul. Part 1 of Undeniable Truth 22 concludes with the necessary balance between backward-looking and forward-looking accountability.

Expecting Error

In a previous post, I explained why errors (particularly by the ship’s crew) are inevitable. Most errors in complex systems result from systemic characteristics beyond the control of operators and from the inherent limitations of human cognition. Memorizing facts like relief valve set points is easy; error-free performance is impossible.

There are two important questions after identifying a performance problem. The first is, “What will the crew learn from this?” The focus is primarily on the operators who made the errors, with secondary emphasis on those in similar roles who are likely to face similar situations in the future. What senior leaders, those not directly involved in the issue, learn about the training and qualification programs and past corrective actions for which they are responsible is seldom considered. This probably isn’t a conspiracy to cover up problems, but rather a consequence of their firm belief that those parts of the system are sound, so the operators must be the source of the defect. I once heard a senior leader at a critique say, without a trace of irony, “Our procedures are sound; people just aren’t following them.”

The second important question after a problem is, “How badly will the corrective actions chosen by senior leaders delay the schedule?”

When faced with a human error problem you may be tempted to ask ‘Why didn’t they watch out better? How could they not have noticed?’ You think you can solve your human error problem by telling people to be more careful, by reprimanding the miscreants, by issuing a new rule or procedure. They are all expressions of the ‘Bad Apple Theory’ where you believe your system is basically safe if it were not for those few unreliable people in it. This old view of human error is increasingly outdated and will lead you nowhere. – Sidney Dekker (2017)

There’s nothing like a crew performance problem to make everyone except shipyard managers forget that the goal of overhaul is to get the ship back to sea, mission capable, on schedule. Shipyard managers are the only people who even think about the schedule question, and they seldom raise it in mixed company because those who dare to do so quickly learn that it is an undiscussable of Navy ship overhaul. Recall that undiscussables are topics or issues within an organization that create threat or embarrassment and are consequently avoided or left unaddressed (Argyris, 1980). Undiscussables are significant obstacles to learning and improving performance in organizations.

Even if it weren’t an undiscussable, the Bad Apples approach to human error would be difficult to discard because, like many biases and misconceptions, it feels right. Hindsight, backward-looking accountability, is so clear. It’s what we’ve always done. Leaders see what went wrong, point to the people who “touched it last,” tell them or make them admit what they did “wrong,” and tell them not to do it again. Fix the people and move on to the next issue. That’s learning, right?

Error is Valuable - Don’t Waste It

Of course people in organizations can learn from finding, training, and making examples of Bad Apples. People do change their behavior in response to name, blame, shame, and retrain cycles. If you truly want to improve performance and safety, however, that is not enough.

In the systems view, problems are indications, sometimes weak, of deeper issues. This is what makes them valuable. The leadership challenge is that what you can learn isn’t self-evident or as easy as identifying the operator and telling them to do better. Learning from problems, mistakes, errors, and “near hits” (Rigot) requires that leaders make the effort to understand how operators’ assessments and actions made sense at the time, given the circumstances they confronted. For example: How were they trained? How were they prepared? Who assigned them to that situation? What else was going on?

The most important question to ask after problems are discovered isn’t “Who’s to blame?” It is “What went wrong and why?” This starts the process of identifying deeper issues with the system that contributed to what went wrong.

Problems and near hits (aka “near misses”) are indications of deeper issues in your system, such as:

* gaps in qualification

* gaps in error trapping (monitoring, error checking)

* deficiencies in mentoring junior operators

* interruptions and giving people too many things to do at once

* not learning from past problems (on the current ship, or non-learning carried forward from previous ships)

* operational mindsets (lineups normal, conditions stable, self-correcting) inappropriately applied to overhaul (nothing lined up, nothing stable or self-correcting)

* weaknesses in preparation (training, briefs, walkthroughs, allocation of supervision)

* ineffective supervisory practices (not “more supervision”— think of your supervisors as three wishes from a genie, use them wisely)

By focusing on things that are only “obvious” with hindsight, things the guilty parties supposedly lacked, leaders get tricked into settling for partial learning: inaccurate assessments (“loss of situational awareness”), wrong decisions, and bad judgments. Stopping problem investigations at “those idiots did these bad things” wastes opportunities to learn about deeper issues that may reproduce the error in the future.

Learning Requires Unlearning

Learning from systems problems to reduce future error requires unlearning: unlearning superficial searches for a “root cause” (a persistent Navy organizational myth), unlearning blaming Bad Apples for bad outcomes (the Fundamental Attribution Error), and unlearning presumptions about the value of existing procedures and practices (Overconfidence bias).

Reminding people to “maintain situational awareness” and follow procedures doesn’t make the system safer.

Discarding obsolete and ineffective approaches to understanding what went wrong after problems is just as important as learning what makes operators less vulnerable to error in the first place. You can’t develop better understandings about problem causes without questioning deeply held beliefs and assumptions about error.

Unlearning old practices, like the superficial search for a “root” cause, and questioning old assumptions are painful because those practices and assumptions feel like they work. More correctly, they mostly work. Leaders get fooled into relying on them because many Navy systems (not all) have robust designs and are stable most of the time, and because operational practices help keep errors small in the right contexts.

Unlearning is hard because people get angry and defensive when their core beliefs are challenged. Our brains are supercharged self-justification machines.

"Faced with the choice between changing one's mind and proving that there is no need to do so, almost everybody gets busy on the proof." —Galbraith, 1971, p.50.

People should get angry about unlearning, but not when someone questions their beliefs. The proper targets of their anger are the people who taught them false beliefs and themselves for accepting them without question. If questioning assumptions, developing new ways of “seeing,” and unlearning aren’t upsetting and painful, you’re not doing them right.

A learning crew is skilled at creating, acquiring, and transferring knowledge, and at modifying the behavior of its members to reflect new knowledge and insights. Crew members are self-aware and introspective. They constantly scan for overhaul challenges and ways to learn from them. The primary responsibility of the crew’s leaders (“khaki” in the Navy) is to create and foster a climate that promotes learning. In the absence of deeper learning, responses to error are cosmetic. Improved performance is either lucky or short-lived.

What About Accountability?

A frequent objection to systems approaches to error (i.e., human error is a symptom of trouble deeper inside the system, whether engineered, organized, or socialized) is that they ignore accountability. That objection rests on a view of accountability that is too narrow and entirely backward-looking. Finding “who did it” and labeling, with hindsight, “what they did” as an error only makes things a little safer. It might teach the last person, but what about the next one?

It is a mistake to see responsibility as a dichotomy: either the people or the systems in which they operate. It is more productive for improving safety to think about errors as the result of people in systems. No system is inherently safe. Safety and effectiveness (you need both) in complex systems are created by people, through practice, at every level of the organization.

The “last person to touch it” bears some, but usually not all, of the responsibility when things go wrong. Errors result from conditions systematically connected to the work (Dekker & Leveson, 2014). Accountability is therefore a balance between the person and the system. Failures can only be understood by looking at the whole system in which they took place. Human behavior is inextricably linked to features of tools, tasks, and environment.

The systems approach to error uses “forward-looking accountability” (Dekker, 2012, p. 83). Accountability that is forward-looking looks for opportunities for change; it:

* Acknowledges mistakes and bad outcomes,

* Identifies opportunities for systemic changes (qualifications, training, mentoring, application of supervision, etc.),

* Addresses knowledge weaknesses and skill deficiencies, and

* Assigns responsibility for designing, implementing, and evaluating improvements.

The last point is fundamental. Without plans to assess the effectiveness of changes, all you have is a hope that things will get better. You will find out eventually if they do, but it might be painful.

Part 1 Wrap Up

The key ideas of this post are:

* Crew error in overhaul is inevitable.

* No crew enters a prolonged maintenance period with its members knowing everything they need to know.

* Leaders support or hinder learning by how they respond to error.

* Questioning and unlearning deeply held beliefs, myths, and assumptions isn’t easy (regulators can be particularly skeptical about this) or pleasant. If it isn’t painful, you’re not doing it right.

* Problems and errors are valuable, but only if the crew’s leaders shift their view of accountability from backward-looking (“Who did it?”) to forward-looking (“What went wrong and why?”).

In the next post, I’ll describe the learning and error management strategies that my team and I used on a CVN in Refueling Complex Overhaul.

References

Argyris, C. (1980). Making the undiscussable and its undiscussability discussable. Public Administration Review, 40(3), 205-213.

Dekker, S. (2012). Just culture: Balancing safety and accountability. CRC Press.

Dekker, S. (2017). The field guide to understanding human error. CRC Press.

Dekker, S. W., & Leveson, N. G. (2014). The bad apple theory won't work: response to ‘Challenging the systems approach: why adverse event rates are not improving’ by Dr Levitt. BMJ Quality & Safety, 23(12), 1050-1051.

Galbraith, J. K. (1971). A contemporary guide to economics, peace, and laughter. Houghton Mifflin Company.
