The focus of this post is what can interfere with the support the ship’s crew should provide to the shipyard during an overhaul.
Providing work and test support in overhaul is challenging for the crew. This post defines good shipyard support.
Overhaul is hard for the crew because of the knowledge gap between ship’s force and shipyard managers about the work and schedule.
This is the first of a series of posts explaining why sailors don’t like ship overhaul. I use the analogy of getting a dental filling, a routine bit of tooth maintenance, as an illustration.
While they can seem like chaos, overhauls have three major parts: setting plant conditions, repairs, and testing the repairs. These parts can be divided even further. Some aspects of overhaul are subtle or fiendishly counterintuitive.
Ship overhaul is hard. It’s even harder when you fall into the trap of fighting against its realities. This is an introduction to my new series on ship overhaul.
While it is adaptive for humans to distort reality and view events through cognitive biases, it isn’t for HROs. This post discusses why people and organizations avoid accountability, the role accountability plays in HRO, and why it is essential for being highly reliable.
This post and the next focus on the last of the HRO principles that Weick and Sutcliffe missed in their canonical description, accountability. In this post, I discuss what accountability is and the precursors necessary to make it effective.
In this second of two posts on assessment, I argue that assessment contributes to resilience by identifying problems with procedural compliance, records, and performance. I identify the components of HRO assessments. Assessments are a ruthless test of the adequacy of a management system and its design assumptions.
BLUF: Assessment is necessary for learning whether a system or process is functioning the way you expect, including its management. Audits evaluate compliance with procedures, the effectiveness of training and qualification, and diligence in maintaining safety-critical systems. In the first of two posts on assessment, I address what it is and why it is necessary for High Reliability Organizing.
BLUF: After cheekily declaring that the canonical five principles of High Reliability Organizing articulated by Weick and Sutcliffe (2007) are insufficient, I assert that three additional principles are essential for HRO. The first of these is training based on rigorous standards for technical knowledge and performance.
BLUF: In their hallmark book, Managing the Unexpected (2007), Weick and Sutcliffe distilled five principles of High Reliability Organizations. I contend that these principles are necessary, but not sufficient for understanding High Reliability Organizing. In this post, I explain why more principles are necessary, the role of management in HRO, and articulate four criteria that additional principles must meet.
Questioning attitude is the skill of constantly collecting data about a decision and speaking up when it doesn’t make sense. There are strong, unconscious social and psychological pressures to conform to the majority view that must be overcome for questioning attitude to improve resilience and decision making. Questioning attitude has both technical and social bases.
A questioning attitude is essential for mindfulness and resilience. It can be expressed in the form of dissent to enhance safety and improve decisions. This post defines questioning attitude and argues that it makes divergent thinking accessible, enhances the ability to “listen” to weak signals of danger, and has much in common with Weick’s seven traits of sensemaking.
Blind spots emerge in the course of doing work. They are a function of both the situation and the observer. You can’t search for them, but you can make encountering them more likely, for example by creating reporting systems and conducting independent audits. You can’t manage blind spots, but you can learn to improve how your workers and managers react when they discover them.
BLUF: This post uses the Challenger launch decision of January 1986 as a case study in blind spots. It has been analyzed in many books and articles, but my take is unique because of its focus on blind spots.
This post is a return to important HRO concepts. Organizational blind spots are a fact of life. They are safety vulnerabilities that can exist for long periods below an organization’s awareness. Blind spots are a risk for high reliability because an organization doesn’t know where to look for them and they don’t look like problems when encountered.
This is my final post in the JSM-ALNIC collision series. It is an overview of the mindset that I encourage others to adopt for learning from official accident reports.
This is the second post of conclusions from the August 2017 collision between JSM and ALNIC. Like the Navy report, I conclude that no single party was responsible, but I propose my own version of causality for the accident. While many parties external to the ship bear responsibility, I restricted my analysis to the actions of the crew and the conditions on the ship because that is the only data available.
I’ll publish the conclusion to my series on the August 2017 collision between JSM and ALNIC in three parts due to length. This is part 1. It isn’t easy to learn how to be more reliable by reading accident investigation reports. Even when they are well written (rare), it takes work and clear thinking (rarer still) to avoid mental biases like thinking, “those idiots!” Each investigation report is a case study that readers have to untangle in order to understand it and apply it to their own circumstances.