PAASTUB Email Process-Step 7: BLUF-ing

BLUF: The Bottom Line Up Front (BLUF) describes the essence of your email in three sentences, plus or minus one. Reading the BLUF alone should be sufficient to understand both the action and its importance. Creating a good BLUF also helps you clarify your thoughts about the message.

HRO 9e Collision at Sea-Sequence of Events4, Value Conflicts2

BLUF: This is Part Four of the sequence of events associated with the collision of the USS JOHN S MCCAIN (DDG 56) with Motor Vessel ALNIC MC on 21 August 2017 in the Straits of Singapore. It is part of a series devoted to explaining key concepts of HRO in context. This is the second of two posts that use the sequence of events to examine value conflicts associated with High Reliability Organizing.

HRO 9d Collision at Sea-Sequence of Events3, HRO Value Conflicts1

This is Part Three of the sequence of events associated with the collision of the USS JOHN S MCCAIN (DDG 56) with Motor Vessel ALNIC MC on 21 August 2017 in the Straits of Singapore. It is part of a series devoted to explaining key concepts of HRO in context. This is the first of two posts that use the sequence of events to examine value conflicts associated with High Reliability Organizing.

HRO 9a Applied HRO-Collision at Sea

The USS JOHN S MCCAIN (DDG 56) collided with Motor Vessel ALNIC MC on 21 August 2017 in the Straits of Singapore. Ten Sailors died, forty-eight more were injured, and both ships were damaged (DDG 56 seriously). This is the first in a series of posts devoted to the application of HRO for analyzing the accident.

HRO8 HRO Roles

The complexity and risks involved in safety-critical work are managed with role systems. The work is divided among groups or separate organizations with defined roles. Each is important to the mission of the organization. A role consists of defined behaviors and responsibilities required of people because of their position in the organization.

HRO3 A Fire Aboard Ship Can Ruin Your Day ...

There are few facts available about the fire aboard the USS BONHOMME RICHARD (LHD 6), but lots of assumptions and opinions. This post provides an HRO introduction to the fire. It starts with the basic facts associated with Navy ship fire safety. It continues with some observations on the causes of serious shipboard fires. It concludes with the need to treat shipboard fire safety as a system.

HRO2 Introduction to HRO

This post provides an overview of High Reliability Organizing (HRO) and the research on it. I distinguish high reliability organizing, the principles and practices that yield superb performance, from high reliability organizations, collections of people working to create high reliability. I focus on the principles and practices of organizing to achieve high reliability (High Reliability Organizing), that is, what the people in the organizations DO and WHY, rather than on the organizations themselves. This blog is about the organizational design practices for active management to reduce failure and increase the reliability of important outcomes.

HRO1 My HRO Blog

This post is an introduction to my writings on High Reliability Organizing (HRO). I have a different perspective than others because I am both an organizational scholar and an HRO practitioner with decades of experience. The blog is a way for me to explore those different ideas and share them with others.

Summary: This is part three in my series of posts on human error. This post is again based heavily on Chapter 3 of “To Err is Human” by the Institute of Medicine. This post reviews some ideas on the nature of safety, observes that some types of complex systems are more prone to error/failure than others, and introduces the term “naturalistic decision making.”
This is part two in what I believe will be a five-part series of posts on human error. This post is based heavily on Chapter 3 of “To Err is Human” by the Institute of Medicine. The book reviews the current understanding of why medical mistakes happen, and its approach is applicable to other high hazard industries as well. A key theme is that legitimate liability and accountability concerns discourage reporting and deeper analysis of errors, which raises the question, "How can we learn from our mistakes?" This post covers why errors happen and distinguishes between active and latent errors.
To paraphrase Mark Twain, it is not what people know about accident investigations, causality, and corrective actions that gets them into trouble (or leads to weak corrective action and thus the same problems over and over again), it is what they know that just is not so. This post is based on Sidney Dekker’s “The Field Guide to Understanding Human Error.” Only read further if you dare to have your worldview of critiques and corrective action challenged. You may conclude that everything you think you know about human error investigations (also known as critiques and fact findings) is wrong or in need of serious revision. Part 1 focuses on some basics of human error that more people should know, and Part 2 will recommend things you can do to get better at managing human error (strange as that may sound). This post is a little longer than normal, so make sure you have some extra time to read it.