Summary: This is part three in my series of posts on human error. This post is again based heavily on Chapter 3 of “To Err is Human” by the Institute of Medicine. This post reviews some ideas on the nature of safety, observes that some types of complex systems are more prone to error/failure than others, and introduces the term “naturalistic decision making.”
This is part two in what I believe will be a five-part series of posts on human error. This post is based heavily on Chapter 3 of "To Err is Human" by the Institute of Medicine. The book reviews the current understanding of why medical mistakes happen, and its approach applies to other high-hazard industries as well. A key theme is that legitimate liability and accountability concerns discourage the reporting and deeper analysis of errors, which raises the question, "How can we learn from our mistakes?" This post covers why errors happen and distinguishes between active and latent errors.
To paraphrase Mark Twain, it is not what people know about accident investigations, causality, and corrective actions that gets them into trouble (or leads to weak corrective actions and thus the same problems over and over again); it is what they know that just is not so. This post is based on Sidney Dekker’s “The Field Guide to Understanding Human Error.” Read further only if you dare to have your view of critiques and corrective action challenged. You may conclude that everything you think you know about human error investigations (also known as critiques and fact-findings) is wrong or in need of serious revision. Part 1 focuses on some basics of human error that more people should know, and part 2 will recommend things you can do to get better at managing human error (strange as that may sound). This post is a little longer than normal, so make sure you have some extra time to read it.