“Preventing process accidents requires vigilance. The passing of time without a process accident is not necessarily an indication that all is well and may contribute to a dangerous and growing sense of complacency. When people lose an appreciation of how their safety systems were intended to work, safety systems and controls can deteriorate, lessons can be forgotten, and hazards and deviations from safe operating procedures can be accepted.
Workers and supervisors can increasingly rely on how things were done before, rather than rely on sound engineering principles and other controls.
People can forget to be afraid!”
James A. Baker III.
This is an excerpt from The Report of the BP U.S. Refineries Independent Safety Review Panel, led by James A. Baker III.
I found this statement from the panel to be simple, to the point, and yet very effective. Let’s apply it to our own workplaces and see if it holds true. In this context, what does ‘forgetting to be afraid’ mean?
In the associated graphic I demonstrate what it means to me. To me, ‘forgetting to be afraid’ is synonymous with the concept of ‘normalization of deviance’, a term coined by sociologist Diane Vaughan in her study of the Challenger disaster.
When we deviate from an accepted standard, usually by taking a short-cut due to perceived time pressures, and nothing bad happens…we accept that as the new norm or ‘practice’. After all, we shaved some time off the task (and increased production), and there were no negative consequences. This is one of those ‘hero or zero’ moments: if the short-cut works, we get pats on the back and supervision condones it with its silence. But if it fails at some point, there will be hell to pay for violating that standard, and all of a sudden, selective memory sets in!
This process of ‘normalization of deviance’ becomes iterative. If we got away with it once, we are likely to do it again, to save precious time and satisfy the operational gods. The problem comes as the gap between standard and practice continues to widen and migrates toward the boundaries of what is considered ‘safe’. When exactly is that point reached? I claim one really doesn’t know until they cross it, a failure occurs, and selective memory sets in during the hunt for ‘whodunnit’.
This gradual slide into the ‘unsafe known’ is now commonly referred to as ‘Drift’ by noted safety researchers such as Dr. Sidney Dekker. It is during this gradual drift into the Unsafe Zone that we ‘forget to be afraid’. We grow used to the new practices, used to deviating from accepted norms…we are being conditioned not to be afraid under circumstances WHEN WE SHOULD BE AFRAID!
How Do We Prevent This?
Our oversight mechanisms should detect such drifting behaviors and correct them to stop the drift. Oftentimes, however, this does not happen, because the new practices are working and production records are being set, so who wants to be the ‘bad guy’ and reset behaviors to align with standards?
It takes courage to make such unpopular decisions in the best interest of the safety of our personnel. Whenever one wonders if it’s the right thing to do, remember the families of the 15 people killed in the Texas City explosion. Would they have wanted their loved ones’ co-workers to ‘forget to be afraid’ while supervision looked the other way? Do you ever want to be in the position of facing the families of your peers under such conditions?
Do you see these behaviors at your workplace? At your facility, what systems are in place to prevent people from forgetting to be afraid?
2.4.19 Update: In response to the feedback I received on this expression (per my request), I revised the y-axis of the graphic. It was originally labeled ‘Compliance’, and I agreed with the point raised in the debate that being compliant does not necessarily mean being effective. So I re-labeled it with a broader term reflecting an acceptable standard of performance. Thanks for all the great, constructive debate on this topic.
About the Author
Robert (Bob) J. Latino is former CEO of Reliability Center, Inc., a company that helps teams and companies conduct RCAs with excellence. Bob has been facilitating RCA and FMEA analyses with his clientele around the world for over 35 years and has taught over 10,000 students in the PROACT® methodology.
Bob is co-author of numerous articles and has led seminars and workshops on FMEA, Opportunity Analysis, and RCA, as well as co-designed the award-winning PROACT® Investigation Management Software solution. He has authored or co-authored six books on RCA and Reliability in both manufacturing and healthcare and is a frequent speaker on the topic at domestic and international trade conferences.
Bob has applied the PROACT® methodology to a diverse set of problems and industries, including a published paper in the field of Counter Terrorism entitled, “The Application of PROACT® RCA to Terrorism/Counter Terrorism Related Events.”