One lonely cinnamon roll doesn't mean it is old

When conducting investigations, we often allow our assumptions to get in the way. We see something we perceive as obvious and latch on with a biased opinion, failing to observe the other evidence in the new scenario. We default to assuming the failure mode is similar to something we have previously experienced, and we share that historical mental transcript, sometimes carrying enough confidence and prestige to lead the team into following our bias. Unfortunately, what happened previously isn’t always what happened this time. How can we participate actively in an investigation without leading with a predetermined bias?

I saw this kind of bias at work at my weekend coffee shop. One Saturday morning, I was in line behind a woman and her daughter when they both laid eyes on the one lonely cinnamon roll left at 8:04 in the morning. The woman said to her daughter, “This coffee shop makes a fantastic cinnamon roll, but don’t get it. It’s left over from yesterday.” The barista replied, “Ma’am, those are fresh. We have sold the other 11 since we opened at 7:00 am.” This habit of seeing something and immediately attaching it to a previous experience disrupts our ability to explore more facts. It doesn’t happen only in line at a coffee shop but also with incidents we experience in manufacturing.

We too often allow our internal biases, organically formed from previous scenarios, to influence the current incident. We may also be overwhelmingly influenced by the confidence of those around us, steering an investigation the wrong way. If we are lucky, the previous incident may have the same cause-and-effect relationships as the current one. More often than not, however, the celebrated quick solution is wrong.

Instead, we need to find ways to disrupt these short-circuit solution tactics and approach each incident as a new problem to solve. This is not to discredit the value of previous incidents or the experience gained from historical tactics. Rather, if we rapidly test any incident with a three-step approach, we can get to the root cause more quickly and move on to corrective actions.

1. Show the thought process

The first step is to have the patience to show the cause-and-effect relationships of a thought process. I prefer to stay away from a 5-why process because the problems I engage with are rarely that linear or simplistic. Instead, I tend to use problem-solving techniques that explore the full breadth of cause-and-effect relationships. Consider something like ThinkReliability’s Cause Mapping tool or an Ishikawa diagram with markers on a dry-erase board. The goal is to explore with the investigation team the relationships and evidence that display the sequence of events leading up to the incident. Software tools such as TapRooT or Sologic can support these techniques with a complete investigation package.
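The key structural difference between a 5-why chain and a cause map is that a cause map lets each effect have several contributing causes, each tied to evidence. A minimal sketch of that idea in Python (all names and the example incident are hypothetical, not from any vendor's tool):

```python
# Hypothetical sketch: a cause map as a directed graph, where each effect
# can have multiple contributing causes -- unlike the single linear chain
# that a 5-why exercise produces.

from collections import defaultdict

class CauseMap:
    def __init__(self):
        # effect -> list of (cause, supporting evidence) pairs
        self.causes = defaultdict(list)

    def add_cause(self, effect, cause, evidence=None):
        self.causes[effect].append((cause, evidence))

    def walk(self, effect, depth=0):
        """Yield (depth, effect, cause, evidence) tuples, depth-first."""
        for cause, evidence in self.causes[effect]:
            yield depth, effect, cause, evidence
            yield from self.walk(cause, depth + 1)

# Example incident with two parallel contributing branches.
cmap = CauseMap()
cmap.add_cause("bearing failure", "lubricant breakdown", "oil analysis")
cmap.add_cause("bearing failure", "misalignment", "vibration spectrum")
cmap.add_cause("lubricant breakdown", "overheating", "temperature log")

# Both branches survive side by side; a 5-why would have kept only one.
branches = [cause for cause, _ in cmap.causes["bearing failure"]]
```

The point of the structure, as in the prose above, is that evidence for both branches stays visible until the investigation rules one out, instead of the first plausible chain crowding out the rest.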

2. Follow the energy

The second step is to follow the energy within the sequence of events. The Law of Conservation of Energy lets you understand the steps that led to the incident by unraveling the evidence. One of my favorite tools to help guide this process is the Shainin System, developed by Dorian Shainin and sometimes referred to as Statistical Engineering or Red X in the automotive sector. It is founded on the belief that dominant causes of variation leave evidence in any incident, and the goal is to diagnose that evidence until a major contributor to the cause is found. To find the major contributor, you are trained to follow the sources of energy that created the evidence.

3. Avoid creating a story for Hollywood

The third step is to resist the allure of a grand story or a simple solution. A mentor of mine said years ago that there is no such thing as operator error. A mindset that avoids immediately blaming an individual for an action or inaction can lead to root causes that are more systematic and process-oriented; blaming the operator short-circuits the investigation into unsustainable corrective actions. The trick is to respect information derived from previous incidents while understanding that it may not apply to the current one. The cause and an immediate solution aren’t always derived from the first thing we see. You may find yourself swayed by simple answers and discard information that doesn’t seem relevant. Your previous biases may sway you and, more damagingly, sway others. Instead, allow others to cross-check your biases.

Be aware that you may be swayed, and discard the information that isn’t relevant. Don’t let your biases sway others, and get other people to cross-check your data. - Lawrence Jones -

Going back to ThinkReliability’s Cause Mapping tool: if a component or process fails again, consider immediately opening the previous Cause Map. This road map of the previous incident can guide the team through the past while solving the most recent one. Exploring the previous cause-and-effect relationships can lead the team to evidence that is different this time, or to evidence that was missed in the previous incident.


Regardless, in an investigation we have to challenge and prove our assumptions. We must ingrain investigation habits that recognize when we are leaning toward our biases instead of exploring the current situation’s evidence. When we build these habits and focus on these three techniques, we become more successful investigators. A mindset that the root cause is not always obvious is the glue that allows the team to arrive at sustainable solutions.

