Are humans overzealous cause detectors?

The title of this post is a line from a Wall Street Journal (WSJ) article, “Do Our Gadgets Really Threaten Planes?” by Daniel Simons and Christopher Chabris, Sept. 7, 2012.  The article asks whether the ban on phones and such is based on fact or fear.  It is a good question, because many of us have forgotten to turn our stuff off during a flight a time or two.

The article makes a good point, in my opinion, because I do not believe electronic devices disrupt planes.  The MythBusters also tested this and found no effect on their test plane, even when they increased the phone's transmit power tenfold.

I am writing this post because of a subplot in the article.  Near the middle, the authors suggest that fear may be driving the ban more than facts.  This is where they state that humans are overzealous cause detectors.  To quote the authors: “When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did.”  This is a trait that most Lean Six Sigma practitioners face in many projects.  Humans like to see connection and causation when only a correlation may exist.  One of my mentors called this “False Knowledge.”  People form a belief after they do something different and the output of a process changes.  The person must then choose between random coincidence and human causation, and most will choose human causation.
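To make this concrete, here is a minimal sketch in Python (the process model and numbers are my own assumptions for this post, not anything from the article): a stable process that is pure noise, plus an operator tweak that does nothing at all.  Roughly half the time, the very next sample looks better anyway, and that coincidence is the raw material of false knowledge.

```python
import random

# A minimal sketch (assumption: a stable process modeled as pure noise).
# An operator "tweaks" the process between two samples; the tweak has
# no effect, yet about half the time the next sample looks better.

random.seed(42)
trials = 100_000
looked_better = 0

for _ in range(trials):
    before = random.gauss(100.0, 5.0)  # output before the tweak
    after = random.gauss(100.0, 5.0)   # output after a tweak with no real effect
    if after > before:                 # "improvement" observed purely by chance
        looked_better += 1

print(f"Tweak 'worked' in {looked_better / trials:.1%} of trials")  # ~50%
```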

As a statistician and Lean Six Sigma MBB, I have gained a deep understanding of variation and see it in everything.  I have found that most people do not fully comprehend the nature of random variation.  Most people think of variation as “regular” or “common,” but random extreme events also occur.  Things break, fail, and stop because of random extreme events; where most people assume there is a cause, I tend to first consider a random extreme event.  I believe the assumption of a cause for every extreme event leads to much of our false knowledge and many bad decisions.
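As an illustration (a simulation I made up for this post, not data from any project): draw a thousand points from a perfectly stable normal process, and the most extreme one will routinely sit three or more sigma from the mean, with no special cause behind it.

```python
import random

# A minimal sketch (assumptions: a stable process as i.i.d. standard
# normal noise, 1,000 observations). Even with no special cause, the
# most extreme observation routinely lands 3+ sigma from the mean.

random.seed(7)
samples = [random.gauss(0.0, 1.0) for _ in range(1_000)]
extreme = max(samples, key=abs)
print(f"Most extreme of {len(samples)} samples: {extreme:+.2f} sigma")
```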

For Lean Six Sigma projects, I think the greatest successes (for permanent change) come when you are required to prove the non-random nature of an event before implementing changes.  This is what statistical testing provides.  The null hypothesis is randomness; the alternative is causation.
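Here is a sketch of what putting randomness on trial can look like (the before/after numbers are hypothetical, and I am using a simple two-sample permutation test as one example of such a test, not any particular project's method).  The null hypothesis is that the labels are arbitrary, so any observed shift is just random variation.

```python
import random

# A sketch of a two-sample permutation test on hypothetical data.
# Null hypothesis: the before/after labels are arbitrary (randomness).

random.seed(1)
before = [101.2, 99.8, 100.5, 98.9, 101.7, 100.1]  # hypothetical baseline samples
after = [102.9, 103.4, 101.8, 104.1, 102.2, 103.7]  # hypothetical post-change samples

observed = sum(after) / len(after) - sum(before) / len(before)

pooled = before + after
count = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)             # relabel the data at random (the null)
    perm_after = pooled[:len(after)]
    perm_before = pooled[len(after):]
    diff = sum(perm_after) / len(perm_after) - sum(perm_before) / len(perm_before)
    if abs(diff) >= abs(observed):     # a shift at least as extreme as what we saw
        count += 1

p_value = count / n_perm
print(f"Observed shift: {observed:.2f}, p = {p_value:.4f}")
# A small p-value is evidence against "it was just random variation."
```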

An unintended consequence of my belief in greater randomness is that I will recommend no action after many seemingly extreme events, because there is not enough evidence to prove that the factors found to be correlated during root cause analysis actually caused them.  A recommendation to take no action also goes against human nature.  If a client must take a corrective action, because of culture or a contract requirement, I recommend an action that is expected to be non-impactful: rewriting a document to be more readable, changing a training program, or briefing the workforce.  Just do not change a process from a historically good setup because of a single random extreme event.