Humans appear to be wired, for survival reasons, to jump to conclusions based on personal experience, particularly regarding potential threats. Eons ago, Harry eat red berries and Harry fall down, have convulsions, and die. Conclusion: We no eat red berries. Can’t argue with that. Although other varieties of red berries may provide wonderful nourishment, why take a chance?
Equating the consumption of red berries with dying is an example of a correlation: an observed apparent coupling of one thing to another. As statisticians like to say, however, correlation is not proof of causation. The classic example is the observation that the rooster crows at dawn; i.e., there is a correlation between the rooster crowing and the sun peeping over the horizon. A primitive person might observe this correlation and conclude that the rooster’s crowing causes the sun to come up. Actually, as we know, the reverse is true: the rooster’s crowing is a result, not a cause. Therefore, to learn the correct lessons from observed correlations, it’s important to discern cause from effect.
In other instances, a correlation is simply a coincidence; i.e., no true correlation exists. For example, someone dreams that a loved one will be in an accident, and subsequently the loved one is. Although the dream is observed to correlate with the subsequent unfortunate event, in truth this was just a coincidence (see “The Single Event Fallacy” in “I’m Right! (Or Am I?)”). But the person who had such a dream will, unless they employ Engineering Thinking, understandably be very likely to conclude that they’ve experienced a profound psychic event.
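How easily a “correlation” can arise by pure chance is something we can check numerically. The following is a minimal sketch (using nothing beyond the Python standard library; the series lengths and counts are arbitrary choices for illustration): it generates many random data series that are, by construction, completely unrelated to a target series, and then reports the strongest apparent correlation found among them.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)

# A short "target" series of pure noise (20 points, arbitrary length).
target = [random.gauss(0, 1) for _ in range(20)]

# Generate 1000 equally random, unrelated series and keep the best "match".
best = max(
    ([random.gauss(0, 1) for _ in range(20)] for _ in range(1000)),
    key=lambda s: abs(pearson(s, target)),
)

print(f"strongest chance correlation: {pearson(best, target):+.2f}")
```

With enough unrelated series and short enough samples, a strikingly strong correlation will almost always turn up, even though every series is noise. This is the numerical face of the coincidence trap: if you look at enough events, some will appear coupled by chance alone.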
If an engineering design exhibits problems, engineers are very careful to thoroughly examine the situation. They can’t afford to confuse cause and effect, or to assume that correlations exist when they may not. Instead, they study and test until they have a reasonable certainty that they truly understand the root cause of the problem. This understanding allows engineers to devise effective solutions.
A misguided attempt to fix a problem, made without understanding the true root cause, can make things worse instead of better. An even more unfortunate result is when a fix superficially appears to work, when in reality it introduces hidden defects. Later, after the supposed fixers have taken their bows and are long gone from the scene, the hidden defects erupt, wreaking havoc. Because of the time delay, the defects may not be perceived as having originated with the earlier faulty fix. This is indeed a tragic outcome: a supposed solution is perceived as successful, when in reality it made things worse.
An Action’s Success Should Not Be Judged
On Whether Or Not It Appears To Improve Things In The Short Run,
Instead It Should Be Judged On Whether Or Not
(A) The Improvement Is Maintained Over Time, And
(B) The Improvement Is Superior To Alternative Actions (Including Doing Nothing)
For example, the conventional wisdom is that during the Great Depression President Franklin D. Roosevelt helped guide the economy to recovery by vigorously inserting the federal government into economic affairs. FDR initiated a myriad of intrusions, such as “work relief” programs: jobs that were funded by the government. Eventually, many years later during World War II, the economy finally did indeed improve. Some observers thought, wow, FDR didn’t spend enough federal money, because it wasn’t until the world war started, and federal spending went up even more, that we finally got ourselves out of the depression. In other words, there was a perceived correlation between massive federal spending and the end of the depression. But was this correlation properly interpreted? Was federal spending the root cause of the recovery?
In our next post we’ll take a look at how an engineering team might address that question.
A Brief Engineering Review of Economic Meltdowns