Saturday, September 30, 2017

Don't Just Pivot!

No doubt you've heard insanity defined as "doing the same thing over and over and expecting different results." And yet we know from science that advances often come when experiments fail to replicate a result. Einstein himself, said to be the aphorism's author, often repeated experiments. After all, experiments sometimes produce false results. You don't have to be Einstein to know it is a good idea to run the test again.

Yet if you are in business, you probably live by the insanity aphorism - insisting that no test be repeated. How often have you said "But we already tried that, and look how it turned out!" Calling others insane is an effective way to shut down further experimentation (and thinking).

Fortunately for us all, the world re-runs experiments all the time, and often gets different results. Webvan failed. Now Amazon, Google, and others are delivering to your door. EachNet (and its acquirer, eBay) failed to make cash-on-delivery work in Chinese C-to-C e-commerce. Now Taobao's cash-on-delivery system is thriving. The failures of AltaVista, Excite, Lycos, and others led many to conclude that internet search could not be a business. Now, well, you know.

You're probably already trying to explain the differences in all these examples. Slow down; the broader issue here is the problem of false results.

Sometimes experiments generate false negatives - they tell you "no" when the real answer is "yes." And sometimes experiments generate false positives, telling you "yes" when the real answer is "no." You of course know about false positives and false negatives in medicine. We worry about them a lot, which is why we often go back for a re-test when things get medically serious. But for whatever reason, we don't think about false results nearly enough in business.

For instance, I recall one of the early movers in digital medical diagnostic imaging. Its system was rapidly adopted by several hospitals, generating so much excitement that executives quit their jobs to join the company. Then growth abruptly ended. It turns out the early wins were a false positive. (Many more examples of false positives can be found in Geoffrey Moore's "Crossing the Chasm" books, enough to have created a consulting juggernaut.)

False negatives in business are common too, as in the examples of search, delivery, and Chinese COD - but they are often harder to spot. The problem is that false positives are self-correcting, but false negatives are not. When you get a positive result from a business experiment, typically you'll keep at it. If it turns out to have been a false result, the world will make that clear enough. But if you get a false negative, you'll be inclined to "pivot." And you'll never know that you were on to something good - unless somebody else tries it again.
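
To see the asymmetry concretely, here is a minimal simulation sketch in Python. The numbers are hypothetical - a 70% true success rate and a "pivot after one failure" rule, chosen purely for illustration:

    import random

    random.seed(0)

    TRUE_RATE = 0.7      # hypothetical: the idea genuinely works 70% of the time
    N_FOUNDERS = 10_000  # independent founders, each running one test of the idea

    kept = 0
    wrongly_abandoned = 0
    for _ in range(N_FOUNDERS):
        if random.random() < TRUE_RATE:
            # Positive result: this founder keeps going, so later tests
            # will expose the error if this was a false positive.
            kept += 1
        else:
            # Negative result: this founder pivots and never re-tests,
            # so a false negative is never discovered.
            wrongly_abandoned += 1

    print(f"Pivoted away from a genuinely good idea: {wrongly_abandoned / N_FOUNDERS:.0%}")

About 30% of these simulated founders abandon a good idea after a single test, and nothing ever corrects them. The lucky ones who drew a positive result keep experimenting, so their false positives eventually get exposed.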

Perhaps you are thinking: "Hey, in all your examples, there were some variables that changed. A good test would take into account all the variables that matter." Sure, Amazon and Google now know some things that Webvan did not, and they have adjusted those variables accordingly; that is why their experiments are working where Webvan's did not.

Here's the problem: Often we don't know all the variables that matter. This problem is well understood in science. Good scientists know that two seemingly identical experiments can produce different results, because variables unknown to the scientist at the time are often at work. In fact, even random chance can produce odd results. That's why a good scientist knows there is a lot she does not know, so she runs the test again.
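
The same logic says re-running is cheap insurance. A back-of-the-envelope sketch, assuming (hypothetically) that a single test of a genuinely good idea comes back negative 30% of the time and that the runs are independent:

    # Hypothetical false negative rate of a single business experiment.
    P_FALSE_NEGATIVE = 0.30

    for n_tests in (1, 2, 3):
        # Chance that all n independent runs wrongly come back negative,
        # i.e. the chance you still pivot away from a genuinely good idea.
        p_wrong_pivot = P_FALSE_NEGATIVE ** n_tests
        print(f"{n_tests} test(s): wrongly pivot with probability {p_wrong_pivot:.1%}")

One re-run cuts the chance of wrongly abandoning the idea from 30% to 9%; a second cuts it below 3%.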

The lesson: Insane though it may seem, don't just pivot. Run the test again.


A rigorous treatment of this problem, sometimes called the "hot stove effect," can be found in Jerker Denrell and James March's "Adaptation as Information Restriction: The Hot Stove Effect" (Organization Science, 2001).