It is a common business frustration: however much “insight” you throw at your critical business decisions, it is hard to get the impact you really want. Somehow, additional data and analysis doesn’t deliver the business value it promises to. It is ignored or under-used. Your people don’t seem to be able to absorb it. What is going on?
The answer is fundamental to the way humans work. We are not good at making decisions that require assimilating large and varied streams of information. We have a bandwidth constraint. We tend to take shortcuts and focus on only a small subset of the data available to us. Computers do it better.
Consider this excerpt from “The Robust Beauty of Improper Linear Models in Decision Making” (Robyn M. Dawes, 1979), which addresses the relative strengths of man and machine:
“People—especially the experts in a field—are much better at selecting and coding information than they are at integrating it.
But people are important. The statistical model may integrate the information in an optimal manner, but it is always the individual (judge, clinician, subjects) who chooses variables. Moreover, it is the human judge who knows the directional relationship between the predictor variables and the criterion of interest, or who can code the variables in such a way that they have clear directional relationships. And it is in precisely the situation where the predictor variables are good and where they have a conditionally monotone relationship with the criterion that proper linear models work well.
The linear model cannot replace the expert in deciding such things as "what to look for," but it is precisely this knowledge of what to look for in reaching the decision that is the special expertise people have.
In summary, proper linear models work for a very simple reason. People are good at picking out the right predictor variables and at coding them in such a way that they have a conditionally monotone relationship with the criterion. People are bad at integrating information from diverse and incomparable sources.”
(This paper also presents evidence that even improper linear models – i.e. models whose weights are chosen by some non-optimal rule, such as equal or even random weights, rather than fitted to the data – are superior to expert intuition in many cases.)
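To make the “improper” idea concrete, here is a minimal sketch on synthetic data: a properly fitted least-squares model is compared with an improper model that simply gives every standardized predictor a unit weight. The predictor names, weights, and noise level are illustrative assumptions, not from the paper.

```python
# Sketch of an "improper" linear model: instead of fitting optimal
# regression weights, give every (standardized) predictor equal weight.
# All data here are synthetic; the coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Three predictors the "expert" selected, each monotonically related
# to the criterion (e.g., an overall applicant-quality score).
x = rng.normal(size=(n, 3))
criterion = (0.5 * x[:, 0] + 0.3 * x[:, 1] + 0.2 * x[:, 2]
             + rng.normal(scale=0.5, size=n))

# Proper model: weights fitted by least squares.
proper_w, *_ = np.linalg.lstsq(x, criterion, rcond=None)
proper_pred = x @ proper_w

# Improper model: unit weights on standardized predictors.
z = (x - x.mean(axis=0)) / x.std(axis=0)
improper_pred = z.sum(axis=1)

def validity(pred):
    """Correlation between the model's predictions and the criterion."""
    return np.corrcoef(pred, criterion)[0, 1]

print("proper:  ", round(validity(proper_pred), 2))
print("improper:", round(validity(improper_pred), 2))
```

Run on data like this, the unit-weight model’s validity typically lands close behind the fitted model’s – the humans chose the right variables and their directions; the exact weights matter less than one might expect.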
Why, in 2016, are we talking about a 1979 academic paper (from the University of Oregon) about the merits of a technique – least-squares regression – that dates back to 1805?
Because this paper sheds light on the underlying cause of the dilemma discussed at the top of this article. The best way to improve corporate decision making is to put the “science” and the algorithms first, while also recognizing that human judgment (“art”) can add value by identifying what’s most important. Once you make that leap of faith, the constraint on human bandwidth is gone, and your ability to absorb more and better insight to continually improve decision making is dramatically higher.
If “science” has long been better than “art” in terms of making decisions, then that logic is even more valid today with the power of modern machine-learning techniques and near-unlimited computing power. And executives who cling to their own sense of infallibility, as either an instinctive or logical decision maker, will lose ground to competitors who accept science as a superior starting point—especially when supplemented with art. There’s a big difference between saying, “I don’t trust machines to make decisions for me,” and saying, “I think that machine model could be tweaked a bit to provide an even more accurate decision.”
Average Is Over, in which renowned economist Tyler Cowen explains that high earners are taking ever more advantage of machine intelligence in data analysis, contains an extensive discussion of “freestyle chess.” In this variant of chess, humans can use any and all tools available – especially computers – to play the best game possible. The best teams combine very good computer models with a diverse group of people who use the computer’s analysis as a base for their moves, each bringing a different and complementary way of adding value to that algorithmic answer. Cowen notes that “man plus computer” is a stronger player than “computer alone,” at least provided the human knows what he is doing. The same is true in business.
Machines are better at assimilating large amounts of data and discriminating between different scenarios; humans are better than machines when deciding what’s important. When the “art” of human judgment is used to adjust the “science” of algorithms, their accuracy increases. The sooner you accept that, the better off you’ll be.