Recently, someone forwarded me a Live Science link. The page described how the human brain can read words and make sense of them even when the letters are jumbled, suggesting that we can decode a word as long as its first and last letters are in the right place.
Now, I had seen some version of this before and never given it much thought. But this time, it reminded me of an audiobook I had recently been listening to: “Creativity, Inc.” by Dr. Ed Catmull, president of Pixar and Walt Disney Animation Studios (great book, highly recommended). The author recounts a neuroscientist telling one of the directors that only about 40% of what we think we ‘see’ actually comes through our eyes; the rest of the detail is filled in by patterns and memories from past experience. Later, the author touches on the other end of the spectrum as well: our eyes see what they want to see and ignore the facts that do not fit a recognizable pattern (I am paraphrasing, of course). In the words of Anaïs Nin, we don’t see things as they are; we see them as we are. Catmull also mentions the human tendency toward confirmation bias, wherein we accept data that confirms our mental models and reject data that contradicts them.
Now, what does this have to do with forecasting? Here is what I have heard over the years. The science of collaborative forecasting depends on collecting forecast inputs from knowledgeable human participants (users). When asked to provide forecasting input, users have shown a preference for seeing the historical data graphically (a picture is worth a thousand words, that kind of thing). One can only imagine that a person looks at these graphs and does a best-effort job of fitting a pattern. If only 40% is based on what they are actually seeing, the rest has to come from pre-baked intuitions and biases.
Over the years of working with human forecasters, I have come to notice many such biases. Here are some that I know of:
- Optimism bias: I have seen this primarily with sales teams, who seem to have an abundance of confidence in their ability to sell and therefore inflate their numbers.
- Sandbagging bias: This is the reverse of the above. I have seen it where well-meaning executives created a system of bonuses for exceeding the forecast, and this created a culture of sandbagging: forecasting low so the targets are easy to beat.
- Anecdote bias: I have heard many instances where, regardless of what the data was telling them, client personnel were wary of trusting it on account of some very bad thing that happened in the past and became part of the company folklore. Their forecasts are therefore biased by the anecdotes.
- Recent data bias: This is probably true for all processes where humans are involved: more recent occurrences weigh heavier in our minds. In forecasting, this can cause an overreaction to recent events.
- Silly bias: In a study by Amos Tversky and Daniel Kahneman, respondents were asked to estimate the percentage of African countries in the United Nations, but were shown a number right before making their guess. On average, the estimates went up when respondents had been shown a larger number and down when they had been shown a smaller one. (This effect is formally known as anchoring.) It makes me think a forecast could be affected by silly things you saw just before you started forecasting. What if the forecaster checked the temperature and it was a hot day? Does that high number skew the forecast higher? What if they dialed a phone number composed of large digits just before forecasting?
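The anchoring effect described above is often modeled as "anchoring and adjustment": the final estimate is a blend of the respondent's own prior guess and the anchor they were shown. Here is a toy simulation of that model (all numbers and the blending weight are invented for illustration, not taken from the actual study):

```python
import random

def anchored_estimate(prior_guess, anchor, weight=0.3):
    """Anchoring-and-adjustment toy model: the final estimate is
    pulled toward the anchor by a fixed weight (assumed, illustrative)."""
    return (1 - weight) * prior_guess + weight * anchor

random.seed(42)
# Hypothetical respondents whose unanchored guesses center near 50.
priors = [random.gauss(50, 10) for _ in range(1000)]

# Same respondents, shown a low anchor vs. a high anchor.
low_mean = sum(anchored_estimate(p, anchor=10) for p in priors) / len(priors)
high_mean = sum(anchored_estimate(p, anchor=65) for p in priors) / len(priors)

print(round(low_mean, 1), round(high_mean, 1))
```

Even though both groups start from the same underlying beliefs, the group shown the larger anchor ends up with a higher average estimate, which is the pattern Tversky and Kahneman observed.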
So there are various ways a bias can creep in when we get humans involved in forecasting. In the coming weeks, we will look at some ways to mitigate these biases and get good forecast input from the human participants. This is important because, when done right, human input can improve the forecast tremendously.
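As a small taste of how such biases can be detected once actuals arrive (a minimal sketch of a standard diagnostic, not something from this post; the sample numbers are invented), a persistent optimism or sandbagging bias shows up as a mean forecast error that stays away from zero:

```python
def mean_error(forecasts, actuals):
    """Average of (forecast - actual) over matched periods.
    Persistently positive suggests optimism bias;
    persistently negative suggests sandbagging."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

# Invented example: a forecaster who consistently overshoots demand.
forecasts = [110, 120, 105, 130]
actuals = [100, 108, 100, 115]

print(mean_error(forecasts, actuals))  # positive -> optimistic forecasts
```

A single bad period proves nothing, but a mean error that stays positive (or negative) month after month is a signal that the human input is systematically biased in one direction.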