In the race for AI, some companies are pushing simulation too far


Research on the probabilistic reasoning abilities of humans has made it abundantly clear that people routinely make decisions and forecasts based on faulty mental heuristics. Because of this, predictive analytics in the age of AI has been heralded as the solution to everything from budgeting, to security, to scouting baseball players. But according to two senior data scientists — one formerly at Uber, and one at a global financial services company — AI-powered simulations can have far-reaching unintended consequences.

Both experts described behavioral forecasting models they had built that produced unintended outcomes. The two most common negative effects of predictive modeling are:

  1. Bullwhip effect
  2. Self-fulfilling prophecy

Either effect can produce results ranging from the comical to the disastrous. The two experts laid out why AI has made behavioral forecasting a dangerous beast to contend with.

Did predictive simulations exacerbate the 2008 financial crisis?

The bullwhip effect describes a phenomenon in supply chain management wherein demand forecasts inflate inventory orders with an ever-increasing degree of distortion as one moves up the supply chain. For example, while demand at the point of consumption is usually stable, the retailer may order slightly more inventory to ensure products stay in stock, which in turn affects the manufacturers of the product, the shipping services, and everything in between. The further away from the consumer purchase you get, the more distorted the supply chain looks. In this analogy, the tip of the whip would be the very first parts manufacturer in the supply chain.

This may not seem that disastrous. But according to both experts, this same dynamic contributed to, exacerbated, and prolonged the financial crisis of 2008.

In a bullwhip simulation model, each link in the chain forecasts its inventory from the orders of its direct customer, not from demand at the end of the chain. So imagine that consumer demand drops by 10%: the manufacturer decreases its own production by more than 10% in order to draw its inventory down to the lower demand level. Based on this, suppliers even further up cut their orders by even more, resulting in an exaggerated demand decrease through the supply chain, as the sketch below makes concrete.
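To see the mechanism in miniature, here is a minimal simulation sketch. The tier count, safety-stock factor, smoothing constant, and demand series are illustrative assumptions, not figures from either expert's model; the essential point is that each tier forecasts only from its direct customer's orders.

```python
def simulate_bullwhip(consumer_demand, n_tiers=4, safety_factor=1.2, alpha=0.5):
    """Propagate orders up a supply chain where each tier sees only the
    orders placed by the tier directly below it, never end demand."""
    orders = list(consumer_demand)            # tier 0: point of consumption
    history = [orders]
    for _ in range(n_tiers):
        forecast = orders[0]                  # initialize from first observation
        prev_target = forecast * safety_factor
        upstream = []
        for demand in orders:
            # Exponential smoothing on downstream *orders*, not end demand.
            forecast = alpha * demand + (1 - alpha) * forecast
            target_stock = forecast * safety_factor
            # Order enough to cover demand plus the change in target stock.
            order = max(0.0, demand + (target_stock - prev_target))
            prev_target = target_stock
            upstream.append(order)
        orders = upstream
        history.append(orders)
    return history

# A 10% drop in consumer demand at week 5, flat otherwise.
demand = [100.0] * 5 + [90.0] * 10
for tier, series in enumerate(simulate_bullwhip(demand)):
    print(f"tier {tier}: min order {min(series):.1f}, "
          f"swing {max(series) - min(series):.1f}")
```

Running the sketch shows the peak-to-trough swing in orders growing at every tier, even though end demand only moved 10%. That widening swing is the distortion the experts describe.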

During the financial crisis, many industrial companies reduced their inventories dramatically in order to reduce working capital. While demand declined 12%, manufacturers reduced inventory by 15%, and imports went down over 30%. This effect exacerbated and prolonged the recession, as there was a significant lag between demand and supply when the economy began to recover.

“The bullwhip effect should have actually stopped during the recession,” explained the global finance expert. “Everyone knew that consumer demand was decreasing. But in fact, while consumer demand volatility tripled during the recession, retailer inventories decreased 4% in the same time period — indicating that manufacturers were responding to micro signals downstream instead of macro trends.”

According to the former Uber data scientist, the disconnect between macro and micro trends can largely be attributed to automated predictive simulation. Because much of the forecasting on which supply and demand decisions rest is completely automated, the upstream reverberations of acting on micro-signal forecasts didn't reveal themselves to humans until it was too late.

When simulations create their own reality

The other potential effect of behavioral simulations is the self-fulfilling prophecy. This often occurs in budgeting: the budgeting entity anticipates certain behaviors from the company, which in turn affects how the company spends. The cycle becomes self-referential: one entity predicts a behavior, and the prediction then causes that behavior to occur. Finance offers a clean example, where forecasting stock prices necessarily affects stock prices, because people act on the forecasts.
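A toy loop makes that feedback visible. The 2% prediction rule and the one-half response fraction below are illustrative assumptions, not anything from the experts' systems:

```python
# Illustrative self-fulfilling forecast: the model predicts a 2% rise,
# people act on the prediction, and acting moves the price toward it,
# so every subsequent forecast starts from the level the last one caused.
price = 100.0
for step in range(5):
    forecast = price * 1.02              # model: "price will rise 2%"
    price += 0.5 * (forecast - price)    # buying on the forecast moves the price
    print(f"step {step}: forecast {forecast:.2f} -> price {price:.2f}")
```

The price climbs steadily even though nothing but the forecast itself is driving it, which is exactly the self-referential cycle described above.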

Again, this phenomenon predates AI — self-fulfilling prophecies have appeared in business budgets for decades — but predictive simulations have exacerbated it because there's less oversight.

“I definitely have had that happen before,” confessed the former Uber data scientist. “And sometimes I wouldn’t realize that what my algorithm said would happen was happening because my algorithm said it would happen until the cycle had gone on for some time.”

The bright side of behavioral simulations

Every time you base decisions on expected behaviors, you introduce potentially circular effects, because you've removed yourself one degree from reality. But according to both data scientists, there are ways to protect against the bullwhip effect and self-fulfilling prophecies. The trick is anticipating them from the genesis of the model. AI won't replace humans in forecasting, primarily because the experts for whom the models are designed know what information belongs in the predictive model. Furthermore, these experts can judge whether historical data is actually any indication of future data, which holds only if the forecast itself doesn't affect the outcome. In other words, experts exist precisely to protect against behavioral simulation pitfalls such as the ones outlined in this article.
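One way to wire that expert protection into an automated pipeline is sketched below. The function name, thresholds, and numbers are assumptions for illustration, not either expert's actual system: the idea is simply to check each micro-signal forecast against the macro end-of-chain trend and escalate large divergences to a human instead of executing them.

```python
def guarded_forecast(order_forecast, last_order, end_demand_change, tolerance=0.10):
    """Flag forecasts whose implied change diverges from end-consumer
    demand by more than the tolerance, for human review."""
    implied_change = (order_forecast - last_order) / last_order
    needs_review = abs(implied_change - end_demand_change) > tolerance
    return order_forecast, needs_review

# End demand fell 10%, but the micro-signal model wants a 30% cut:
# escalate to an expert instead of executing automatically.
forecast, review = guarded_forecast(order_forecast=70.0, last_order=100.0,
                                    end_demand_change=-0.10)
print(f"forecast {forecast:.0f}, escalate for review: {review}")
```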

Behavioral simulations are, in general, significantly more accurate than humans are at predicting behavior — but sometimes it takes a human to see the whole picture.

