Managing AI Decision-Making Tools

by Bloomberg Stocks

Your business’s use of AI is only going to increase, and that’s a good thing. Digitalization allows businesses to operate at an atomic level and make millions of decisions each day about a single customer, product, supplier, asset, or transaction. But these decisions cannot be made by humans working in a spreadsheet.

We call these granular, AI-powered decisions “micro-decisions” (borrowed from Taylor and Raden’s “Smart Enough Systems”). They require a complete paradigm shift, a move from making decisions to making “decisions about decisions.” You must manage at a new level of abstraction through rules, parameters, and algorithms. This shift is happening across every industry and across all kinds of decision-making. In this article we propose a framework for how to think about these decisions and how to determine the optimal management model.

Micro-Decisions Require Automation

The nature of micro-decisions requires some level of automation, particularly for real-time and higher-volume decisions. Automation is enabled by algorithms (the rules, predictions, constraints, and logic that determine how a micro-decision is made). And these decision-making algorithms are often described as artificial intelligence (AI). The critical question is, how do human managers manage these types of algorithm-powered systems?

An autonomous system is conceptually very easy. Imagine a driverless car without a steering wheel. The driver simply tells the car where to go and hopes for the best. But the moment there’s a steering wheel, you have a problem. You must inform the driver when they might want to intervene, how they can intervene, and how much notice you will give them when the need to intervene arises. You must think carefully about the information you will present to the driver to help them make an appropriate intervention.

The same is true for any micro-decision. The moment there is a human involved, you need to think carefully about how to design a decision system that enables the human to have a meaningful interaction with the machine.

The four main management models we developed vary based on the level and nature of the human intervention: We call them HITL, HITLFE, HOTL, HOOTL. It’s important to recognize this is a spectrum, and while we have pulled out the key management models, there are sub-variants based on the split between human and machine, and the level of management abstraction at which the human engages with the system.

The Range of Management Options

Human in the loop (HITL): A human is assisted by a machine. In this model, the human is doing the decision making and the machine is providing only decision support or partial automation of some decisions, or parts of decisions. This is often referred to as intelligence amplification (IA).

Collecting and disposing of waste and recycling is a complex business in which everything from the weather and local noise ordinances to parking lot layouts, gate locks, recycling types, dump locations, driver availability, and truck capabilities plays a role in an efficient operation. A Fortune 500 company is investing heavily in using AI to improve its operations, recognizing that the value of AI often comes from helping humans do their jobs better. One example is helping dispatchers handle tickets and routes more effectively. Many things can prevent a smooth service event: the need for a specific key or code, time windows in which pick-up is or is not possible, width and length restrictions, instructions for getting things moved or opened, temporary construction, and much more.

A recently developed bot crawls through all the tickets and requests in multiple systems to identify anything that might impact a particular stop and brings it to the dispatcher's attention. It proactively identifies all the possible issues for the route as currently set up (and re-does this when stops are added, moved, or removed during the day) and can be used reactively by dispatchers as they work to find the best way to add requests to in-flight routes. The human dispatcher monitors the system, which frees up 20-25% of their day by automating thousands of decisions about service tickets.
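To make the pattern concrete, here is a minimal sketch of the HITL idea in Python. The stop fields, keywords, and rules are our own illustrative assumptions, not the company's actual system; the point is that the machine surfaces issues while the dispatcher still makes the decision.

```python
# Illustrative HITL sketch: the machine flags possible problems, the human decides.
# All field names, keywords, and rules below are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Stop:
    stop_id: str
    notes: list[str]          # free-text notes pulled from the ticket systems
    window: tuple[int, int]   # earliest and latest pickup hour
    truck_width_ok: bool      # does the assigned truck fit the site?

ISSUE_KEYWORDS = ("gate code", "key required", "construction", "locked")

def flag_issues(stops: list[Stop], planned_hour: int) -> dict[str, list[str]]:
    """Collect potential problems per stop and surface them to the dispatcher.
    The human still decides what, if anything, to change on the route."""
    flags: dict[str, list[str]] = {}
    for stop in stops:
        issues = [n for n in stop.notes
                  if any(k in n.lower() for k in ISSUE_KEYWORDS)]
        if not (stop.window[0] <= planned_hour <= stop.window[1]):
            issues.append(f"planned arrival {planned_hour}:00 is outside the pickup window")
        if not stop.truck_width_ok:
            issues.append("assigned truck exceeds the width restriction")
        if issues:
            flags[stop.stop_id] = issues
    return flags  # shown on the dispatcher's screen; nothing is rerouted automatically
```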

Human in the loop for exceptions (HITLFE): Most decisions are automated in this model, and the human only handles exceptions. For the exceptions, the system requires some judgment or input from the human before it can make the decision, though it is unlikely to ask the human to make the whole decision. Humans also control the logic to determine which exceptions are flagged for review.

A beauty brand developed a machine learning (ML) algorithm to predict the sales uplift for different types of promotion, replacing an existing human-powered approach. The ML prediction took account of factors such as the offer, marketing support, seasonality, and cannibalization to create an automated forecast. For many promotions the ML prediction worked well, but managers lost confidence when initial success was quickly followed by some extreme failures that resulted in significant lost sales. When the data scientists reviewed the predictions, they found that the ML algorithm struggled with certain types of promotion. Rather than abandoning the project, they developed a HITLFE approach. The key was to codify the machine's level of confidence in its predictions and have humans review predictions, on an exception basis, where the machine had low confidence.
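A minimal sketch of that exception routing, assuming the model exposes a confidence score with each prediction; the threshold and field names are illustrative, not the brand's actual implementation.

```python
# Illustrative HITLFE sketch: accept high-confidence forecasts automatically,
# queue the rest for human review. Threshold and fields are assumptions.
CONFIDENCE_THRESHOLD = 0.8   # set and tuned by the humans who own the process

def route_forecast(promotion_id: str, predicted_uplift: float, confidence: float) -> dict:
    """Decide automatically when confident; otherwise flag an exception."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"promotion": promotion_id, "uplift": predicted_uplift,
                "decided_by": "machine"}
    # Low confidence: the machine proposes, a planner confirms or adjusts.
    return {"promotion": promotion_id, "proposed_uplift": predicted_uplift,
            "decided_by": "pending_human_review"}
```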

Human on the loop (HOTL): Here, the machine is assisted by a human. The machine makes the micro-decisions, but the human reviews the decision outcomes and can adjust rules and parameters for future decisions. In a more advanced set-up, the machine also recommends parameters or rule changes that are then approved by a human.

A European food delivery business needed to manage its fleet of cyclists and used a spreadsheet to plan the number of "delivery slots" required over the next hour, day, and week. It then deployed various incentives, for example increasing the per-delivery rate, to match rider supply with expected demand. This was a highly manual and imprecise process, and the company decided to develop a completely automated system to test against the manual approach. The results were interesting: sometimes the humans performed better, sometimes the machine did. They realized that they had mis-framed the problem. The real question was how to get the humans and machines to collaborate. This led to a second approach in which, rather than the humans managing at the individual rider level, they designed a set of control parameters that allowed managers to make trade-offs among risk, cost, and service. This approach acknowledged the dynamic nature of the system, the need to make trade-offs that might change over time, and the critical need to keep the jobs interesting!
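Purely as an illustration of the pattern, the sketch below assumes managers set a handful of control parameters and the machine then makes every per-slot incentive decision within them; the parameter names and pricing rule are our own, not the delivery company's.

```python
# Illustrative HOTL sketch: humans adjust a few control parameters, the machine
# makes every per-slot incentive decision within them. Names and the pricing
# rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ControlParams:            # set and tuned by managers, not per delivery
    max_rate_multiplier: float  # cost ceiling on surge incentives
    target_fill_rate: float     # fraction of expected demand to cover, e.g. 1.1
    risk_buffer_slots: int      # extra slots held back against demand spikes

def incentive_for_slot(expected_demand: int, signed_up_riders: int,
                       params: ControlParams) -> float:
    """Machine-made micro-decision: choose a per-delivery rate multiplier."""
    needed = expected_demand * params.target_fill_rate + params.risk_buffer_slots
    shortfall = max(0.0, 1.0 - signed_up_riders / max(needed, 1.0))
    # Raise the rate in proportion to the shortfall, capped by the human-set ceiling.
    return min(1.0 + shortfall, params.max_rate_multiplier)
```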

Human Out of the Loop (HOOTL): In this model, the machine is monitored by the human. The machine makes every decision, and the human intervenes only by setting new constraints and objectives. Improvement is also an automated closed loop. Adjustments, based on feedback from humans, are automated.

The Mayflower Autonomous Ship is exploring the world's oceans using radar, GPS, AI-powered cameras, dozens of sensors, and multiple edge computing devices. But it does not have a crew. With humans completely out of the loop, the Mayflower must sense its environment, predict courses, identify hazards, apply collision regulations, and obey the rules of the sea. Its AI Captain does this autonomously, working toward the goals set in advance by the humans in charge of the project. The humans, back onshore, simply tell it where to go.
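In code, the closed-loop idea might look like the toy sketch below: the human sets the objective and a hard constraint once, and the machine both decides and adjusts itself from feedback. The target, constraint, and update rule are invented for illustration and are not the Mayflower's actual AI Captain.

```python
# Illustrative HOOTL sketch: humans set an objective and a constraint up front;
# the machine decides and self-corrects in a fully automated loop.
# All numbers and the update rule are assumptions for illustration.
HUMAN_SET_TARGET_SPEED = 6.0   # knots: the objective, set in advance by humans
HUMAN_SET_MAX_SPEED = 8.0      # knots: a hard constraint the machine may never exceed

def autonomous_speed_loop(measured_speeds: list[float]) -> list[float]:
    """The machine picks a speed setting at each step and self-corrects from
    feedback; no human reviews the individual decisions."""
    throttle, decisions = 0.5, []
    for measured in measured_speeds:
        error = HUMAN_SET_TARGET_SPEED - measured
        throttle = max(0.0, min(1.0, throttle + 0.05 * error))   # automated adjustment
        decisions.append(min(throttle * 10.0, HUMAN_SET_MAX_SPEED))
    return decisions
```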

What Can Go Wrong

A U.S. travel business implemented a completely automated HOOTL system for keyword marketing on Google. The marketing team could input a budget and objective, and the system then automatically determined the optimal allocation of spend and bidding logic across millions of keywords. The system worked well at first and delivered both efficiency gains and improved results. However, when the system started performing less well, the team was unable to explain why or take any corrective action. It was a fully black-box system built on proprietary algorithms, unmanageable in practice, and the team went back to its old rules-based system.

If performance improves (even for a time), managers are happy, but if the decisions start performing poorly, it is an extremely complex task to unravel which element of the new process is to blame. For example: an algorithmic decision may be too opaque to pass regulatory scrutiny or to be explained to unhappy customers. Automated changes to the algorithm, in response to feedback collected by the algorithm, may create a feedback loop in which the algorithm spins off course. Far too many decisions may be referred for manual review, sharply limiting the value of the algorithm. Or human involvement may be pitched at the wrong level, causing the algorithm to be sidelined by its human users.

Part of the solution is picking the right model for human engagement for a given decision. In addition, every micro-decision-making system should be monitored, regardless of how much human involvement there is. Monitoring ensures the decision-making is "good," or at least fit for purpose, now, while also creating the data needed to spot problems and systematically improve decision-making over time. It's also critical that you measure decision-making effectiveness: capture at least two metrics, because no real-world business decision can be optimized against a single metric; there is always a trade-off. Additionally, you should always capture information about how the system made the decision, not just the decision itself. This allows both the effective explanation of "bad" decisions and the matching of suboptimal outcomes to the specifics of how the decision was made. Finally, you should track the business outcome and map it back to how decisions were made.
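As a minimal sketch of what such monitoring could capture, the hypothetical logging helper below records the inputs, how the decision was made, at least two effectiveness metrics, and (when it arrives) the business outcome; the field names are assumptions, not a prescribed schema.

```python
# Illustrative monitoring sketch: log the decision, the way it was made, two or
# more effectiveness metrics, and the eventual outcome. Fields are assumptions.
import datetime
import json

def log_decision(decision_id, inputs, model_version, decision, metrics, outcome=None):
    """Record the decision and the way it was made, so "bad" decisions can be
    explained later and mapped back to business outcomes."""
    record = {
        "decision_id": decision_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,                # what the system saw
        "how": model_version,            # which model or rule version made the call
        "decision": decision,            # what it decided
        "metrics": metrics,              # e.g. {"expected_margin": 0.12, "service_risk": 0.05}
        "outcome": outcome,              # joined back to the decision once it is known
    }
    print(json.dumps(record))            # stand-in for a real decision log store
    return record
```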

Deciding Which Model is Right for You

It’s important to recognize that these systems will evolve over time, enabled by new technology, an organization’s desire to make ever more surgical decisions, and greater management confidence in automation. You must decide what level of human management is possible and desirable, given your appetite for risk and iteration. There is no single correct answer.

Whichever model you adopt, we believe it’s critical to put the AI on the org chart and in the process design to ensure that human managers feel responsible for its output. The need for more autonomous systems, consumer demand for instant responses, real-time coordination of supply chains and remote, automated environments are all combining to make increased AI use within your organization an inevitability. These systems will be making increasingly fine-grained micro-decisions on your behalf, impacting your customers, your employees, your partners, and your suppliers. To succeed, you need to understand the different ways you can interact with AI and pick the right management option for each of your AI systems.
