

Automation Complacency: How to Put Humans Back in the Loop

In a dramatic turn of events, robotaxis – self-driving vehicles that pick up fares with no human operator – were recently unleashed in San Francisco. After a contentious seven-hour public hearing, the California Public Utilities Commission drove the decision home. Despite protests, there’s a sense of inevitability in the air. California has been gradually loosening restrictions since early 2022, and the new rules allow the two companies with permits – Alphabet’s Waymo and GM’s Cruise – to send these taxis anywhere within the 49-square-mile city except highways, and to charge fares to riders.

The idea of self-driving taxis tends to bring up two conflicting emotions: excitement (“taxis at a much lower cost!”) and fear (“will they hit me or my kids?”). Thus, regulators often require that the cars be tested with human operators aboard who can intervene and take the controls before an accident occurs. Unfortunately, having humans on the alert, ready to override systems in real time, may not be the best way to assure safety.

In fact, all 18 of the deaths in the U.S. associated with self-driving car crashes (as of February of this year) involved some form of human control, either in the car or remotely. That includes one of the most notorious: late at night on a wide suburban road in Tempe, Arizona, in 2018, an automated Uber test vehicle killed a 49-year-old woman named Elaine Herzberg as she walked her bike across the road. The human operator behind the wheel was looking down, and the car didn’t alert them until less than a second before impact. They grabbed the wheel too late. The accident led Uber to suspend its testing of self-driving cars; ultimately, it sold its automated vehicles division, which had been a key part of its business strategy.

The operator was charged with negligent homicide – a consequence of automation complacency, a phenomenon first identified in the earliest days of pilot flight training. Overconfidence is a frequent dynamic with AI systems: the more autonomous the system, the more human operators tend to trust it and stop paying full attention. We get bored watching over these technologies. When an accident is actually about to happen, we don’t expect it, and we don’t react in time.

Humans are naturals at what risk expert Ron Dembo calls “risk thinking” – a way of thinking that even the most sophisticated machine learning cannot yet emulate. This is the ability to recognize, when the answer isn’t obvious, that we should slow down or stop. Risk thinking is critical for automated systems, and that creates a dilemma: humans want to be in the loop, but putting us in control when we rely so complacently on automated systems may actually make things worse.

How, then, can the developers of automated systems solve this dilemma so that experiments like the one taking place in San Francisco end well? The answer is extra diligence not just before the moment of impact but at the early stages of design and development. All AI systems involve risks when they are left unchecked, and self-driving cars will not be free of risk, even if they turn out to be safer, on average, than human-driven cars.

The Uber accident shows what happens when we don’t risk-think with intentionality. Doing so requires creative friction: bringing multiple human perspectives into play long before these systems are released. In other words, thinking through the implications of AI systems, rather than just the applications, requires the perspective of the communities that will be directly affected by the technology.

Waymo and Cruise have both defended the safety records of their vehicles on the grounds of statistical probability. Nonetheless, this decision turns San Francisco into a living experiment. When the outcomes are tallied, it will be extremely important to capture the right data, to share the successes and the failures, and to let the affected communities weigh in along with the specialists, the politicians, and the businesspeople. In other words: keep all the humans in the loop. Otherwise, we risk automation complacency – the willingness to delegate decision-making to AI systems – at a very large scale.

Juliette Powell and Art Kleiner are co-authors of the new book The AI Dilemma: 7 Principles for Responsible Technology.

Juliette Powell is an author, a television creator with 9,000 live shows under her belt, and a technologist and sociologist. She is also a commentator on Bloomberg TV/Business News Networks and a speaker at conferences organized by the Economist and the International Finance Corporation. Her TED talk has 130,000 views on YouTube.

Art Kleiner is a writer, editor and futurist. His books include The Age of Heretics; Who Really Matters: The Core Group Theory of Power, Privilege, and Success; and The Wise Advocate. He was the editor of strategy+business, the award-winning magazine published by PwC.