
US Air Force says it did not run simulation in which AI drone 'killed its operator'

WASHINGTON — The Air Force on Friday denied staging a simulation with an AI-controlled drone in which the artificial intelligence turned on its operator and attacked to achieve its goal.

The story mushroomed on social media based on apparently misinterpreted comments from an Air Force colonel at a seminar in London last month. Col. Tucker Hamilton, an experimental fighter test pilot, had described an exercise in which an AI-controlled drone was programmed to destroy enemy air defenses. When the operator ordered it to ignore a target, the drone attacked the operator for interfering with its primary goal.

The apocalyptic theme of machines turning on humans and becoming autonomous killers coincided with increasing concern about the danger of artificial intelligence. On Thursday, President Joe Biden warned that AI "could overtake human thinking."

However, Hamilton was only describing a hypothetical scenario to illustrate the potential hazards of artificial intelligence, according to the Air Force, which said it never conducted such a drone simulation.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Ann Stefanek, an Air Force spokesperson, said in a statement.


Hamilton’s comments appear to have been taken out of context, Stefanek said.

The fighter test pilot subsequently clarified in a statement that he "mis-spoke" in his presentation at the London summit and that the "rogue AI drone simulation" was a hypothetical "thought experiment."

AI's rapid rise and increased accessibility have elicited concern even from some of the technologists who helped develop it, such as Geoffrey Hinton, a renowned British researcher and one of the so-called Godfathers of AI.


An Air Force MQ-9 Reaper drone.

In a recent conference appearance, a panel moderator asked Hinton to describe the "worst case scenario that you think is conceivable" for AI. Hinton replied without hesitation.

“I think it's quite conceivable," he said, "that humanity is just a passing phase in the evolution of intelligence.”

Canadian Yoshua Bengio, another computer scientist often described as a "Godfather of AI," told the BBC this week that he did not think the military should be allowed to use AI's powers at all. He said it was one of the "worst places where we could put a super-intelligent AI" and that AI safety should be prioritized over usefulness.

Contributing: Josh Meyer
