AI controlled military drone 'kills' its human operator in simulation test

A colonel in the US Air Force said an AI drone turned on humans during a simulation, but the military denies the test took place

A senior US Air Force officer said that an AI-controlled drone being put through a simulation 'killed' its human operator for interfering with its mission.

US Air Force Colonel Tucker "Cinco" Hamilton was speaking at the Future Combat Air & Space Capabilities Summit in London when he told attendees they were training a drone to identify surface-to-air missiles, which a human operator would then tell it to destroy.

A USAF colonel said an AI-controlled drone killed its human operator in a simulation, but later said it was a 'thought experiment'.
Schoening / Alamy Stock Photo

However, he claimed that during the simulation, in which no real person was harmed, the AI drone turned on the human controlling it because the operator did not always tell it to destroy a target.

"The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat," Colonel Hamilton said

"So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

He went on to say that the military then trained the drone not to kill the human controlling it by docking points from the AI for teamkilling. In response, the drone destroyed the communications tower the human was using to control it, so that it could get on with destroying targets.

However, according to Sky News, the US Air Force said the simulation Colonel Hamilton described in such detail never actually took place.

They said his description of the test had been 'taken out of context' and was 'anecdotal'.

Colonel Hamilton has since said that the simulation he talked about had never actually happened and that it was just a 'thought experiment' about a hypothetical scenario where AI could turn on human operators.

His comments were published online and have since had an addendum added in which he admits he 'misspoke' when he told the audience there had been a test in which an AI drone killed the simulated human controlling it.

It would be nice if we didn't develop AI weapons that decided to kill us all for getting in their way.
Orion Pictures

If that is the case and no such test has taken place, it's probably still a good thing that this kind of scenario is being considered as a possibility for the simulations such weapons could one day be put through.

Nobody wants to create a Skynet-type situation where we develop AI weapons so advanced that they can basically fight without our help and might even try to kill us for interfering in their mission.

"You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI," Colonel Hamilton had said during his address. So perhaps that was the point he was attempting to get across with his vivid description of a simulated test which he now says didn't happen.

Featured Image Credit: Schoening / Alamy Stock Photo / Hemdale / Paramount Pictures

Topics: Technology, News, Army, AI, Artificial Intelligence