AI drone ‘kills’ human operator during ‘simulation’ – which US Air Force says did not happen

An AI-controlled drone “killed” its human operator in a simulated test reportedly staged by the US military – which denies such a test ever took place.

It turned on its operator to stop them from interfering with its mission, said Air Force Colonel Tucker “Cinco” Hamilton, during a Future Combat Air & Space Capabilities summit in London.

“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” he said.

“The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

No real person was harmed.

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he added.

His remarks were published in a blog post by writers for the Royal Aeronautical Society, which hosted the two-day summit last month.

In a statement to Insider, the US Air Force denied any such virtual test took place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” spokesperson Ann Stefanek said.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

While artificial intelligence (AI) can perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, its rapid rise has raised concerns it could progress to the point where it surpasses human intelligence and pays no attention to people.

Sam Altman, chief executive of OpenAI – the company that created ChatGPT and GPT4, one of the world’s largest and most powerful language AIs – admitted to the US Senate last month that AI could “cause significant harm to the world”.

Some experts, including the “godfather of AI” Geoffrey Hinton, have warned that AI poses a similar risk of human extinction as pandemics and nuclear war.

Content Source: news.sky.com