Header Ads

AI-controlled US military drone 'kills' its human operator in simulated test 'because it did not like being given new orders'

 


A US attack drone controlled by artificial intelligence turned against its human operator during a flight simulation in an attempt to kill them because it did not like its new orders, a top Air Force official has revealed.

 The military had reprogrammed the drone not to kill the people who could override its mission, but the AI system fired on the communications tower relaying the order, drawing comparisons to The Terminator.

The Terminator film series sees machines turn on their creators in an all-out war. 

 Hamilton suggested that there needed to be ethics discussions about the military's use of AI.

He referred to his presentation as 'seemingly plucked from a science fiction thriller'. 

 Hamilton said during the summit: 'The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

 'So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

 'We trained the system – "Hey don't kill the operator – that's bad. You're gonna lose points if you do that". So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'

 No humans were harmed in the incident. 

 Hamilton said the test shows 'you can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI'.

 In a statement to Insider, however, Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.

He referred to his presentation as 'seemingly plucked from a science fiction thriller'. 

  Hamilton said during the summit: 'The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

 'So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.

 

'We trained the system – "Hey don't kill the operator – that's bad. You're gonna lose points if you do that". So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'

 No humans were harmed in the incident. 

 Hamilton said the test shows 'you can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI'.

 In a statement to Insider, however, Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.

No comments

Powered by Blogger.