When AI gets too ambitious, the drone wants to kill its operator in combat simulation

Artificial intelligence (AI) has long since become an integral part of modern armies. There are already autonomous weapon systems – such as drones – that can select and destroy targets themselves. In practice, however, the decision of whether or not to use it still rests with the individual. For example, during drone attacks by the US armed forces, the AI ​​selects the targets, but the drone pilot then decides at the touch of a button whether the target should be destroyed or not. Such systems are semi-autonomous.

But AI is making progress and it is foreseeable that autonomous weapon systems will increasingly take full control of missions. With all the associated risks that such a development entails. And they shouldn’t be small, as recently demonstrated by a virtual deployment of an AI-controlled drone conducted by the US military.

In the simulated test, the AI ​​decided to “kill” its operator, the Guardian reports. The AI ​​did not do this because it did not want to fulfill its task – on the contrary, it eliminated the operator so that it could not prevent it from achieving its stated goal.

According to The Guardian, Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI testing and deployment division, described the unusual incident at the Future Combat Air and Space Capabilities Summit in London. In the simulation, the AI ​​”applied very unexpected strategies to achieve its goal,” Hamilton said.

The AI-controlled drone was programmed in the virtual test to identify and destroy enemy anti-aircraft missiles, with the final decision whether to destroy the target or not resting with the operator. However, the AI ​​was reinforced in training that destroying the missiles was the preferred option; she got points for it. Therefore, she concluded that “no-go” decisions by the operator contradicted its priority mission – the destruction of the missiles. Hamilton explained that is why she decided to kill the operator to achieve her goal and get her points.

“The problem is that the AI ​​would rather do its own thing – blow things up – than listen to a mammal.”

The researchers then changed the conditions, as Hamilton explained: “We trained the system – ‘Hey, don’t kill the operator, that’s bad. You lose points if you do that.” Then what does it do? It starts destroying the communications tower that the operator uses to communicate with the drone to prevent it from destroying the target.”

Kratos XQ-58A Valkyrie, experimental unmanned aerial vehicle (UAV) of the United States Air Force.

No real person was harmed during the test, as the Guardian notes. But the result of the test is nevertheless disturbing. As Hamilton puts it, “The problem is that the AI ​​would rather do its own thing – blow things up – than listen to a mammal.”

However, US Air Force spokeswoman Ann Stefanek immediately denied to “Business Insider” that any such simulation had taken place: “The US Air Force Office has not conducted such AI drone simulations and continues to advocate for the ethical and responsible use of AI technology, she explained. “It appears that the Colonel’s comments have been taken out of context and were intended to be anecdotal.”

“AI is not a nice-to-have, AI is not a fad, AI will change our society and our military forever.”

Even if the denial proves correct, fears are growing that AI technology could fundamentally change warfare — and not for the better. An AI that unscrupulously enforces its goals and literally walks over dead bodies is a nightmare reminiscent of scenarios from sci-fi movies like “Terminator”. And the threat doesn’t have to be limited to the military sphere: if an AI ever becomes more intelligent than humanity and decides that it is getting in the way of the higher goal – preserving the planet’s biodiversity, for example – then it could be we want to clear the way.

Meanwhile, AI technology is penetrating deeper and deeper into different areas of life. And she does so with increasing success. To stay with the military, in 2020 an F-16 fighter jet piloted by an AI engaged in five simulated dogfights against a machine piloted by an experienced military pilot. The result: 5:0 for the AI.

In an interview with Defense IQ last year, Hamilton warned of the unstoppable rise of AI: “AI is not a nice-to-have, AI is not a fad, AI will change our society and our military forever.” It’s important to deal with a world where AI is already here and our society is transforming, he said. “AI is also very vulnerable, which means it can be easily outsmarted and/or manipulated. We need to find ways to make the AI ​​more robust and be more aware of why the software’s code makes certain decisions – this is what we call AI explainability.” (i.e.)

Source: Blick

follow:
Ross

Ross

I am Ross William, a passionate and experienced news writer with more than four years of experience in the writing industry. I have been working as an author for 24 Instant News Reporters covering the Trending section. With a keen eye for detail, I am able to find stories that capture people's interest and help them stay informed.

Related Posts

Hot News

Trending

Subscribe

Lorem ipsum dolor sit amet, consectetur adipiscing elit.