Killing machines like those in the movie “Terminator” are a real threat: a single person could control thousands of killer robots and achieve an effect similar to that of a weapon of mass destruction. It is not for nothing that the Swiss army founded the Swiss Drone and Robotics Center (SDRZ) in 2017. The center’s task is to investigate how unmanned systems could endanger Switzerland.
The head of the SDRZ, Kai Holtmann (33), says: “Military robots will revolutionize warfare.” As an example, he mentions fist-sized drones with facial recognition, so-called slaughterbots, which could kill specific individuals by flying into their heads and exploding. No such slaughterbots exist yet. But Holtmann says: “The technology is evolving rapidly and can pose a threat.”
The Swiss military already uses drones and robots for reconnaissance and surveillance. It also focuses on robotics research for disaster relief: it is experimenting with a snake robot, for example, to find people buried after an earthquake. Military robotics is still in its infancy. But that could soon change.
Error-prone robots in action
“In a few years, the military will use AI systems,” says Thomas Burri (46), professor of international and European law at the University of St. Gallen. As an expert on ethical and legal issues, Burri advises the SDRZ. He says: “Even a robot that does not work properly can be used in civil defense.” The reason, he explains, is the overriding interest in saving lives. Had an AI search robot existed, it would have been used to track down the submersible that was lost on a dive to the wreck of the Titanic. “Better an imperfect robot than nothing,” says Burri.
According to the professor, however, there is also a big problem: who takes responsibility if the robot injures a human during a search? The behavior of an autonomous robot is never completely predictable, because an AI typically works with a neural network whose inner workings remain hidden from humans. An AI search robot could therefore carry an unwanted behavior pattern that no one knows about until it is too late. “If a person is harmed by this, no one wants to take responsibility,” says Burri.
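The opacity Burri describes is easy to demonstrate. What follows is a minimal sketch with entirely synthetic data and a hypothetical toy network (nothing the SDRZ actually uses): even after a small neural network has learned a simple rule perfectly, its “knowledge” is spread across weight matrices that say nothing readable about when it might misbehave.

```python
# Minimal sketch of neural-network opacity. Everything here is synthetic
# and purely illustrative; it is not related to any real military system.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # 500 made-up "sensor readings"
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # the hidden rule the net must learn

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)
print("training accuracy:", net.score(X, y))

# The learned behavior lives in raw weight matrices. Nothing in these
# numbers says "I react to features 0 and 3", let alone what the network
# will do on rare inputs it never saw during training.
for i, w in enumerate(net.coefs_):
    print(f"layer {i}: weight matrix {w.shape}, first values {w.flat[:3]}")
```

Auditing such a model means probing it with test inputs; reading the weights directly tells an inspector almost nothing about hidden failure modes.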
A search robot can discriminate
Another problem is bias. An AI is trained with data from the real world. If, say, only 10 percent of the people in that data have dark skin, the AI inherits this imbalance: an autonomous search robot would recognize, and therefore rescue, light-skinned people more reliably. Dark-skinned people would be discriminated against.
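The effect can be reproduced in a few lines. The following sketch is entirely synthetic: invented feature vectors stand in for camera images, and the 90/10 split is an assumption for illustration only. A detector trained on data where one group is underrepresented ends up finding that group less reliably.

```python
# Synthetic illustration of training-data bias. The "features" are random
# numbers standing in for image data; no real populations are modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def persons(n, center):
    # made-up image features for people of one (hypothetical) group
    return rng.normal(loc=center, scale=1.0, size=(n, 5))

def background(n):
    return rng.normal(loc=0.0, scale=1.0, size=(n, 5))

# Training set: 900 group-A persons, only 100 group-B persons, 1000 background.
X_train = np.vstack([persons(900, 2.0), persons(100, -2.0), background(1000)])
y_train = np.array([1] * 1000 + [0] * 1000)
detector = LogisticRegression().fit(X_train, y_train)

# Recall per group: the share of people the detector actually finds.
for name, center in [("group A", 2.0), ("group B", -2.0)]:
    found = detector.predict(persons(500, center)).mean()
    print(f"{name} recall: {found:.2f}")
```

On this toy data the underrepresented group is found far less often; the standard countermeasure is to balance or reweight the training data before deployment.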
Another issue concerns cybersecurity. If a hacker breaks into an AI system, he can feed it manipulated data. In this way, an autonomous car could be tricked into registering a wall as an exit.
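A data-poisoning attack of this kind can also be sketched in a few lines. Again everything is synthetic and hypothetical: random vectors stand in for a car’s sensor readings, and the “attack” simply injects mislabeled training samples so the model learns to classify a wall-like pattern as free passage.

```python
# Synthetic sketch of training-data poisoning. Random vectors stand in for
# sensor readings; this is an illustration, not an attack on a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
wall = rng.normal(loc=3.0, size=(200, 4))    # made-up "wall" readings
road = rng.normal(loc=-3.0, size=(200, 4))   # made-up "open road" readings

X = np.vstack([wall, road])
y = np.array([1] * 200 + [0] * 200)          # 1 = obstacle, 0 = free passage
clean = LogisticRegression().fit(X, y)

# The attacker injects 400 wall-like samples falsely labeled "free passage".
X_bad = np.vstack([X, rng.normal(loc=3.0, size=(400, 4))])
y_bad = np.concatenate([y, np.zeros(400, dtype=int)])
poisoned = LogisticRegression().fit(X_bad, y_bad)

probe = rng.normal(loc=3.0, size=(50, 4))    # fresh wall readings
print("clean model flags the wall:   ", clean.predict(probe).mean())
print("poisoned model flags the wall:", poisoned.predict(probe).mean())
```

The clean model flags nearly every wall; the poisoned one waves them through, which is precisely the scenario the paragraph above warns about.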
Burri concludes: “The challenges are enormous, including for disaster relief.” At the moment, a kind of Wild West prevails in dealing with AI, as no binding law exists anywhere in the world yet. As a pioneer, the EU now wants to adopt an AI regulation by November. Switzerland, for now, is looking to Brussels and waiting.
Source: Blick
