You see a destroyed city. Soldiers are searching for enemy combatants. There's a bang, and someone yells: "We're under fire. Send Lanius!" Eight small drones take off. They approach the enemy soldiers and kill them one by one.
These are scenes not from a war film but from a new promotional video by the Israeli arms manufacturer Elbit Systems. The Lanius (Latin for "butcher") is an artificial-intelligence (AI) kamikaze drone. It can navigate on its own, find its way even through narrow openings in buildings, and recognize human targets.
Killing machine at your fingertips
All the soldier has to do is press a button and the drone becomes a killing machine: it fires at a target, or charges at it and explodes – a seamless fusion of human combat decisions, drone technology and AI. Soon, the man or woman at the controls may no longer be needed at all. For now, the weapon is configured so that a flesh-and-blood human must trigger the lethal action – and bear responsibility for it – but sooner or later this intermediate step could be programmed away.
Technology like that in the Elbit video clip is also being worked on in this country: in fundamental research at Swiss universities. "When my team and I saw the Elbit video, we were shocked," says Davide Scaramuzza in his office at the University of Zurich in Oerlikon. The professor heads the Robotics and Perception research group; he and his team are world leaders in the field. Among other things, they use artificial intelligence to teach drones acrobatic maneuvers so that they can operate even in complex environments. In a video from the university, one of these drones darts through a forest, buzzing between trees, around dense buildings and in through windows. The resemblance to the technology in Elbit's videos is undeniable.
Armies around the world are working on such technologies
In 2021, the Israeli army sent a swarm of such drones into the Palestinian Gaza Strip for the first time. A human was still involved in the chain of action. A year earlier, an autonomous Turkish Kargu-2 drone was deployed in Libya. The exact circumstances are unclear, but it is possible that this drone was already operating without human intervention. Besides Israel and Turkey, other militaries are also working on such technologies, including those of the US, China, Great Britain and India.
AI researchers have been alarmed for some time, fearing an arms race in autonomous weapons. In 2017, the so-called "Slaughterbots" video made the rounds, in which mini-drones hunt down humans without anyone monitoring them.
Leading scientists from around the world have been calling for years for better regulation of such research. They want to prevent autonomous weapons from becoming the Kalashnikovs of tomorrow. "Many politicians have not understood the technology well enough," says Max Tegmark, a physics professor at the Massachusetts Institute of Technology in Cambridge, one of the leading technology universities in the US. "These are weapons of mass destruction that would be accessible to everyone." And US military expert Zachary Kallenborn compares the dangers of armed drone swarms to those of chemical and biological warfare agents.
Scientific appeal to politics
In November 2021, eight Swiss researchers made an urgent appeal to politicians. In their letter, which has since been made public and is available to SonntagsBlick, they call on Federal Councillor Ignazio Cassis to ensure that algorithms never decide over the life and death of human beings. Without government regulation, they warn, lethal autonomous weapons could become a reality within a decade. Cassis responded: "The Federal Council shares many of the legal, ethical and safety concerns that scientists and researchers have raised about such weapons." Switzerland wants to work internationally toward appropriate regulations, such as those that already exist worldwide for chemical and nuclear weapons.
The problem: an international agreement is currently unrealistic (see interview with Reto Wollenmann). US military expert Kallenborn sees it the same way: "The great military powers do not want to give up weapons that could be useful to them."
Switzerland itself, meanwhile, is pulling out all the stops to lead the way as a top location for drone development. The country's technical universities are among the best in the world; Switzerland ranks first in the quality of scientific publications and their influence on research. The region around Zurich is considered the "Silicon Valley of robotics" – thanks to Google and Co., but also because of its excellent university laboratories.
Visit to the drone laboratory
SonntagsBlick visits Professor Scaramuzza in one such laboratory. Drones are displayed in showcases in the hallway. One room is set up as a flight hall, with obstacles on the floor and nets hanging from the ceiling for safety, in case a drone strays off course.
Scaramuzza co-signed the letter to Cassis. He is not one to shy away from debate. The research group leader takes two hours to explain to us how his lab works – and how its applications, so similar to those of the Israeli arms company, function. His team achieved a breakthrough back in 2009: it built a camera-equipped drone that can fly autonomously, without GPS. Since then, one success has followed another. The professor led a European project that developed an autopilot whose patent is now used millions of times. One of his team members went to NASA and brought Swiss-developed technology to Mars. The lab's first spin-off was bought by Facebook in 2016 and went on to help develop the Oculus Quest, the leading virtual-reality headset.
Scaramuzza talks about his work with enthusiasm: his team is developing new sensors so that drones can also fly in smoke, and AI algorithms that allow robots to take over tasks from humans.
Challenge: Good technology in the wrong hands
"That naturally raises many ethical questions," says the professor. "Anything that can be used for good can also be used for bad." This is a well-known challenge in robotics – and always has been. In the same breath, Scaramuzza clarifies: "The same algorithms we use to control these drones have been used in breast cancer screening. They have already saved millions of lives. Should we ban them? No."
AI-controlled drones are still too imprecise for use in war. But research is advancing. That is why Scaramuzza believes now is the right time to ask: "How do we make sure the technology isn't misused?"
Scaramuzza himself worked on a project funded by DARPA, the research agency of the US military – pure fundamental research, he emphasizes. He also took part in a drone race organized by the arms company Lockheed Martin, demonstrating in 2021 in Dübendorf ZH that an AI can fly faster than a human pilot. However, no software was handed over to the US military; it was merely informed of the results intended for publication. Nor did his team give any code to the arms company.
No exchange between the University of Zurich and the arms company
And yet: vision-based control, the focus of his research, plays a key role in military applications. So what is the connection between the arms company Elbit's new weapons systems and his research? "They use similar algorithms," Scaramuzza explains.
The researcher says there is no direct contact or technology transfer between his lab and Elbit. "I condemn any military application of our technology," he adds unequivocally. Any collaboration with external companies is monitored by the university: dual-use items, i.e. items that can be used for both civilian and military purposes, require approval from the university management and SECO, the State Secretariat for Economic Affairs. The University of Zurich's media office confirms this. Scaramuzza does, however, see indirect connections: scientific publications are usually freely accessible, and employees take their know-how with them when they change jobs – even when they move to an arms company.
How to mitigate risks
The researcher emphasizes that progress depends on the free exchange of knowledge; any censorship is dangerous. But there are ways to mitigate the risks. "Researchers can hold back parts of their code or release it only under license," says Scaramuzza. This already happens – for ethical or commercial reasons. Structured processes for it, however, do not yet exist in Switzerland.
Some universities abroad have therefore already begun asking researchers about risks. The professor: "That's good, because it opens researchers' eyes."
The research was made possible in part by a grant from the Journafonds.
Do you have tips on explosive stories? Write to us: recherche@ringier.ch