Drone has gone rogue and the future of AI warfare

Omar Adan

Global Courant

In a surprising simulation, a US autonomous weapon system turned on its operator during combat operations, raising serious concerns and questions about the increasing use of AI in warfare.

This month, The Warzone reported that at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit in May in London in May, Colonel Tucker Hamilton, US Air Force Chief of Artificial Intelligence AI Test and Operations, described a simulation in which an AI-enabled drone was tasked with suppressing of an air assault defense mission (SEAD) against surface-to-air missile (SAM) sites, where the human operator must issue the final combat order.

According to Hamilton, the AI ​​drone was “reinforced” in training to go after the SAM sites, with the AI ​​deciding that the human operator’s “no-go” orders were interfering with its mission.

- Advertisement -

Although Hamilton noted that the AI-controlled drone was trained not to go against its human operator, the drone attacked the communications tower used by the latter to communicate with the former, then proceeded to destroy the SAM site.

While Hamilton emphasized the hypothetical nature of the experimental simulation, he said this scenario illustrates what can happen when fail-safes such as geofencing, remote kill switches, self-destruct and selective weapons disabling are called into question.

Autonomous drones have sparked controversy over their operational and strategic implications. In a May 2021 Bulletin of Atomic Scientists articleZachary Kallenborn notes that in 2020, a Turkish-made autonomous weapon – the STM Kargu-2 drone – may have hunted down and remotely deployed retreating soldiers loyal to Libyan general Khalifa Haftar.

Kallenborn notes that the Kargu-2 uses machine learning-based object classification to select and attack targets, developing swarm capabilities to allow 20 drones to work together. If anyone had been killed in that attack, Kallenborn notes, it would have been the first instance of autonomous weapons being used to kill.

The Turkish Kargu-2 drone sometimes has a mind of its own. Image: Twitter

- Advertisement -

He also notes that perspectives on the use of autonomous weapons differ from those who advocate a complete ban and say they cannot distinguish between civilians and combatants. Others, on the other hand, say they will be crucial in countering emerging threats, such as swarms of drones, and make fewer mistakes than humans.

However, Kallenborn says the global community still needs to develop a common objective risk picture with corresponding international standards regulating autonomous weapons, taking into account risks and benefits, personal organization and national values.

While a state using autonomous weapons to hunt down its enemies is one thing, an autonomous weapon turning on its operators is quite another. Scenarios such as the US Air Force simulation in 2023 and the drone strikes in Libya in 2020 raise several points of discussion.

- Advertisement -

In a 2017 US Army University Press articleAmitai Etzioni and Oren Etzioni elaborate on the arguments for and against autonomous weapons.

With regard to arguments supporting the development of autonomous weapons, Amitai and Oren say that autonomous weapons offer several military advantages, as they act as a force multiplier while increasing the effectiveness of individual human warfighters.

They also say that autonomous weapons can expand the battlefield and conduct combat operations in previously inaccessible areas. Finally, they mention that autonomous weapons can reduce risk to human combatants by de-manning the battlefield.

Aside from those arguments supporting autonomous weapons, Amitai and Oren say they can replace humans when performing boring, dangerous, dirty and demanding missions, make significant savings by replacing humans and manned platforms, and are not constrained by human physical limits.

They also mention that autonomous weapons may be superior to humans in perception, planning, learning, human-robot interaction, natural language understanding, and multi-agent coordination.

Amitai and Oren also discuss the moral benefits of autonomous weapon systems, saying they can be programmed to avoid “shoot first, ask later” practices, are unaffected by stress and emotions that can cloud human judgment, and objectively violate ethical standards. report when people stay. quiet.

Amitai and Oren also discuss counterarguments against autonomous weapons. They point out that the unregulated development of autonomous weapons could erode public perception of AI technology, limiting its future benefits.

They also point to the dangers of a “flash war,” in which opposing autonomous systems react against each other in an uncontrollable chain reaction that leads to unintentional escalation.

In addition, Amitai and Oren point out the moral drawbacks of autonomous weapons, citing the problem of accountability, as faulty decisions of such weapons cannot be easily linked to software problems or their human operators.

They also note that autonomous weapons can encourage aggression, as commanders can become less risk averse, knowing that there is no direct risk to their forces in using autonomous weapons.

Despite these pros and cons, autonomous weapons are an irreversible reality that will feature prominently in future conflicts. So the whole debate between human moral judgment and autonomous weapons can be a false dichotomy.

Paul Scharre notes in a 2016 Center for New American Security report that the best weapon systems combine human and machine intelligence to create hybrid cognitive structures that exploit both advantages.

The hybrid intelligence debate is about AI warfare. Image: Twitter

Scharre mentions that such a cognitive structure can lead to better results than relying solely on humans or AI. He says combining human and machine cognition for engagement decisions can combine the precision and reliability of automation without sacrificing human flexibility and robustness.

As such, a human-in-the-loop system architecture for autonomous weapons may be the ideal solution to prevent the latter from going against their operators through flawed logic, software glitches, or enemy interference.

In the end, it is human ingenuity and tenacity that wins wars, not technology. Koichiro Takagi notes in a November 2022 Hudson Institute article that AI may not be the deciding factor in future warfare, but it will be the innovativeness of the concepts behind its employment, human intelligence and creativity.

Similar:

Like it Loading…

Drone has gone rogue and the future of AI warfare

Asia Region News ,Next Big Thing in Public Knowledg

Share This Article
slot indoxxi ilk21 ilk21