Feature

Military AI: autonomy and the machine

At a global level, human in the loop remains the standard, but on the battlefield autonomous AI use is spreading. Neil Thompson reports.

Autonomy is a growing feature on the battlefield. Credit: Haris Mm via Shutterstock

Autonomous artificial intelligence (AI) systems are units or platforms capable of operating independently, with little direct human intervention or oversight required.

The idea of military platforms controlled by AI, wherein battlefield units are able to operate free of human oversight, remains a controversial one, with many governments and defence companies against widespread adoption of such systems without international legal restraints.

The US and many of its allies have endorsed Washington’s Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy, which calls for tight restrictions on the use of AI in military affairs.

However, US rivals like Russia and China have taken a more relaxed approach to the adoption of AI — including autonomous AI applications — in their military research, though China claims to want a ban on lethal autonomous weapon systems.

At present, the general stance of the global defence industry is similar to the one taken by BAE Systems, a spokesperson of which told Global Defence Technology: "Our stance on AI in defence... is that, like our customers, we firmly believe there should be meaningful human input into the use of it.”

Nevertheless, the growing capabilities of AI systems, together with real-world conflict, have driven a growing maturity in autonomous AI systems in some areas. At present, however, this has manifested less at the global level of weapon systems, such as the US homeland defence surveillance system, and more at the level of individual battlefield units operating in combat or logistical situations.

At the global system level, in recent years some US officials have called for AI models with increased autonomy to upgrade the systems charged with the country’s homeland defence against threats like hypersonic missiles, particularly if armed with nuclear warheads.

In 2020, the then-commander of US Northern Command and of the North American Aerospace Defense Command (NORAD), General Terrence J. O’Shaughnessy, told a Senate committee hearing that he wanted to automate his force’s detection systems and reduce human decision-making within those automated systems in order to speed up response times to hostile threats, such as hypersonic weapon systems.

O’Shaughnessy said that NORAD had to move away from ‘human in the loop’ AI systems towards a ‘human on the loop’ standard. This would mean ending a system whereby a human has the power to stop or start any action by an intelligent system that has received a prompt to act, in favour of one where an AI system can act without awaiting human approval for its decision.

Oversight to continue for lethal systems

However, Dr Tom Stefanick, a nonresident senior fellow at the Brookings Institution’s Strobe Talbott Center on Security, Strategy, and Technology, warns that despite the advantages in reaction time in cutting out human decision-making, there are legal and technical barriers to the adoption of autonomous AI systems for vital global-level systems like the US anti-missile defence systems used by NORAD.

Stefanick defines the practical difference between ‘human in the loop’ and ‘human on the loop’ AI designs as follows: “Inside an autonomous weapon system, say a missile, sensor data is used to estimate a “picture” of some part of a battlespace, and that estimate is based on algorithms that detect, identify, localise and track things with some uncertainty.

I think the coming era of autonomous military devices will [instead] be dominated by things for collecting data and moving it to commanders.

Dr Tom Stefanick, Strobe Talbott Center

“Some algorithms may be ‘AI’, but many are not. The important point is that estimating a picture of what is present is not a decision to use force, it is just an estimate. Given an estimate of the picture – with uncertainties – a human in the loop makes the decision to both select and to attack an object, whereas a human on the loop might approve an algorithm’s selection of an object and approve the decision to attack it,” Stefanick explains.
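To make Stefanick’s distinction concrete, the following is a minimal, hypothetical Python sketch of the two oversight models. The Track fields, the confidence threshold and the helper functions are illustrative assumptions, not drawn from any real weapon system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One object in the estimated battlespace 'picture' (illustrative)."""
    track_id: int
    confidence: float    # certainty of the identification, 0.0 to 1.0
    is_combatant: bool   # identification produced by the sensing algorithms

def engage(track: Track) -> None:
    print(f"Engaging track {track.track_id}")

def human_approves(track: Track) -> bool:
    """Stand-in for a human operator's judgement (a console prompt here)."""
    return input(f"Engage track {track.track_id}? [y/N] ").strip().lower() == "y"

def human_in_the_loop(picture: list[Track]) -> None:
    # A human makes both decisions: selecting the object and attacking it.
    for track in picture:
        if human_approves(track):
            engage(track)

def human_on_the_loop(picture: list[Track]) -> None:
    # The algorithm selects and attacks on its own; the human only
    # monitors and may abort, but no approval is awaited beforehand.
    for track in picture:
        if track.is_combatant and track.confidence > 0.95:
            engage(track)
```

In this sketch, the on-the-loop variant is precisely what creates the legal exposure discussed below: the selection and attack steps execute unless a monitoring human actively countermands them.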

According to Stefanick, for all autonomous weapon systems, from individual units to larger systems operating on a global scale, ‘human in the loop’ designs are likely to remain the majority for aggressive weapon systems, both at present and into the near future.

This is for both practical and legal reasons. When an autonomous machine with a ‘human on the loop’ AI design has a picture it has created of some part of a battlespace – with the attendant uncertainties – it can “select” objects to attack and then make the decision to attack them without asking for human approval.

These terms are crucial because “select” and “attack” are the defining elements of international humanitarian law (IHL) and the global Law of Armed Conflict (LOAC), and a ‘human on the loop’ AI design could break these rules without being countermanded, raising legal risk for the commanders involved.

Stefanick warns: “The algorithms [used] to select and to attack an object are based on some pre-determined logic: how precise and accurate is the picture, is the thing identified in the picture a non-combatant, are there non-combatants nearby, and so on.

The USAF has been testing autonomy in small drones. Credit: USAF

“These logical steps are not suited to AI; they encode the logic of IHL and LOAC. I think the coming era of autonomous military devices will [instead] be dominated by things for collecting data and moving it to commanders, on the one hand, as well as jammers and decoys for disrupting this flow of data,” Stefanick details.
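Stefanick’s point that such checks encode the logic of IHL and LOAC can be illustrated as deterministic, pre-determined rules rather than learned behaviour. The sketch below is hypothetical; the field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PictureEstimate:
    """Uncertain estimate of one object in the battlespace (illustrative)."""
    positional_error_m: float        # how precise and accurate the picture is
    identified_as_combatant: bool    # is the thing identified a combatant?
    nearest_noncombatant_m: float    # distance to the nearest non-combatant

def attack_permitted(est: PictureEstimate,
                     max_error_m: float = 5.0,
                     standoff_m: float = 100.0) -> bool:
    """Pre-determined logic: every test must pass before an attack
    decision can even be proposed for approval."""
    if est.positional_error_m > max_error_m:
        return False   # the picture is too uncertain
    if not est.identified_as_combatant:
        return False   # the object may be a non-combatant
    if est.nearest_noncombatant_m < standoff_m:
        return False   # non-combatants are nearby
    return True
```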

Iain Boyd, the director of the Center for National Security Initiatives at the University of Colorado, also believes that in the case of global-level weapon systems, including aggressive weapons like hypersonic missiles, the use of autonomous AI will not outpace its development in conventional battlefield-level systems, in the near term at least.

Boyd cites technical restrictions in current-era technology, among other issues: “I do not see hypersonic weapons being much different from other weapon systems in terms of how [artificial intelligence] might be used to augment their effectiveness.

“Today’s hypersonic weapons have very little human interaction after they have been launched. They are pre-programmed to address specified targets. In the future, AI might be used to allow hypersonic weapons to make decisions during flight that change target sets and collaborate with other weapons systems. But that is a long way from being realised today.”

Open systems approach to combat air

Despite officials’ and firms’ present reluctance to remove human oversight, cases are emerging where autonomous AI designs are being used to enhance weapons systems at the battlefield level, driven by events like the conflict in Ukraine.

Among other developments, the Ukraine-Russia war has turbocharged research into autonomous unmanned ground vehicles (UGVs), both for the battlespace and for support systems like logistics. For example, Ukraine is aiming to integrate a UGV designed to evacuate wounded troops into its ground operations by end-2024.

Autonomous drones could search for wounded on the battlefield. Credit: Drop of Light via Shutterstock

Both sides have also experimented with more aggressive UGV designs, including kamikaze ground drones like Ukraine’s anti-tank Ratel S, which uses explosives to destroy tanks, and ranged UGVs armed with grenade launchers or machine guns.

In March 2024, Russia used three tracked Courier UGVs armed with automatic grenade launchers to attack a Ukrainian defensive position. The two countries have also designed support UGVs for tasks including minelaying or demining, and for transporting supplies.

Our stance on AI in defence... is that, like our customers, we firmly believe there should be meaningful human input into the use of it.

BAE Systems spokesperson

Many UGV designs seen in the Ukrainian battlespace are still at an early stage of sophistication. Given Western sanctions on Russia’s economy, intended to cut off the flow of electronic components, raw materials and other resources Moscow needs to fight its war in Ukraine, and given Ukraine’s own restricted funding, many UGV models appearing in the conflict are built using civilian components.

Ukraine’s United24 government fundraising entity announced in March 2024 that it would begin funding the mass production of hundreds of UGVs, including the Death Scythe, a remotely operated weapon turret controlled using a video game control pad. Ukraine believes it can produce some UGV fighter designs for as little as $900 as it attempts to counter Russia’s advantage in manpower by fielding hundreds of cheap autonomous UGVs, similar to its previous efforts to mass-produce cheap aerial drones to counter stronger Russian conventional airpower.

Problems with managing update requirements

Outside of Ukraine, major defence producers continue to design UGVs and aerial drones with greater autonomous functionality for Western militaries too. Due to ethical concerns about moving away from ‘human in the loop’ AI systems, non-lethal applications such as logistics have been more of a focus for defence firms exploring autonomous AI designs at the battlefield level, at least in Western countries.

Back in 2021, the UK’s Defence Science and Technology Laboratory trialled UGVs for resupply operations as part of its efforts to automate ground and air resupply. The Joint Tactical Autonomous Resupply and Replenishment (JTARR) project used UGVs from QinetiQ and Horiba Mira to test current levels of technology in areas like terrain perception, navigation and object recognition.

If robots can take over certain tasks, soldiers can do other, more vital ones.

British Army Sergeant Major Dan Brown

Highlighting the utility of autonomous AI for units like UGVs in a supporting role, British Army equipment support Sergeant Major Dan Brown said: “If there is a machine that can do what a human can but take personnel from harm’s way, that’s a fantastic thing. If robots can take over certain tasks, soldiers can do other, more vital ones.”

More recently, autonomous driving firm Kodiak Robotics announced in December 2023 that it had created KodiakDriver, an autonomous AI system that enables an equipped vehicle to operate as an autonomous military ground vehicle.

This included the ability for the military to operate a truck so equipped remotely, but Kodiak said the hardware and software behind KodiakDriver would enable a vehicle fitted with it to operate autonomously in a variety of challenging circumstances, such as when facing off-road obstacles or when operating in areas with impaired GPS.

US and UK forces testing autonomous resupply in 2019. Credit: US Army

Meanwhile, in November 2023, European firm Airbus announced the testing of Auto’Mate, an autonomous in-flight refuelling system that will eventually allow an Airbus aerial tanker to refuel military aircraft in flight.

In the air and on the ground, autonomous AI systems are already revolutionising battlefield-level operations, even if the technology has yet to filter up to global-level weapons platforms due to the ethical and technical barriers complicating autonomous AI uptake in those areas.
