
Dumb or smart? The future of military robots

As armies look to advance their use of robots, they face a choice: develop ‘dumb’ software that follows human instructions, or ‘smart’ technology that can carry out tasks autonomously. Both approaches raise questions, as Ross Davies reports.

The PackBot was one of the first military robots to be widely deployed. Image: US Army

It has been over 20 years since a contract was first drawn up to develop the PackBot, a multi-mission tactical mobile robot.


One of the PackBot’s first deployments was to trawl the remains of the World Trade Center following the 9/11 attacks. In 2002, US troops in Afghanistan used it to deal with improvised explosive devices (IEDs). The current model, the 510, comes with a videogame-style controller, allowing operators to lift IEDs weighing up to 30 pounds.


According to data from publicly available US military contracts, the government has spent hundreds of thousands of dollars, per unit, on the 510. Beyond the PackBot, the military is now one of the biggest funders and adopters of artificial intelligence technology, as it looks to fashion more sophisticated weapon systems.


A case in point is the work that has been taking place within the US Army Research Laboratory (ARL) over the last decade. As part of an alliance with the Massachusetts Institute of Technology and Carnegie Mellon University – together with the likes of NASA and robotics firm Boston Dynamics – researchers have created software that enables robots to carry out tasks based on verbal instructions.


Controlled via a tablet and utilising deep learning, future robots will be able not only to move ahead of troops to identify IEDs and ambushes, but also to return detailed data on targets. Speaking to the MIT Technology Review in November, project leader Stuart Young compared the technology to a military dog, “in terms of teaming with humans”.


But, unlike a military dog, the software also includes a question-asking function to deal with the numerous ambiguities encountered in the theatre of conflict. For instance, if a robot is told to approach a building, it might ask for further clarification, such as: “Do you mean the building on the right or on the left?”
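
A minimal sketch of what such a clarification step could look like in software is shown below. It is purely illustrative and assumes a simple keyword match: the names resolve_target, known_landmarks and ask_operator are hypothetical and are not taken from the ARL project; the pattern it shows is simply the one described above, in which the robot asks a follow-up question rather than guessing when an instruction is ambiguous.

```python
# Purely illustrative sketch, not code from the ARL project: a robot that asks
# a follow-up question instead of guessing when a verbal instruction is ambiguous.

def resolve_target(instruction, known_landmarks, ask_operator):
    """Return one unambiguous landmark for a verbal instruction,
    asking the operator a clarifying question if several match."""
    matches = [lm for lm in known_landmarks if lm.split()[0] in instruction]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        return ask_operator(f"I can't find '{instruction}'. Which landmark do you mean?")
    return ask_operator("Do you mean the " + " or the ".join(matches) + "?")


if __name__ == "__main__":
    # Two buildings are in view, so "approach the building" is ambiguous and the
    # robot asks: "Do you mean the building on the left or the building on the right?"
    landmarks = ["building on the left", "building on the right"]
    target = resolve_target("approach the building", landmarks, ask_operator=input)
    print(f"Moving towards the {target}")
```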

Making decisions: The latest developments in ‘smart’ robotics

This kind of technology falls under the category of ‘dumb’ robots – software designed to follow instructions given by humans. It is very much part of the legacy that began with PackBot. However, when it comes to the future of military robots, the field of ‘smart’, autonomous robotics is drawing the most attention.


Last year, the cover was blown on a secret Marine Corps project known as Sea Mob. According to reports, prototype tests have already been carried out off the coast of Virginia on a fleet of inflatable ‘ghost fleet’ vessels piloted by AI-enabled hardware. While the Marine Corps has gone to great lengths to keep further details under wraps, Sea Mob is believed to mark the first step towards completely autonomous naval weaponry that can operate without human intervention.


“Sea Mob is believed to mark the first step towards completely autonomous naval weaponry that can operate without human intervention.”


The US Navy is conducting tests on the Sea Hunter, a vessel intended to be able to detect and attack enemy submarines without any input from command and control. Meanwhile, across the Atlantic, the UK unveiled plans in 2018 to replace its RAF Typhoon aircraft with the Tempest fighter jet.


According to the Ministry of Defence, the Tempest will be equipped with AI and machine learning, enabling it to fly unmanned and hit targets. It will also carry onboard directed-energy weapons and be able to operate alongside semi-autonomous ‘wingman’ UAVs. Deployment is scheduled for some point in the 2030s.

The UK’s Tempest fighter jet will be equipped with AI and machine learning technology. Image: BAE Systems

No to killer robots: Are autonomous weapons ethical in the field of conflict?

The introduction of AI to military weapons does not sit well with everyone, however. This has sparked the creation of the International Committee for Robot Arms Control (ICRAC), which, alongside Human Rights Watch, is campaigning for a multilateral ban on lethal autonomous weapon systems (AWS).


ICRAC’s argument is threefold. Firstly, the group says, it is impossible to ensure that AWS comply with international humanitarian law, particularly when it comes to distinguishing between combatants and civilians. Secondly, machines have moral limitations, given their inability to understand what it means to act within the law, much less what it means to end a human life.


Thirdly, ICRAC fears AWS could have a detrimental impact on global security, particularly in the event of their use by actors not accountable to legal frameworks governing the use of force.


“The position of the campaign is that the design of weapon systems must render them incapable of operating without meaningful human control.”


“Lethal autonomous weapon systems are those in which the critical functions of target selection and initiation of violent force have been delegated to the system in ways that preclude meaningful human control,” says Lucy Suchman, professor of the anthropology of science and technology at Lancaster University and an ICRAC member.


“The position of the campaign is that the design of weapon systems must render them incapable of operating without meaningful human control. This would, by definition, render lethal autonomous weapons illegal.”


Suchman, however, says she has “no problem” with ‘dumb’, remotely controlled technology, provided it has “no capacity to cause injury itself”.

Further questions: Tackling glitches and hacking threats

Irrespective of whether a military robot is ‘dumb’ or ‘smart’, questions remain around the use of AI in a military setting.


So far there is little evidence that current AI-enabled systems are ever entirely fault-free. Consumer voice recognition systems such as Amazon Echo and Siri, for instance, regularly mishear or misinterpret commands. But the ramifications of such mistakes in the home are hardly comparable to those that might occur on the battlefield.


With the rise of cyberwarfare, there are also misgivings over what might happen in the event of a military robot being hacked. Is it conceivable that robots designed to reduce the number of soldiers on the ground – and, in turn, limit collateral damage – could have the opposite effect and instead escalate conflict?


These points will need to be addressed before the next steps are taken in the field of military robotics.