Expert view

Key questions about artificial intelligence in the defence industry: Q&A with GlobalData thematic analyst

Credit: Bert van Dijk/Getty Images.


Benjamin Chin joined GlobalData in September 2022. He has a keen interest in healthcare, pharmaceuticals, and biotechnology.

Lara Virrey: What are the most exciting developments in AI for the defence industry today?  

Benjamin Chin: In a time of rising geopolitical tensions and war in Ukraine, it has never been more critical for military organisations to explore cutting-edge technologies. The threat of a peer-level conflict will force military organisations to modernise, seeking novel ways of organising and operating in conventional and non-conventional conflicts.

AI is the latest battleground technology for major military superpowers like the US, China, and Russia. It promises to automate and enhance all aspects of modern warfare, including training and simulation; command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR); electronic warfare; and frontline service.

Lara Virrey: How can companies in the defence sector benefit from advances in generative AI in particular?

Benjamin Chin: Generative AI’s capacity to create and refine concepts through prompt-based coding can dramatically accelerate the development of digital simulation models and environments for virtual testing and training. Further development of this technology could make virtual training more practical by increasing the variety and accuracy of prompt-generated simulated environments.

The National Aeronautics and Space Administration (NASA) has unveiled spacecraft and mission hardware developed employing generative AI. These specialised components, known as evolved structures, are used in equipment such as astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, and space telescopes.

The emergence of generative AI design could advance NASA's approach to conceptualising and testing components for upcoming robotic and human space missions.

The aerospace sector is among the most highly regulated industries, with components having significantly smaller tolerances for error due to the extreme applications in which they are typically employed. With only minor modifications, commercial-grade AI tools might be capable of creating components for critical space missions.

NASA's design process begins with a prompt, employing geometric data and physical parameters as its inputs. The generative AI tool compresses and processes everything internally and independently, creating the design, analysing it, determining the manufactured product's viability, and performing corrective iterations quickly.
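
To give a rough sense of that generate-analyse-iterate loop, the Python sketch below evolves a single design variable, the cross-sectional area of a simple strut, against an assumed stress limit while minimising mass. The load, material properties, thresholds, and function names are invented purely for illustration and do not represent NASA’s actual toolchain.

# A toy illustration of the generate-analyse-iterate design loop described
# above. Every constant and the simplified "analysis" are assumptions made
# for this sketch only.
import random

STRESS_LIMIT_MPA = 250.0   # assumed allowable stress for the part
LOAD_N = 12_000.0          # assumed axial load on the strut
DENSITY_KG_M3 = 2_700.0    # assumed material density (aluminium-like)
LENGTH_M = 0.5             # assumed strut length


def analyse(area_m2: float) -> tuple[float, float]:
    """Toy structural analysis: axial stress (MPa) and mass (kg) of a strut."""
    stress_mpa = LOAD_N / area_m2 / 1e6
    mass_kg = DENSITY_KG_M3 * area_m2 * LENGTH_M
    return stress_mpa, mass_kg


def generate_candidates(best_area: float, n: int = 20) -> list[float]:
    """Stand-in for the generative step: propose perturbed design variants."""
    return [max(1e-6, best_area * random.uniform(0.8, 1.2)) for _ in range(n)]


def evolve(iterations: int = 50) -> float:
    """Iteratively refine the design, keeping the lightest viable candidate."""
    best_area, best_mass = 1e-3, float("inf")
    for _ in range(iterations):
        for area in generate_candidates(best_area):
            stress, mass = analyse(area)
            if stress <= STRESS_LIMIT_MPA and mass < best_mass:
                best_area, best_mass = area, mass
    return best_area


if __name__ == "__main__":
    area = evolve()
    stress, mass = analyse(area)
    print(f"cross-section: {area * 1e6:.0f} mm^2, stress: {stress:.0f} MPa, mass: {mass:.3f} kg")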

Support struts designed for NASA's balloon-borne EXoplanet Climate Infrared TElescope (EXCITE) mission are a recent example of evolved structures produced using generative AI-assisted design techniques.

Lara Virrey: Which barriers to implementation of AI remain in the defence industry, and how could they be overcome?

Benjamin Chin: As militaries worldwide pursue the development of increasingly advanced AI algorithms, the number of ethical questions surrounding their use increases. Lethal autonomous weapons (LAWs) represent the ultimate application of AI in a frontline role. Presently, there is particular concern over the capacity of autonomous systems to identify, target, and eliminate perceived hostile threats without human oversight.

Major military superpowers are keen to develop LAWs. In September 2018, both the US and Russia blocked UN talks on an international ban on LAWs. More recently, in December 2021, the US, Russia, India, and Israel blocked further talks on prohibiting LAWs at the Sixth Review Conference of the UN Convention on Certain Conventional Weapons.

Target misidentification remains a dominant concern in the field of LAWs, as image recognition and machine learning tools have produced flawed conclusions, which are then propagated at far greater speed and scale than most human errors.

This issue has several knock-on effects, such as legal accountability for the actions of LAWs and the ‘black box’ problem, which describes the difficulty of explaining the decision-making processes of many AI algorithms.

The lack of explainability of many AI algorithms raises major ethical concerns, especially if these algorithms control multimillion-dollar pieces of lethal military hardware. Understanding the decision-making process underlies our ability to trust AI, but a lack of transparency undermines confidence in this technology.

Explainable AI will go some way towards restoring that confidence; the term refers to AI systems that allow humans to understand how the AI arrives at a decision and that offer explanations for the decision-making process.
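
As a minimal illustration of the idea, the hypothetical Python sketch below decomposes the output of a simple linear threat-scoring model into per-feature contributions, one basic way of surfacing why a particular score was produced. The feature names, weights, and observation values are assumptions for illustration only.

# A minimal sketch of one explainability technique: decomposing the output of
# a linear threat-scoring model into per-feature contributions. The feature
# names, weights, and bias are hypothetical.

FEATURES = ["speed_kmh", "radar_cross_section_m2", "distance_km", "iff_response"]
WEIGHTS = [0.004, 0.8, -0.02, -1.5]   # assumed model weights
BIAS = -0.5


def score(x: list[float]) -> float:
    """Linear decision score: a positive value means 'flag for human review'."""
    return BIAS + sum(w * v for w, v in zip(WEIGHTS, x))


def explain(x: list[float]) -> list[tuple[str, float]]:
    """Attribute the score to each feature (weight * value), largest first."""
    contributions = [(name, w * v) for name, w, v in zip(FEATURES, WEIGHTS, x)]
    return sorted(contributions, key=lambda item: abs(item[1]), reverse=True)


if __name__ == "__main__":
    observation = [850.0, 1.2, 40.0, 0.0]   # hypothetical sensor reading
    print(f"score = {score(observation):+.2f}")
    for name, contribution in explain(observation):
        print(f"  {name:>24s}: {contribution:+.2f}")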

Lara Virrey: Which companies are the leading adopters of AI technologies in the defence sector?

Benjamin Chin: GlobalData’s latest report, ‘Artificial Intelligence in Defense’, reveals several AI initiatives run by militaries and defence suppliers from around the world, including the likes of BAE Systems, Elbit Systems, and Raytheon Technologies.

BAE Systems has been awarded several contracts through the US Defense Advanced Research Projects Agency (DARPA) to develop government and military initiatives. Such projects include machine learning analytics as a service. The aim is to deliver continuous, worldwide situation awareness, using open-source data and satellite imagery, to aid with various challenges, such as anomaly detection and prediction.
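
As a heavily simplified illustration of the kind of anomaly detection such a service might perform, the Python sketch below flags days whose observed activity (for example, vehicle counts extracted from satellite imagery) deviates sharply from a trailing baseline. The data, window size, and threshold are invented for this sketch and are not drawn from BAE Systems' work.

# A toy anomaly detector: flag values that deviate sharply from the mean of a
# trailing window. All data and thresholds here are invented for illustration.
import statistics


def flag_anomalies(series: list[float], window: int = 7, z_threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates from the trailing-window mean by
    more than z_threshold standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9   # avoid division by zero
        if abs(series[i] - mean) / stdev > z_threshold:
            anomalies.append(i)
    return anomalies


if __name__ == "__main__":
    # e.g. daily vehicle counts derived from satellite imagery (hypothetical)
    counts = [12, 14, 11, 13, 12, 15, 13, 14, 12, 13, 48, 14, 12]
    print(flag_anomalies(counts))  # -> [10], the day with the unusual spike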

In a contract worth up to $4.7m, BAE Systems is developing its machine learning software to incorporate it into systems used in electronic warfare. Its Controllable Hardware Integration for Machine-Learning Enabled Real-time Adaptivity (CHIMERA) solution seeks to make sense of intercepted radio frequency (RF) signals from adversaries in increasingly crowded electromagnetic spectrum environments.
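
As a heavily simplified illustration of the underlying idea, rather than a description of CHIMERA itself, the Python sketch below classifies an intercepted RF burst by comparing extracted signal features against a small library of hypothetical emitter profiles. The feature values, emitter names, and nearest-centroid approach are assumptions made for this sketch.

# A toy RF emitter classifier: match observed signal features to the nearest
# known emitter profile. Profiles and measurements are hypothetical.
import math

# Hypothetical emitter library: (centre frequency GHz, pulse width us, PRF kHz)
EMITTER_PROFILES = {
    "navigation_radar": (9.4, 1.0, 2.0),
    "fire_control_radar": (10.0, 0.3, 8.0),
    "comms_link": (4.8, 50.0, 0.1),
}


def classify(observation: tuple[float, float, float]) -> str:
    """Return the emitter profile closest (Euclidean distance) to the observation."""
    return min(EMITTER_PROFILES, key=lambda name: math.dist(observation, EMITTER_PROFILES[name]))


if __name__ == "__main__":
    intercepted = (9.9, 0.35, 7.5)   # hypothetical measured burst
    print(classify(intercepted))     # -> fire_control_radar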

Elbit is Israel’s largest defence contractor. In January 2021, the British Army invested $137m in an AI-powered surveillance system built by Elbit, allowing frontline soldiers to detect and engage enemy targets. Elbit has also developed the SkyStriker, an autonomous loitering munition that can deliver precise airstrikes on targets using machine learning.

It is fully autonomous and can participate in covert, low-altitude operations. In 2019, Elbit launched Condor MS, a photography system that uses AI analytics for intelligence-gathering missions. Deep learning algorithms and precise geo-location enable it to identify large numbers of targets at extremely high rates. By combining multi-spectral sensing and image enhancement, the system vastly improves output quality and efficiency.

Raytheon Technologies (RTX) Intelligence & Space is using AI to improve the ISR capabilities of the US and allied armed forces. RTX has developed its multi-spectral targeting system, a turreted electro-optical and infrared sensor for maritime and land ISR missions. By collating masses of data, the system seeks to provide actionable insight and intelligence, generating accurate targeting information in high-risk environments.

In November 2020, RTX partnered with C3.ai to speed up AI adoption across the US military. The partnership combines RTX’s expertise in the defence and aerospace sector with C3.ai’s AI applications.  

GlobalData, the leading provider of industry intelligence, provided the underlying data, research, and analysis used to produce this article.      

GlobalData’s Thematic Intelligence uses proprietary data, research, and analysis to provide a forward-looking perspective on the key themes that will shape the future of the world’s largest industries and the organisations within them.