Weapons proliferation in the age of AI

November 10, 2023

Just over 20 years ago, Spanish naval personnel operating in the Arabian Sea intercepted a merchant ship, the So San, sailing from North Korea to the port of Aden. Acting on intelligence from their US counterparts, the interdiction team discovered a consignment of Scud missiles hidden on board. At the time, Washington feared the missiles were bound for Iraq, as preparations built in the region for the coalition’s ill-fated intervention against Saddam Hussein’s military.

Lacking any legal basis to seize the consignment, the Spanish team could do little more than release the So San and let it sail, after receiving an undertaking from the Yemeni government that the missiles wouldn’t be transferred to any third party. From that point on, a realisation took hold that the international community needed a new architecture for counterproliferation.

What followed was the establishment of a landmark international strategy aimed at controlling the spread of chemical, biological, radiological, nuclear and high-consequence explosive (CBRNE) technologies: the Proliferation Security Initiative, which this year marked its 20th anniversary. Over the past two decades, the initiative has consolidated an alliance of more than 100 countries committed to preventing the spread of CBRNE materials and technologies.

However, much has changed in the geopolitical and technological landscape since the heady days of the 2000s. While the war on terror concentrated on non-state actors and an isolated ‘axis of evil’, great-power competition has returned as a central focus of international security. And although missile components and dual-use centrifuges are still being hunted down, non-proliferation today is equally concerned with materials that are far easier to smuggle, above all the tools of dual-use chemistry and synthetic biology.

In the same year as the So San incident, a group of scientists constructed the first entirely artificial virus, a chemically synthesised poliovirus. Three years later, reverse genetics was used to recreate the H1N1 ‘Spanish’ influenza virus, which had killed more than 50 million people in the years following the First World War.

Since then, poxviruses, coronaviruses, avian influenza viruses and several other pathogens have been revived, amplified or modified into forms that can evade herd immunity or render established medical countermeasures obsolete.

The technologies used to produce these infectious agents are far smaller than anything recently interdicted on the high seas: whole-genome sequencers that fit in the palm of one’s hand, chemical reagents that can be ordered online, and a host of other products that don’t need to be transported on a slow-moving ship.

In the past year, the life sciences have been further turbocharged by the latest chapter of technological advancement: the newly emergent platforms of artificial intelligence. Since the debut of large language models in late 2022, the public has seen early previews of how those with malicious intent could apply AI tools to CBRNE. An exercise in Massachusetts showed how a large language model could guide students without any scientific training towards constructing synthetic versions of the causative agents of smallpox, influenza, Nipah virus and other diseases.

Elsewhere, researchers using generative AI for drug discovery found that the same tools could design a range of nerve agents, including VX. Discussion has even circulated of integrating AI into nuclear launch systems, a kind of digital dead hand for the new age. Added to all this has been a slew of articles on AI-enabled high-consequence munitions, including fire-and-forget hypersonic missiles, autonomous loitering munitions and unmanned attack vehicles.

While much debate has erupted over the existential risk posed by AI platforms, some have argued that these concerns are either non-specific or alarmist. We, the authors, are investigating precisely how AI could accelerate the proliferation of CBRNE, and how such applications might be realistically controlled. To this end, we will be seeking expert opinions on how generative AI could lower informational barriers to CBRNE proliferation, or add new capabilities to existing weapon systems.

As both researchers and clinicians, our concern relates primarily to human security, and what can be done to safeguard international public health. We hope to understand how counterproliferation professionals are confronting this new era, and what new concepts will be needed to protect human life and prosperity.

Next year, Australia will host the Proliferation Security Initiative’s Asia–Pacific exercise rotation, Pacific Protector. Air force and naval assets will be deployed across a vast expanse of ocean at a time of heightened tension over a place central to AI and its underlying technologies: the Taiwan Strait. While the exercise will deliver enduring benefits for the Asia–Pacific, it is undeniable that the task of countering CBRNE proliferation has fundamentally changed since the initiative’s establishment.

States, non-state actors and individuals now have access to technologies and informational aids that used to be the stuff of science fiction. How policymakers, researchers and practitioners confront weapons proliferation in the new age of AI will have long-term consequences for human security in this region, and across the globe.

This article was written by David Heslop, an associate professor in the School of Public Health and Community Medicine at the University of New South Wales, and Joel Keep, a journalist, clinician and postgraduate student at UNSW providing research assistance to Dr Heslop. It was originally published by The Strategist.
