What are AI superpowers doing to advance their incorporation of AI into their defence postures, and what are the national security implications for Australia?
By Michael Shoebridge, Director, Defence Strategy and National Security Program, Australian Strategic Policy Institute (ASPI).
This article is an extract from , a report published in partnership with the .
In 2022, there are two kinds of AI 'superpowers': companies and states.
The most capable AI national security power will be the state with the closest connections to corporate AI superpowers, and the greatest ability to take advantage of them. AI capability isn't simply transferable from one application or sector (for example, search, facial recognition or digital navigation) to another without deep understanding of the uses and purposes involved and the limitations of the available datasets and resulting machine-learning applications. AI uses in national security look compelling and potentially destabilising, from the insight advantage drawn from huge datasets to the control of autonomous systems and rapid decision-making.
Right now, the US is a potential AI superpower, and much of its technical capability is coming from US big tech (notably Amazon, Apple, Facebook, Google and IBM), although those capabilities have been developed for particular enterprise purposes. There's also capability within the highly classified government world, which enables cyber and other security activities. Policies, strategies and principles have lagged AI development and application in the big-tech world, a phenomenon best captured by the Facebook 'move fast and break things' mantra.1 Fortunately, that mindset hasn't applied to AI in the defence realm, where 'breaking things' is less forgivable, given that those things can be people. A key constraint on AI's application to national security has been the adversarial relationship between government and big tech in the US. This shows some signs of easing, but not ending (for example, there's been a revival of anti-trust thinking).2
China is the other potential AI superpower, through a combination of its state-centred data model and its own big-tech corporate sector.3 China is also the most extensive state user of data and technology for particular state purposes, including state surveillance of its population (think Xinjiang and China's social credit system) and state-centred data laws.4 To the extent that all data is open to the state and the state can enable its tech-world actors to use it, China has a 'data advantage' that should translate into an AI advantage. However, Beijing's moves to reassert Chinese Communist Party control over big tech risk damaging it.5
But data is only part of an AI capability. Applying AI is a multidisciplinary team sport, and it turns out that datasets collected for particular purposes and in particular ways can have biases and limitations when used for other purposes. This is an issue for entities with high risk appetites for rolling out AI applications, particularly for military or offensive cyber uses. Those who apply AI to weapon systems without deep understanding of the intended purpose, the environment and data limitations, and without a level of knowledgeable human participation, are likely to inflict and experience nasty surprises.
Other states and supranational entities (such as the European Union) have capabilities, but not at the scale of the US or China. The US alliance system could enable states such as Australia to both contribute to and draw from US capabilities, while Chinaās model is likely to remain a national one. Other nations and entities tend to be AI policy- and strategy-heavy,6 with a large focus on getting ethical principles right,7 but are short on applied capability that might use those policies and principles.
The national security implications of this for Australia are broad and complicated but, boiled down, mean one thing: if Australia doesn't partner with and contribute to the US as an AI superpower, it's likely to be a victim of the Chinese AI superpower and just an AI customer of the US. AUKUS is a step towards this AI partnership for national security.8
(1) Hemant Taneja, 'The era of "move fast and break things" is over', Harvard Business Review, 22 January 2019.
(2) Nicolas Rivero, 'A cheat sheet to all of the antitrust cases against Big Tech in 2021', Quartz, 29 September 2021.
(3) Mapping China's tech giants, ASPI, Canberra, 2022.
(4) Katja Drinhausen, Vincent Brussee, China's social credit system in 2021: from fragmentation towards integration, MERICS: Mercator Institute for China Studies, 3 March 2021.
(5) Lulu Yilun Chen, Jun Luo, Zheng Li, 'China crushed Jack Ma, and his fintech rivals are next', Bloomberg, 24 June 2021.
(6) 'A European approach to artificial intelligence', European Commission, no date.
(7) Denham Sadler, 'Alan Finkel on AI ethics and law', InnovationAus.com, 10 December 2019.
(8) 'Joint Leaders Statement on AUKUS', The White House, 15 September 2021.