
AI and Global Security Environment

Margaret Kosal

Disruptive technologies and emerging innovations within today's most cutting-edge science and technology (S&T) areas are cited as carrying the potential to revolutionize governmental structures, economies, and international security. Some have argued that such technologies will yield doomsday scenarios and that military applications of such technologies have even greater potential than nuclear weapons to radically change the balance of power.[1] While the suggestion that such emerging technologies will enable a new class of weapons that will alter the geopolitical landscape remains to be realized, a number of unresolved security puzzles have implications for international security, defense policy, governance, and arms control regimes. The extent to which these emerging technologies may exacerbate or mitigate future challenges to global security and governance will remain an integral question as policy-makers and leaders navigate the complex global environment.

How, when, where, and in what form the shifting nature of technological progress may bring enhanced or entirely new capabilities, many of which are no longer the exclusive domain of a single nation-state, is contested and requires more cross-disciplinary thinking. Contemporary analyses of these emerging technologies often expose the tenuous links or outright disconnections between scientific and technical realities and mainstream scholarship on national and international security.

In the post-Cold War environment, possessing the most technologically advanced military no longer guarantees national security. As nations and the international community look to the future – whether dominated by extremist groups co-opting advanced weapons in the world of globalized non-state actors or states engaged in persistent regional conflicts in areas of strategic interest – new adversaries and new science and technology will emerge. These new technologies and discoveries may significantly alter military capabilities and may generate new threats against military and civilian sectors. Greater strategic understanding of these game-changing technologies, along with the development of meaningful and testable metrics and models to help policymakers address the challenges of this complex global environment, is needed.

 

Possible Challenges to Strategic Stability

The concept of strategic stability arose in the post-WWII nuclear policy realm, in which military use of such weapons was a recent memory. In the ensuing decades, it has become a cornerstone of national and international security and foreign policy for nuclear and non-nuclear states alike, and a cornerstone of deterrence.[2] The Schelling- and Wohlstetter-esque "stability of mutual deterrence" evokes strong connotations of stable and unstable equilibria from the physical sciences.[3] Strategic stability centered on surviving a first nuclear attack and then credibly being able to respond with a massive retaliatory nuclear strike, and on how that calculus critically affected geopolitics. The Cold War paradigm sought strategic stability through parity of nuclear arsenals in terms of capabilities, numbers, conceptual permissiveness of limited nuclear war fighting, and conformity of intent.

How, to what extent, and in what ways Artificial Intelligence (AI) may affect strategic stability is speculative. The concepts below are grounded in geopolitical and technical realities; nonetheless, they are intended to be illustrative rather than predictive.


Situational Awareness / ISR

As human capacity to process large streams of data reaches its limits, especially in time-sensitive environments, the risk of "data overload" increases. AI, particularly in the context of machine learning, is seen as valuable for fusing data from heterogeneous streams originating from large numbers of sensors, communications networks, and other electronic devices. Currently, the US DoD's Project Maven (Algorithmic Warfare Cross-Functional Team) is a first attempt, directed at identifying and locating Daesh/ISIL fighters.
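To make the fusion concept concrete, here is a minimal sketch in Python on entirely synthetic data; the "radar," "SIGINT," and "imagery" streams are hypothetical stand-ins, not a description of Project Maven or any fielded system. The pattern shown, concatenating features from heterogeneous streams and training one classifier on the joint representation, is one of the simplest forms of ML-based fusion.

```python
# Minimal sketch of ML-based fusion of heterogeneous sensor streams.
# All data are synthetic and all stream names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Three hypothetical streams: radar tracks, SIGINT features, imagery embeddings.
radar = rng.normal(size=(n, 4))
sigint = rng.normal(size=(n, 6))
imagery = rng.normal(size=(n, 8))

# The synthetic "object of interest" label depends weakly on all three streams,
# so no single sensor is sufficient on its own.
signal = radar[:, 0] + sigint[:, 1] + imagery[:, 2]
labels = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Fusion by feature concatenation: one joint representation per observation.
fused = np.hstack([radar, sigint, imagery])

X_train, X_test, y_train, y_test = train_test_split(fused, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"fused-sensor accuracy: {clf.score(X_test, y_test):.2f}")
```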

 

Command and Control (C2) / Command Decision Support

Beyond situational awareness, another potential application of AI is to increase decision-making capacity. For example, the USAF Multi-Domain Command and Control (MDC2) system is meant to assign tasks to air, space, and cyber forces. The DARPA Artificial Intelligence Exploration (AIE) program aims to generate, test, and refine hypotheses to assist human decision-making.

 

Cyber

AI has the potential to reduce uncertainty by helping make cyber networks more secure through detection of anomalies, identification of vulnerabilities, and, potentially, implementation of protective action (patch, isolate, self-heal, etc.). Examples include DARPA's 2016 Cyber Grand Challenge, which reduced the time to detect cyber intrusions from the previous metric of days to seconds, and the NSA's Sharkseer program, which monitors incoming email traffic to DoD servers for malware. Machine learning is also likely to be used for software verification and validation.
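As an illustration of the anomaly-detection piece, the sketch below fits an unsupervised outlier detector to synthetic, mostly benign traffic features and flags statistical outliers. It is a toy example under stated assumptions, not a description of Sharkseer or any deployed system.

```python
# Illustrative sketch of ML-based network anomaly detection on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical per-connection features: bytes sent, packets/sec, session length (s).
normal_traffic = rng.normal(loc=[500, 20, 60], scale=[50, 5, 10], size=(2000, 3))
intrusions = rng.normal(loc=[5000, 200, 2], scale=[500, 20, 1], size=(20, 3))

# Fit on traffic assumed to be overwhelmingly benign; flag statistical outliers.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

print(detector.predict(intrusions))  # -1 = anomalous, 1 = normal
```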

 

With respect to offensive cyber operations, AI may itself introduce vulnerabilities: an adversary can degrade machine learning systems through the introduction of incorrect ("poisoned") training data.
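A toy demonstration of this data-poisoning idea follows: flipping a fraction of training labels degrades the learned model. The dataset, model, and 30% poisoning rate are all illustrative assumptions.

```python
# Toy demonstration of training-data poisoning via label flipping.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)

# An adversary introduces "incorrect training data" by flipping 30% of labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_acc = LogisticRegression().fit(X_train, poisoned).score(X_test, y_test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```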

 

"Flash crashes," or unexpected catastrophic failures, are another concern as AI is increasingly incorporated into complex, interconnected systems. Applying this to nuclear weapons and strategic stability can be done through the lens of "Normal Accidents" theory, originally proposed by Charles Perrow and applied to nuclear weapons by Scott Sagan.[4]

 

Autonomy

While much attention in the popular press and at the international level has been given to autonomous systems, e.g., unmanned aerial vehicles, aka "drones," and their lethality, whether such systems will increase or decrease stability is not resolved. Currently, all US operational systems require a "human in the loop" and are restricted in scope and nature, e.g., fixed anti-missile capabilities on ships, rather than general lethality. As systems are developed and deployed with higher levels of autonomy, broader scope, and the ability to move independently, the calculus will change.

 

One area of particular concern is swarms, i.e., multiple independent autonomous systems that can synchronize and coordinate collective offensive and/or defensive maneuvers. Frequently these have been envisioned as large (n > 10) formations of low-cost UAVs that might be used to overwhelm ground- or ship-based defensive systems or troops. The technology to enable swarm tactics will require advances in AI before the imagined scenarios can be realized; the underlying coordination idea is sketched below.
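The coordination at the heart of swarming can be conveyed with a toy kinematic model: each agent steers toward the average position (cohesion) and heading (alignment) of its neighbors, producing collective motion with no central controller. This is an illustrative cartoon under stated assumptions, not any fielded UAV control law.

```python
# Toy decentralized swarm coordination: cohesion + alignment, no central controller.
import numpy as np

rng = np.random.default_rng(2)
N = 15                                   # swarm size (n > 10)
pos = rng.uniform(0, 100, size=(N, 2))   # 2D positions
vel = rng.normal(size=(N, 2))            # 2D velocities

def step(pos, vel, radius=30.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist < radius) & (dist > 0)      # each agent sees only nearby agents
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]   # steer toward neighbors
            alignment = vel[nbrs].mean(axis=0) - vel[i]  # match their heading
            new_vel[i] += dt * (0.05 * cohesion + 0.5 * alignment)
    return pos + dt * new_vel, new_vel

for _ in range(500):
    pos, vel = step(pos, vel)

# Headings converge toward a common direction: coordinated motion emerges
# purely from local interactions.
print("max deviation from mean velocity:",
      np.linalg.norm(vel - vel.mean(axis=0), axis=1).max())
```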

 

Nuclear

Discussions of applying AI directly to nuclear weapons often take on a "Dr. Strangelove"-esque motif. As far as implications for strategic stability, the application of AI most often mentioned is incorporation into launch-on-warning systems, which could reduce the decision time available to another nuclear state. Typical scenarios start with AI applied to machine vision and signal processing, which is then combined with autonomy and/or sensor fusion to enable asymmetric capabilities for ISR, automatic target recognition (ATR), and technical guidance. Such capabilities could increase the likelihood that survivable forces (e.g., SLBMs and mobile missiles) could be targeted and even potentially destroyed, thereby also increasing the plausibility of a first strike.[5] It has been noted that such systems may undermine strategic stability even if a state possessing such capabilities has no intention of using them,[6] as an adversary cannot be sure and may hedge.

 

 

 

Things to Watch Out For

 

Deep Fakes

Emerging video manipulation and fraudulent simulation technology that combines facial recognition with a neural network to allow users to create fake monologues by public figures is referred to as "deep fakes." As this technology proliferates, it has the potential to intensify political instability. By increasing the impact of misleading content, "video spoofing" could lead to a rise in fake news, leadership imitation, and plausible deniability. As each of these scenarios can threaten key tenets of political stability, especially in states with weak or compromised political structures, it is in the interest of international security to prepare for and counter the threat posed by this emerging technology.

 

AI and related technologies may need to be employed as countermeasures to authenticate video. Video watermarking techniques already exist that allow authors to embed signatures of authenticity in the video itself.[7] Additionally, steganalysis and statistical methods can be used to search video for digital irregularities that indicate post-creation modification;[8] improving and innovating such media forensic techniques is the objective of DARPA's current "MediFor" project. Finally, checksum file verification programs can be used to ensure that a file in question is the same as the one it purports to be; platforms like ConceptCrypt take advantage of the immutability of the blockchain for this very purpose.
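A minimal sketch of the checksum approach: recompute a cryptographic digest of the file and compare it with a digest published through a trusted channel (or anchored in an immutable record, as with the platforms mentioned above). The filename and published digest below are placeholders.

```python
# Sketch of checksum-based file verification using a SHA-256 digest.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream, don't load whole file
            h.update(chunk)
    return h.hexdigest()

# The trusted digest would come from the author or an immutable record;
# this value is a placeholder.
published_digest = "PLACEHOLDER_DIGEST"
if sha256_of("video.mp4") == published_digest:
    print("file matches the published checksum")
else:
    print("file has been altered or is not the original")
```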

 

Hype

In geopolitics, rhetoric matters. It's not the only thing that matters, nor often the most important, but it does matter. And therefore, one must be cognizant of hype. A prime example of this is the "Slaughterbots" video, produced by an NGO seeking an international treaty to ban lethal autonomous systems.[9]

 

Will AI Replicate Human Biases, Stereotypes, & Prejudices?

As machine learning applications such as facial recognition are increasingly employed, studies have documented how the training data may replicate existing human biases.[10] And it's not just topics like racism in which the training set may be influenced by human biases: algorithms for finding chemical reaction conditions are influenced by the chemists who program them.[11] This is a particularly fascinating example because few of us commonly think about chemical reactions and bias together. Chemistry professor Joshua Schrier of Fordham University summarized it well: "Considering machine learning's promise, it's a shame to make an algorithm that's just as stupid as humans because of the way it's trained."[12]

If human biases can impact machine learning outcomes for designing inorganic reactions, it’s something to be cognizant of for other – potentially more consequential – decision-making assisted by AI.
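A toy illustration of that concern: in the synthetic example below, a "group" attribute has no effect on the true outcome, but the historical labels used for training are biased against one group, and the fitted model reproduces that bias. All data and the bias rate are fabricated for illustration.

```python
# Toy example of a model reproducing bias present in its training labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
group = rng.integers(0, 2, size=n)   # a protected attribute; irrelevant to merit
merit = rng.normal(size=n)           # the legitimate signal

true_outcome = (merit > 0).astype(int)
# Biased historical labels: qualified members of group 1 are often marked 0.
biased_labels = np.where((group == 1) & (rng.random(n) < 0.3), 0, true_outcome)

X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, biased_labels)

# Identical merit, different group: the trained model now scores group 1 lower.
print(model.predict_proba([[0, 0.5], [1, 0.5]])[:, 1])
```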

 

Creative Countermeasures

In their 2017 monograph Artificial Intelligence and National Security, Greg Allen and Taniel Chan identify what they call "Potential Transformative Scenarios."[13] The first of these is titled "Supercharged surveillance brings about the end of guerilla warfare." Yet protesters in Hong Kong are innovating, using simple countermeasures to avoid surveillance and identification: physical barriers (masks), lasers to dazzle facial recognition cameras, and Mylar emergency blankets wrapped around themselves to minimize heat (infrared, IR) signatures. These efforts have been called "[a] war against Chinese artificial intelligence."[14] This suggests that states should not forget about human creativity. Thinking about the nature of simple, innovative countermeasures and how adversaries might employ them is understudied, if noticed at all.

 

Conclusions

Reducing the risk from misuse of technology will mean taking into account the highly transnational nature of the critical technology required. Traditional and innovative new approaches to nonproliferation are important policy elements in reducing the risk of malfeasant applications of technology. Verification remains a technical as well as diplomatic challenge, and the role of international agreements and cooperative programs in the 21st century is a contested intellectual and policy field.

 

Science diplomacy has perhaps made its biggest impact on foreign policy as part of Track II diplomatic efforts: informal diplomacy between individuals who are not officially empowered to act on behalf of the state but who act in accordance with a state's foreign policy goals, interacting through dialogue to increase cooperation and transparency or to decrease conflict among states. Track II efforts with nuclear physicists and other scientists during the Cold War are legendary.

 

Overall, Track II science diplomacy has been an under-utilized tool since the Cold War, which may be ironic considering that technology has enabled the spread, at an unprecedented rate, of scientific knowledge, capabilities, and materials globally. Efforts such as this one organized by CSIS and the PIR Center are critically important.

 

In the 21st century, major barriers to effective science diplomacy for national security include three major risks: not being relevant, not being strategic, and not being at the table. The ability to translate and make relevant the role and importance of science to foreign policy aims is critical. While there are notable exceptions, this goal is often not best accomplished by active research scientists. Similarly, while there are notable exceptions, it is also not often accomplished well by traditional diplomats. In the global information age, there is a critical need for a cohort of individuals capable of bridging the divide across the technical, national security, and foreign policy arenas. In the US, one champion of S&T and foreign policy is institutionalized and embodied in the Science and Technology Advisor to the Secretary of State (STAS).

 

Technical experts are vital, and a lack of expertise can set back efforts by years. The ability to bridge those gaps and work between the technical and political realms is sometimes overlooked. Once the metaphorical spotlight has been used to illuminate an issue, the science diplomats and others inside and outside government who possess some mix of technical and policy expertise are responsible for creating, implementing, executing, and assessing the results. This requires empowered and resourced teams of individuals, and increasingly those teams are multi-national, i.e., requiring those with international experience, understanding, and backgrounds.

 

Much of the concern regarding the potential offensive applications of artificial intelligence is highly speculative and based on worst-case scenarios. The technical and operational veracity of these scenarios varies widely, from robust pragmatic realpolitik to Hollywood-like fantasy. Particularly in the industrialized global north, worst-case scenarios garner easy media attention and can inadvertently drive policy decisions. Choices can be made today, and policy can be implemented in the near future, that are likely to shift the balance toward maximizing the beneficial and minimizing the negative effects on global security.

 

Past methods developed for other technologies that do not take into account the international nature of the science and technology enterprise are not adequate. Any international regime must be interdisciplinary in focus, cognizant of the multi-polar post-Cold War world, and appreciative of the role of private funders, commercial development, and transnational corporations. To be clear, there is much to learn from and leverage in existing arms control and nonproliferation institutions. These starting points and their history are valuable; they are not necessarily predictive, however. Regardless, the challenges in this arena are primarily political rather than technical.

 

 


[1] David E. Jeremiah (VCJCS, USN, ret.), "Nanotechnology and Global Security," Fourth Foresight Conference on Molecular Nanotechnology, Palo Alto, CA, 9 November 1995.

[2] Dmitri Trenin, "Strategic Stability in the Changing World," Carnegie Endowment for International Peace, March 2019, https://carnegieendowment.org/files/3-15_Trenin_StrategicStability.pdf; James M. Acton, "Reclaiming Strategic Stability," in Elbridge A. Colby and Michael S. Gerson, eds., Strategic Stability: Contending Interpretations, Army War College Strategic Studies Institute, Carlisle, PA, February 2013, pp. 117-146; Adam Stulberg and Lawrence Rubin, eds., The End of Strategic Stability? Nuclear Weapons and Regional Rivalries, Georgetown University Press, 2018; Pavel Podvig, "The Myth of Strategic Stability," Bulletin of the Atomic Scientists, October 31, 2012, https://thebulletin.org/2012/10/the-myth-of-strategic-stability.

[3] Elbridge A. Colby and Michael S. Gerson, eds., Strategic Stability: Contending Interpretations, Army War College Strategic Studies Institute, Carlisle, PA, February 2013, https://publications.armywarcollege.edu/pubs/2216.pdf.

[4] Charles Perrow, Normal Accidents: Living with High Risk Technologies, Princeton University Press, 1984; Scott Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons, Princeton University Press, 1995.

[5] Edward Geist and Andrew J. Lohn, "How Might Artificial Intelligence Affect the Risk of Nuclear War?" RAND Corporation, 2018.

[6] Zachary S. Davis, Artificial Intelligence on the Battlefield, Center for Global Security Research, Lawrence Livermore National Laboratory, March 2019.

[7] G. C. Langelaar, I. Setyawan, and R. L. Lagendijk, "Watermarking digital image and video data: A state-of-the-art overview," IEEE Signal Processing Magazine 17 (5), 2000, pp. 20-46.

[8] Golden G. Richard and Vassil Roussev, "Next-generation digital forensics," Communications of the ACM 49 (2), 2006, pp. 67-80.

[9] "Slaughterbots" video: https://www.youtube.com/watch?v=HipTO_7mUOw; see also Paul Scharre, "Why You Shouldn't Fear Slaughterbots," IEEE Spectrum, https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots.

[10] A. Chouldechova, E. Putnam-Hornstein, D. Benavides-Prado, O. Fialko, and R. Vaithianathan, Proceedings of Machine Learning Research 81, 2018, pp. 134-148; T. Bolukbasi, K.-W. Chang, J. Zou, V. Saligrama, and A. Kalai, Advances in Neural Information Processing Systems, 2016, pp. 4349-4357; Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou, "Word embeddings quantify 100 years of gender and ethnic stereotypes," PNAS 115, 17 April 2018, pp. E3635-E3644.

[11] Xiwen Jia et al., "Anthropogenic biases in chemical reaction data hinder exploratory inorganic synthesis," Nature, vol. 573, 11 September 2019, pp. 251-255, https://www.nature.com/articles/s41586-019-1540-5.

[12] Sam Lemonick, "Machine learning can have human bias," Chemical & Engineering News, vol. 97, 16 September 2019, p. 6, https://cen.acs.org/physical-chemistry/computational-chemistry/Machine-learning-human-bias/97/i36.

[13] Greg Allen and Taniel Chan, Artificial Intelligence and National Security, Harvard Belfer Center, July 2017, p. 31.

[14] Alessandra Bocchi (@alessabocchi), via Twitter.

 
