
Australian Journal of Defence and Strategic Studies

AJDSS Volume 2 Number 2

Commentary

Westmoreland’s dream and Perrow’s nightmare: two perspectives on the future of military command and control

Shane Halton

Published online: 3 December 2020

EXTRACT

The near-simultaneous introduction of machine-learning technologies into the heart of traditional command and control arrangements, coupled with the operational challenges inherent in executing complex missions, such as hypersonic missile defence, presents unique risks and opportunities to today’s military commanders. This commentary explores this challenge from two perspectives. The first is the technological positivist perspective of US Army General William Westmoreland, which holds that military command and control functions can and should be automated to the highest degree possible to increase operational efficiency. The second is the more sceptical perspective of Dr Charles Perrow, which holds that interactively complex systems with tightly coupled components are inherently prone to unexpected and often dramatic failure. By incorporating both these perspectives into the design and operation of modern command and control systems, the author hopes these systems can be made to operate more safely and effectively.

In October 1969, standing behind a podium at the Sheraton Park Hotel in Washington DC, Army Chief of Staff General William C Westmoreland presented his vision of the future of warfare to the assembled attendees of the annual luncheon of the Association of the United States Army.

On the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data links, computer assisted intelligence evaluation, and automated fire control … I see battlefields or combat areas that are under 24-hour real or near real time surveillance of all types. I see battlefields on which we can destroy anything we locate through instant communications and the almost instantaneous application of highly lethal firepower. 1
Westmoreland presented this vision, this dream, years before the US Department of Defense (DoD) embarked on its Second Offset Strategy, which was designed to leverage the US’s superiority in science and technology to overcome the Soviet advantage in raw troop numbers in Europe, and decades before the US would first operationalise this approach to warfare during the first Gulf War. In his speech, Westmoreland was describing ‘network-centric warfare’ almost 30 years before the idea would gain broad acceptance in the Pentagon in the late 1990s.
In April 2017, the Pentagon established the Algorithmic Warfare Cross Functional Team, also known as Project Maven, to integrate:

computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DoD collects every day in support of counterinsurgency and counterterrorism operations. 2

Maven would later begin a second series of initiatives designed to bring not only Silicon Valley’s technology but also its approach to developing and deploying software into the heart of the US military. Eventually the whole of Project Maven would be absorbed into the much larger Joint Artificial Intelligence Center, a new organisation with the express goal of bridging the gap between DoD and Silicon Valley. A close collaboration between the brightest minds in academia, the commercial world and national security, this too was Westmoreland’s dream.

Though many facets of Westmoreland’s dream have since come to pass, the late 1960s were in many ways a high-water mark for this brand of technological positivism, the practical philosophy that holds that almost any environmental, technological or social problem can be overcome if you throw enough resources, computing power and engineers at it. The 1970s and 1980s saw a fairly radical paradigm shift in thinking about complex adaptive systems, such as weather patterns, animal populations and human-machine hybrid organisations like air traffic control systems. In the mid-1970s, research in physics and mathematics by Benoit Mandelbrot, Mitchell Feigenbaum and others laid the groundwork for a new way of thinking about complexity, chaos and the basic nature of the universe. This vein of research - which eventually entered mainstream culture with the popularisation of concepts such as fractals, ‘sensitive dependence on initial conditions’ and the ‘butterfly effect’ - set limits on what could be reliably known, modelled or predicted about the world at any given time. And it placed hard limits on Westmoreland’s techno-optimistic vision of the future. Engineers designing complex systems, and the technicians and managers responsible for operating them, began to gain a fuller appreciation for the many devious and difficult-to-predict ways glitches, friction, malfunctions, turbulence, poor design choices and interactive complexity could cause a system to underperform expectations or, in certain cases, fail altogether.
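To make ‘sensitive dependence on initial conditions’ concrete, the short Python sketch below (an illustration added here, not drawn from Westmoreland, Perrow or the sources cited in this commentary) iterates the logistic map, a standard toy model from the chaos literature, from two starting values that differ by one part in ten million; the parameter value and step counts are arbitrary choices for the demonstration.

# Minimal sketch of sensitive dependence on initial conditions using the
# logistic map x_{n+1} = r * x_n * (1 - x_n). All values are illustrative.

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in ten million.
a = logistic_trajectory(0.2000000)
b = logistic_trajectory(0.2000001)

# The trajectories track each other briefly, then diverge completely,
# which is why long-range prediction of such systems is impractical.
for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")

Nothing in the map is random; the divergence comes entirely from repeated arithmetic on a tiny initial discrepancy, which is the sense in which this research set hard limits on prediction.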

One of the first researchers to incorporate the lessons from chaos and complexity research into the design and operation of complex systems was Charles Perrow. Perrow, in effect, made his career studying disasters. In 1984, he published Normal Accidents: Living with High-Risk Technologies, which explored the root causes of industrial disasters, such as the partial meltdown of a nuclear reactor at the Three Mile Island complex near Harrisburg, Pennsylvania. Perrow identified two factors which, when combined, increase the risk of a system failing catastrophically: tight coupling and interactive complexity. The ‘normal’ in normal accidents is a synonym for ‘inevitable’. Normal accidents in a particular system may be rare (‘it is normal for us to die, but we only do it once’) but the system’s design and configuration make it more likely such accidents will occur. Perrow identifies systems at risk of normal accidents as ‘high-risk systems’.

Interestingly, Perrow published his book two years before the 1986 Soviet nuclear disaster at Chernobyl, yet that catastrophe subsequently became the normal accident par excellence, providing students of industrial design with an easy shorthand for normal accident risk. Today, it is chilling to read Perrow’s description of a normal accident knowing what happened at Chernobyl a mere two years later.

We need two or more failures among components that interact in some unexpected way. No one dreamed that when X failed, Y would also be out of order and the two failures would interact so as to both start a fire and silence the fire alarm. Furthermore, no one can figure out the interaction at the time and thus know what to do. The problem is just something that never occurred to the designers... This interacting tendency is a characteristic of a system, not of a part or an operator; we will call it the “interactive complexity” of the system.

…But suppose the system is also “tightly coupled”, that is, processes happen very fast and can’t be turned off, the failed parts cannot be isolated from other parts... operator action or the safety system might make it worse, since for a time it is not known what the problem really is. 3

When the reactor crew at Chernobyl disabled the automatic shutdown mechanisms in preparation for a test, and a previously undiscovered flaw in the control rod design caused hot nuclear fuel to mix rapidly with reactor cooling water, producing a rapid increase in pressure within the reactor, this was Perrow’s nightmare.

Chernobyl isn’t the only example from the late Soviet Union where an interactively complex and tightly coupled system catastrophically malfunctioned, causing near-instant death and destruction. In the early morning hours of 1 September 1983, Korean Air Lines Flight 007 (hereafter KAL007) departed Anchorage for Seoul. At the start of the flight, the flight crew made a fateful error: instead of selecting the Inertial Navigation System, which would have steered the plane along its planned route, they left the autopilot set to a constant magnetic heading. The error may have been as simple as failing to twist a knob one position further to the right. KAL007 drifted off course, unnoticed by the flight crew or any civilian air traffic controllers, eventually entering Soviet airspace near Kamchatka.


To cite this article:

Documentary-note: Shane Halton, ‘Westmoreland’s dream and Perrow’s nightmare: two perspectives on the future of military command and control,’ Australian Journal of Defence and Strategic Studies, 2020, 2(2):259-268. https://www.defence.gov.au/ADC/Publications/AJDSS/volume2-number2/two-perspectives-on-future-military-c2.asp

Author-date (Harvard): Halton, S., 2020. ‘Westmoreland’s dream and Perrow’s nightmare: two perspectives on the future of military command and control’, Australian Journal of Defence and Strategic Studies, [online] 2(2), 259-268. Available at: <https://www.defence.gov.au/ADC/Publications/AJDSS/volume2-number2/two-perspectives-on-future-military-c2.asp>


1 Randolph Nikutta, ‘Artificial Intelligence and the Automated Tactical Battlefield’ in Allan M. Din (ed), Arms and Artificial Intelligence: Weapons and Arms Control Applications of Advanced Computing, Oxford University Press, Oxford, 1987, p 101.


2 Cheryl Pellerin, ‘Project Maven Industry Day Pursues Artificial Intelligence for DoD Challenges’, US Department of Defense, last modified 27 Oct. 2017. https://www.defense.gov/Explore/News/Article/Article/1356172/project-maven-industry-day-pursues-artificial-intelligence-for-dod-challenges


3 Charles Perrow, Normal Accidents: Living with High-Risk Technologies - Updated Edition, Princeton University Press, Princeton, 2011, pp 4-5.