This article considers the pre-history and history of EG&G, Inc., a key contractor in America’s nuclear weapons programme in the Cold War. EG&G was cofounded by M.I.T.’s Harold Edgerton, Kenneth J. Germeshausen, and Herbert E. Grier after World War II in order to serve the nuclear weapons timing and firing needs of the U.S. Department of Defense and Atomic Energy Commission. The three men began their collaboration at M.I.T. in the 1930s with work on high-speed ‘stroboscopic’ flash photography; their partnership became focused on nuclear weapon timing and firing in 1945–50, and eventually re-focused on high-speed photography in the 1950s. Instead of emphasizing, as others have, the reproduction and circulation of photographic images of nuclear detonations, this article examines how the convergence of photographic and ballistic regimes was constructed around what we call the ‘deep media’ of timing, firing, and exposing.
Though many aspects of America’s atomic and nuclear research programme remain secret, and the archives of core defence contractors such as Edgerton, Germeshausen, and Grier, Inc. (EG&G) largely inaccessible or nonexistent, EG&G’s role in Cold War photography is iconic.1 Thanks in part to the prominence given Harold Edgerton’s inventions in the legacy of the Massachusetts Institute of Technology, and the commemorative work of such organizations as the Nevada Test Site Historical Foundation in Las Vegas, or the Atomic Heritage Foundation in Washington, D.C., we now know that EG&G — the private corporation formed after World War II out of the collaboration of M.I.T.’s Harold Edgerton, Kenneth Germeshausen, and Herbert Grier — developed a series of high-speed cameras based on Edgerton’s novel ‘flash’ technology that would be used to take photographs of nuclear fireballs in many of the United States’ nuclear tests during the Cold War. Indeed, in EG&G’s fireball photography, readers of the popular press during the nuclear testing era saw some of the most iconic images of nuclear detonations outside of the famous ‘mushroom cloud’ photographs (Figure 1). These uncanny images of vaguely organic or cellular white forms against a black background, captured in the first milliseconds of an explosion before a ‘mushroom cloud’ had yet formed, have circulated over the years in the pages of Life or the New York Times, or hung as memorials to scientific prowess in such institutions as the Smithsonian Museum in Washington, D.C.2

Most scholarship on nuclear test photography has focused on the mushroom cloud rather than the fireball, and on the political and cultural effects of nuclear image reproduction and iconicity, rather than on the technical processes that made proliferation across popular culture possible. In works by Boyer (1985), Hales (1991), Titus (1986), Rosenthal (1991), or Hariman and Lucaites (2012), for example, we read of how mushroom clouds move from the pages of Life magazine across multiple venues and artefacts to achieve a variety of functions far removed from that of the creation of historical records. Others such as Lippit (2005) have moved to theorize not only the circulation of such images but the changes they afford vision itself through new metaphors and conceptions of light and shadow. But there is little to no work that examines mushroom cloud or fireball photography in the way Galison (1997) examines the emergence of photography and x-rays as an integral aspect of early atomic research, making possible not only future science but whole new approaches to objectivity, image, and labour. We show in this article that the meanings and the histories of these images as icons cannot be separated from the development of the bomb technologies themselves. They share a common institutional and indeed technical history that brings us to see atomic photography and its images as artefacts of what we call the ‘deep media’ of timing, firing, and exposing.

Our aim in this article is to make this argument by telling the story of Edgerton’s flash photography work and that of his company, EG&G, which grew into one of the leading U.S. defence contractors of the Cold War era. Our claims are threefold, with each claim following the others in the progression of this article. First, we argue that EG&G’s images of nuclear fireballs cannot be said to grow out of the history of photography in a relatively restricted technological sense, let alone out of a political economy limited to war photography.
Rather they grew out of the political economy of ‘deep media’ — in this case deep media of timing, firing, and exposing. We understand deep media as the fundamental technical modalities that function ‘deep’ within technical artefacts, those that mediate or ‘go on between’ artefacts and the physical elements on which they rely. As we will show, flash photography and bombs alike depend on the deep media of timing, firing, and exposing.

Second, we show that the images that were produced and circulated of nuclear fireballs during the Cold War were embedded within a technological regime of cameras and ballistics. That is to say, the business of Cold War nuclear photography was inextricably tied to the business of nuclear weapons development, not by some grand conspiracy of the military-industrial complex but rather in a more mundane technological manner. Nuclear weapons testing needed, for diagnostic purposes, nuclear weapons photographs, and the cameras that took pictures of nuclear detonations were themselves being ‘field tested’ in nuclear tests. More strikingly, the cameras and the bombs were integrated in the same automated network of timing and firing. The images of fireballs taken by EG&G cameras during the Cold War were therefore ‘test’ photos in two senses: they were meant to ‘prove’ both the bombs and the cameras, and they were technologically integrated into the nuclear testing apparatus.

Our third claim follows from the first two: both physical (in the sense of ‘physics’) and representational (in the sense of ‘aesthetics’) principles were integral to the business of nuclear testing photography. As the camera in general can be understood in both technological and representational terms, the representations of nuclear fireballs so integral to the Cold War cannot be removed from a technological history without neglecting crucial aspects of their materialization and meaning, especially if we are interested in situating them within a political economy. How did, for example, a fireball image captured as a means of measuring bomb yield, and generated by a camera wired into the very firing mechanism of the bomb, end up as ‘Picture of the Week’ in Life magazine (Atomic Explosion Stopped at Millionth of a Second, 1953)? The business of Cold War nuclear imagery entailed the exploitation of the principles of both physics and aesthetics.

To establish these claims, we consider the story of EG&G’s path from the research lab to reconnaissance photography to the business of timing and firing nuclear weapons at the end of World War II and into the Cold War. Our approach in the pages that follow is deliberately narrative in nature, for we are tracing a history, not a process, let alone a theory. Approaching deep media means tracing technical processes as phenomena that move in, through, and among time, place, and actants.3

3 ‘Actants’ is the generic term that Bruno Latour uses to refer to human and non-human actors within a network, including machines (see Latour, 2005: 54–55, 199).

Stages of vision

Synchronous motors are motors that rotate ‘in time’ with the power supply. Factories depend on synchronous motors: synchronous motors convert electrical power into mechanical power without ‘slip’, thus maximizing efficiency. Moreover, no matter what the torque on the machine, or how the torque might vary, synchronous motors will maintain a constant speed: they turn in keeping with the supply current, irrespective of torque. Finally, synchronous motors allow for varying of motor speed through simple adjustments to the frequency of the power supply.
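A standard relation, not spelled out in the article, makes this dependence of speed on supply frequency concrete: a synchronous machine with p magnetic poles driven from a supply of frequency f (in hertz) turns at

\[ n_s = \frac{120\, f}{p} \ \text{revolutions per minute}. \]

A four-pole motor on a 60 Hz supply thus runs at 1800 rpm regardless of load torque, and raising the supply frequency to 72 Hz raises the speed to 2160 rpm; the figures are our illustration, not drawn from Edgerton’s own work.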
Here we have a near-perfect factory machine: a belt moves on wheels that are propelled by a synchronous motor. On the belt are constructed consumer goods, part by part and in sequence, by workers standing in formation along the belt. Each worker repeats the same assembly operation as the belt passes by. To increase production, the factory manager would need merely to make the belt move faster. The synchronous motor allows him to do this with ease: by manipulating a lever, he is able to increase the frequency of the power supply, and thus speed up the belt.

But factories are not the only operations that must run ‘in time’. Analogue electric clocks depend on synchronous motors, essentially so, as they cannot afford to slip without compromising their essence. So, too, analogue timers, tape recorders, film projectors, and certain precision servomechanisms like tape drives for computers and antiaircraft guns.

In 1926, a very young Harold Edgerton arrived at M.I.T. from Schenectady, New York, where he had been working for General Electric. He came to study motors, synchronous motors, the kinds of motors General Electric was building for industrial applications (Edgerton-Digital-Collections, 2015a). The perfect factory machine had one acute vulnerability: a sudden adjustment to the frequency of the power supply — for example, as the result of a surge caused by a lightning strike, or a drop caused by sabotage — could cause a synchronous motor to destabilize, even to oscillate wildly, resulting in what engineers call ‘pole slipping’. Or, on the other hand, the motor could destabilize only momentarily — perhaps even for but a fraction of a second — before re-stabilizing. In any case, such sudden jumps or drops to the power supply were a significant problem in the development of synchronous motors, simply because they meant that the motors could be violently thrown out of step. Just imagine the effects such a sudden change could have on the projection of a motion-picture film, or the precision of an electric gun turret.

What exactly happened to the motor in such a destabilizing event? This is what Edgerton wanted to find out as a graduate student at M.I.T. The problem he faced, however, was that he simply could not see what was happening: the motors moved too fast (Killian, 1979: 3). And so we come to the first major moment of vision in our story:

In this study he was trying to determine with accuracy the transient changes in the angular displacement of the rotor of a synchronous electric motor. These displacements in the speeding rotor could not be seen by the unaided eye. One remedy tried during this experiment was the use of mercury arc rectifiers to supply very fast reinforcement to a powerless generator. During the course of the experiment, Edgerton suddenly became aware of being able to see the rotating poles of the machine oscillate about a mid-position following a simulated system disturbance. The mercury arc rectifiers happened to be placed close enough to the generator for their flashes to illuminate its rotor stroboscopically (Killian, 1979: 3).

The principle underlying the phenomenon, as Edgerton’s colleague James R.
Killian explains, was the persistence of vision, or retinal lag, the way in which we see ‘an object, line or light for about one-tenth of a second or less after it has moved away’ (Killian, 1979: 8). The flash created by the mercury arc rectifiers, but a microsecond in duration, functioned as a shutter exposing the retina to an image. If the instantaneous flash could be rapidly repeated and synchronized with the rate of the motor’s rotation, then an identical image could be exposed to the retina repeatedly, creating the illusion of ‘stop motion’ (Figure 2). If, in turn, as Edgerton explained in a 1931 paper in Electrical Engineering, a ‘special camera’ could be made, one that required ‘no shutter and no mechanical framing movement’, but rather — as with the retina, but lacking its lag or slippage — depended on the flash of light for exposing, and if film ‘could be run through the camera at a constant speed’, with the help of a synchronous motor, then we could produce ‘strobograms’ or pictorial records of the ‘operation of synchronous machines’ (Edgerton, 1931: 327, 328–29). As Killian notes, Edgerton was hardly the first to exploit retinal lag. Such was the power of cinema: the interval between the frames in motion pictures is ‘filled in’, so to speak, by the retinal impression. Nor was Edgerton the first to exploit retinal lag to create the illusion of stopping motion. Edgerton’s stroboscope can be seen as a later iteration of the devices created by Joseph Plateau of Ghent and Simon von Stampfer of Vienna in the middle of the nineteenth century (Killian, 1979: 8).4 What Killian fails to note, however, is that Plateau’s phenakistoscope, Stampfer’s stroboscopic disk, and Edgerton’s stroboscope each depend not only on the physiological principle of retinal lag, or persistence of vision, but on specific representational codes, for without the repetition of an identically appearing image (e.g. spokes or poles) motion could not appear to ‘stand still’. Each of these visual technologies thus depended on interworking physiological principles and visual techniques that together anticipated both physiological and cognitive processes in the viewer. Timing was also a matter of staging.5 In 1931, the same year the paper on stroboscopic moving pictures appeared in Electrical Engineering, Edgerton, having won a faculty position at M.I.T., partnered with one of his graduate students, Kenneth J. Germeshausen, to apply Germeshausen’s version of a xenon flash tube — which emitted brighter light than standard flash technologies at the time — to stroboscopic applications. In 1934 another of Edgerton’s graduate students, Herbert E. Grier, joined the experimental work on stroboscopic applications. Meanwhile, Edgerton licensed to General Radio Company the Strobotac, a portable strobe device with an adjustable pulse rate that allowed ‘stop motion’ effects to be taken into the factories themselves. Suddenly, as one account states, ‘operators of all sorts of machinery … saw faults that were obvious only when the machine was working’ (How Fast is Too Fast?, 1994; Zavattaro, 2007: 2). Edgerton himself began shooting action pictures of dancers, cowboys, hockey players, and circus performers. He displayed his work in periodicals and exhibitions nationally. In 1938, his work gained mass attention as he collaborated with M.I.T. alumnus Gjon Mili to shoot tennis player Bobby Riggs, stopped repeatedly, in the act of a serve (Figure 3) (Speed Camera Shows Tennis Form of Key Player Bobby Riggs, 1938). 
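The stroboscopic ‘stop motion’ described above can be put as a simple relation; the gloss and the numbers are ours, not the article’s. For a pattern that repeats once per revolution, a rotor turning at frequency f_r and lit by flashes repeating at a nearby frequency f_s appears to turn at the difference frequency

\[ f_{\mathrm{apparent}} = f_r - f_s , \]

so an exact match (f_s = f_r) freezes the image, while a slight mismatch makes the motion crawl slowly enough for the eye, or a strobogram, to follow. At the 60 flashes per second of the Riggs photograph, a serve lasting roughly half a second (our assumption) would register on the order of thirty superposed exposures on a single open-shutter frame.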
Soon Mili and Edgerton would be shooting Broadway dancers to promote the arts, and a man sneezing to promote covering one’s mouth. In 1938 as well, Edgerton teamed up with M.I.T.’s Killian to publish a popular book, Flash: Seeing the Unseen with Ultra-High Speed Photography, filled with flashes of black-and-white bodies caught in motion (Edgerton & Killian, 1939). Flash, the book, promoted not only Edgerton’s high-speed photography techniques; it introduced many readers to the idea of flash photography itself. Indeed, by 1940 Edgerton, Germeshausen, and Grier produced the first portable flash units for news photographers, marketed by Eastman Kodak (Zavattaro, 2007: 2).6

Figure 3. A tennis player photographed at 60 flashes/s. ©2010 MIT. Courtesy of MIT Museum.

6 All this stroboscopic publicity caught the eye of Metro Goldwyn Mayer’s George Sidney, who recruited Edgerton, Germeshausen, and Grier into making a short film, Quicker’n a Wink, featuring the wonders of high-speed photography accomplished with flashes of light rather than mechanical shutters. The film won an Oscar in 1940.

Edgerton, Germeshausen, and Grier’s flash photography, like all electronic devices, depended on the storage of energy. But, unlike many electronic devices, it also depended on the limits of storage capacity. Electricity would flow into a capacitor, a storage mechanism, at a steady rate. When the capacitor reached its limit, the electricity would ‘overflow’. The source of Edgerton, Germeshausen, and Grier’s ‘flash’ was this instance of overflow; the key to the flash system was to control the timing of the overflow. As Killian writes,

Electricity flows into a kind of electrical reservoir known as a condenser, and when the reservoir is full, it overflows at the desired instant to produce a brilliant flash inside the lamp. Electrical controls make it possible to govern very accurately the time between flashes and the exact moment of the flashes (Killian, 1979: 2).

Edgerton’s electronic flash photography system thus made two processes, firing and exposing, subordinate to a third, that of timing. Previous photographic systems had relied on human operators to initiate separate, roughly synchronous processes of film exposing and combustion via chemical flash (or perhaps, as in the work of Etienne Jules Marey, employed motors to open camera shutters at regular intervals before an unwavering light source). Edgerton and his collaborators devised a unique new way of automating, modulating, and controlling electrical pulses. When paired with a recording medium such as film, this new engine of periodicity created a new mode of sensing and representation.

Edgerton and Killian stressed in their book Flash that electronic flash photography was far more efficient than the combustible photoflash lamps. After covering overhead costs, Edgerton’s system could fire very bright flashes repeatedly at little cost per flash, and without having to change bulbs after every use. Moreover, unlike combustion-based flash lamps, which were limited in brightness by the danger they posed as chemical explosions, electronic flash systems could create great brightness without such chemical risks. With greater illumination and regular timing, the process of photographic production could literally speed up. The system was thus a uniquely regulable, tightly controlled, and repeatable generator of periodic energy and images, even images at night, and from great heights.
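A minimal sketch may make this ‘overflow’ logic concrete. It assumes, purely for illustration, a condenser charged at a constant current and discharged into the lamp whenever its voltage reaches a set trigger threshold; the function name and every figure below are our own and do not describe EG&G’s actual circuits.

```python
# Illustrative model only (not EG&G's circuit): a capacitor charges at a
# constant current and 'overflows' into the flash lamp at a trigger voltage.

def flash_schedule(capacitance_f, charge_current_a, trigger_v, duration_s):
    """Return flash times (s) over the duration, and the energy per flash (J)."""
    interval = capacitance_f * trigger_v / charge_current_a  # t = C * V / I
    energy_per_flash = 0.5 * capacitance_f * trigger_v ** 2  # E = C * V^2 / 2
    times = []
    t = interval
    while t <= duration_s:
        times.append(round(t, 6))
        t += interval
    return times, energy_per_flash

# Example: a 100 microfarad condenser charged at 0.5 A to a 2000 V threshold
# fires every 0.4 s and releases 200 J per flash; raising the charging current
# or lowering the threshold shortens the interval between flashes.
print(flash_schedule(100e-6, 0.5, 2000.0, 2.0))
```

Timing, in this toy model as in Edgerton’s system, is governed by how fast the reservoir refills and at what point it is allowed to overflow.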
Timing, firing, and exposing

In 1860, James Wallace Black hopped into a basket strung to a balloon and ascended high into the sky above Boston, where he took a picture of the city by day (Cosgrove & Fox, 2010: 7). In 1941, Harold Edgerton boarded a B-18 and was carried over Boston, where he took a picture of the city … at night (Edgerton-Digital-Collections, 2015b).

Edgerton’s path from the lab to the night sky went through the Army Air Force, where growing publicity around his stroboscopic photography had caught the attention of Major George W. Goddard, then chief photographic officer at Wright Field in Ohio. In the 1920s, Goddard had developed a technique by which to take aerial photographs at night by exploding huge powder bombs that, like fireworks, would momentarily illuminate the ground below, affording just enough time for a well-timed camera to shoot a reconnaissance picture. Upon hearing of Edgerton’s flash technologies, Goddard wondered if great flashes of light might be electronically generated, and indeed synchronously timed with the rapid exposing of film (Goddard, 1960: 244). Major Goddard asked Edgerton to develop a strobe lamp so powerful that it could illuminate the ground at night from a height of 1 mile (1.6 km).

The technical problems, however, were threefold. First, light of such intensity would require a very large amount of energy, and in Edgerton’s system, the weight of a capacitor increased in direct proportion to the increase in energy stored (Edgerton & Killian, 1939: 190). Therefore, Goddard’s plan, if it could be executed, would require large airplanes capable of transporting heavy capacitors and crates of batteries. Second, such intense energy could easily blow the flash tubes, thus leaving the Army Air Force back in a situation equivalent to having to change bulbs after every flash. Third, even if the transport and flash tube problems could be worked out, the planes would be flying at night, making it difficult to know exactly when to shoot the target site. Again, it was a matter of timing.

Though Edgerton’s system was used in the China-Burma campaign in World War II with some success, it was not until late in 1943 that the last of these problems — knowing when to shoot the target — was worked out. The solution was the incorporation of M.I.T.’s other great World War II technology, radar. During the war Edgerton’s partner, Germeshausen, had been working on radar in M.I.T.’s famous Radiation Laboratory, while Edgerton — who had managed to get himself appointed an army officer of undesignated rank — worked with pilots and crews overseas on aerial flash photography (Figure 4), and Grier oversaw the production of Edgerton’s aerial reconnaissance units at the Raytheon factory in Boston (Ghaffari et al., 2000: 38). Germeshausen seemed to be going a separate way from his colleagues; in fact, as things turned out, once radar was incorporated into Edgerton’s nighttime aerial photography, the three men found themselves again working on the same synchronous system.
In the middle of the night on June 5, 1944, a large military aircraft outfitted with Edgerton’s device took off for the shores of Normandy. Expert eyes pored over the nighttime images, looking for signs of movement or mobilization. The pictures revealed German forces entirely unprepared for the coming onslaught. Indeed, for the German troops resting at ease below, the brilliant night flashes might have been their first awakening into the horrors of the day to come.

Meanwhile, as Eisenhower was invading Normandy, General Leslie Groves, Robert Oppenheimer, and the scientists and engineers working on the secret Manhattan Project were working on another sort of flash technology, atomic weapons. That the two flash technologies would converge in the final days of World War II is the most remarkable, and yet seemingly least known, fact of Edgerton, Germeshausen, and Grier’s partnership.

At Los Alamos, New Mexico, base of the Manhattan Engineering District and primary location for the development of the first atomic bomb, timing and firing were systemic challenges. For all the months and years that it would take to create an atomic explosion, the explosion, if it were to come to fruition, would happen in but microseconds, and with such light and heat that — like the synchronous motor in Edgerton’s lab — unaided human observation was impossible. Photography — specifically high-speed photography — was the means by which the fireball would be both scientifically and ceremonially witnessed. But while Edgerton’s work informed the approach of the Manhattan Project’s Photographic and Optics Group as they devised a suitable photographic system for the Trinity test, the array of cameras at the test did not include a stroboscopic flash.7 That flash, rather, would come in the bomb.

Figure 4. Nighttime photograph of road in Europe taken with Edgerton’s flash unit in World War II. ©2010 MIT. Courtesy of MIT Museum.

7 The Manhattan Project’s Photographic and Optics Group worked for 8 months prior to Trinity to plan and implement a photographic documentation system for the test. Based in part on Edgerton’s research, they knew that motion picture photography would be key to capturing the microsecond increments necessary for scientific study of the test. Berlyn Brixner, who was in charge of the motion picture photography programme, even procured an Edgerton system for use in other ballistics-related work at Los Alamos. But Brixner relied on Fastex cameras, rather than Edgerton’s more advanced ones, to get the work done (see Brixner, 2013; Hoddeson et al., 1993: 353–55).

One of the more difficult problems faced by the Los Alamos scientists in building the first atomic bombs was that of compression. It was one thing to produce a controlled chain reaction with uranium or plutonium — Fermi had done so with his so-called ‘atomic pile’ at the University of Chicago, which produced plenty of heat, but no explosion. It was quite another thing for the chain reaction to instantaneously produce a tremendous amount of energy — a destructive rush of force, light, and heat that registered to distant observers as a flash. The key was compression: throwing the uranium or plutonium together, so to speak, to produce the ‘critical mass’ necessary for an explosion.
Los Alamos engineers devised two means by which to do this.8 The first and most straightforward was done with uranium through the so-called ‘gun-assembled’ bomb, the type dropped on Hiroshima. Here two slugs of uranium were placed at opposite ends of a barrel and fired into each other, so as to compress them and produce the necessary critical mass for a nuclear fireball.

8 The following discussion of means of compression is drawn from Younger (2009: 22–24).

The second means of compression, and the far more complicated one, was designed for a plutonium bomb. Plutonium could not be placed in a ‘gun barrel’ in the way the uranium could, for plutonium produced a good deal of stray radiation, so that a chain reaction could begin prematurely between two plutonium slugs, resulting in a fizzle (Younger, 2009: 23–24). The device tested in Trinity produced compression instead by lodging a ball of plutonium within a larger ball of explosives — thus it was called the ‘Fat Man’ design. This larger ball of explosives was triggered by means of what came to be known as an ‘exploding-bridgewire detonator’ (Cooper, 1996: 353–67; see also Rhodes, 1986: 654–55; Alvarez, 1987: 132–36). The detonator worked by means of a sudden infusion of high-voltage electricity through numerous wires, all timed to fire simultaneously. This required specialized firing-and-timing systems. Moreover, the Trinity ‘gadget’, which was not a workable bomb, could rely on power supplied from large, heavy, and remote capacitors where size and weight were no concern, as the current was supplied over the equivalent of power lines (Rhodes, 1986: 665). A portable version of the Fat Man device — that is, the workable bomb used over Nagasaki — would need to carry its own capacitors tied to a precisely timed firing unit.

Where could such firing units be made? At the Raytheon factory in Boston, where Grier was building Edgerton’s aerial flash photography units. In the fall of 1944, the Manhattan Project turned to Raytheon to develop firing units for early implosion device prototypes (Hoddeson et al., 1993: 304). The reason was simple: Raytheon was already in the wartime business of making significant quantities of specialized firing units and their capacitors, and had a reputation for efficient manufacturing (Hoddeson et al., 1993: 304; Zavattaro, 2007: 3). Herbert Grier was flown from Boston to Los Alamos and briefed on the task (Breger, 1976). He approached the work as but another application of the firing sets he had designed and manufactured for the nighttime aerial flash units (Breger, 1976; Grier to Murphy, 2006). Meanwhile, the Raytheon factory itself was converted from a unit devoted to aerial flash production to a unit devoted to building an atomic bomb firing-and-timing mechanism. The transformation was so subtle and seamless that the factory workers and their supervisors never knew such a radical change had taken place at all — they were simply told they were making an ‘advanced version of equipment for night aerial photography’ (O’Keefe, 1983: 80, 125). The final Grier-designed atomic bomb firing and timing device was tested on August 8, 1945 on Tinian Island (Hoddeson et al., 1993: 390). The next day it was used to detonate the Fat Man bomb over Nagasaki.9

9 The last hands to touch the bomb before it was sealed in its shell for delivery in Japan were those of the Manhattan Project’s Bernard O’Keefe, who after the war would go to work for Edgerton, Germeshausen, and Grier and eventually become the fourth partner in EG&G (O’Keefe, 1983: 98–101).

Deep media and atomic flashes

What does this history, so far, suggest?
From the stroboscopic illumination of factory motors through the aerial flashing of enemy troops to the ignition of an atomic bomb, the ‘deep media’ of timing, firing, and exposing had rendered the technologies of cameras and bombs virtually interchangeable.

‘Deep media’, as we are using the notion, refers to underlying technical modalities that mediate or ‘go on between’ an artefact and its constituent physical, chemical, and/or biological elements.10 While deep media themselves are typically ‘artefacts’ inasmuch as they are humanly made (Mitcham, 1994: 172), they are realized only in other techniques and technologies: e.g. timing is realized only in techniques and technologies that are timed, firing only in artefacts that are fired, and so on. Conceptually, deep media have the form of gerunds, or verb-things. They are workings more than works. Still, because deep media go between artefacts and elements, they have a transitive quality: they never stand alone, and they can be applied to multiple objects. They thus comprise general technical modalities by which physical processes or elements work and are ‘concretized’ or realized in particular technical objects (Simondon, [1958] 1980). Deep media include timing, firing, and exposing, but also such modalities as absorbing, arranging, and searching. A shutter mechanism may be unique to certain cameras, and a particular chemical reaction unique to certain bombs, but in the case of nuclear and atomic testing, the same ‘deep’ processes animated and drove both cameras and bombs and were, to adapt Simondon’s ([1958] 1980) term, ‘concretized’ in these apparently divergent artefacts. More technically specific than the ‘principles’ of physics, but less specific or concrete than artefacts like a transistor or a light-sensitive diode, such deep processes as Edgerton’s approaches to timing and firing could find application in numerous techniques and technologies. Throughout the history of engineering, deep media have not only been the subject of inquiry and manipulation (how to time? how to fire? how to order?) but also of fascination, experimentation, innovation, and commercialization. By drawing our attention to fundamental technical processes, deep media, as we see with EG&G, help account for the way innovation moves across artefacts by means of the appropriation and application of technical processes.

10 We want to thank Chad Wellmon for helping us think through the concept of ‘deep media’.

Our sense of deep media draws on Peters’ (2015) related notion of ‘elemental media’. The distinction between ‘deep media’ and ‘elemental media’ is more a matter of focus than substance. Whereas Peters is concerned with pushing the boundaries of the definition of ‘technics’ and ‘media’ by presenting them as broadly as possible as the ‘vessels and environments’ (p. 2) of being,
we are interested in pushing the boundaries of the political economy of media and technological innovation beyond questions of ownership, design, regulation, and distribution of artefacts and what might be called ‘surface media’ (messages, content, information, data) to the more basic technical modalities that underlie the histories of technics. This puts us in proximity to engineering as a practice and epistemology (Mitcham, 1994): the sense of urgency we bring to the study of deep media is, like engineering, focused on the way technics and technical innovation go about working, and for whom they are working. Moreover, our approach to deep media challenges the basic concept of the mere instrumental ‘application’ of technologies that drives discourse about technological innovation by drawing attention to the way engineering always works with a finite set of technical modalities — timing, firing, and exposing in our case study here — to bring them to concretizations in discrete technologies (Simondon, [1958] 1980) in networks (Latour, 2005).11

11 Simondon’s sense of concretization is slightly different from Latour’s sense of ‘assemblage’ (Latour, 2005) in that the former stresses the ‘autonomy’ of the technical object (Simondon [1958] 1980, 56) whereas the latter tends to stress mosaic-like heterogeneous configurations of which objects are a part.

Thus while over Normandy a camera occupied the bay of a bomber, over Nagasaki a plutonium core occupied the place in a circuit normally reserved for a camera flash. Both Normandy and Nagasaki entailed systems of timing, firing, and exposing. The horrible atomic flash would leave permanent shadows inscribed upon Nagasaki’s architecture (Figure 5). And though there were few cameras rolling at Hiroshima and Nagasaki, victims received the invisible light of the bomb as radiation, and registered its location and distance in surface flash burns.12 Like the city’s architecture, the victims in these cities became the film for the flash of the bomb. Blistered bodies functioned like the film badge dosimeters of nuclear workers, registering particular limits of harmful exposure. Geiger counters held to apparently undamaged skin also revealed less visible exposure.13 ‘Shadowed’ bodies registered before-and-after images of flash exposure through removal of hats or shirts for photographers, as in sunburn.
Disrobed torsos of survivors revealed the patterns of various fabrics burned into skin, suggesting better or worse protection in kinds of clothing. And in some cases, victims left shadows of their bodies burned into streets, steps, and bridges where they stood or sat at zero hour.14 If the Fat Man bomb over Nagasaki depended on timing and firing for its execution, its effects would be measured against the capacities of flesh, bone, and brick for exposure to light.

Figure 5. Shadow of valve on gas tank at Hiroshima (Strategic Bombing Survey).

12 Was Nagasaki a nuclear test, the beginning of EG&G’s long history in nuclear testing? Inventories of global nuclear tests vary on the inclusion of Hiroshima and Nagasaki in America’s nuclear tests. A recurring list of tests published in The Bulletin of Atomic Scientists included them for a number of decades, until 1994; and the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty counts them as tests to this day. Not surprisingly, the official United States record has not included them among America’s nuclear tests (United States Department of Energy, 2000). However, from a logistical and organizational point of view, the Nagasaki bomb did take place in the context of testing, as part of ‘Project Alberta,’ the last chapter of the Manhattan Project. And throughout the nuclear testing era of the Cold War the cities of Hiroshima and Nagasaki served as test cases for the least understood aspect of the bomb – that of radiation, and its effects. When Stafford Warren, the chief medical officer of the Manhattan Project, reached Hiroshima on September 8 to assess the radiological effects of the atomic bomb’s detonation a month prior, he found a group of Japanese scientists already hard at work on a systematic study. The leader of the Japanese group, surgeon Masao Tsuziki, had studied in the United States, where he had published a thesis on the medical effects of ionizing radiation. One doctor on Warren’s team reported handing back the document to Tsuziki after reading it in Hiroshima, at which time the Japanese doctor quipped ‘Ah, but the Americans – they are wonderful. It has remained for them to conduct the human experiment’ (Hacker, 1987: 114).

13 At the Pacific island of Bikini, the site of later nuclear tests, scientists returning to study radiation’s effects would place dead fish on special photographic paper in a light-tight box, developing the paper later through a kind of ‘contact print’ process to see whether radiation from the creatures had registered an image, indicating the persistent presence of radiation.

14 November of 1946 saw a presidential directive from Truman mandating a study of ‘the medical and biological effects of radiation’ on the 14,000 Japanese known to have been exposed to radiation, as well as others ‘yet to be identified’. The directive instructed that the study would ‘continue for a span of time as yet undeterminable’. To this day, the section of Glasstone’s (Glasstone and Dolan, 1977) definitive textbook The Effects of Nuclear Weapons relies on details of Hiroshima and Nagasaki for its description of ‘Biological Effects’, shaping much Civil Defence policy and even subsequent American nuclear tests. As recently as 1998, the National Academy of Sciences described the resulting studies as providing ‘the primary basis for radiation health standards’. Hiroshima and Nagasaki were thus among the most scientifically productive nuclear detonations (Putnam, 1998).

A new synchronous image machine

After the war, Norris Bradbury, now directing Los Alamos Scientific Lab in Oppenheimer’s place, came to M.I.T. to ask Edgerton, Germeshausen, and Grier to work on building a firing set for a full-scale nuclear weapons test scheduled to take place in the South Pacific in 1948 — what would be known as Operation Sandstone. Sandstone would be the second of the postwar nuclear tests. Unlike its predecessor, Crossroads, Sandstone would focus on new weapon designs, rather than strictly measure the weapon effects of already existing designs. As a consequence, the test would be run by the Atomic Energy Commission, rather than the Department of Defense.
When Bradbury approached Edgerton, Germeshausen, and Grier about working on Sandstone, he asked not only for a firing set, but for the group to apply their expertise in fast pulse measuring to neutron measurement and, most significantly, to develop a network of timing signals to coordinate all the instrumentation for the test (O’Keefe, 1983: 126–27; Zavattaro, 2007: 4). In short, in asking Edgerton, Germeshausen, and Grier to time, fire, and help measure the test, Bradbury was asking them to build and run a massive synchronous firing and sensing machine. And indeed, when the countdown was sounded and the buttons pushed, it was an Edgerton, Germeshausen, and Grier man acting as the ‘voice’ of the countdown and Edgerton, Germeshausen, and Grier men doing the button pushing.

Edgerton, Germeshausen, and Grier went to the research director at M.I.T. with Bradbury’s request, looking for M.I.T.’s approval. However, they encountered resistance in President James Killian’s desire to divest the institution of its large quantity of government projects. M.I.T. suggested that the three men form a corporation and do the work independently (Zavattaro, 2007: 4). And so began the business of Edgerton, Germeshausen, and Grier, Inc., which would serve as a major government contractor in virtually every American nuclear test from 1948 on (Zavattaro, 2007: 5). By 1952, some 4 years after Sandstone, the company’s revenues totalled $2.5 million (about $18 million today), with profits around $25,000 (or $180,000 today) (Zavattaro, 2007: 10). In 1960, Edgerton, Germeshausen, and Grier, Inc. went public as EG&G, selling 100,000 shares of common stock (Zavattaro, 2007: 16). A couple of years later, their sales reached $38 million, or over a quarter billion in real dollars (Zavattaro, 2007: 19). By 1989, EG&G was the fourth largest Department of Energy contractor, with offices nationwide doing work ranging from nuclear terrorist detection, to managing Department of Energy facilities in Ohio and Colorado, to helping build a super-collider, to creating seismic maps of ocean bottoms, to searching, with Jacques Cousteau, for the Loch Ness monster (Zavattaro, 2007).

The work of EG&G at Sandstone represented not only the engineers’ first operation as an incorporated business, but their first opportunity to incorporate two autonomous systems from previous tests — that of detonation and of documentation — into a single synchronous machine. Whereas Trinity’s photographic documentation systems operated independently of the test and the detonations in Japan left the cities themselves to serve as a kind of photographic record, the new Sandstone test would apply EG&G’s proven timing system to drive both the detonation and the documentation components. Though at Sandstone EG&G was not explicitly charged with photographic work, cameras were incorporated as one of several different types of sensors built into the larger timing and firing mechanism they constructed. To be sure, the EG&G cameras would be introduced to later tests, but then in the context of EG&G’s larger contribution — that of an expansive, reconfigurable, and reproducible deep media system for timing, firing, and exposing. Indeed, the images that EG&G cameras would produce of nuclear fireballs were embedded within the technological regime of ‘proving and testing’, both of ballistics and cameras.15

EG&G’s more explicit photographic work in nuclear testing began in 1951 at the Ranger tests in Nevada.
There EG&G was asked by the Atomic Energy Commission to do high-speed photography of the sort Edgerton had perfected in the 1930s and 40s so as to aid in measuring the yield of new weapon designs. A few months later, Edgerton would bring the Rapatronic (rapid action electronic) cameras he had newly created with his graduate student, Charles Wyckoff, to the Greenhouse tests in the South Pacific. A set of Rapatronic cameras, each capable of only a single exposure, would shoot in sequence images of fireballs at exposure times of 2–3 μs (millionths of a second) (Zavattaro, 2007: 18). By the end of the 1950s, EG&G was using high-speed cameras at nuclear tests like Operation Hardtack that were capable of shooting more than 18,000 frames/s (Zavattaro, 2007: 14).

But if the official purpose of Rapatronic and other EG&G cameras in nuclear tests was yield measurement, a function of the ‘proving and testing’ regime, an equally significant aspect of these synchronous image machines was rhetorical and representational. Edgerton’s most popular images, of bullets flying through apples and drops of milk flying through space, spoke of the laboratory not only through their hermetic staging, but because they offered ‘proof’, glimpses of the unseen to publics that purportedly needed to be convinced less of the mysteries of nature and more of the powers of science and engineering in the hands of the national security state. This rhetorical application of the new slow-motion photography found its way to the public not only through mass-reproduced fireball images, but through the propaganda films of the Federal Civil Defense Administration. In one striking example, the Atomic Energy Commission contracted EG&G to create specialized photographic documentation of domestic structures under nuclear siege during the Apple-II test of the 1955 Teapot series. The widely seen film Operation Cue (United States Federal Civil Defense Administration, 1955) features footage from EG&G cameras peering out of curtained windows past clothed mannequins, or watching from an isometric angle as familiar middle-class house structures disintegrate from the blast. Consistent with their work on the Pacific tests, EG&G approached the problem as a single timing and firing system, with the cameras and detonator wired together through a common underground circuit (O’Keefe, 1983).

15 Indeed, well before anyone could know of the imminent arrival of nuclear weaponry, Edgerton and Wyckoff were testing their latest high-speed cameras at the Aberdeen Proving Grounds in Maryland, where they took shots of 37 mm shells and aerial bombs. There it was apparent that the U.S. Army’s weapons engineers and contractors were in the business of timing, targeting, and firing, just as were Edgerton and his lab (see Bomb Burst: High-Speed Photographs Record Violent Details of Explosions, 1944).

Conclusion

If media, as McLuhan (1994) and others have argued, are ‘extensions’ of the human, deep media are extensions of natural processes into the world of artifice, technics, and engineering. No longer a world where fire just is, or where fire can be made, engineering as a practice, epistemology, institution, and economy has concerned itself with firing; no longer a world where time is, or where time can be measured, engineering turns its attention to timing; and no longer a world exposed, or where things can be exposed, engineering takes up the technics of exposing.
What does attention to deep media offer critics and scholars? It offers us ways of tracing technological innovation beneath the surface, precisely where engineering tends to work. It invites us to write media histories as engineering histories. And it helps us to see the political economy of media and technology as including, in addition to the control of messages and the regulation of ownership, the manipulation of technical modalities toward the management of both nature and culture.

To get a sense for what this might mean specifically for the intertwined endeavours of war, photography, business, and ideology, it is helpful to return to where Edgerton’s sensing work began — that is, with the phenakistoscope. Recall that the phenakistoscope depended not only on the physiological phenomenon of retinal lag, but also on the representational phenomenon of identity. Representational principles were built into this particular technology of vision. So too with the large-scale testing machines, which as ‘proving’ operations were inherently representational. It was not enough, it was never enough, to simply detonate a bomb. The detonation had to be measured, narrated, and indeed photographed in order to be deemed successful and powerful. In this way, the images of nuclear fireballs shot by EG&G cameras have been widely circulated in the press and in other public venues — first in the pages of the New York Times or Life, but more recently in such art books as Michael Light’s 100 Suns (2003), or even in art exhibitions such as the Hirshhorn Museum’s ‘Damage Control: Art and Destruction Since 1950’ or the Art Gallery of Ontario’s ‘Camera Atomica’.16 Such appearances of these images do not so much lift the representational regime from the technological one, as extend the technological regime of testing into other techniques and technologies, distinctive to each new setting: techniques of consciousness, of ideology, and indeed of profit-maximization; technologies of publication, distribution, and exhibition; and perhaps most significantly in the case of the popular press in an era of deterrence, techniques of political power, violence, and exploitation.

16 For more information about the Hirshhorn Museum’s ‘Damage Control’ see http://hirshhorn.si.edu/collection/damage-control/#collection=damage-control&detail=http%3A//hirshhorn.si.edu/bio/damage-control-art-and-destruction-since-1950/&title=Damage+Control%3A+Art+and+Destruction+Since+1950; for more information about the Art Gallery of Ontario’s ‘Camera Atomica’ see http://www.ago.net/camera-atomica. The latter has also released a publication, Camera Atomica (O’Brian, 2015).

By contrast, the image of the nuclear mushroom cloud, reproduced ad nauseam, barely retains such connections to its roots. Indeed, this iconic image has become so ubiquitous as to serve as an exemplar of ideology’s ‘naturalization’, in which a dominant discourse, which is also a discourse of domination, ‘appears to lose its connection with particular ideologies and interests and become the common-sense practice of the institution’ (Fairclough, 2001: 89). We know we live in a nuclearized world. We expect it to be so always. The mushroom cloud is an icon of the naturalization of the nuclear (see Hales, 1991; Rosenthal, 1991). Not so the nuclear fireball (Figure 5).
It appears otherworldly, unnatural, fantastic, in the way Edgerton’s stop-motion photos from the 1930s made dancers, cowboys, hockey players, and circus performers seem strange, whimsical, even unreal. What are we to make of this exoticization of nuclear power? What are we to make of its transfiguration into the ‘unreal’? The Cold War was a highly fluid and distinctly artificial geopolitical affair that managed to produce a remarkably resilient sense of a fixed, enduring, bipolar conflict. ‘At the start of the Cold War’, Joseph Masco writes, ‘the United States transformed an anticipated Soviet nuclear capability into the rationale for building a global technological system, which became the always-on-alert infrastructure of mutual assured destruction’ (Masco, 2014: 294–96). The imaginary, and to be sure the image, as Masco argues, was integral to this project (Masco, 2014: 332–33). In the familiar image of the mushroom cloud and the nuclear fireball many felt they were faced with a threat, a threat that in turn called for a counter-threat: thus the logic of mutual assured destruction, which indeed did appear as the ‘common-sense’ of the Cold War. But it was an artificial technological system that was offered as the way out of such mutual assured destruction. It was, we might say, engineering that would save us from our enemies, even from ourselves. In this context, EG&G’s fantastic images of nuclear flashes — so exotic compared to the ubiquitous mushroom clouds of the Cold War — reminded viewers of the highly artificial and technical quality of the nuclear state, and in a manner that would dazzle rather than fizzle. If artificial systems were to be the means of human salvation, artificial systems needed the rhetorical means by which to display their power as artifice. It was here, in the transfiguration of engineering into artifice, even art, that EG&G’s massive nuclear timing, firing, and exposing machine found an ideological termination point.
https://www.kevinhamilton.org/articles/EG&G_OGorman_Hamilton.pdf