
CIAO DATE: 4/00

Secrecy and Science Revisited: From Politics to Historical Practice and Back

Michael Aaron Dennis

Secrecy and Knowledge Production
Judith Reppy, Editor
Peace Studies Program, Cornell University
Occasional Paper #23
October 1999

If conventional understandings of science were accurate representations of our world, the conjunction of science and secrecy might serve as a powerful example of an oxymoron. Writing recently in Scientific American, Jeffrey Richelson, a student of secret government intelligence programs, explained that the major source of difficulty in having scientists cooperate with the U.S. intelligence establishment was that such

cooperation will require an accommodation between two cultures, those of science and of intelligence, that have essentially opposite methods of handling information. In science, the unrestricted dissemination of data is accepted as being necessary for progress, whereas in intelligence, the flow of information is tightly restricted by a “need to know” policy; only those who have the proper security clearances and who cannot carry out their assigned responsibilities without certain knowledge or information are given access to it. 1

For Richelson and countless others, the distinctive character of science is manifested in its openness, that is, the unrestricted exchange of information and knowledge without regard for the race, creed, sex, or national origin of those involved in the exchange. Secrecy is, however, far from unknown within the world of science. All of us are familiar with the existence of a classified world of research, containing its own journals, meetings, and professional organizations. That world exists both within and apart from the world we experience on a daily basis. Even the materials Richelson addresses–the use of national intelligence databases to understand global environmental change, Project Medea–are predicated on the existence of a secret world where researchers, more often than not academics, produced the knowledge that we might now harvest.

Science and secrecy were not, and are not, the polar opposites of common understanding. Timothy Ferris, a regular New Yorker science writer, declared that

real science is a white hole that gushes information; scientists (astronomers especially) prefer to tell one another almost everything, because if they don’t they can’t build on each other’s results. (The gravest concern of those who do classified work is that if they are cut off from such constant exchange their careers will wither). 2

Given that the history of science is littered with examples of willful and deliberate secrecy, whether on the part of individuals or institutions, including states, such a claim is patently false. Furthermore, despite his invocation of Soviet science as an example of what happens when science is kept secret, Ferris does not address David Holloway’s remarkable claim: that researchers in the secret cities of the Soviet atomic bomb project, such as Sakharov, were the bearers of democratic values and practices during the long Cold War. 3 If one accepts Holloway’s claim, secrecy isn’t simply part of science but essential for democracy.

What then is the relation between science and secrecy? Is there a single, necessary relationship between the production of knowledge and the technologies through which that knowledge is made and disseminated? 4 This paper is more assay than essay–an attempt to chart the terrain of understanding secrecy and/in the production of knowledge. What follows is a discussion of the foundations of much of the existing work on secrecy. I argue that much, if not all, of this work views secrecy as being identical to questions of access; that is, questions of who can know specific pieces of information. 5 In this literature arguments against secrecy are cast in the language of economic rationality–it is inefficient to keep knowledge from others who might needlessly duplicate work already done. Almost all discussions of secrecy and science take place in a context where secrecy is viewed as obviously necessary–a nuclear weapons laboratory, for example–or where such restrictions are viewed as absurd and hence inimical to the “advancement of science.” In response to this literature I suggest that we might read some accounts of secrecy like Edward Shils’ 1956 text, The Torment of Secrecy, or Norbert Wiener’s autobiographical writings as steps towards the development of a radically different view of secrecy. 6 Specifically, such works observe that access is but one aspect of understanding secrecy and science; another often-ignored dimension is the effect of such practices upon the content of knowledge developed under particular secrecy regimes. Such a perspective might draw upon much work in science and technology studies to render secrecy comprehensible, if not transparent.

 

Normal Science?

Robert K. Merton’s famous norms of science–communism, universalism, disinterestedness, and organized skepticism (CUDOS)–are the locus classicus for most understandings of the inimical and unnatural relation of science and secrecy. Drawing upon his pioneering study of Puritanism and the rise of the “new science” of the 17th century, Merton extracted what he identified as the guiding norms of the scientific community. In an influential 1942 article, “Science and Technology in a Democratic Order,” Merton articulated his famous norms as a direct defense of the necessary relation between progress in science and democratic politics. 7 As David Hollinger has observed, Merton made it clear that science could only flourish under a democratic regime, not the fascist regime of Nazi Germany. 8 Merton clearly stated that secrecy was the antithesis of his norm of communism, the belief that scientific knowledge was the common property of all people. My point here is not to claim that Merton invented the idea that science and secrecy are anathema. After all, his claim was that he had identified this practice through his study of the history of science. Central figures in the so-called Scientific Revolution distinguished themselves from other knowledge producers because of their emphasis on the public, and published, character of their knowledge claims. He was merely making clear to social scientists what natural scientists took as a self-evident truth, one that was visible from the emergence of the Royal Society in 17th century England.

For Merton the problem with secrecy in science was two-fold. First, secret science could not provide the researcher with the appropriate credit for their discoveries. Given that the only recognition in Merton’s universe came to those who established their priority in making discoveries or breakthroughs, secrecy was clearly not in a researcher’s self-interest. While working on a particular problem, researchers might choose not to communicate with others about their work, but when the work was completed they would race to publish their findings. Priority was the means to a reputation, to greater credibility, and to the rewards of science–prizes, grants, and status. 9 Second, secret knowledge was not open to the scrutiny of others who might point out errors and problems related to both the production and interpretation of the knowledge claims. If, as Merton and others believed, science “worked” through the rigorous self-policing of knowledge claims, then secrecy or restricting the dissemination of information might lead to the production of false knowledge. Finally, note that Merton’s norms also created an autonomous social space for science, since only other scientists could credibly discuss the veracity of specific technical knowledge claims. Those untrained in the ways of science were incapable of adjudicating intellectual matters.

If Merton and his students, especially Bernard Barber, 10 were among the prime intellectual sources for the post World War II understanding of the relationship between science and secrecy, then we must look to the war itself and the subsequent militarization of American science for the institutional context in which such discussions began. Here we must make a historical point. We may think of the war, especially the Manhattan Project, as the modern occasion for our discussions of science and secrecy, but that would be a profound mistake. Discussions about secrecy were endemic with the establishment of the first industrial research laboratories in early twentieth century America and the great expansion of such laboratories in the post World War I context, what one observer called “a fever of commercial science.” 11 Similarly, the fear that corporate monopolies might control the production of scientific and technological knowledge, as presented in the Temporary National Economic Committee (TNEC) Hearings of 1939, was an early analogue of postwar fears of the military control of science. 12 To an extent we largely fail to appreciate, wartime discussions of secrecy drew upon these earlier debates, as well as the recognition that, for many, industry had not affected science in a negative manner. On the contrary, many began to conceive of industrial research laboratories as universities in exile, a view that had little relation to corporate reality. With this caveat, let us turn to the war.

Pick up any memoir of the Manhattan Project and one will find ringing denunciations of General Leslie Groves and his policy of compartmentalization. Even Richard Rhodes, our contemporary chronicler of nuclear history, accepts the seemingly universal condemnation of Groves’ apparent obsession with security and restricting the flow of information. 13 Oppenheimer’s creation of the Los Alamos seminar series is viewed by both participants and historians as a triumph of the values of science over military paranoia. Los Alamos might have been isolated, but on the Mesa science ruled. Alas, such a perspective is seriously defective. First, while some researchers, such as Szilard, clearly fought the classification and compartmentalization system, others accepted security as a necessary wartime evil. Far from chafing under the demands of security, these researchers flourished and relished knowing that they were responsible for only one aspect of a larger project. Second, all such accounts view secrecy and the military as the “enemy.” Unfortunately, this ignores another view of secrecy that is quite important. Secrecy and the ability to keep secrets were an important way in which the researchers might gain the confidence of their military colleagues and paymasters. Vannevar Bush, the leader of the wartime research and development establishment, made this clear when he told his colleague, Karl T. Compton, the president of MIT, that

you and I are responsible for rather serious things, and the maintenance of our relations with the Army and Navy depends upon an orderly handling that inspires confidence. 14

Keeping secrets was essential to establishing and maintaining the credibility of the civilian researchers. This is a definition or function of secrecy that we often forget. The relationship of academic researchers and the armed forces was new; building the connections that we accept as a historical given was an accomplishment in its own time. Undergirding Bush’s statement was his recognition that only by properly handling the security issues would he and his organization acquire the trust of the military officers actually planning and fighting the war. Those who, like Szilard, bridled under the security regulations became individuals whom the military effectively ignored. Playing by the military’s rules about information distribution allowed one the possibility of actually having an effect upon their actions.

Another problem with our over-reliance upon the Manhattan Project for our understanding of wartime secrecy is that we seldom look at the other research and development programs. Take the case of the proximity fuze, which Bush believed to be even more difficult than the atomic bomb. In this case, the development of a sophisticated electronic device demanded the creation of new laboratories and new forms of industrial-military-academic cooperation. Merle Tuve, the leader of the project, instituted a compartmentalization policy that extended into the workers’ eating habits. Researchers often ate lunch at a local “Hot Shoppes.” At one lunch, Tuve overheard laboratory workers discussing their work. This led to a wonderful memo, posted throughout the laboratory, explaining that the Hot Shoppes was not a secure site and hence any discussion of the fuze project inside the restaurant would result in the arrest of all the members of a conversational group. Tuve’s staff got the message, loud and clear, but they did not understand Tuve’s intentions. Of course, Tuve was concerned that enemy agents might be serving the meat-loaf, but more pressing was the possibility that staff members might learn about work unrelated to their own specific job assignments. Compartmentalization was a form of management as well as a security precaution. For Tuve, controlling the flow of information among the researchers was as important as, if not more important than, controlling the possible loss of information to an enemy. 15 Localized secrecy was the means to an end, but not an end in itself.

Secrecy might also be considered an essential element of the design process regardless of whether a nation is at war. The design and development of new technologies is marked by initial periods of contestation and struggle over goals, methods, and even the very possibility of the goal. Hence, if one is developing a new technology–such as a proximity fuze, an atomic bomb, or an inertial guidance system–it might prove beneficial to restrict the sheer number of voices until the group working on the project has produced what they believe is a stable vision or version of the technology. In other words, secrecy might reduce the stress of interpretive flexibility–the inherently plastic meaning of any technology. Take the case of inertial guidance for aircraft and ballistic missiles. For this technology to ‘work’ it was essential that the inertial apparatus separate the acceleration of the plane from the acceleration of gravity. For many people, including George Gamow, the famous physicist, such a separation was impossible since it would violate Einstein’s relativity theory. Those involved in developing the technology were of a rather different opinion, but the multiplicity of groups working on the problem aggravated the task of responding to Gamow’s criticism since there was far from a single solution to his objection. 16 Had the managers of the inertial projects kept their work a better secret they might not have had to deal with Gamow’s critique until after they had stabilized their devices and methods. Once again, secrecy acts as a management technique, one that is quite powerful but easily abused. One can easily imagine researchers working on a device that shows little promise, but where the secret status of the project allows the work to persist. While we have several examples of this, including the Navy’s canceled A-12 stealth attack aircraft, secrecy need not necessarily breed corruption. 17

Understanding the range of ways secrecy was part of the wartime research effort is important, but we are forced to return to the atomic bomb. Certainly the bomb was among the best-kept secrets of the war: on 5 August 1945 fewer than 100 people knew the full scale and scope of the project. 18 Furthermore, all knowledge relating to the bomb was secret; any public discussion required an active decision to declassify particular pieces of information. Even the Smyth Report, perhaps the oddest press release in American history, did not present technical details, only a general discussion of the project and its work. 19 However, the report’s final paragraph contains the fundamental idea behind the report: an informed citizenry, with the tutelage of physicists, can make an informed set of decisions about the future of nuclear weapons. The interesting point here was that the government censors were the adjudicators of what the American people needed to know about the Manhattan Project–the autonomy of science had already been breached.

The postwar debate over the legislation establishing the Atomic Energy Commission dealt extensively with the issue of secrecy, but largely in terms of the punishments for revealing America’s atomic secrets. Central to the congressional discussion was a gradual shift from an emphasis on the dissemination of Manhattan’s knowledge to one of restricting and finally controlling the flow of information. Just as Vannevar Bush attempted to create a new taxonomy of knowledge centered upon the elusive idea of basic research, so did Congress create a new taxonomy of secrecy, the category of “restricted data” defined as

all data concerning the manufacture or utilization of atomic weapons, the production of fissionable material, or the use of fissionable material in the production of power, but shall not include any data which the commission from time to time determines may be published without adversely affecting the common defense and security. 20

What does the invention of a new level of secrecy do? First, it creates an additional class of individuals who have access to restricted data. Although this might be of interest to those studying the mixing of individuals with different clearances, or how particular organizations work, it is unclear how the taxonomy affects the issues with which we are concerned. 21 Is this not simply another example of access being the rationale and meaning of secrecy? Second, the invention of restricted data reminds us that during the immediate postwar period many people spoke and acted as if the revelation of a particular piece of information might “give away” the “secret” of the bomb. 22

For students of this period, the growth of restricted data is both a problem and a blessing. If we view secrecy as a problem in access, then we are mainly concerned with acquiring that access for ourselves. In other words, we operate under the belief that whatever is classified should be declassified or removed from the penumbra of secrecy; in turn, we will have a better idea of what actually happened. Among the many assumptions present in our call for access is the belief that the classified and the unclassified are linked in some direct and unmediated fashion; as if the light of inquiry would make the past clearer. More than likely the opposite is true–the relation of the classified and unclassified is problematic and highly mediated. Knowing the contents of restricted data might not help us reconstruct events and processes; if I learn that beryllium is an important ingredient in thermonuclear weapons have I learned something important? Only if I am attempting to understand the growth and development of the beryllium machining industry, the growing incidence of complaints of beryllium poisoning, or a related inquiry. 23 In other words, restricted data in and of itself might prove more meaningless than meaningful. Hence, if access is why we are interested in secrecy, we really don’t have much to say other than on a case-by-case basis. It is one thing to know what actually took place at the Gulf of Tonkin by reading the previously classified cables from the region; it is another thing to know that element X is used in technology Y. Knowing secrets may be exciting, but it may not be intellectually interesting.

 

So, what is interesting about secrecy?

Open the newspaper nearly any day of the week: secrecy is on display. New products, like Gillette’s new three-blade razor, are the result of industrial processes so guarded that they make the Manhattan Project look like a sieve. 24 Secrets are only known when they are no longer secrets, but the power to unveil and display a secret is what makes secrets useful and dangerous. These types of events and practices don’t figure in our understandings of secrecy and science, despite the way in which the atomic bomb’s use at Hiroshima might be likened to the unveiling of a new and powerful product.

Return to our earlier ideas about why access is not what is interesting about secrecy. What is interesting is how researchers discuss secrecy. The most common belief appears to be that secrecy is a necessary evil, but one that ultimately undermines the development of science. It is one thing to keep secrets in wartime, another to do so under the conditions of peace. Yet researchers keep secrets all the time, sometimes quite inadvertently. In his study of Toshiba’s management of intellectual capital, Mark Fruin tells us that Toshiba had a great deal of trouble setting up Knowledge Works factories overseas; indeed, the skills and knowledge necessary to make a Knowledge Works factory operate are so site and person specific that there is no way to capture this know-how short of exporting the people from a successful factory. As Fruin makes clear “the nature of factory know-how is not contained in manuals but is found instead in practice and experience.” 25 For students of science and technology studies, it is clear that Fruin is talking about tacit knowledge–that knowledge which is practice-specific and often incapable of being articulated in any formal way. 26 Unlike restricted data, tacit knowledge is not intentionally secret but it has a similar effect. Restrictions on data are about slowing the spread of a technology; similarly, an inability to transmit tacit knowledge slows the ability of Toshiba to grow and compete with other Japanese and American firms. Clearly, however, tacit knowledge doesn’t count as secrecy; rather it is part of the “tricks of the trade.”

Another reason researchers argue against secrecy is the claim embodied in the Smyth Report: secrecy denies the public the ability to learn about issues vital to the survival of the polity. There is an element of truth here, but not very much. Recall that during the debate over the H-bomb Leo Szilard believed the American public incapable of making the right decision with respect to the weapon’s development. 27 More information was not going to help the public; the decision had to be made by those who knew best: physicists. Restricted data created a community of inquirers capable of making the best possible decision.

Szilard’s world was far from democratic. Accountability was a problem for everyone but scientists. Despite his obsession with secrecy, Szilard accepted a political ideal that was a pure technocracy; a point made clear in his seminal story, “The Voice of the Dolphins.” 28 Readers will recall that the story’s underlying narrative, that intelligent dolphins rather than politicians were capable of ending the nuclear arms race, rested upon keeping the dolphins’ actual work practices secret. In turn, after the story’s happy ending, Szilard reveals the possibility that the dolphins were simply a cover for scientists imposing their rational vision upon international politics. In Szilard’s universe secrecy prevented the uninformed from playing an authoritative role in politics. Ignorance was more than bliss, it was the basis upon which one might erect a rational political order.

If, as Yaron Ezrahi argues, science plays an authoritative and constitutive role in liberal democratic polities because it is transparent, then secrecy might undermine democracy. 29 Transparency refers to the public’s ability to see the process through which authoritative claims are made; conceivably, anyone with enough time and patience might gather “the facts” and understand how a decision was made or a policy developed. Diane Vaughan’s account of the Challenger disaster is an example of the belief in transparency; Vaughan’s meticulous reconstruction of the cultures of NASA and Morton Thiokol, as well as of the conversations leading to the launch decision, exemplifies transparency’s political value. 30 Vaughan, as both scholar and citizen, wades through the documents and pieces together what she believes is the actual story. The alleged transparency of technical processes, the belief that with enough time and resources we might understand any given decision, appears at odds with secrecy. Alternatively, transparency might rest upon the credibility of researchers who vouch for the truth of what takes place in the classified world. Individual researchers become spokespeople for the government’s massive investment in secret research. In turn, the credibility of individuals becomes a surrogate for the credibility of the state. In this sense, secrecy and democratic politics don’t appear as diametrically opposed as researchers and analysts might believe.

Reading accounts of secrecy in science from the postwar era written by researchers or those involved in the loyalty and security programs reveals a common strand: a belief that secrecy was a new evil. That is, whether it is Shils’ The Torment of Secrecy or Wiener’s Invention or his autobiography I Am a Mathematician, one is struck by the overwhelming sense of nostalgia for a time when secrecy did not affect science. Read Wiener on the state of science in 1956:

There is no doubt that the present age, particularly in America, is one in which more men and women are devoting themselves to a formally scientific career than ever before in history. This does not mean that the intellectual environment of science received a proportionate increment. Many of today’s American scientists are working in government laboratories, where secrecy is the order of the day, and they are protected by the deliberate subdivision of problems to the extent that no man can be fully aware of the bearing of his own work. These laboratories, as well as the great industrial laboratories, are so aware of the importance of the scientist that he is forced to punch the time clock and to give an accounting of the last minute of his research. Vacations are cut down to a dead minimum, but consultations and reports and visits to other plants are encouraged without limit, so that the scientist, and the young scientist in particular, has not the leisure to ripen his own ideas. 31

The poignant character of Wiener’s lament should not be lost on us, but it is important to see that this is a complaint about two different issues: first, losing control over the direction of research; second, losing control over the actual content of the knowledge produced by the researcher. Secrecy was an imposition from those who did not understand the Mertonian ethos that scientists took for granted. In other words, the scientist always possessed dual citizenship: first, in what Michael Polanyi called the “republic of science” and next in a particular nation-state. 32 Implicit in the Mertonian formulation that Wiener and other researchers embraced was the very possibility of divided loyalties. Choosing between science and country became something akin to choosing between a friend and country. Research problem choice could be seen as a way of assessing loyalty to a government; even if a researcher did not find the work interesting s/he would have to work on the project or risk being labeled as disloyal. The norms of science and the norms of secrecy were not merely antithetical, they were mutually exclusive.

Wiener’s recognition that secrecy, citizenship, and knowledge-production were of a piece implied that secrecy affected the very content of knowledge. This is certainly a far more controversial point since we are leaving the realm of access behind. Wiener’s point, and that of Edward Shils, was not simply the question of economic rationality–that is, that secret science forced the unnecessary duplication of work that had already been completed. Rather, it was a qualitative point more difficult to address. Put simply, Wiener is arguing that one gets a certain type of knowledge from a particular social organization, in this case a secret organization or research that is secret. This knowledge is different from what might be produced in a more open space. The argument is not that secrecy allows “bad” or incompetent science to flourish, although that was certainly a possibility if one believed in the scientific community’s homeostatic propensities. Instead, it was an argument about the constraints and conditioning of the imagination. Secret knowledge produced a different map of intellectual geography, a different sense of the horizons of possibility. Pursued over time, such knowledge would produce an entirely different and separate world, one in which access would be the least of an outsider’s problems. Even with access, outsiders would find themselves visitors in a foreign country without any sense of the nation’s language or grammar. Obviously, translation would prove possible over time, but such a scheme undermined the possibility of claims to universalism, let alone the claim that scientific knowledge was public property. Secrecy eroded the extent to which scientific knowledge, and concomitantly the world explained with that knowledge, might serve as a common currency for culture across boundaries. 33

We might also read these discussions of secrecy as versions of Paul Forman’s belief that knowledge is made to order; you get what you pay for. 34 That is, secrecy is at one with the idea that scientists are employees following orders. As employees why should we expect that they would control the content and direction of their research? While such a perspective is attractive, it does not appear to connect with the ways that scientists present themselves; indeed, we might read Forman as being more like Wiener and Shils insofar as he laments the transformation of physics into its secret and corporate present.

 

On the Matter of Conclusions?

Far from being straightforward, the relationship of secrecy and the production of knowledge opens up a hermeneutic can of worms that science and technology studies must address. Part of the problem is that conventional understandings of science are inadequate to the task because they are implicated in the problem. In her work on research subpoenas, Sheila Jasanoff makes it clear that simply acquiring access to the raw materials that an investigator uses to write a scientific paper does not provide one with a road map to the construction of any particular paper. Instead, such access transforms those demanding the data into interpreters who must provide their own story about the materials or explain why the materials cannot be used to make the claims that are at issue. 35 Lawyers have an advantage generally not available to historians or sociologists: the discovery process. More recently, discovery has acquired a new meaning. At MIT students working for startup companies established by individual professors are required to sign non-disclosure agreements, i.e., contracts that forbid the student from discussing the product under development; professors working on related products have allegedly designed homework assignments to determine the nature and status of a competitor’s work. Student employees are caught in a bind: violate their non-disclosure agreement or fail the homework assignment. 36 Industrial espionage masked as pedagogy has brought the marketplace squarely into the classroom, but it also raises the issues of secrecy in a powerful and palpable form.

We cannot acquire all the relevant materials, no matter how much we desire to do so. At the same time we need to think of ways to discuss how the classified world relates to the world to which we do have access. How are we to imagine the relations between realms that have such different reciprocal relations? Once again, we are back to questions of access, but with a difference. The question is not how to access this world, but how to assess that world’s impact on what is visible. 37 How is the hand that stamps the security seal on a document linked to the hands that write the document? Is our situation reminiscent of the physicist studying a black hole: how can we find out what happens in a black hole if nothing can escape from it? Or is it that some things do move from the classified to the unclassified worlds–people, for example, and information? By studying the shape and form of what we can see, might we not make inferences about the secret world? 38 Or is it, as Wiener suggested, utterly outside the scope of our imaginations?



Endnotes:

Note 1: Jeffrey T. Richelson, “Scientists in Black,” Scientific American 278, 2 (1998): 48-55, 48. Back.

Note 2: T. Ferris, “Not Rocket Science,” New Yorker 74 (20 July 1998): 4-5. Back.

Note 3: David Holloway, Stalin and the Bomb (New Haven: Yale University Press, 1994). Back.

Note 4: Technologies in this sense also include the systems of classification and secrecy that surround much contemporary knowledge, whether for reasons of national security or corporate market position. Back.

Note 5: Some examples of this work are Sissela Bok, Secrets: On the Ethics of Concealment and Revelation (New York: Vintage, 1989 [1983]); Herbert Foerstel, Secret Science: Federal Control of American Science and Technology (Westport: Praeger, 1993); and the collection edited by Marcel La Follette, “Secrecy in University-based Research: Who Controls? Who Tells?” Science, Technology and Human Values 10, 2 (1985): 3-119. Back.

Note 6: Edward A. Shils, The Torment of Secrecy (Chicago: Ivan R. Dee, [1956]; reprinted 1996); Norbert Wiener, I Am a Mathematician: The Later Life of a Prodigy (Cambridge: MIT Press, 1956); Norbert Wiener, Invention: The Care and Feeding of Ideas (Cambridge: MIT Press, 1993). Back.

Note 7: Reprinted as “The Normative Structure of Science,” in Robert K. Merton, The Sociology of Science: Theoretical and Empirical Investigations, ed. Norman W. Storer (Chicago: University of Chicago Press, 1973), pp. 267-78. Merton’s norms were subject to a powerful and devastating critique that is largely forgotten: Ian I. Mitroff, The Subjective Side of Science: A Philosophical Inquiry into the Psychology of the Apollo Moon Scientists (Amsterdam: Elsevier, 1974). Mitroff convincingly demonstrated that whatever activity might be explained by a set of norms might also be explained by a set of counter-norms. Hence, it is possible to understand the entire process described by Merton with a set of norms articulating the opposite set of values–private property, local understanding, interestedness, and organized credulity. Unfortunately, it does not lend itself to a neat acronym. Back.

Note 8: David A. Hollinger, “The Defense of Democracy and Robert K. Merton’s Formulation of the Scientific Ethos,” pp. 1-15 in Knowledge and Society, ed. Robert Alun Jones and Henrika Kuklick (Greenwich, CT: JAI Press, 1983). Also of interest here is Everett Mendelsohn, “Robert K. Merton: The Celebration and Defense of Science,” Science in Context 3 (1989): 269-90. Back.

Note 9: Certainly I don’t mean this to be an exhaustive list, merely evocative. It is altogether too easy to translate Merton’s norms into a framework for the acquisition of social capital. If we do that, secrecy might become both an asset and a liability. Back.

Note 10: Bernard Barber, Science and the Social Order (New York: Collier Books, 1962 [1952]) is an especially good source for the antithetical relationship of science and secrecy. Back.

Note 11: I am embarrassed to do this, but some discussion of this issue can be found in Michael Aaron Dennis, “Accounting for Research: New Histories of Corporate Laboratories and the Social History of American Science,” Social Studies of Science 17 (1987): 479-518. Back.

Note 12: On this point, see Larry Owens, “Patents, the ‘Frontiers’ of American Invention, and the Monopoly Committee of 1939: Anatomy of a Discourse,” Technology and Culture 32,4 (1991): 1076-93. For a specific example of the fear of industrial control, see Peter Galison, Bruce Hevly, and Rebecca Lowen, “Controlling the Monster: Stanford and the Growth of Physics Research, 1935-1962,” pp. 46-77 in Big Science: The Growth of Large Scale Research, ed. Peter Galison and Bruce Hevly (Stanford: Stanford University Press, 1992). Back.

Note 13: Given that so much information went to the Soviet Union, one might wonder if Groves’ obsession was really so unwarranted. For Rhodes, see Richard Rhodes, The Making of the Atomic Bomb (New York: Simon and Schuster, 1986). Back.

Note 14: See 1 April 1941, VB to KTC, Box 26, Folder 609 (KTC ‘39-‘42), Vannevar Bush Papers, LC. Back.

Note 15: On these points, see Michael Aaron Dennis, “Technologies of War: The Proximity Fuze and the Applied Physics Laboratory,” in A Change of State: Political Culture and Technical Practice in Cold War America (monograph in process). Back.

Note 16: For this specific example, see Donald MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (Cambridge: MIT Press, 1990); and Michael Aaron Dennis, “‘Our First Line of Defense’: Two University Laboratories in the Postwar American State,” Isis 85, no. 3 (1994): 427-55. Given the complexity of this particular example, it may be a poor choice. Gamow probably came to his knowledge of inertial techniques through his membership on the Air Force Science Advisory Board. Conceivably, one might argue that as a board member Gamow was only doing his job by expressing his beliefs about the untenable character of the research. What is striking is that Gamow does not appear to have visited or contacted any of the groups trying to develop this technology before he produced his critique. Back.

Note 17: See Robert Holzer, “DOD Secrecy Drives Up Weapons Cost, Development Time,” Defense News, 21 October 1992, p. 10. Back.

Note 18: Richard Hewlett, “‘Born Classified’ in the AEC: A Historian’s View,” Bulletin of the Atomic Scientists 37 (December 1981): 20-27. Back.

Note 19: Henry DeWolf Smyth, Atomic Energy for Military Purposes (Washington, DC: GPO, 1945; Stanford: Stanford University Press, 1989). Back.

Note 20: Hewlett, “‘Born Classified’,” p. 21. Back.

Note 21: For an interesting discussion of these very issues, see Hugh Gusterson, Nuclear Rites: A Weapons Laboratory at the End of the Cold War (Berkeley: University of California Press, 1996), pp. 68-100. Back.

Note 22: On the idea that there was a single secret and its consequences see Gregg Herken, The Winning Weapon: The Atomic Bomb in the Cold War, 1945-1950 (New York: Vintage, 1981). Shils’ work, cited above, also addresses this particular conception of an “atomic secret.” Back.

Note 23: Or if I am trying to build my own bomb. However, even if I learn this particular fact and others, I still need to do a great deal of work if I want my own nuke. As recent events make clear, even impoverished nations are willing to use scarce resources to build the infrastructure necessary for a nuclear arsenal. My point is simply that individual factoids are not going to teach anyone how to build a bomb. Back.

Note 24: See the Wall Street Journal, front page, left column, 14 April 1998. Back.

Note 25: W. Mark Fruin, Knowledge Works: Managing Intellectual Capital at Toshiba (New York: Oxford University Press, 1997), p. 162. Back.

Note 26: On tacit knowledge, see H.M. Collins and R.G. Harrison. “Building a TEA Laser: The Caprices of Communication,” Social Studies of Science 5 (1975): 441-50. Back.

Note 27: On Szilard’s undemocratic perspective, see Peter Galison and Barton Bernstein, “In Any Light: Scientists and the Decision to Build the Superbomb, 1952-1954,” Historical Studies in the Physical and Biological Sciences 19 (1989): 267-347. Back.

Note 28: Leo Szilard, The Voice of the Dolphins and Other Stories, exp. ed. (Stanford: Stanford University Press, 1961, 1992). Back.

Note 29: Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (Cambridge: Harvard University Press, 1990). Back.

Note 30: Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA (Chicago: University of Chicago Press, 1996). I owe this insight, even in this mangled form, to my colleague, Sheila Jasanoff. Back.

Note 31: Wiener, I Am a Mathematician, p. 361. Back.

Note 32: Michael Polanyi, Science, Faith and Society (Chicago: University of Chicago Press, 1946). Back.

Note 33: I think that this constraining of possibilities is what Ian Hacking is going on about in Ian Hacking, “Weapons Research and the Form of Scientific Knowledge,” in Nuclear Weapons, Deterrence, and Disarmament, ed. David Copp (Calgary: University of Calgary Press, 1986). Back.

Note 34: See Paul Forman, “Behind Quantum Electronics: National Security as Basis for Physical Research in the United States, 1940-1960,” Historical Studies in the Physical Sciences 18 (1987): 149-229; and Paul Forman, “Inventing the Maser in Postwar America,” Osiris (2nd ser.) 7 (1992): 105-34. Back.

Note 35: Sheila Jasanoff, “Research Subpoenas and the Sociology of Knowledge,” Law and Contemporary Problems 59, Summer (1996): 95-118. Obviously this point is also related to the historian’s use of laboratory notebooks in reconstructing scientific and technical practices. How is what is in the notebook related to what is in the published document? A fascinating example of this is found in Gerald L. Geison, The Private Science of Louis Pasteur (Princeton: Princeton University Press, 1995). Note that I have not discussed what Merton and others take for granted–the need for some secrecy in the quest for priority–since such claims rest on an assumption of openness. Back.

Note 36: See Amy Dockser Marcus, “MIT Students, Lured to New Tech Firms, Get Caught in a Bind,” Wall Street Journal, 24 June 1999, A1. Back.

Note 37: Ron Doel is getting at a related idea near the end of his essay, “Scientists as Policymakers, Advisors, and Intelligence Agents: Linking Contemporary Diplomatic History with the History of Contemporary Science,” pp. 215-44 in The Historiography of Contemporary Science and Technology, ed. Thomas Söderqvist (Amsterdam: Harwood Academic Publishers, 1997). Back.

Note 38: For example, could we not argue that the International Geophysical Year (IGY) of 1957 was simply arms control by other means? That is, by measuring the earth’s gravitational field and producing sophisticated maps of the Arctic, Russia and the U.S. acquired the information necessary to allow inertial guidance systems to fly to their targets with a greater degree of accuracy. That is, more information allowed for greater claims of inevitable destruction. Back.