The National Interest
Winter 2000

Extracts from The Deconstruction of Death

by Fred Iklé


Since the eighteenth century, a succession of technological revolutions has transformed the human condition and the course of history. First, the steam engine took center stage. By the end of the nineteenth century, the multifaceted applications of electricity had begun to change the world. During the second half of the last century, computer technology transformed scientific research, economic activity, military forces and nearly every aspect of human affairs. Now the mapping of the genome signals that a new wave of technology-driven change is coming.

The genome project highlights the recent progress in genetics and the other life sciences, which in turn inspires and sustains continuing advances in biotechnology. By promising to satisfy the most elemental human yearnings–the desire for good health and for the postponement of death–biotechnology attracts the kind of deep-rooted political support and strong financial backing that few other fields of science enjoy. It can therefore maintain a momentum capable of generating a stream of scientific-technological developments that governments and international organizations will find hard to control. And there is now little doubt about where this is leading: to human intervention in the process of evolution itself.

Some of these developments, it can be safely predicted, will pose new and fundamental challenges to prevailing religious doctrines and teachings. Longevity, combined with good health, is a goal that democratic governments cannot oppose. Who would want to block the path to the possible eradication of hereditary sickle cell anemia, or to medications that promise a cure for Parkinson's and Alzheimer's diseases? But when such universally acceptable goals have been reached, science will not come to a full stop, even if religious organizations, ethics advocates or politicians should want to draw a line beyond which human nature must not be altered.

The good and the bad that the era of the genome promises to bring will often be inseparable. Consider this simple example: experts predict confidently that progress in biotechnology will make it possible, probably well before the end of this century, to extend people's active life span by twenty years or more. This, most people would agree, will be a good thing. But if this prediction comes true, one consequence will be that entrenched dictators will live longer, thus postponing the leadership successions that until now have so often offered the sole means of relief from tyrannical regimes. Stalin, for one, comes to mind as a fellow who would not have volunteered to retire had his doctors been able to keep him active and fit to the age of, say, 120. If he could have benefited from the medical technology that seems likely to be available a few decades hence, he would have ruled his evil empire until just about now, and his unfortunate subjects would have suffered many more campaigns of terror. And if biotechnology could have offered Mao Zedong and Deng Xiaoping the same extended life span, Deng would still be waiting in the wings for an opportunity to implement his reforms. . . .

Thanks to growing knowledge about the functioning of the brain, it will become possible to integrate the computer-based assistance to human intelligence that works "from the outside" with new ways to enhance the power of the brain "from the inside", and thus to lower the barrier between the two approaches. Already, devices linked to computers have been inserted into the brain on an exploratory basis, and living brain tissue from animals has been inserted into computers. More important in the long term will be the substantial research effort now under way on technologies for treating diseases of the brain, such as gene therapies or the use of stem cells. Intelligence, in its diverse aspects, is surely governed by a multiplicity of genes, but this need not rule out the possibility of strengthening some important aspect of intelligence by targeting a single gene. Particularly significant in this respect would be a major enhancement of human memory. Experiments with mice have already demonstrated genetically induced improvements in memory.

Many research projects are now under way to develop new ways of teaming brain power with computers. These projects tend to be small and draw on different scientific disciplines. They address problems of perception, memory, reasoning, consciousness, emotion and other mental phenomena. Experiments have been conducted with embryonic nerve cells from the spinal cord of mice, kept alive by nutrient solutions, to create a neural web that emits distinct electrical signals in response to contact with different chemicals. By connecting an appropriate computer to this web, a new type of sensor can be constructed to read and categorize these signals. (One early and comparatively modest practical development of such a sensor might be the creation of sniffers capable of outperforming the specially trained dogs that customs officers use to search for narcotics and explosives.)
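
To make the "read and categorize" step concrete, a minimal sketch might look like the following. Everything here is hypothetical and purely illustrative: it assumes each chemical evokes a characteristic pattern of voltages across a few electrodes, and it labels a new reading by its nearest known pattern.

    # Hypothetical sketch: categorizing electrode readings from a neural web.
    # Assumes each chemical evokes a characteristic voltage pattern across
    # four electrodes (the signature values below are invented, not real data).
    import math

    SIGNATURES = {
        "narcotic":  [0.9, 0.1, 0.4, 0.7],
        "explosive": [0.2, 0.8, 0.6, 0.1],
        "baseline":  [0.1, 0.1, 0.1, 0.1],
    }

    def distance(a, b):
        """Euclidean distance between two voltage patterns."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def categorize(reading):
        """Return the label whose signature best matches the reading."""
        return min(SIGNATURES, key=lambda name: distance(reading, SIGNATURES[name]))

    print(categorize([0.85, 0.15, 0.35, 0.65]))  # -> "narcotic"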

Proceeding along a different line, researchers at NASA's Ames Research Center have successfully tested a "bio-computer" that links a computer to an aircraft pilot by way of sensors, which pick up tiny electrical impulses from the pilot's forearm muscles and nerves. With such a system, the pilot could, without using a keyboard, directly instruct the computers that control the aircraft. The goal of this work, in the words of NASA administrator Daniel Goldin, is to develop "hybrid systems that combine the best features of biological processes" with opto-electronic and other non-organic devices.
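
In outline, such a control path reduces to a simple loop: sample the muscle sensors, decode an intended command, pass it to the flight computer. The sketch below is merely illustrative; the channel names, amplitudes and threshold are invented, and a real system would involve signal conditioning, calibration and far more elaborate pattern recognition.

    # Hypothetical sketch of a keyboard-free control path: forearm-muscle
    # impulses in, aircraft commands out. All values below are invented.
    SAMPLE_STREAM = [
        {"flexor": 0.05, "extensor": 0.02},  # at rest: no command
        {"flexor": 0.90, "extensor": 0.03},  # strong flexor burst
        {"flexor": 0.04, "extensor": 0.85},  # strong extensor burst
    ]

    THRESHOLD = 0.5  # amplitude above which a burst counts as intentional

    def decode(sample):
        """Map one sensor sample to an aircraft command (or None)."""
        if sample["flexor"] > THRESHOLD:
            return "pitch_up"
        if sample["extensor"] > THRESHOLD:
            return "pitch_down"
        return None

    for sample in SAMPLE_STREAM:
        command = decode(sample)
        if command:
            print("issue command:", command)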

Hundreds of such ongoing projects are reported in the scientific literature today, and they amount to only a modest beginning. It seems plausible that over time such linkages of living things with computer systems will enrich the capabilities of traditional computers with some of the unique cognitive capabilities of humans or animals. Precisely because these projects are so innocuous in their stated ambitions, they can proceed and receive financial support without stirring up ethical objections or risking a government-imposed moratorium on further research. Thus, while the general public is being treated to fanciful stories about human cloning and the doubling of the human life span, claims that biotechnology might double or triple the powers of the human mind have been rather restrained–among the experts in the field as well as among the popularizers.

Mr. Brain Marries Miss Computer

An exception to this restraint is provided by stories about robots with superhuman skills and intelligence. Such ideas are now a mainstay of science fiction and have famous precursors in Mary Shelley's Frankenstein and H.G. Wells' The Island of Doctor Moreau. However, most of these stories have "smart" scientists stupidly creating artifacts in the crude likeness of a human being, monsters with clumsy robot feet, husky voices and a predilection to kill or enslave their creators.

The lone mad scientist and roaming robot are not apt metaphors for the age of the genome. The human genome project is a large, international undertaking whose latest discoveries are broadcast daily on the Internet. As British Prime Minister Tony Blair takes care to emphasize: "We, all of us, share a duty to ensure that the common property of the human genome is used freely for the good of the whole human race." Likewise, increasingly "intelligent" computer technologies are being marketed throughout the world by some of the most globalized businesses. This open and international scientific competition in biotechnology, computer science and other disciplines is bound to lead to a better understanding of the triangular interactions among mind, brain and advanced computers. Many competent research teams will continue to explore the mysterious cohabitation of the brain and the human mind. Here, as elsewhere, competition is the motor of progress.

Well before the end of this century, the technologically advanced societies might begin to debate the most tempting, but perhaps most dangerous, ambition in all the history of science: whether to design and build an entity that would enhance the most advanced computers with biological processes and living organisms so as to achieve an intelligence truly superior to that of human beings today. In contrast–or in addition–to the "artificial intelligence" of computer systems, such an entity would need to be capable of insight, creative discoveries, flexible learning and judgments informed by an appreciation of a changing environment and changing values.

Such a "super-brain" would of course include the vast memories and other capabilities of the best computers. But that would not be the end of it. Human thought, to be creative and purposeful, requires willpower and the subtle involvement of emotions. At the center of this dynamic there might even have to be a process that resembles the mysterious innermost sovereign of the human mind–its self-consciousness. If such a super-brain project could indeed be started, the designers would undoubtedly try to capture these ultimate attributes of human thinking as well, at least to the extent needed for the super-brain to surpass the full panoply of human intelligence.

As of now, no one can describe the specific theoretical and technical problems that would have to be solved for this project to be started in earnest. It seems certain that such a venture would require a large-scale interdisciplinary effort with generous financing. Today, there is no meaningful support for it in any country with the scientific and engineering talent sufficient even to contemplate the first steps. This could change, however. An ambitious dictator in control of a country strong in biotechnology and computer science, for example, could gamble on a crash project with the objective of acquiring a decisive advantage. And if testing on live human beings served to expedite the super-brain project, the more unscrupulous the ruler, the greater would be his advantage over more inhibited and law-abiding governments. International treaties would not stop the ruthless, just as they have not prevented North Korea and Iraq from building nuclear and biological weapons in clear violation of the treaty obligations they had freely assumed.

Such a turn of events would in all likelihood set in motion a "brain race." If this seems far-fetched, recall that America's costly project for the manned mission to the moon–something for which there had previously been little enthusiasm–easily received congressional support once it appeared that the Soviet Union was about to accomplish the feat first. Recall, too, that it was primarily to prevent Nazi Germany from acquiring nuclear weapons first that President Roosevelt authorized the initial steps to start the Manhattan Project late in 1941. The fear of a Nazi A-bomb was sufficient reason for the United States to launch this immense and uncertain venture, even though none of the scientists involved at the outset could have outlined the full research and development program that produced the first atomic bomb four years later. . . .