The Potential Dangers of Technological Advance

Ray Kurzweil’s work in The Singularity is Near: When Humans Transcend Biology has brought me a new and profound hope for the future of humankind. Through his detailed discussion of genetics, nanotechnology, and robotics, I am now as hopeful an optimist about the future as he is, and I look forward to the day the Singularity arrives.

His definition of the Singularity is “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed” (Kurzweil, 7). However, it is not enough merely to read those words. One must consider what impact the Singularity will have on the future of humanity, and understand that the world of today will be vastly different from the world of tomorrow, a world that is approaching rapidly and will forever change the way humans interact with technology.

There is no single project, headed by a single corporation or government, being researched and applied to reach the Singularity. Instead, smaller groups and corporations are working on their own particular projects that will eventually lend a hand in this great human transition. A group does not simply work in “genetics”; it might work to cure only one specific genetic defect, while other groups work on others.

Kurzweil argues that the transformation will take place gradually until we reach the year 2045. Genetics will have the first major breakthroughs, allowing humans to extend their lives by a few years. That, in turn, leads us into the age of nanotechnology, an age in which self-replicating nanobots can be released into our bodies to cure disease and keep us healthy. Robotics will then be merged with our biological bodies to make the next version of the human body. Through the advances in GNR, humans will achieve near immortality.

With new advances in technology come inevitable advances in the consequences that the new technology makes possible. The more powerful a technology becomes, the more helpful and the more harmful it becomes. This is the problem with the Singularity. As we move toward the point where “human life will be irreversibly transformed,” we will also have to overcome the potential problems that accompany any advance in technology (Kurzweil, 7).

One of the most frequently raised criticisms of nanotechnology is that self-replicating nanotechnology will escalate out of control, that we will have no defense against it, and that we will ultimately destroy ourselves in the pursuit of immortality. However, if we proceed with caution, we can benefit far more from technological progress than we are harmed by it.

Bill Joy argues in “Why the Future Doesn’t Need Us” that the destructive power in genetics, nanotechnology, and robotics should worry us because of self-replication. “Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to replicate its designs, and the prime danger underlying gray goo in nanotechnology” (Joy).

This is a major concern because self-replicating nanotechnology could essentially destroy everything. Nano-engineered plants that grow faster and more efficiently “could out-compete real plants, crowding the biosphere with inedible foliage” (Joy). “Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days,” leaving humans without much defense against the bacteria because of the nature of nanotechnology (Joy). The self-replicating bacteria might “be too tough, small, and rapidly spreading to stop” (Joy).

This destruction is known as the “grey goo problem.” Thus, as we proceed into the future of nanotechnology, we must do so with caution.

If the possibility of uncontrolled, replicating nano-plants isn’t enough to warrant concern, then consider this technology in the hands of terrorist organizations or of a military at war with our own country.

The outcome could be catastrophic, and billions of lives could be lost. We saw the destruction wrought by nuclear, biological, and chemical weapons in the previous century. With self-replicating nanotechnology, those weapons will largely die out, giving way to a far more powerful military threat.

However, “the 21st-century GNR technologies have clear commercial uses and are being developed almost exclusively by corporate enterprises” (Joy). The GNR revolution will no longer be in the hands of the government-controlled militaries. It will be in the hands of the citizens as they reach out for various corporations’ newest technology.

The U.S. is largely consumed by this commercialism. In the last decade, we have already witnessed a technological revolution in our country. Man and machine are irrevocably intertwined.

Today, it is hard to function in this society without access to, and some working knowledge of, computers and the Internet. Cell phones, PDAs, portable music devices, digital cameras, and pacemakers have increasingly worked their way into our everyday lives. Obsession with looking younger and healthier through plastic surgery has taken hold of many Americans. New diet pills and easy diets are fads that don’t seem to be going away.

America and many other developed countries are seeking the next new thing that will make their lives easier and longer. The public will grasp at the advancing technologies that corporations offer without fully realizing the potential harm that can be done.

In this age of triumphant commercialism, technology – with science as its handmaiden – is delivering a series of almost magical inventions that are the most phenomenally lucrative ever seen. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism and its manifold financial incentives and competitive pressures. (Joy)

The public will want more. Thus far, computer and anti-aging technologies have richly enhanced our lives, but not to the point where we declare them sufficient and draw a line where advancement must end.

In the wide-open fields of genetics, nanotechnology, and robotics, information flows freely over the Internet. Joy argues, “Ideas can’t be put back in a box; unlike uranium or plutonium, they don’t need to be mined and refined, and they can be freely copied. Once they are out, they are out” (Joy). People have their hands out, reaching for new technology, while others have free access to information that can be used to harm humankind. Now, “we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication” (Joy).

The commercialism of advancing technology requires only knowledge, not “access to both rare – indeed, effectively unavailable – raw materials and highly protected information” or “large-scale activities” (Joy). This could empower “extreme individuals” or small groups looking to take out any number of things: large and small companies, religious groups, ethnic groups, governments, or possibly their own neighborhood (Joy).

We are in the age of free-flowing information, which can be dangerous when potentially harmful information flows just as freely. As a free, democratic nation, can we, and should we, stop this information from being available to anyone? Freedom to share knowledge is, in part, what keeps our economy thriving. If we stop the flow of some particular bit of information, we may also keep “good” technology that depends on that information from becoming widely available. Joy argues that each advance may lead us closer to peril.

Each of these technologies also offers untold promise: The vision of near immortality that Kurzweil sees in his robot dreams drives us forward; genetic engineering may soon provide treatments, if not outright cures, for most diseases; and nanotechnology and nanomedicine can address yet more ills. Together they could significantly extend our average lifespan and improve the quality of our lives. Yet, with each of these technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger. (Joy)

Are self-replicating nanobots going to destroy the world? We will face this issue in the coming decades. The Center for Responsible Nanotechnology argues in “Grey Goo is a Small Issue” that catastrophic nanotechnology accidents are unlikely. “Development and use of molecular manufacturing will create nothing like grey goo, so it poses no risk of producing grey goo by accident at any point,” allowing that the subject “is more of a public issue than a scientific problem” (CRN).

The public is awakened to potential dangers by over-hyped media coverage of the issues. This may be one reason Americans worry about the wrong things instead of about higher-risk problems.

This article [“The Grey Goo Problem” by Lawrence Osborne in New York Times Magazine] and other fictional portrayals of grey goo, as well as statements by scientists such as Richard Smalley, are signs of significant public concern. But although biosphere-eating goo is a gripping story, current molecular manufacturing proposals contain nothing even similar to grey goo. The idea that nanotechnology manufacturing systems could run amok is based on outdated information. (CRN)

Scientists’ opinions differ on whether self-replicating nanobots might destroy the biosphere. However, they agree that irresponsible or criminal misuse of nanotechnology cannot be allowed.

The CRN brief touches on public concern. When the media puts forth information, whether fictionalized or factual, it stirs public opinion and concern. In “Why We Worry about the Things We Shouldn’t and Ignore the Things We Should,” Jeffrey Kluger examines the things most likely to kill us as opposed to the less risky, nearly harmless things we fear.

We agonize over avian flu, which to date has killed precisely no one in the U.S., but have to be cajoled into getting a vaccination for the common flu, which contributes to the deaths of 36,000 Americans each year. We wring our hands over the mad cow pathogen that might be (but almost certainly isn’t) in our hamburger and worry far less about the cholesterol that kills 700,000 of us annually. (Kluger, 66)

Will freely spreading self-replicating nanobots be the avian flu or mad cow disease of the future? If even one case, no matter how isolated, is reported, it is almost certain that the public will be on alert, with some unwilling to take advantage of nanomedicine and other nanotechnology. Nanomedicine is the technology that will be used to combat the illnesses we should fear now, such as heart disease (685,089 American deaths per year), cancer (556,902), stroke (157,689), and many other diseases that kill thousands of Americans each year (Kluger, 68). Maybe we do worry too much, but nanobots that could potentially wipe out the biosphere in a matter of days are cause for more concern than avian flu or mad cow disease, or even nuclear weapons for that matter.

We are coming upon a humanity-altering era, and “we must now choose between the pursuit of unrestricted and undirected growth through science and technology and the clear accompanying dangers,” if we believe these dangers warrant enough concern to relinquish research in genetics, nanotechnology, and robotics (Joy). Our desire for better health, longevity, and technology may be too far along to stop. Therefore, relinquishment may not be an option.

“The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge” (Joy). This, however, could limit our ability to live longer and healthier lives.

How will we harness the beneficial products of genetics, nanotechnology, and robotics without potentially destroying ourselves in the process? There must be regulation of scientific and technological knowledge. We must then decide which technologies are beneficial and which are harmful, and keep the harmful ones locked away from the public.

Yet, some beneficial technologies will also be potentially harmful. I propose that a defense system be set in place against self-replicating nanotechnology before it is allowed widespread use. Almost certainly, this will slow down progress toward the Singularity.

Society and its uses of technology will have to be kept in check, even if this means the loss of some privacy rights that we now have. We will be living in a different world once the transition takes place, a world that will have different rules, different possibilities, and different dangers.

We cannot relinquish our progress because progress is a part of human nature. Society has to move forward, looking ahead to a new world and new possibilities, and we must be willing to adapt to the changes and the new set of rules in order to survive in the coming Singularity.

Works Cited

  • “Grey Goo is a Small Issue.” Center for Responsible Nanotechnology, 14 Dec. 2003.
  • Joy, Bill. “Why the Future Doesn’t Need Us.” Wired, vol. 8, no. 4, Apr. 2000.
  • Kluger, Jeffrey. “Why We Worry about the Things We Shouldn’t and Ignore the Things We Should.” Time, 4 Dec. 2006, pp. 64-71.
  • Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York: Penguin Group, 2005.