Bill Joy's Bad Dream

He's no mad scientist, and that's why it's so scary when this latter-day Einstein talks about robots and other new technologies running amok.

By Joannie Fischer

With his wild hair, mile-a-minute mind, and uncanny way of being right in his hunches about where technology is headed, Bill Joy makes a perfect Einstein for the 21st century. Referred to as "the Edison of the Internet," "the other Bill" (a reference to Bill Gates), and "the king of Silicon Valley" (even though he lives a thousand miles to the east in Aspen, Colorado), Joy is a celebrated developer of such world-changing technologies as the Berkeley version of the Unix operating system and the Java programming language. From his earliest days as an electrical engineering undergraduate at the University of Michigan and a graduate student in engineering and computer science at U.C. Berkeley, he has been famous for brilliant, novel ideas.

So when Joy, the co-founder and chief scientist of Sun Microsystems, has a vision, crowds gather around to hear it and the elite bet their money on it, not unlike the way the Old Testament's Egyptian pharaoh ruled his kingdom according to Joseph's prophetic dreams.

This time, though, Bill Joy has announced a nightmare. The emerging technologies of the new millennium, he argues, are rendering humans an endangered species. Between robots that can grow ever more intelligent; nanotechnology that allows speedy, microscopic machines to reassemble chunks of the world in minutes; and genetic engineering that introduces artificial genes into living creatures to create a superior species, mere mortals like you and me may not stand a chance. In short, humans are in danger of extinction due to high technology that displaces us and renders us useless.

Such dystopian scenarios have been batted around for quite a while, in different forms. There was H. G. Wells's vision of higher life forms ruthlessly wiping out earthlings. There was the legendary film 2001: A Space Odyssey, in which the evil computer, HAL, murders humans in order to dominate. And more recently, there was Unabomber Theodore Kaczynski's manifesto, published in the New York Times, declaring that intelligent machines will eventually reduce human beings to the status of domestic animals. These familiar warnings have been easy to dismiss as the rants of lunatics and the overactive imaginations of science fiction authors.

FUTURE SHOCK

But now, no one is dismissing the arguments of one of the most heralded geniuses of modern times. Joy is no exaggerator, and in fact is overly self-deprecating for someone so successful. He's more likely to be fiddling with wires and gadgets underneath his desk than bragging about himself on a podium. When he does speak, he is known to be understated, thoughtful, reasonable, and open-minded. 

So when such strong words come from him, they carry even greater weight. Ever since his essay "Why the future doesn't need us" appeared as the cover story of the April 2000 edition of Wired magazine (www.wired.com/wired/archive/8.04/joy.htm), it has been compared to Albert Einstein's 1939 letter to President Roosevelt warning of the consequences of the atomic bomb. For half a year now, the entire world of high technology has been abuzz trying to grapple with the implications of Joy's thesis.

Joy warns that because we live in a time when revolutionary scientific breakthroughs occur almost daily, we have grown complacent about them, paying about as much attention to them as we do to sports scores or the daily local weather forecast.

New scientific developments have often caused unintended problems in the past, such as the overuse of antibiotics leading to the rise of stronger, more resistant bacteria, or atomic science leading to the bombing and annihilation of innocents. In the midst of so many different technologies, he says, we haven't noticed that the newest emerging technologies are fundamentally different, and far more dangerous, than those that have come before.

"Specifically," Joy says, "robots, engineered organisms, and nanobots share a dangerous amplifying factor: They can self-replicate. A bomb is blown up only once--but one bot can become many, and quickly get out of control." Furthermore, not only could artificially intelligent machines wreak havoc with our world but, because they don't require rare materials like plutonium, just about anyone could get hold of the technology and use it for massive destruction. "I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil," Joy declares.

Is he crazy? Most people who know anything about the technology that incorporates living, biological material with computerized machinery don't think so. Hans Moravec, who created Carnegie Mellon University's robotics research program and wrote the book Robot: Mere Machine to Transcendent Mind, argues that, just as humans have affected many other animals in negative ways, advanced robots with goals of gaining resources for themselves and their reproduced "children" could conceivably win a battle with humans for those resources. He stresses the importance of designing robots in such a way as to somehow limit the "selfish" behavior it usually takes to reproduce. The few who disagree with Joy, such as Brandeis researcher Jordan Pollack, do so mainly because they believe that humans are so far from creating such machines that the dangers are too remote to be worth worrying about now.

Yet, all it takes is reading the current headlines to see how quickly the technology is advancing. In fact, in Pollack's own lab at Brandeis, a colony of machines exists that evolve and give birth to other machines without any human input. So far, the robots only have "the brainpower of bacteria," Pollack says, and their only achievement is to crawl across the floor like crabs.

But most scientists agree that the colony constitutes the first giant leap toward truly lifelike machines. Other advances in the field of molecular electronics convince not only Joy but other experts like Moravec that within 30 to 40 years we will have robots more intelligent than humans, armed with computing power a million times greater than that of the most powerful computer today.

Add to that the new understanding of the genome, which will allow scientists to create artificial sets of genes and whole organisms that evolve just as natural species do, only light years faster, says Joy, and suddenly humans have the ability to transform the world--but not the ability to control it or to prevent unintended disasters. Already, molecular biologists can create on the computer "cellular automata" that evolve as much in minutes as real creatures do in a million years. Joy's worries about the destructive power of such evolving systems are echoed by scientist George Dyson, who writes in Darwin Among the Machines: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines."

The worst part of these doom-and-gloom scenarios is that the machines don't even have to want to hurt humans in order to do so. For example, the explosion of nanotechnology will allow tiny microscopic computers to perform what seems like pure magic, quickly creating everything from rockets to diamond rings seemingly out of thin air, by combining elemental particles--found more or less everywhere in air and soil--and cheaply assembling any desired product.

But any given glitch, says Joy, could turn the nanobots into our worst enemy. Nanotechnology guru Eric Drexler, author of Engines of Creation, is admittedly one of the greatest proponents of the good that nanobots could do for humans. But even he agrees with Joy that there is as yet no way to guarantee that the miniature creators wouldn't become instead "engines of destruction." Several others have predicted that the microscopic machines, multiplying exponentially, could turn Earth into nothing but "gray goo" before any human could take the first step to stop them.

AVERTING A CATASTROPHE

For all the warnings, Joy isn't recommending a nostalgic return to a life of simple Luddite happiness. "I have always had a strong belief in the value of the scientific search for truth and the ability of great engineering to bring material progress," he says, and that belief remains firm. He believes, however, that the famous theoretical physicist Freeman Dyson is right when he warns that it is humans' "illusion of illimitable power . . . what you might call technical arrogance" that is "responsible for all our troubles."

Humans have enough trouble getting even simple things to work right, Joy points out, so we should be extremely humble about our ability to make complex, "living" organisms when the consequences of mistakes that create what are known in the field as "dangerous replicators" are truly apocalyptic. Even as the cutting-edge computer designer continues searching for advancements, he advises that inventors steer clear of dangerous creations. Just as the U.S. has decided not to develop biological weapons even though it has the capability, he says, the future will be safer if scientists live by Thoreau's proverb, that humankind will be "rich in proportion to the number of things which we can afford to let alone." That may be an impossible hope.

Joy may not have the answers to the problem, but thanks to him, no one any longer denies that the threat is very real. Maybe we should start comparing Bill Joy to Paul Revere.

Joannie Fischer is a freelance writer living in suburban Washington, D.C.