
FEATURES

MOORE OR LESS?

BY KEVIN LEWIS

An Intel cofounder observed that computer chips roughly double in performance every two years. Now, researchers are struggling to prove this “law” still holds.

Standing before a room of investors and analysts last May, Bill Holt, the senior executive at Intel responsible for keeping his company at the forefront of semiconductor innovation, stated flatly, “We believe in Moore’s Law.” That he needed to say so spoke volumes about the mood of uncertainty in the semiconductor industry. For decades, makers of integrated circuits kept pace with a 1965 observation by Intel cofounder Gordon Moore that chip performance doubles approximately every one to two years. And, in fact, computer processing speed and memory capacity grew exponentially, driving dynamic technological advances across the economy.
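For a sense of the arithmetic behind that observation, consider a back-of-the-envelope sketch in Python. The 1971 starting point (the Intel 4004's roughly 2,300 transistors) is a well-known figure; the fixed two-year doubling period is the assumption being illustrated, not data from the article:

    # Back-of-the-envelope sketch of the doubling arithmetic. The fixed
    # two-year doubling period is the assumption being illustrated.
    def projected_transistors(base_count, base_year, year, doubling_period=2.0):
        """Project a transistor count under a fixed doubling period."""
        return base_count * 2 ** ((year - base_year) / doubling_period)

    # Example: ~2,300 transistors on the Intel 4004 in 1971.
    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, round(projected_transistors(2300, 1971, year)))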

But now, this longtime article of faith is being tested, shaking industry expectations and potentially affecting not only the many sectors of society that depend on growing computer performance but also U.S. technological leadership overall. One reason is the amount of power required to achieve faster speeds, especially in a world of battery-powered portable devices. Another is that the number of transistors that can be crammed onto chips of a given size is nearing its supposed limit. “To ensure that computing systems continue to double in performance every few years, we need to make significant changes in computer software and hardware,” says Samuel H. Fuller, chief technology officer and vice president of research and development for Analog Devices Inc. in Norwood, Mass. He chaired a National Academies panel calling for extensive research to address a “crisis in computing performance.”
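The power problem Fuller describes can be traced to the standard relation for CMOS switching power, P = C·V²·f: once the supply voltage can no longer be lowered, a faster clock means proportionally more power. A minimal sketch, with hypothetical component values:

    # Minimal sketch of the standard CMOS dynamic-power relation,
    # P = C * V^2 * f (switching power only; leakage is ignored).
    # The capacitance, voltage, and frequency values are illustrative.
    def dynamic_power(capacitance_farads, voltage_volts, frequency_hz):
        """Switching power dissipated by a CMOS circuit."""
        return capacitance_farads * voltage_volts ** 2 * frequency_hz

    # Doubling the clock at a fixed supply voltage doubles the power;
    # for years, lowering the voltage offset this, until it hit a floor.
    print(dynamic_power(1e-9, 1.0, 2e9))  # 2.0 W
    print(dynamic_power(1e-9, 1.0, 4e9))  # 4.0 W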

“Each time the technology reached the predicted barriers . . . imaginative new solutions were developed to further extend Moore’s Law.” — Intel’s Kelin Kuhn

Nevertheless, even as the computer industry looks to resolve other bottlenecks to keep up with Moore’s Law, the immense strides that the semiconductor industry itself has made in the past offer grounds for optimism. Kelin Kuhn, one of Intel’s semiconductor gurus, points out that “key technologists in each generation of this long history have also looked forward and predicted the ‘end of scaling’” – as in scaling down to smaller sizes – “within one or two generations. However, each time the technology reached the predicted barriers, scaling did not stop. Instead, imaginative new solutions were developed to further extend Moore’s Law and the transistor-scaling road map.” But the stakes are high, development costs will be steep, and only the biggest and best-financed firms may come out winners.

Building Upward

To get a sense of what electrical engineers and computer scientists are tackling, think of an integrated circuit as a big city with thousands of streets and intersections. This “city” is relatively flat – thus the name “chip” – and it’s also very tightly packed, akin to having narrow alleys for streets. The electrical current flows around like traffic, and the transistors are like the intersections. To perform computation, the flow of current is switched on and off across the various transistors around the circuit as quickly as possible while trying not to dissipate energy along the way. Ideally, it would be like having Ferraris driving full throttle through the alleyways of Rome and through intersections with stoplights that are only red or green, no yellow – without accidents. Precision is the name of the game, especially when the streets are now only nanometers wide.

Among the hottest areas of research-and-development activity are non-planar transistors, III-V compound semiconductors, and carbon-based nanoelectronics.

Although transistors have traditionally occupied only a very thin layer on the surface of the chip, the idea with non-planar transistors is to take advantage of opportunities in the vertical dimension by re-creating the transistor as a more efficient three-dimensional structure (though still vanishingly small). This could allow more control over the flow of current, with less of a footprint on the surface of the chip. Meanwhile, III-V semiconductors, like gallium arsenide, are already used in special-purpose chips because they have superior electro-optical performance, but they have had trouble competing with silicon in cost and manufacturability. For the most part, non-planar transistors and III-V semiconductors are medium-term innovations to allow the current paradigm to keep scaling.

Some of this kind of work is being done in academia – for example, through the Focus Center Research Program within the Semiconductor Research Corp. (SRC), an industrywide research consortium. But much of the work is being done in the companies themselves as it makes the transition from research to development. One likely course for these medium-term innovations is as specialized enhancements to silicon-based circuitry. “If a company were to use III-V materials to enhance the transistor speed, they would only use it sparingly on certain parts of the chip, not for the entire chip,” says Tsu-Jae King Liu, a professor and associate dean for research at the University of California, Berkeley, College of Engineering. “For the rest of the chip that doesn’t have to operate at the fastest speeds, it would actually be cheaper and lower power to implement in silicon.” And it’s not as if these innovations don’t have hurdles to clear.

In the case of non-planar transistors, “it’s more the patterning,” says Jeff Welser, an IBM semiconductor specialist who directs the Nanoelectronics Research Initiative at SRC. He’s referring to photolithography, the process by which intricate circuit patterns are etched onto the surface of a chip. “When you do lithography over a 3-D structure, it’s very difficult because you have to focus at different heights. Given how much we’re pushing lithography already, you get very little focal depth, so you really want to have very planar surfaces.”

For him, “the biggest concern about these structures is the amount of parasitic capacitance and resistance” – the extra energy that has to be expended moving electrical charge around the circuit – “for all these layers of things you have to put on.” On the other hand, Welser says, the challenge with III-V semiconductors, “at least historically, has been that you don’t have a really good oxide to go on top of them to insulate the gate,” a critical component for efficient transistor operation. “Arguably, the reason silicon beat the III-Vs is silicon dioxide was such a beautiful insulator.”

Looking farther out, carbon-based nanoelectronics – using nanotubes and nanoribbons made out of graphene, a single-atomic-layer sheet of carbon – is currently the leading candidate for a more fundamental paradigm shift. According to the 2009 International Technology Roadmap for Semiconductors (ITRS) report, carbon-based nanoelectronics “exhibits high potential and is maturing rapidly.” At the moment, though, the technology is still largely in the hands of academic researchers. And Jeff Welser’s job is to make sure that research gets done. “The thing that’s interesting about carbon right now is it can serve two different purposes. One, you can use it to make a FET [field-effect transistor, the current standard], whether it’s a nanotube or graphene, that can potentially be a higher performance FET than what you get with silicon. It also might be useful for interconnects between silicon transistors; it’s an extremely good thermal conductor, so it might help you with getting rid of some of the heat that you’re trying to dissipate on these chips right now. The other aspect that I think makes carbon even more attractive as an area to put your money into for far-out research is it’s got very different physics, particularly in the graphene, in terms of the way electrons move in it, that you could use maybe to try to make totally different types of switches: maybe a switch based on spintronics, where you’re manipulating the spin of the electron, or one based on something called pseudo-spintronics, where you’re taking advantage of a quantum property of an electron in graphene that’s unique to the graphene structure.”

Subhasish Mitra, an assistant professor in the Departments of Electrical Engineering and Computer Science at Stanford, notes that as recently as four years ago, researchers “could not even demonstrate anything, and there were some very fundamental challenges.” However, more recent work “has shown that you can get around all that, and that’s why today, four years later, you can actually build complex-enough designs, and there’s nothing fundamental against being able to build big designs using carbon nanotubes.” Of course, there’s still plenty of research to be done. “It’s really up in the air,” he says, about where carbon-based nanoelectronics is going. “If you’re talking about whether we can build something, yes, we can. If you’re talking about whether we’re far away from the promised benefits, we’re very far away.”

According to Mitra, one concern is “the density of carbon nanotubes. When you grow carbon nanotubes, you would like to have roughly around 250 nanotubes per micron; we would get maybe around 10 carbon nanotubes per micron on a good day.” He also notes that work still needs to be done on doping and contact materials, which are needed to make the semiconductor useful.

Moreover, according to Welser and King Liu, carbon-based electronics will also probably be introduced as add-ons to silicon, given the latter’s built-in advantages. Welser says that “one of the things we realized early on was, if you were going to continue to build a FET switch, silicon can probably get you absolutely as far as any other material can get you. Certainly you might want to change to three-dimensional structures, you might want to change to III-V materials, you might even want to change to carbon nanotubes, because they might give you improved performance at that same dimension, but none of them will necessarily scale that much further than silicon, so you really are looking for improved performance without improving the scaling path.” The larger lesson, he points out, is that “too many people have lost their careers betting against silicon.” Indeed, according to Dimitri Antoniadis, an MIT professor who runs one of the Focus Centers doing research to extend the current silicon-based paradigm, the material “scales quite gracefully.” And so, notwithstanding the potential for breakthrough innovation, the industry will probably continue improving today’s approach “until it’s dead,” affirms Robert Trew, director of the electrical, communications, and cybersystems division in the National Science Foundation’s engineering directorate.

Who Will Survive?

Whether a successor approach can be ready before the current one is exhausted is what worries Samuel Fuller and others on his National Academies panel. They urge a major research investment in parallel computing, calling this the only known alternative for improving computer performance without significantly increasing costs and energy use. Parallel computing divides a program into parts that are then executed on distinct processors. The problem is to match parallel software with parallel hardware. “The next generation of discoveries will require advances at both the hardware and software levels,” the panel’s report says.
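As a minimal sketch of that idea (the workload and the four-way split are hypothetical, not drawn from the panel's report), here is a toy Python program divided into parts that run on distinct processors:

    # Toy example of parallel computing: a job split into parts that are
    # executed on distinct processors. The workload (summing squares) is
    # illustrative only.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # The work handed to one processor.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = range(1_000_000)
        chunks = [data[i::4] for i in range(4)]  # divide into four parts
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))  # run in parallel
        print(total)

The matching problem the panel describes shows up even here: the program only speeds up if the work divides cleanly and the hardware really runs the parts at once.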

If the government isn’t prepared to make this investment, it will be up to industry to decide which course promises the best return for its R&D dollars. Moore’s original 1965 article presented a series of improving cost curves. Unfortunately, to get to the bottom of the cost curve, you have to climb to the top of the investment curve. These days, a leading-edge manufacturing facility costs upwards of $4 billion. The 2009 ITRS report noted that it was “difficult for most people in the semiconductor industry to imagine how we could continue to afford the historic trends of increase in process equipment and factory costs for another 15 years!” But if you can afford that kind of investment, economies of scale can allow you to keep up with Moore’s Law at near-constant cost per chip. According to Dean Freeman, a semiconductor industry analyst at Gartner Research, “If I am Intel or a Samsung, I can keep heading down Moore’s Law ahead of my competition, stay profitable, and thus continue to afford to move to the next technology node.” The result is industry concentration. To drive this home, one of the slides at Intel’s meeting with investors and analysts last May was titled, “Fewer Companies Deliver Moore’s Law.”
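The economics can be sketched with a toy calculation. Only the $4 billion fab cost comes from the article; every other number below is a hypothetical round figure, chosen to show how volume drives down cost per chip:

    # Illustrative economies-of-scale arithmetic. The $4 billion fab cost
    # comes from the article; wafer volume, dice per wafer, and yield are
    # hypothetical round numbers chosen only to show the effect of scale.
    def cost_per_good_die(fab_cost, wafers, dice_per_wafer, yield_rate):
        """Fab cost amortized over every good die it produces."""
        return fab_cost / (wafers * dice_per_wafer * yield_rate)

    print(cost_per_good_die(4e9, 1_000_000, 500, 0.9))  # ~ $8.89 per die
    print(cost_per_good_die(4e9, 100_000, 500, 0.9))    # ~ $88.89 per die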

 

Kevin Lewis is a columnist for the Ideas section of the Boston Globe.

 


