Double, double, toil and trouble

After a glorious 50 years, Moore’s law—which states that computer power doubles every two years at the same cost—is running out of steam. Tim Cross asks what might replace it

IN 1971 a small company called Intel released the 4004, its first ever microprocessor. The chip, measuring 12 square millimetres, contained 2,300 transistors—tiny electrical switches representing the 1s and 0s that are the basic language of computers. The transistors were spaced 10,000 nanometres (billionths of a metre) apart, a distance about the size of a red blood cell. The result was a miracle of miniaturisation, but still on something close to a human scale. A child with a decent microscope could have counted the individual transistors of the 4004.

The transistors on the Skylake chips Intel makes today would flummox any such inspection. The chips themselves are ten times the size of the 4004, but at a spacing of just 14 nanometres (nm) their transistors are invisible, for they are far smaller than the wavelengths of light human eyes and microscopes use. If the 4004’s transistors were blown up to the height of a person, the Skylake devices would be the size of an ant.

The difference between the 4004 and the Skylake is the difference between computer behemoths that occupy whole basements and stylish little slabs 100,000 times more powerful that slip into a pocket. It is the difference between telephone systems operated circuit by circuit with bulky electromechanical switches and an internet that ceaselessly shuttles data packets around the world in their countless trillions. It is a difference that has changed everything from metal-bashing to foreign policy, from the booking of holidays to the designing of H-bombs.

It is also a difference capable of easy mathematical quantification. In 1965 Gordon Moore, who would later co-found Intel, wrote a paper noting that the number of electronic components that could be crammed into an integrated circuit was doubling every year. This exponential increase came to be known as Moore’s law.

In the 1970s the rate of doubling was reduced to once every two years. Even so, you would have had to be very brave to look at one of Intel’s 4004s in 1971 and believe that such a law would continue to hold for 44 years. After all, double something 22 times and you have 4m times more of it, or perhaps something 4m times better. But that is indeed what has happened. Intel does not publish transistor counts for its Skylake chips, but whereas the 4004 had 2,300 of them, the company’s Xeon Haswell E-5, launched in 2014, sports over 5 billion, just 22 nm apart.
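
The arithmetic is easy to check. A few lines of Python (a quick sketch, using only the figures quoted above) show how 22 biennial doublings compound:

    # Compound doubling, using only figures quoted in the text.
    years = 44                    # 1971 (the 4004) to 2015
    doublings = years // 2        # one doubling every two years
    print(f"{2 ** doublings:,}")  # 4,194,304 -- the "4m times" above

    # Cross-check against the actual transistor counts:
    i4004 = 2_300                 # transistors on the 4004
    xeon = 5_000_000_000          # "over 5 billion" on the Xeon Haswell E-5
    print(f"{xeon / i4004:,.0f}") # ~2,173,913-fold, within a couple of doublings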

Moore’s law is not a law in the sense of, say, Newton’s laws of motion. But Intel, which has for decades been the leading maker of microprocessors, and the rest of the industry turned it into a self-fulfilling prophecy.

That fulfilment was made possible largely because transistors have the unusual quality of getting better as they get smaller; a small transistor can be turned on and off with less power and at greater speeds than a larger one. This meant that you could use more and faster transistors without needing more power or generating more waste heat, and thus that chips could get bigger as well as better.

Making chips bigger and transistors smaller was not easy; semiconductor companies have for decades spent heavily on R&D, and the facilities—“fabs”—in which the chips have been made have become much more expensive. But each time transistors shrank, and the chips made out of them became faster and more capable, the market for them grew, allowing the makers to recoup their R&D costs and reinvest in yet more research to make their products still tinier.

The demise of this virtuous circle has been predicted many times. “There’s a law about Moore’s law,” jokes Peter Lee, a vice-president at Microsoft Research: “The number of people predicting the death of Moore’s law doubles every two years.” But now the computer industry is increasingly aware that the jig will soon be up. For some time, making transistors smaller has no longer been making them more energy-efficient; as a result, the operating speed of high-end chips has been on a plateau since the mid-2000s.

And while the benefits of making things smaller have been decreasing, the costs have been rising. This is in large part because the components are approaching a fundamental limit of smallness: the atom. A Skylake transistor is around 100 atoms across, and the fewer atoms you have, the harder it becomes to store and manipulate electronic 1s and 0s. Smaller transistors now need trickier designs and extra materials.

And as chips get harder to make, fabs get ever more expensive. Handel Jones, the CEO of International Business Strategies, reckons that a fab for state-of-the-art microprocessors now costs around $7 billion. He thinks that by the time the industry produces 5nm chips (which at past rates of progress might be in the early 2020s), this could rise to over $16 billion, or nearly a third of Intel’s current annual revenue. In 2015 that revenue, at $55.4 billion, was only 2% more than in 2011. Such slow increases in revenue and big increases in cost seem to point to an obvious conclusion. “From an economic standpoint, Moore’s law is over,” says Linley Gwennap, who runs the Linley Group, a firm of Silicon Valley analysts.

The pace of advance has been slowing for a while. Marc Snir, a supercomputing expert at Argonne National Laboratory, Illinois, points out that the industry’s International Technology Roadmap for Semiconductors, a collaborative document that tries to forecast the near future of chipmaking, has been over-optimistic for a decade. Promised manufacturing innovations have proved more difficult than expected, arriving years late or not at all.

Brian Krzanich, Intel’s boss, has publicly admitted that the firm’s rate of progress has slowed. Intel has a biennial “tick-tock” strategy: in one year it will bring out a chip featuring smaller transistors (“tick”); the following year it tweaks that chip’s design (“tock”) and prepares to shrink the transistors again in the following year. But when its first 14nm chips, codenamed Broadwell, ticked their way to market in 2014 they were nearly a year behind schedule. The tick to 10nm that was meant to follow the tock of the Skylakes has slipped too; Intel has said such products will not now arrive until 2017. Analysts reckon that because of technological problems the company is now on a “tick-tock-tock” cycle. Other big chipmakers have had similar problems.

Moore’s law has not hit a brick wall. Chipmakers are spending billions on new designs and materials that may make transistors amenable to a bit more shrinkage and allow another few turns of the exponential crank. They are also exploring ways in which performance can be improved with customised designs and cleverer programming. In the past the relentless doubling and redoubling of computing power meant there was less of an incentive to experiment with other sorts of improvement.

Try a different route

More radically, some hope to redefine the computer itself. One idea is to harness quantum mechanics to perform certain calculations much faster than any classical computer could ever hope to do. Another is to emulate biological brains, which perform impressive feats using very little energy. Yet another is to diffuse computer power rather than concentrating it, spreading the ability to calculate and communicate across an ever greater range of everyday objects in the nascent internet of things.

Moore’s law provided an unprecedented combination of blistering progress and certainty about the near future. As that certainty wanes, the effects could be felt far beyond the chipmakers faced with new challenges and costs. In a world where so many things—from the cruising speed of airliners to the median wage—seem to change little from decade to decade, the exponential growth in computing power underlies the future plans of technology providers working on everything from augmented-reality headsets to self-driving cars. More important, it has come to stand in the imagination for progress itself. If something like it cannot be salvaged, the world would look a grimmer place.

At the same time, some see benefits in a less predictable world that gives all sorts of new computing technologies an opportunity to come into their own. “The end of Moore’s law could be an inflection point,” says Microsoft’s Dr Lee. “It’s full of challenges—but it’s also a chance to strike out in different directions, and to really shake things up.”

New sorts of transistors can eke out a few more iterations of Moore’s law, but they will get increasingly expensive

THANKS to the exponential power of Moore’s law, the electronic components that run modern computers vastly outnumber all the leaves on the Earth’s trees. Chris Mack, a chipmaking expert, working from a previous estimate by VLSI Research, an analysis firm, reckons that perhaps 400 billion billion (4×10²⁰) transistors were churned out in 2015 alone. That works out at about 13 trillion a second. At the same time they have become unimaginably small: millions could fit on the full stop at the end of this sentence.
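
The per-second figure is simple arithmetic from the annual estimate, as a quick check shows (assuming a 365-day year):

    # Chris Mack's 2015 estimate, expressed per second.
    transistors_per_year = 4e20
    seconds_per_year = 365 * 24 * 3600  # 31,536,000
    print(transistors_per_year / seconds_per_year)  # ~1.27e13, about 13 trillion a second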

A transistor is a sort of switch. To turn it on, a voltage is applied to its gate, which allows current to flow through the channel between the transistor’s source and drain. When no current flows, the transistor is off. The on-off states represent the 1s and 0s that are the fundamental language of computers.

The silicon from which these switches are made is a semiconductor, meaning that its electrical properties are halfway between those of a conductor (in which current can flow easily) and an insulator (in which it cannot). The electrical characteristics of a semiconductor can be tweaked, either by a process called “doping”, in which the material is spiced with atoms of other elements, such as arsenic or boron, or by the application of an electrical field.

In a silicon transistor, the channel will be doped with one material and the source and drain with another. Doping alters the amount of energy required for any charge to flow through a semiconductor, so where two differently doped materials abut each other, current cannot flow. But when the device is switched on, the electric field from the gate generates a thin, conductive bridge within the channel which completes the circuit, allowing current to flow through.

For a long time that basic design worked better and better as transistors became ever smaller. But at truly tiny scales it begins to break down. In modern transistors the source and drain are very close together, of the order of 20nm. That causes the channel to leak, with a residual current flowing even when the device is meant to be off, wasting power and generating unwanted heat.
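
For readers who prefer code to prose, that switch-plus-leakage behaviour can be caricatured in a few lines of Python. This is a deliberately crude sketch, not a device model; the threshold and current values are invented for illustration:

    # A toy model of a transistor as a voltage-controlled switch.
    # All numbers are illustrative, not real device parameters.
    def drain_current(gate_voltage, threshold=0.7,
                      on_current=1.0, off_leakage=1e-6):
        """Return a caricature of drain current, in arbitrary units."""
        if gate_voltage >= threshold:
            return on_current   # on: the conductive bridge completes the circuit
        return off_leakage      # "off": a residual trickle still flows

    print(drain_current(1.0))   # logical 1
    print(drain_current(0.0))   # logical 0 -- but not quite zero current

As transistors shrink, the off-state trickle grows, narrowing the gap between those two readings; preserving that gap is what the redesigns described below are for.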

Heat from this and other sources causes serious problems. Many modern chips must run below their maximum speeds, or even periodically switch parts of themselves off, to avoid overheating, which limits their performance.

Chipmakers are trying various methods to avoid this. One of them, called strained silicon, which was introduced by Intel in 2004, involves stretching the atoms of the silicon crystal further apart than normal, which lubricates the passage of charge carriers through the channel, reducing the heat generated.

In another technique, first adopted in 2007, metal oxides are used to combat the effects of tunnelling, a quantum phenomenon in which particles (such as electrons) on one side of a seemingly impermeable barrier turn up on the other side without ever passing through the intervening space. Developing more such esoteric techniques may allow chipmakers to go on shrinking transistors for a little longer, but not much.
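
Tunnelling worsens so quickly as dimensions shrink because its probability depends exponentially on the thickness of the barrier. A back-of-the-envelope estimate in Python, using the standard textbook (WKB) approximation and an assumed 1eV barrier rather than figures from any real chip, makes the point:

    import math

    # The chance of an electron tunnelling through a barrier of height E
    # and width d scales roughly as exp(-2 * kappa * d).
    m_e = 9.11e-31       # electron mass, kg
    hbar = 1.055e-34     # reduced Planck constant, J*s
    E = 1.602e-19        # assumed barrier height: 1 eV, in joules
    kappa = math.sqrt(2 * m_e * E) / hbar

    for d_nm in (3, 2, 1):  # barrier widths in nanometres
        p = math.exp(-2 * kappa * d_nm * 1e-9)
        print(f"{d_nm}nm barrier: tunnelling probability ~{p:.1e}")
    # Thinning the barrier from 3nm to 1nm raises the probability by
    # roughly nine orders of magnitude, which is why smaller devices leak.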

The 3D effect

Beyond that, two broad changes will be needed. First, the design of the transistor will have to be changed radically. Second, the industry will have to find a replacement for silicon, the electrical properties of which have already been pushed to their limits.

One solution to the problem of leaking current is to redesign the channel and the gate. Conventionally, transistors have been flat, but in 2012 Intel added a third dimension to its products. To enable it to build chips with features just 22nm apart, it switched to transistors known as “finFETs”, which feature a channel that sticks up from the surface of the chip. The gate is then wrapped around the channel’s three exposed sides, which gives it much better control over what takes place inside the channel. These new transistors are trickier to make, but they switch 37% faster than old ones of the same size and consume only half as much power.

The next logical step, says Mr Snir of Argonne National Laboratory, is “gate-all-around” transistors, in which the channel is surrounded by its gate on all four sides. That offers maximum control, but it adds extra steps to the manufacturing process, since the gate must now be built in multiple sections. Big chipmakers such as Samsung have said that it might take gate-all-around transistors to build chips with features 5nm apart, a stage that Samsung and other makers expect to be reached by the early 2020s.

Beyond that, more exotic solutions may be needed. One idea is to take advantage of the quantum tunnelling that is such an annoyance for conventional transistors, and that will only get worse as transistors shrink further. It is possible, by applying electrical fields, to control the rate at which tunnelling happens. A low rate of leakage would correspond to a 0; a high rate to a 1. The first experimental tunnelling transistor was demonstrated by a team at IBM in 2004. Since then researchers have been working to commercialise them.

In 2015 a team led by Kaustav Banerjee, of the University of California, Santa Barbara, reported in Nature that they had built a tunnelling transistor with a working voltage of just 0.1V, far below the 0.7V of devices now in use, which means much less heat. But there is more work to be done before tunnelling transistors become viable, says Greg Yeric of ARM, a British designer of microchips: for now they do not yet switch on and off quickly enough to allow them to be used for fast chips.

Jim Greer and his colleagues at Ireland’s Tyndall National Institute are working on another idea. Their device, called a junctionless nanowire transistor (JNT), aims to help with another problem of building at tiny scales: getting the doping right. “These days you’re talking about [doping] a very small amount of silicon indeed. You’ll soon be at the point where even one or two misplaced dopant atoms could drastically alter the behaviour of your transistor,” says Dr Greer.

Instead, he and his colleagues propose to build their JNTs, just 3nm across, out of one sort of uniformly doped silicon. Normally that would result in a wire rather than a switch: a device that is uniformly conductive and cannot be turned off. But at these tiny scales the electrical influence of the gate penetrates right through the wire, so the gate alone can prevent current flowing when the transistor is switched off.

Whereas a conventional transistor works by building an electrical bridge between a source and a drain that are otherwise insulated, Dr Greer’s device works the other way: more like a hose in which the gate acts to stop the current from flowing. “This is true nanotechnology,” he says. “Our device only works at these sorts of scales. The big advantage is you don’t have to worry about manufacturing these fiddly junctions.”

Material difference

Chipmakers are also experimenting with materials beyond silicon. Last year a research alliance including Samsung, GlobalFoundries, IBM and the State University of New York unveiled a microchip made with components 7nm apart, a technology that is not expected to be in consumers’ hands until 2018 at the earliest. It used the same finFET design as the present generation of chips, with slight modifications. Most of the device was built from the usual silicon, but around half of its transistors had channels made from a silicon-germanium (SiGe) alloy.

This was chosen because it is, in some ways, a better conductor than silicon. Once again, that means lower power usage and allows the transistor to switch on and off more quickly, boosting the speed of the chip. But it is not a panacea, says Heike Riel, the director of the physical-sciences department at IBM Research. Modern chips are built from two types of transistor. One is designed to conduct electrons, which carry a negative charge. The other sort is designed to conduct “holes”, which are places in a semiconductor that might contain electrons but happen not to; these, as it turns out, behave as if they were positively charged electrons. And although SiGe excels at transporting holes, it is rather less good at moving electrons than silicon is.

Future paths to higher performance along these lines will probably require both SiGe and another compound that moves electrons even better than silicon. The materials with the most favourable electrical properties are alloys of elements such as indium, gallium and arsenic, collectively known as III-V materials after their locations in the periodic table.

The trouble is that these materials do not mix easily with silicon. The spacing between the atoms in their crystal lattices is different from that in silicon, so adding a layer of them to the silicon substrate from which all chips are made causes stress that can have the effect of cracking the chip.

The best-known alternative is graphene, a single-atom-thick (and hence two-dimensional) form of carbon. Graphene conducts electrons and holes very well. The difficulty is making it stop. Researchers have tried to get around this by doping, squashing or squeezing graphene, or applying electric fields to change its electrical properties. Some progress has been made: the University of Manchester reported a working graphene transistor in 2008; a team led by Guanxiong Liu at the University of California, Riverside, built devices using a property of the material called “negative resistance” in 2013. But the main impact of graphene, says Dr Yeric, has been to spur interest in other two-dimensional materials. “Graphene sort of unlocked the box,” he says. “Now we’re looking at things like sheets of molybdenum disulphide, or black phosphorus, or phosphorus-boron compounds.” Crucially, all of those, like silicon, can easily be switched on and off.

If everything goes according to plan, says Dr Yeric, novel transistor designs and new materials might keep things ticking along for another five or six years, by which time the transistors may be 5nm apart. But beyond that “we’re running out of ways to stave off the need for something really radical.”

His favoured candidate for that is something called “spintronics”. Whereas electronics uses the charge of an electron to represent information, spintronics uses “spin”, another intrinsic property of electrons, one related to the angular momentum of a rotating object. Usefully, spin comes in two varieties, up and down, which can be used to represent 1 and 0. And the computing industry has some experience with spintronics already: it is used in hard drives, for instance.

Research into spintronic transistors has been going on for more than 15 years, but none has yet made it into production. Appealingly, the voltage needed to drive them is tiny: 10-20 millivolts, hundreds of times lower than for a conventional transistor, which would solve the heat problem at a stroke. But that brings design problems of its own, says Dr Yeric. With such minute voltages, distinguishing a 1 or a 0 from electrical noise becomes tricky.
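
The attraction of such tiny voltages is easy to quantify. By the conventional rule of thumb, the energy dissipated each time a transistor switches scales with the square of the operating voltage; a rough comparison, holding everything else fixed:

    # Switching energy scales roughly as C * V^2, so at fixed capacitance
    # the saving is the square of the voltage ratio.
    conventional_v = 0.7    # volts, as quoted for today's transistors
    spintronic_v = 0.015    # volts, the midpoint of the 10-20mV range above
    print((conventional_v / spintronic_v) ** 2)  # ~2,178: thousands of times less energy per switch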

“It’s relatively easy to build a fancy new transistor in the lab,” says Linley Gwennap, the analyst. “But in order to replace what we’re doing today, you need to be able to put billions on a chip, at a reasonable cost, with high reliability and almost no defects. I hate to say never, but it is very difficult.” That makes it all the more important to pursue other ways of making better computers.
