The end of Dennard scaling


Lots of people have heard of Moore’s law and probably think it means computers get twice as fast every 18 months or two years or so.

Replica of the first transistor (Photo credit: Revolweb)

Of course, that’s not quite what it really says – which is that we can put twice as many transistors on a chip every two years or so. And for most of the last fifty years that was pretty much equivalent to saying that computers did get twice as fast.

(And, in reality, “Moore’s law” isn’t really a law at all – it began as an observation on the way commercial pressures were shaping the semiconductor market, with chip makers competing by making their offerings more powerful. But it is also a mistake to state, as sometimes happens, that the “law” says we can pack twice as many transistors into a given space every two years – because that is not true either. In the early 1970s transistor density was doubling only every three years or so, but chip sizes were also increasing, so transistor counts doubled every 18 months; by the 1990s the growth in chip size had slowed, and the doubling time for transistor counts stretched to 24 months.)

It’s now nearly a decade since what has been called “the breakdown of Moore’s law” and the switch to multicore processors instead of ever faster single chips. But again, this is wrong: Moore’s law has not broken down at all – transistor numbers are continuing to increase. What has happened is that it is no longer possible to keep running those transistors at ever faster speeds.

That is because accompanying the decrease in transistor size – the fundamental driver of what is popularly described as Moore’s law – was another, related effect: “Dennard scaling” (named after Robert Dennard, who led the IBM research team that first described it in a 1974 paper). The key consequence of Dennard scaling was that as transistors got smaller, power density stayed constant – so if a transistor’s linear size was reduced by a factor of 2, the power it used fell by a factor of 4 (with voltage and current both halving).
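To make that arithmetic concrete, here is a minimal sketch (in Python) of the classical “constant-field” scaling rules, using the textbook dynamic-power model P = C·V²·f. The function names and the normalised starting values are my own illustration, not anything taken from Dennard’s paper:

    # A minimal sketch of classical Dennard ("constant-field") scaling.
    # Uses the textbook CMOS dynamic-power model P = C * V^2 * f; names
    # and starting values are illustrative, not from the 1974 paper.

    def dennard_scale(C, V, f, k=2.0):
        """Apply a linear shrink by factor k under constant-field rules."""
        return (C / k,   # capacitance falls with linear dimensions
                V / k,   # supply voltage is cut proportionally
                f * k)   # shorter channels can switch faster

    def dynamic_power(C, V, f):
        return C * V * V * f  # switching power per transistor

    C, V, f = 1.0, 1.0, 1.0  # normalised starting values
    before = dynamic_power(C, V, f)
    after = dynamic_power(*dennard_scale(C, V, f, k=2.0))

    print(after / before)  # 0.25 -> power per transistor falls by 4
    # Each transistor's area also falls by k^2 = 4, so power *density*
    # is unchanged: clocks could keep rising without chips melting.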

This is what has broken down – not the ability to etch smaller transistors, but the ability to keep dropping the voltage and the current they need to operate reliably. (In fact the reduction in the linear scale of transistors ran slightly ahead of the reduction of other components at the start of this century, leading to 3+ GHz chips a bit faster than expected – though that is as far as we got.)

What has happened is that static power losses have increased ever more rapidly as a proportion of the overall power supplied as voltages have dropped. And static power losses heat the chip, further increasing static power loss and threatening thermal runaway – and complete breakdown.
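The exponential character of the problem can be illustrated with the standard subthreshold-leakage approximation, in which off-state leakage grows as e^(−Vth/(n·kT/q)). The sketch below is a rough, hypothetical illustration – the threshold voltages and the slope factor n are textbook-style placeholders, not measurements. Note too that the thermal voltage kT/q rises with temperature, so heating itself increases leakage: that is the feedback loop behind thermal runaway.

    # Illustrative only -- not a device model. Shows how subthreshold
    # leakage grows exponentially as the threshold voltage V_th is
    # lowered (V_th has to fall roughly in step with supply voltage).
    import math

    def relative_leakage(v_th, temp_kelvin=300.0, n=1.5):
        """Off-state leakage ~ exp(-V_th / (n * kT/q)); n and kT/q
        (~26 mV at 300 K) are standard textbook quantities."""
        v_thermal = 8.617e-5 * temp_kelvin  # kT/q in volts
        return math.exp(-v_th / (n * v_thermal))

    base = relative_leakage(0.5)
    for v_th in (0.5, 0.4, 0.3, 0.2):
        ratio = relative_leakage(v_th) / base
        print(f"V_th = {v_th:.1f} V -> leakage x{ratio:,.0f}")
    # Dropping V_th from 0.5 V to 0.2 V multiplies leakage ~2,000-fold
    # here, while the useful switching power only falls polynomially.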

Reasons for the increased static power loss include complex quantum effects as component sizes fall and as the chemical composition of chips is changed to handle the smaller sizes. And there seems to be no way out. “Moore’s law”, as popularly understood in the sense of ever faster chips, is dead – and even if we don’t get the science, we understand this all the same, as the recent catastrophic fall in PC sales has demonstrated: nobody is interested in buying a new computer for their desktop when the one they already have is not obsolete.

Instead we are buying smaller devices (as Moore’s law makes that possible too) and researchers (me included) are looking at the alternatives. My research is not in electronics but in software for multicore devices; the electronics researchers, though, have not completely given up hope of finding other ways to build faster chips – there is just nothing to show for it yet outside the lab (see the links below for some recent developments).


Update: I have edited slightly for sense and to reflect the fact that when Gordon Moore first spoke about what was later termed his “law”, integrated circuits were still a new and largely experimental technology, and that recent developments by IBM point to at least the potential of solving the problem by developing new materials that might eliminate or minimise static power loss (see the links below).


What really drives Moore’s Law


Gordon Moore on a fishing trip (Photo credit: Wikipedia)

“Moore’s Law” is one of the more widely understood concepts in computer hardware. Many ordinary people, including those with little or no understanding of what goes into making an integrated circuit, grasp the idea that computer hardware becomes better (and cheaper) in some sort of geometric way. But, actually, the rate of and the reasons for this “law” have been in flux ever since it was first proposed in 1965.

As part of the very first baby steps on my road to a PhD I have to prepare a literature review, which means demonstrating an understanding of why, while Gordon E. Moore’s predictions about increasing transistor density have broadly held up, the era of ever faster single-processor (or even few-processor) computers is decisively over.

So I have read this paper – Establishing Moore’s Law – which gives some interesting perspectives on what we now refer to as “Moore’s Law”.

The term “Moore’s Law”: Although coined by Carver Mead of Caltech, the term was popularised by Robert Noyce, Moore’s co-founder at Intel, in a Scientific American article in 1977.

The original formulation was a prediction based on the need of IC makers to compete: In 1965, when Moore first formulated what we now call his “law”, it was actually a prediction that manufacturers of integrated circuits – then a relatively new and experimental technology – would have to radically cut the costs and increase the complexity of their products if they were to compete successfully with the manufacturers of discrete electronic components.

The ‘law’ broke down in the late 1960s: From around 1965 to 1969 the ‘law’ didn’t work, in the sense that the growth in chip speed and complexity did not match the prediction. In Moore’s view this was because manufacturers did not produce chips “whose complexity came close to the potential limit”.

The ‘law’ is not based on any fundamental character of ICs but on a wide interplay of technological and commercial factors: This is the overall theme of the paper – the early gains were driven by the need for ICs to compete at all; then came gains founded on the introduction of computer-aided design, and then on a state-sponsored drive by Japanese companies to gain a foothold in the market (in fact they came to dominate the market in memory technology). Then came the Microsoft-Intel alliance, and so on to the present…