The end of Dennard scaling


Lots of people have heard of Moore’s law and probably think it means computers get twice as fast every 18 months or two years or so.

Replica of the first transistor (Photo credit: Revolweb)

Of course, that’s not quite what it really says – which is that we can put twice as many transistors on a chip every two years or so. And for most of the last fifty years that was pretty much equivalent to saying that computers did get twice as fast.

(And, in reality, “Moore’s law” isn’t really a law at all – it began as an observation on the way commercial pressures were shaping the semiconductor market, with chip and solid-state electronics manufacturers competing by making their offerings more powerful. But it is also a mistake to state, as sometimes happens, that the “law” says we can pack twice as many transistors into a given space every two years – that is not true either. In the early 1970s transistor density was doubling only every three years or so, but chip sizes were also increasing, so transistor counts doubled every 18 months; by the 1990s the pace of chip growth had slowed and the doubling time stretched to 24 months.)

It’s now nearly a decade since what has been called “the breakdown of Moore’s law” and the switch to multicore processors instead of ever faster single chips. But, again, that description is wrong: Moore’s law has not broken down at all – transistor numbers are continuing to increase. What has happened is that it is no longer possible to keep running those transistors at ever higher speeds.

Accompanying the decrease in transistor size that was the fundamental driver of what is popularly described as Moore’s law was another, related effect – “Dennard scaling” (named after Robert Dennard, who led the IBM research team that first described it in a 1974 paper). The key consequence of Dennard scaling was that as transistors got smaller, power density stayed constant: halve a transistor’s linear dimensions and the power it uses falls by a factor of four (with voltage and current both halving).
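
To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are purely illustrative (everything normalised to 1.0 for the older process generation) and simply restate the classical scaling rules described above:

```python
# Back-of-the-envelope sketch of classical Dennard scaling (illustrative only).
# Everything is normalised to 1.0 for the older process generation.

def dennard_shrink(length=1.0, voltage=1.0, current=1.0, scale=2.0):
    """Shrink a transistor's linear dimensions by `scale`, assuming voltage
    and current can be reduced by the same factor (the Dennard assumption)."""
    area = (length / scale) ** 2                    # area falls by scale squared
    power = (voltage / scale) * (current / scale)   # power falls by scale squared too
    power_density = power / area                    # ...so power per unit area is unchanged
    return power, power_density

power, density = dennard_shrink()
print(power)    # 0.25 - each transistor uses a quarter of the power it did
print(density)  # 1.0  - power density is constant, which is why clocks could keep rising
```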

This is what has broken down – not the ability to etch smaller transistors, but the ability to keep dropping the voltage and current they need to operate reliably. (In fact the reduction in the linear scale of transistors ran slightly ahead of the reduction of other components at the start of this century, giving us 3+ GHz chips slightly faster than had been expected – though that is about as far as we got.)

What has happened is that static power losses have grown ever more rapidly as a proportion of the overall power supplied as voltages have dropped. And static power losses heat the chip, which further increases static power loss and threatens thermal runaway – and complete breakdown.
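
A toy model gives a feel for why that feedback loop matters – the constants below are invented purely to show the shape of the effect, not to describe any real chip:

```python
# Toy model of the leakage/heat feedback loop (all constants invented for illustration).
import math

def settle_temperature(p_dynamic, leak_at_ambient, theta, sensitivity=0.04,
                       ambient=25.0, limit=150.0, steps=200):
    """Iterate the loop: leakage power grows roughly exponentially with temperature,
    and temperature rises with total power dissipated. Returns the settled
    temperature in degrees C, or None if it never stabilises (thermal runaway)."""
    temp = ambient
    for _ in range(steps):
        p_leak = leak_at_ambient * math.exp(sensitivity * (temp - ambient))
        new_temp = ambient + theta * (p_dynamic + p_leak)
        if new_temp > limit:
            return None                   # runaway: more heat just feeds more leakage
        if abs(new_temp - temp) < 1e-6:
            return round(new_temp, 1)     # settled at a stable operating point
        temp = new_temp
    return round(temp, 1)

print(settle_temperature(p_dynamic=30.0, leak_at_ambient=5.0, theta=0.5))   # ~45.7 - settles
print(settle_temperature(p_dynamic=60.0, leak_at_ambient=20.0, theta=0.5))  # None - runaway
```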

Reasons for the increased static power loss include complex quantum effects as component sizes fall and changes to the chemical composition of chips needed to handle the smaller sizes. And there seems to be no way out. “Moore’s law”, as popularly understood in the sense of ever faster chips, is dead, and even those of us who don’t follow the science understand that well enough – as the recent catastrophic fall in PC sales has demonstrated: nobody is interested in buying a new desktop computer when the one they already have is not obsolete.

Instead we are buying smaller devices (as Moore’s law makes those possible too) and researchers (me included) are looking at the alternatives. My own research is in software for multicore devices rather than in electronics, but the electronics researchers have not completely given up hope of finding other ways to build faster chips – though there is nothing to show for it yet outside the lab (see the links below for some recent developments).

 

Update: I have edited slightly for sense, and to reflect two things: that when Gordon Moore first spoke about what was later termed his “law” there were no integrated circuits available, and that recent developments by IBM point to at least the potential of solving the problem by developing new materials that might eliminate or minimise static power loss (see the links below).

Not even on Wikipedia…


If you are old enough, like me, to remember the Cold War before the days of glasnost and perestroika, you will also recall that one of the strategic weaknesses of the Soviet Union was that it was forced to steal and copy advanced western technologies, seemingly unable to invent them itself.

In many cases that was plainly true – spies stole the secrets of the Manhattan Project to give Stalin his atomic bomb (though Soviet scientists devised H-bomb mechanisms independently).

But in the case of computing, the decision to copy the west was a deliberate and conscious one, taken despite real skill and specialism existing inside the Soviet Union. A while back I wrote about how Soviet computer scientists appeared to be some years ahead of the west in the study of certain algorithms that are important for operating system management. In hardware it was not that the Soviets had a lead – though the first electronic computer in continental Europe was built in the Soviet Union and was based on independent research – but they certainly had real know-how. What killed that was a decision by the Soviet leadership to copy out-of-date IBM machines instead of continuing with their own research and development.

All this is recounted, in novelised form, in the brilliant Red Plenty. The book highlights the role of Sergey Alekseevich Lebedev, the Ukrainian known as “the Soviet Turing”. Like Turing, Lebedev was taken from his work (as an electrical/electronic engineer rather than a mathematician) by the war and played an important role in Soviet tank design. Afterwards he returned to his research with a vengeance, starting with the so-called MESM (a Russian acronym for “Small Electronic Calculating Machine”), and by the mid-fifties he was building some of the world’s fastest and, arguably, best engineered computer systems.

Yet today he does not even appear to rate an entry in Wikipedia.

The Soviet computer industry was not just killed by poor decisions at the top, but by the nature of the Soviet system. Without a market there was no drive to standardise or commoditise computer systems and so individual Soviet computers were impressive but the “industry” as a whole was a mess. Hopes that computers could revolutionise Soviet society also fell flat as the centralised planning system ran out of steam. Switching to copying IBM seemed like a way of getting a standardised system off the shelf, but it was a blow from which Soviet computing never recovered.

OS/2: killed by Bill?


OS/2 logo (Photo credit: Wikipedia)

Here is a fascinating account of the rise and fall of OS/2, the operating system that was supposed to seal IBM’s (and Microsoft’s) global domination. Instead it flopped, being beaten by a poorer quality alternative in the form of Windows 3.0/3.1 after Microsoft pulled out.

I remember when Windows NT was launched in 1993 – one of its selling points was its ability to run OS/2 1.0 software natively via a dedicated subsystem (strange to remember, but then Microsoft went heavy on the modular nature of NT and its ability to run on non-Intel hardware and to support different software on top of the microkernel).

I could only ever find one free piece of native OS/2 software to run – a hex editor. It is a fundamentally vital workhorse for any programmer, yet good implementations always seem to be in short supply (even now – only last month I seriously considered writing my own, so fed up was I with the basic offerings that come with Linux flavours). This one – its name escapes me – was a goodie, though, and I was a bit cheesed off when an NT upgrade (to 3.5) broke it. By then Microsoft plainly did not care much for software compatibility (or for NT’s ability to run on different platforms – that was scrapped too).
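
For the curious, even a bare-bones hex dump – a viewer rather than the full editor I was missing – takes only a few lines of Python:

```python
# A bare-bones hex dump: offset, hex bytes and printable ASCII, 16 bytes per row.
import sys

def hex_dump(path, width=16):
    with open(path, "rb") as f:
        offset = 0
        while chunk := f.read(width):
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            text_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            print(f"{offset:08x}  {hex_part:<{width * 3}}  {text_part}")
            offset += len(chunk)

if __name__ == "__main__":
    hex_dump(sys.argv[1])
```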

Still, OS/2 had its fans. As a journalist reporting on housing I went to see a public sector housing manager in rural Somerset at about this time: he was pioneering a new software system for his district housing offices, and OS/2, with its revolutionary object-orientated desktop (which is what right clicking is all about), was to be at the core of it – with housing officers treating objects on the desktop like the various forms on which they could order repairs and so on. It was difficult not to share his enthusiasm, because the idea, now commonplace, that objects on the desktop could be more than just program icons was so new and exciting.

The article lists the ways in which Microsoft went all out to kill OS/2 and, in every practical sense, they succeeded. Those who doubt the need for free-as-in-freedom software should consider that. But it also lists various places where OS/2 is still in use (in the US). Anyone know of similar examples in the UK?

2012: the year of the death of the desktop?


The first developers of IBM PC computers negle... (Image via Wikipedia)

The BBC has an interesting article about the decline of sales of PCs (by which I think they mean “IBM compatibles” as opposed to personal computers in general), including the decline of enthusiast computer building.

Two reasons are given: the slowdown in the production of games for PCs, which seems to be a major factor at the enthusiast end of the market, and, more importantly, the rise of the tablet.

Now, I understand why people like tablet devices and I think they are good. But they are also hideously underpowered – I find this a constant frustration in business when working with people who are relying on tablet devices, as they (the devices) cannot handle various file formats or tasks.