Going atomic … or concurrency is hard


SVG Graph Illustrating Amdahl’s Law (Photo credit: Wikipedia)

In my PhD world a year’s worth of software experimentation has proved what we all knew already … that systems using traditional memory models struggle in the Network-on-Chip environment and so I am now trying something slightly different.

My “model” (it’s all in software) is of a 16-core system, with each core having a small amount of on-chip memory (32KB); these local memories are combined to form a flat memory space. Memory in this space can be accessed quickly; memory outside it, in the next level up in the hierarchy, is roughly 100 times further away.
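To pin the model down a little, here is a minimal sketch of that cost structure in C – the constants (16 cores, 32KB each, a roughly 100-times penalty for leaving the flat space) come from the description above, but the names and the single cost function are purely illustrative, not my simulator code:

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_CORES        16
#define LOCAL_MEM_BYTES  (32 * 1024)                    /* 32KB of on-chip memory per core   */
#define FLAT_SPACE_BYTES (NUM_CORES * LOCAL_MEM_BYTES)  /* combined flat local address space */

#define LOCAL_COST  1      /* ticks to reach the on-chip flat space              */
#define REMOTE_COST 100    /* ticks to reach the next level up in the hierarchy  */

/* Cost, in ticks, of touching a given address in this toy model. */
static unsigned access_cost(uint32_t addr)
{
    bool on_chip = addr < FLAT_SPACE_BYTES;
    return on_chip ? LOCAL_COST : REMOTE_COST;
}

int main(void)
{
    printf("flat local space: %u bytes\n", (unsigned)FLAT_SPACE_BYTES);
    printf("cost of a local access:  %u ticks\n", access_cost(0x1000));
    printf("cost of a remote access: %u ticks\n", access_cost(FLAT_SPACE_BYTES + 0x1000));
    return 0;
}
```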

Using any form of traditional paging model (including Belady’s optimal page replacement algorithm) this system starts to thrash on even moderate loads – the cost of moving pages in and out of the local memory determines performance and so there is no benefit from adding additional processors (in fact it just slows the individual processors down).

Such an outcome makes any promise of improved performance from parallelism void – it does not really matter how efficiently you have parallelised the code (some corner cases excepted – e.g. if all chips were accessing the same memory at the same time): you are trapped by a memory I/O bound.

So now I want to look at alternatives beyond the usual 4k (or 2k) paging – but I have been struggling all week to get the locking semantics of my code right. Concurrency is hard.

The one thing that debugging parallel code and locks teaches you again and again is never to assume that some event will be so rare you don’t need to bother about it: because when you are executing millions of instructions a second, even rare events tend to happen.

It has also taught me to check return values – code that will “always” work in a single threaded environment may actually turn out to be quite a tricky customer when running in parallel with other instances of itself or when it is accessing shared memory.

But, finally, the main lesson this week has been about going atomic.

I have a tendency to think: if I release that lock for a few lines of code it might improve overall performance, and I can just take it again a little later. Beware of that thought.

If you need to make a series of actions atomic you need to hold the same lock across them all – releasing it for even a few lines breaks atomicity and will quite likely break your code.
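A minimal sketch of the trap in pthreads-flavoured C – the free-frame counter and the function names are invented for illustration, not taken from my code:

```c
#include <pthread.h>

static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static int free_frames = 0;

/* BROKEN: dropping the lock "for just a few lines" lets another thread
 * change free_frames between the check and the claim.                   */
int claim_frame_broken(void)
{
    pthread_mutex_lock(&page_lock);
    int available = (free_frames > 0);
    pthread_mutex_unlock(&page_lock);   /* atomicity ends here...         */

    /* ...another thread can grab the last frame right now...             */

    pthread_mutex_lock(&page_lock);
    if (available)
        free_frames--;                  /* ...so this can go negative     */
    pthread_mutex_unlock(&page_lock);
    return available;
}

/* CORRECT: the check and the update are atomic because one lock is held
 * across the whole sequence.                                             */
int claim_frame(void)
{
    pthread_mutex_lock(&page_lock);
    int claimed = 0;
    if (free_frames > 0) {
        free_frames--;
        claimed = 1;
    }
    pthread_mutex_unlock(&page_lock);
    return claimed;
}
```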


None of the easy things work


This is a post about my PhD research: in fact it is a sort of public rumination, an attempt to clarify my thoughts in writing before I take the next step.

It’s also possibly an exercise in procrastination: a decision to write about what I might do next, rather than to get on with doing it, but I am going to suppress that thought for now.

I am looking for ways to make “network on chip” systems more viable as general use (or any use, I suppose) computing platforms. These systems are a hardware response to the hardware problem that is causing such difficulties for big software and hardware manufacturers alike: namely that we cannot seem to make faster computers any more.

The problem we have is that while we can still get more transistors on a chip (i.e., “Moore’s Law” still applies), we cannot keep operating them at ever faster speeds (i.e., “Dennard Scaling” has broken down) because they get too hot.

In response we can either build better small devices (mobile phones, tablets) or try to build faster parallel computing devices (so instead of one very fast chip we have several moderately fast chips and try to have better software that makes good use of their ability to compute things in parallel).

Network-on-chip (NoC) processors are a major step along the road of having parallel processors – we put more processing units on a single piece of silicon rather than have them co-operate via external hardware. But the software has not caught up and we just cannot keep these chips busy enough to get the benefit their parallelism might offer.

That is where I hope to make a difference, even if just at the margins. Can I find a way to make the NoC chips busier, specifically by keeping them fed with data and code from the computer memory fast enough?

I have tried the obvious and simple methods – essentially adaptations of methods that have been used for most of the last 50 years in conventional serial computing devices – and the answer is ‘no’ if that is all that is on offer.

Messing about with the standard algorithms used to feed code and data to the processors hits a solid brick wall: the chips have a limited amount of ‘fast’ local memory and the time it takes to keep that refreshed with up-to-date code and data places a fundamental limit on performance.

So, while many computer science students might be familiar with “Amdahl’s Law” – which stipulates that, for parallel code, the elements that have to be run in serial (even if just setting up the parallel section) place a fundamental limit on how much extra performance we can squeeze out by throwing more and more parallel processors at the problem – we have a different, if related, problem here. We can apply more and more parallel processors to the problem, yet performance remains constant: even though we are running parallel code, we are limited by memory performance.

This limit – which implies that as we use more processors they become individually less efficient – even hits the so-called “clairvoyant” or optimal (OPT) memory management/page replacement algorithm: OPT knows which memory page it is most efficient to replace but is still limited by the fundamental barrier of limited on-chip memory.
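For reference, a toy sketch of the textbook Belady/OPT victim choice (not my simulator): evict the resident page whose next use lies furthest in the future. Even with that perfect knowledge, every miss still pays the full cost of fetching from the slower level of the hierarchy:

```c
#include <stddef.h>

/* Toy Belady/OPT victim selection: given the pages currently resident and
 * the future reference string, evict the page whose next use is furthest
 * away (or never comes). Perfect knowledge of the future, but each miss
 * still pays the full cost of a fetch from the slower memory level.       */
size_t opt_choose_victim(const int *resident, size_t n_resident,
                         const int *future_refs, size_t n_future)
{
    size_t victim = 0;
    size_t best_distance = 0;

    for (size_t i = 0; i < n_resident; i++) {
        size_t next_use = n_future;          /* treated as "never used again" */
        for (size_t j = 0; j < n_future; j++) {
            if (future_refs[j] == resident[i]) {
                next_use = j;
                break;
            }
        }
        if (next_use >= best_distance) {
            best_distance = next_use;
            victim = i;
        }
    }
    return victim;                           /* index into resident[]          */
}
```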

The limit is manifest in the straight lines we can see in the plot here – the steeper slope of OPT means it runs faster but after the first few processors are brought to bear on the problem (the number of processors being used climbs for the first few billion instructions) the rate of instructions executed per ‘tick’ (an analogue of time) is constant.

OPT and LRU compared – both fundamentally stymied by memory shortage

 

Getting NoCs to run faster, and so releasing the benefits of the potentially massive parallelism they could bring, depends on beating this memory barrier (and lots of other things too, but one at a time!). So, what are the options?

Well, one thing I can rule out is trying to cache a particular piece of a memory page (in traditional operating systems memory is shifted about the system in blocks called pages – typically 4096 bytes long). Caches typically store memory in 16-byte “lines” – hardware reads from the backing memory store in 16-byte blocks in most cases – and so I tested to see if there was a pattern in which 16-byte line was most likely to be used (see previous blog post). My instinct from looking at the plot is that this will not work.
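For what it is worth, the bookkeeping behind that test is trivial – each 4096-byte page divides into 256 16-byte lines, and the question is whether accesses cluster on particular line indices. A sketch (not the instrumented code itself):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define LINE_SIZE 16u
#define LINES_PER_PAGE (PAGE_SIZE / LINE_SIZE)   /* 256 lines per page */

static unsigned long line_hits[LINES_PER_PAGE];

/* Record which 16-byte line within its 4KB page an access falls on. */
static void record_access(uint64_t address)
{
    unsigned line = (address % PAGE_SIZE) / LINE_SIZE;
    line_hits[line]++;
}

int main(void)
{
    /* A few made-up addresses just to exercise the counter. */
    uint64_t sample[] = { 0x1000, 0x1010, 0x1014, 0x2ff0, 0x3008 };
    for (size_t i = 0; i < sizeof sample / sizeof sample[0]; i++)
        record_access(sample[i]);

    for (unsigned i = 0; i < LINES_PER_PAGE; i++)
        if (line_hits[i])
            printf("line %3u: %lu hits\n", i, line_hits[i]);
    return 0;
}
```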

Similarly, a look at which pages were being used doesn’t reveal any immediately obvious pattern – some pages are used heavily by code, some are not – nothing surprising there.

So, the easy things do not work. Now I need to look at the hard things.

I think I need to escape from the page paradigm – one thing to look at is the size of the memory objects that are accessed. 4k pages are simple to handle – load a block in, push it out: but they could be (probably are) very inefficient. Might it be better to base our memory caching system on object sizes? That’s what I plan to check.


Reasoning about a NoC


Thinking about a Network-on-Chip system and what its system software needs to do…

  • Parallelisation is essential to efficiency – in a NoC there are a multitude of cores, but each core has only a fraction of the computational power a “traditional” unicore might be expected to have – therefore it is essential that, where possible, code is parallelised across as many cores as possible;
  • Each core needs to be able to access operating system services (via system calls or some other mechanism), but it is not necessarily the case that each core has to run a full or even a partial operating system – thus RPC or some other mechanism can be used to ‘remotely’ provide system services (a sketch of what that might look like follows this list);
  • Application programmers want, above all, a single address space.
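To make the second point a little more concrete, here is a minimal sketch – the message formats and names are entirely hypothetical, and the ‘network’ is faked by a direct function call – of how a core with no local operating system might forward a system call to a service core:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message formats: a core with no local OS packages a system
 * call up as a message, sends it across the on-chip network to a designated
 * service core, and waits for the reply.                                    */

enum { SYS_WRITE = 1 };

struct syscall_request {
    uint32_t caller_core;   /* which core is asking                */
    uint32_t number;        /* which service (SYS_WRITE, ...)      */
    uint64_t args[4];       /* call arguments                      */
};

struct syscall_reply {
    int64_t result;         /* return value or negative error code */
};

#define SERVICE_CORE 0      /* assume core 0 runs the OS services  */

/* Stand-in for the interconnect: on real hardware this would be a
 * message-passing primitive; here it just calls the service directly. */
static struct syscall_reply service_core_handle(const struct syscall_request *req)
{
    struct syscall_reply rep = { -1 };
    if (req->number == SYS_WRITE) {
        printf("core %u asked core %d to write %llu bytes\n",
               req->caller_core, SERVICE_CORE,
               (unsigned long long)req->args[2]);
        rep.result = (int64_t)req->args[2];
    }
    return rep;
}

/* What an application core would call: looks local, runs remotely. */
static int64_t remote_syscall(uint32_t self, uint32_t number,
                              uint64_t a0, uint64_t a1, uint64_t a2, uint64_t a3)
{
    struct syscall_request req = { self, number, { a0, a1, a2, a3 } };
    struct syscall_reply   rep = service_core_handle(&req);  /* "send" and "receive" */
    return rep.result;
}

int main(void)
{
    int64_t written = remote_syscall(7, SYS_WRITE, 1, 0, 42, 0);
    printf("core 7 got result %lld back\n", (long long)written);
    return 0;
}
```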

 

A supercomputer on every desktop? Not yet, anyway


My PhD is about operating systems on Network-on-Chip (NoC) computers. I have not actually done any research yet, so don’t expect anything here – but I have been playing with some existing data and I think it gives some interesting results.

NoCs are part of the semiconductor industry’s response to the failure of “Dennard scaling”: Moore’s law says we can double the number of transistors on a chip every 18–24 months and we are still able to do that. Dennard scaling was the thing that made that super useful, because it meant the power requirements for the processors stayed constant even as they acquired more transistors. Now that it has broken down, building faster chips becomes that much more difficult because, bluntly, they would burn up unless we limited the power.

NoCs aim to get round this by replacing one fast and power hungry processor on a single chip with several less powerful processors on the same chip – the idea being if we can attack the problem with several slower processors we can get the job done more quickly than if we used just one faster processor.

But there’s a catch, a big one, as captured in Seymour Cray’s question:

would you rather plough a field with two strong bulls or a thousand chickens?

NoCs do not replace one fast chip with a dozen not quite as fast chips – they parcel out the power eaten by that one chip to the whole network on the chip. It’s not quite as bad as dividing the computing power by the number of processors (if that were the case there would be no advantage at all), but it is not fantastically above that.

Using work published by Hadi Esmaeilzadeh from the University of Washington along with others from the University of Wisconsin – Madison, the University of Texas at Austin and Microsoft Research, my projection is that, if we took one of today’s fast chips and parcelled up the power, then we would see computing power decline like this:

  • One processor: 100% computing power
  • Two processors: 65% computing power each
  • Four processors: 38% computing power each
  • Eight processors: 21% computing power each
  • Sixteen processors: 11% computing power each
  • Thirty-two processors: 6% computing power each
  • Sixty-four processors: 3% computing power each

Now, 64 x 3 = 192, so that might look like quite a good deal – a 92% speed up. But it is not that simple because some part of the code, even if just the bit that starts and ends your process, can only run on one processor even if all the rest can be split into 64 equal parts. And the pieces that will only run on one processor are now 33 times slower than they were before. The key balance is this: how much code can you run at nearly twice the speed (92% speed up) versus how much do you have to run at 33 times slower than before?
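The arithmetic is easy to play with. Here is a back-of-the-envelope version using the percentages above – my own illustration, far cruder than the model behind the graph below, so the exact crossover point differs, but the shape of the trade-off is the same:

```c
#include <stdio.h>

/* A job that takes 1.0 units of time on one full-speed processor.
 * On the 64-core NoC each core runs at 3% of that speed, so the serial
 * fraction runs 33x slower, while the parallel fraction is spread across
 * 64 slow cores (64 x 0.03 = 1.92x the original throughput).            */
int main(void)
{
    const double core_speed = 0.03;   /* one NoC core vs the fast unicore */
    const int    cores      = 64;

    const double fractions[] = { 0.50, 0.90, 0.99 };
    for (int i = 0; i < 3; i++) {
        double p        = fractions[i];
        double serial   = (1.0 - p) / core_speed;            /* ~33x slower   */
        double parallel = p / (core_speed * cores);          /* ~1.92x faster */
        printf("parallel fraction %.2f -> run time %.2fx the original\n",
               p, serial + parallel);
    }
    return 0;
}
```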

The answer is that you have to run a lot of code in the fast zone before you really see a big advantage.

45nm NoC modelled

As the graph suggests you would need to have about 99.9% of your code capable of running in parallel before you saw a guaranteed speedup with 64 processors in your NoC. Plenty of such code exists – such as in image handling and so on – but you are not likely to be running too much of it on your desktop computer at any given time (except perhaps when you are running a streaming video application) and the big disadvantage is that when you are not running the parallel code you are stuck with the 3% performance.

(Actually, it’s not quite as simple as that, as you may have a faster processor equipped to do the single thread of execution stuff, but then your computer starts to become more expensive.)

In the future chips will get faster – but maybe not that much faster. In a decade’s time they could be between 400% and 34% faster than they are today, depending on whether you are optimistic or pessimistic (realistic?) about processor technologies. That will help, but still not enough to put this technology in your desktop – as opposed to your games console or your television set or maybe a specialised processor in a top of the range machine.

So don’t expect your personal supercomputer too soon.

Parallella multicore board


When Kickstarter began I thought the whole idea a bit silly – why would people put up money for a project when they take all the risk and get little reward?

I suppose I still think that way, but I have just pledged $119 to support the development of the “Parallella” multicore board – that money will get me a 16 core device if it ever gets built, but you have to be realistic about these things and say that, in reality, I have just given away the money.

But, as I am studying for a PhD in the very technology that the Parallella targets – multicore systems – it seemed like I should take the risk. As the project has now passed its $750,000 funding target they will be taking the money.

Have we reached “peak silicon” and what can we do about it?


Moore’s Law states that the number of transistors that can be squeezed into a given slice of silicon doubles every two years (or 18 months) – something I wrote about recently and where I declared “More transistors means greater speed, more and cheaper memory and so on … ”

Except, maybe not. As the graph below, shamelessly grabbed from Herb Sutter’s “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”, shows, while Moore’s Law (the green graph) holds true, the other associated improvements that we have come to expect to parallel it, such as a similar increase in efficiency per watt (royal blue graph) and clock speed (navy blue), have not. In short, we can build cheaper chips but they are not necessarily much faster.

Herb Sutter’s graph of CPU performance

And, as this article recounts, we are now talking about “dark silicon” – bits of chips that have to remain unpowered while other parts are in use, so as to ensure the whole chip does not fry or fail due to too high a power consumption.

So, if we have reached the point of “peak silicon” what can we do about it?

The chip manufacturers have responded by packing more and more cores into their devices and that works up to a point – we do not even need to have very parallel coding outside the operating system to take some advantage of that on even a typical multitasking desktop machine. But none of us are doubling the number of video renderers, MP3 decoders, database queries and spreadsheet calculations we run in parallel every 18 months, so the “Moore paradigm” of computing power doubling in that time will be lost.

A more fundamental alternative is to rewrite our software so that it is inherently designed to take advantage of multicore machines. Writing effective parallel software is not easy, but it can be done for lots of tasks. Even then there are limits: “Amdahl’s law” reminds us that parallelisation will only speed up the parts of a program that can be run in parallel. If, say, we have a piece of code that must run in serial and takes 5 seconds, plus some code that currently takes 55 seconds but could be made perfectly parallel, then with 2 processors the job takes 5 seconds (the serial part) plus 27.5 seconds for the parallel code – doubling the processors does not quite halve the time, and the saving is 46%. Doubling the number of processors again (to 4) cuts the total computing time to 18.75 seconds, but the proportional saving drops to 42%. In other words, the “Moore paradigm” also disappears.
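For anyone who wants to push the example further, the same sum in a few lines of C (just a restatement of the arithmetic above):

```c
#include <stdio.h>

/* Amdahl's law for the example above: 5 seconds of unavoidably serial code
 * plus 55 seconds of perfectly parallelisable code.                        */
int main(void)
{
    const double serial_s   = 5.0;
    const double parallel_s = 55.0;

    double previous = serial_s + parallel_s;   /* 60s on one processor */
    for (int n = 1; n <= 64; n *= 2) {
        double total  = serial_s + parallel_s / n;
        double saving = 100.0 * (previous - total) / previous;
        printf("%2d processors: %6.2fs total (%.0f%% saving on the previous step)\n",
               n, total, saving);
        previous = total;
    }
    return 0;
}
```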

The third thing we can do is look for better algorithms: the recent announcement of a vastly improved fast Fourier transform (FFT) algorithm shows what can be done here – algorithmic improvement can vastly outstrip hardware speedup. But currently, for many problems (those in NP space), there is no known efficient algorithm, and computing power is simply dedicated to working through the possible solutions looking for one that fits (we do not know of an algorithm that solves an NP problem quickly, but once a candidate solution is found we can verify it ‘easily’). Assuming, as most mathematicians are said to do, that P does not equal NP (i.e., there is no yet-to-be-discovered algorithm that cracks NP problems), this at least means that “peak silicon” will keep internet commerce safe for the foreseeable future, but it is bad news in lots of other ways.

There is a fourth option, of course, which is to get better physics – either for silicon fabrication, quantum computing or some other physics-based innovation. Right now, though, these are probably still the least likely options but, as the links below show, lots of people are working on them.