A few people have asked me what my PhD research is about, one adding “if you told me, would I understand?” So this page is an attempt to explain the basics to non-computer scientists. I started it on 28 June 2013 – it may get updated as I make progress, or may disappear entirely if I crash and burn… (this updated version is from 20 December 2013)
The broad area of research is into “operating systems” – the programs that provide the basic services on your computer (Windows, Linux and Mac OS X are all operating systems).
Now, as the three examples I have given above all work reasonably well and are still under active development you may (quite reasonably) ask – what is the point of research? What can an academic, and a junior one at that (which is essentially what a PhD student is) add that commercial or voluntary teams many thousands strong cannot?
The answer is not that these operating systems are flawed (though, in a way, each of them is) but that the hardware they are targeted at is, or will soon be, out of date.
Computers have, since the late 1950s, got faster with every passing year. From the end of the 1950s to the middle of the first decade of this century the rate of increase varied between a doubling every two years and a doubling every eighteen months – this has been called, somewhat inaccurately, “Moore’s Law”.
In fact what Gordon Moore pointed out was that commercial pressures meant that electronics manufacturers put twice as many transistors in their products every 18 – 24 months. And they still do (on wafers of silicon). But that doubling also used to come with each transistor needing less power as it shrank – this is called Dennard Scaling – and the result was a happy situation of more transistors but constant power demands. That scaling has now broken down – it has become ever harder to make the power demands of transistors fall.
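To make the doubling concrete, here is a toy calculation (my illustration, not from the original post; the starting figure of 2,300 transistors is roughly that of the first commercial microprocessor, the Intel 4004 of 1971). Repeated doubling every couple of years turns a few thousand transistors into hundreds of millions within a few decades:

```python
def transistor_count(start_year, end_year, initial_count, doubling_period_years=2):
    """Project a transistor count forward by doubling it every few years."""
    doublings = (end_year - start_year) // doubling_period_years
    return initial_count * 2 ** doublings

# Roughly 2,300 transistors in 1971, doubling every two years:
count_2005 = transistor_count(1971, 2005, 2300)
print(f"Projected transistor count by 2005: {count_2005:,}")
```

Seventeen doublings take 2,300 up past 300 million – which is the exponential growth that makes “Moore’s Law” so remarkable.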
The result has been that processor speed increases have almost halted.
But chip manufacturers have found a way round this – instead of having one very fast and power-hungry core, have two quite fast but less power-hungry cores and break the software load in half. That way we get faster execution (as the total processing capacity is higher) while keeping the power budget constant. This is why you see computers advertised as “dual” or “quad” core and so on.
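The “break the load in half” idea can be sketched in a few lines of Python (an illustrative sketch of the structure, not the author's code; note that in standard CPython, threads only illustrate the shape of the split – a real CPU-bound speedup needs separate processes or cores):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each "core" works independently on its own share of the data.
    return sum(x * x for x in chunk)

def split_across_cores(data, cores=2):
    # Break the workload into one roughly equal chunk per core
    # ("dual core" = 2, "quad core" = 4), run the chunks concurrently,
    # then combine the partial results at the end.
    size = (len(data) + cores - 1) // cores
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=cores) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

print(split_across_cores(list(range(1000)), cores=4))
```

The total work is the same; it has simply been divided so that several slower, cooler cores can chew through it at once.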
That model has lasted nearly a decade but it too is breaking down. To keep processing speed high, these dual and quad cores have to communicate the results of their computations to each other and to memory and disk – and when they do, they block other cores from doing the same. This means the cores spend more and more time waiting for each other, and once again the speed advantages start to erode.
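One way to picture the bottleneck (a toy sketch of my own, not a model of real hardware): give each simulated core private work it can do freely, but force every core through a single lock whenever it writes a result to shared memory, just as real cores contend for a shared path to memory:

```python
import threading

bus_lock = threading.Lock()   # stands in for the single shared path to memory
shared_memory = []            # stands in for main memory

def core(core_id, items):
    for item in items:
        result = item * item      # private computation: all cores do this at once
        with bus_lock:            # but results must go out one core at a time,
            shared_memory.append((core_id, result))  # so the cores queue up here

cores = [threading.Thread(target=core, args=(i, range(5))) for i in range(4)]
for t in cores:
    t.start()
for t in cores:
    t.join()

print(f"{len(shared_memory)} results written by 4 cores")
```

The more cores you add, the longer the queue at `bus_lock` gets – which is exactly why piling on extra cores eventually stops translating into extra speed.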
So now there is a new way of fixing this – put the cores on a chip together and connect them with a network that allows more than one core to communicate at once – these chips are called “Network-on-Chip” (NoC) and are The Coming Thing.
But the operating system software needed to get them to work is not really with us yet. Instead various stop-gaps are being used – such as running a mini version of Linux on each core on the network, or doing without an operating system altogether and just writing the software you need – but these are not really long-term solutions.
My research will be into how we can build operating systems for these NoCs. Specifically, as I am now beginning experimental work, I am examining ways in which these systems could most efficiently manage their interactions with memory.