Consumer electronic devices, all of which will be running some software and many of which will have what can loosely be described as an operating system, will be eating a massive 22 TWh – almost double their 1990 consumption.
Essentially this rise of the computing machines more than cancels out the fall in electricity use that has come from technological improvements in domestic lighting and refrigeration over the same period.
Operating systems research has been seriously neglected in our universities in recent years (and I do not just mean in the UK): maybe that ought to be reconsidered, and urgently.
How systems order their storage accesses, how they handle virtual memory, how they sequence their access to the network – these and many more questions besides have a big impact on computing's power use. And, at 29 TWh, just a 1% saving would lighten domestic bills by about £30 million. And that excludes the positive impact on greenhouse gas emissions.
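Just to show the arithmetic behind that £30 million figure (a back-of-envelope sketch – the roughly 10p per kWh domestic tariff is my assumption, not a figure from the article):

```groovy
// Back-of-envelope check of the £30 million figure. The ~10p/kWh
// domestic tariff is an assumption, not a figure from the article.
def savedKWh = 29 * 0.01 * 1e9        // 1% of 29 TWh, expressed in kWh
def pencePerKWh = 10
def savingPounds = savedKWh * pencePerKWh / 100
println "Saving: about £${savingPounds / 1e6} million"   // ≈ £29 million
```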
(There is a Guardian article about this but I cannot see it on their website yet – when I can, I'll link to it.)
- An Operating System For Cities (tech.slashdot.org)
- Understanding Virtual Memory (ualberta.ca)
- Ontario’s Power Trip (opinion.financialpost.com)
- Renewable energy hits record high in UK (guardian.co.uk)
- Which Operating Systems Are Most Valuable to Publishers? (insights.chitika.com)
So, I fixed the problem I was having with ranges by cutting the number of times the range was being accessed by a factor of something like 1000: I pushed the calculation to much later in the program (ie instead of checking the range on every calculation, check it only at the end, when some other filtering has already happened).
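To make the shape of that fix concrete, here is a toy sketch (the names and structure are mine, not the real code): the range test moves out of the per-record path and runs only on whatever survives the cheaper filtering.

```groovy
// Toy illustration of the fix (invented names, not the real script):
// move the expensive range test from every record to after the cheap
// filtering, so it runs orders of magnitude less often.

// Before: the range is consulted for every single access record.
def countInRangeSlow(List records, long lo, long hi) {
    records.count { it >= lo && it <= hi }
}

// After: cheap filtering first; the range test only sees the survivors.
def countInRangeFast(List records, Closure cheapFilter, long lo, long hi) {
    records.findAll(cheapFilter)
           .count { it >= lo && it <= hi }
}
```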
And so I can now show the world a graph that illustrates how xterm's initialisation does indeed show different phases of locality – ie that over a small enough time frame memory accesses show spatial locality, but over a longer period that locality shifts – this is important because it indicates that a good virtual memory manager should be able to flush the pages of the old locality out of the system:
Plainly I need to do a bit more work on the graphing software – but the yellow lines are the axes and the green dots indicate where memory is being accessed – time (or rather its analogue, the number of instructions executed) runs from left to right and (process) memory addresses get higher as you move up from the bottom.
In fact this graph only looks at the bottom 10% of the range of memory accessed by the application (and we are ignoring instruction fetches in favour of explicit heap loads, stores and modifications) – but the problem is that across most of that range nothing is accessed at all – again, I need to adjust the Groovy script to handle this better, along the lines sketched below.
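Something like this is what I mean (a sketch only – the two-column input format here is invented, not the real lackeyml): work out the occupied part of the address range first and scale the plot to that, so the untouched memory above and below is simply not drawn.

```groovy
// A sketch only: trim the plot to the part of the address space that is
// actually touched. The two-column input (instruction count, address)
// is my invention, not the real lackeyml format.
def points = new File('accesses.txt').readLines().collect {
    def (time, addr) = it.tokenize()
    [time as long, Long.decode(addr)]
}

def addrs = points.collect { it[1] }
def lo = addrs.min()
def hi = addrs.max()
println "Occupied range: ${Long.toHexString(lo)}..${Long.toHexString(hi)}"

// Normalise each address into [0, 1] across the occupied range only,
// so the graph never wastes space on memory that is never accessed.
def scaled = points.collect { t, a -> [t, (a - lo) / (double) (hi - lo)] }
```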
None of this is new – the experiment was first conducted in the early 1970s – and though I am not aware of anyone having done it for Linux, I suspect somebody has.
- Introducing the lackeyml format (cartesianproduct.wordpress.com)
Art was the only subject in which I failed a school exam – getting a low 30-something in 1980's end of year tests. Not that I cared much. But as the years have gone by I have on more than one occasion wished I was rather better at it. Even now, trying to write a scientific paper – and for me, at least back then, art was always the opposite pole to science – my art skills are rather letting me down.
This graphic shows you why – now I have reduced it in size it looks passable (as a hugely simplified explanation of paging and virtual memory), but the arrows are still very ragged. Still, this is better than I would have managed even a year ago.
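For anyone reading this without the image, the whole of what the graphic tries to say fits in a few lines of code (a minimal sketch – the page size and the page table contents are made up): split the virtual address into a page number and an offset, look the page number up in the page table, and stick the offset back on the physical frame.

```groovy
// Minimal illustration of what the graphic shows: virtual-to-physical
// translation via a page table. 4 KiB pages; the table contents are invented.
final PAGE_SIZE = 4096
def pageTable = [0: 7, 1: 3, 2: 12]   // virtual page -> physical frame

// a closure acting as a tiny MMU
def translate = { long vaddr, Map table ->
    long page = vaddr.intdiv(PAGE_SIZE)
    long offset = vaddr % PAGE_SIZE
    def frame = table[(int) page]
    frame == null ? null : frame * PAGE_SIZE + offset   // null = page fault
}

assert translate(0x1004, pageTable) == 3 * 4096 + 4
```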