After paging?


Diagram of relationship between the virtual and physical address spaces. (Photo credit: Wikipedia)

Paging and virtual memory are at the heart of just about any computing device more complex than a DVD player that we use every day.

Paging is the memory management system based on the idea that we can divide the real memory of our computer into a sequence of smallish (typically 4,096-byte) “page frames” and then load the bits of data and program in and out of those frames (in “pages”) as we need them.

So, you can have pages from several different running programs in the page frames at any given time and then use a “virtual memory” system to map a page placed in an arbitrary frame to the memory address the program thinks it is resident at.
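
To make that mapping concrete, here is a minimal sketch in C++ (the names, and the use of a std::unordered_map standing in for the hardware page table and TLB, are my own invention for illustration) of how a virtual address splits into a page number and an offset with 4,096-byte pages:

#include <cstdint>
#include <iostream>
#include <unordered_map>

// Illustration only: a real MMU does this in hardware, with multi-level
// page tables and a TLB, not a hash map.
const uint64_t PAGE_SIZE = 4096;   // bytes per page / page frame
const uint64_t OFFSET_BITS = 12;   // 2^12 = 4096

// virtual page number -> physical frame number, for one process
std::unordered_map<uint64_t, uint64_t> pageTable = { {0x2A, 0x7} };

uint64_t translate(uint64_t virtualAddress)
{
    uint64_t pageNumber = virtualAddress >> OFFSET_BITS;  // which page
    uint64_t offset = virtualAddress & (PAGE_SIZE - 1);   // where in the page
    // A missing entry would be a page fault: the operating system loads
    // the page into a free frame and updates the table before retrying.
    uint64_t frameNumber = pageTable.at(pageNumber);
    return (frameNumber << OFFSET_BITS) | offset;
}

int main()
{
    // Virtual address 0x2A123 lives in virtual page 0x2A at offset 0x123,
    // which the table above places in physical frame 0x7.
    std::cout << std::hex << translate(0x2A123) << "\n";  // prints 7123
    return 0;
}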

It is not the only system we could use – “segmentation”, which involves moving large chunks (as opposed to small pages) of memory about, is one approach, while “overlays” – using part of the memory space as a sort of scratchpad working area – is another. More recently, on bigger “traditional” computers, very large pages have been used as a way of making, at least in theory, more efficient use of memory now measured in billions (as opposed to a few tens) of bytes.

But paging is easily the most widely used approach and has been integral to the development of “multitasking” and similar shared-resource approaches to computing – because paging allows us to keep just the useful bits of a program and its data in memory, we can have many more programs “running” at any given time.

But my PhD research is pointing me towards some of the weaknesses of the paging approach.

At the heart of the case for paging is the idea of “locality” in a computer’s use of resources: if you use one memory address at one instant, there is a high probability you will use a nearby address very soon. Think of any sort of sequential document or record and you can see why that idea is grounded in very many use cases of computing devices.

Locality means that it ought to make sense to read in memory in blocks and not just one little drop at a time.

But this principle may be in opposition to efficient use of memory when competition for space is fierce: such as for the limited local memory resources we have on a Network-on-Chip computer.

Right now I am collecting data to measure the efficiency of 4k pages on such (simulated) devices. With 16 simulated cores trying to handle up to 18 threads of execution, competition for pages is intense, and the evidence suggests that pages are resident, in some cases at least, for many fewer “ticks” than it takes to load them from the next lowest level in the memory hierarchy. On top of that, many pages show that the principle of locality can be quite a weak one – pages of code are, in general, quite likely to demonstrate high locality (especially in loops), but pages of read-write memory may not.

I don’t have all the data to hand – essentially I am transforming one 200GB XML file into another XML file which will likely be around the same size, and that takes time, even on quite a high-spec computer (especially when you have to share resources with other researchers). But I expect some interesting results.


C, C, glorious C


gdb icon, created for the Open Icon Library (Photo credit: Wikipedia)

This blog – The Unreasonable Effectiveness of C – is really very good, and makes a lot of points that are very difficult to argue with.

Though, lately, I have (re-)discovered the joys of C++ – even if I do write code in that language like a C programmer.

In the last six months I have written a lot of code designed to process a lot of data. I started off with Groovy – my data was XML and Groovy comes with tools that are specifically designed to make a programmer’s life a lot easier when handling XML.

But, in the end, Groovy just was not up to the task. For my needs – processing 200GB of XML – Groovy (or, rather, the Java VM on which it runs) is just not fast or flexible enough.

My response to that was to go “route 1” and reach for C. I knew I could build what I wanted in C and that it would be as fast as I could reasonably hope to get it (going down to assembly was just not a serious option).

However, the code I am actually using is in C++. I found that the abstractions offered by the STL were worth the price in terms of speed of execution and code that is somewhat opaque to the GDB debugger.

It’s a compromise, of course, but I suspect if I had tried to write the equivalent of the STL map and set classes in C, I’d still be debugging my implementation many weeks later – after all I found that my C++ red-black tree code was actually broken despite using it (successfully) for a number of years.
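
By way of illustration (this is not the actual research code, and the names here are invented), a std::map keyed on page number gives you ordered storage, lookup and iteration for free – which is exactly what a hand-rolled red-black tree would have to provide in C:

#include <iostream>
#include <map>

int main()
{
    // std::map is typically a red-black tree under the hood:
    // ordered keys, O(log n) insert and lookup, and nothing to debug.
    std::map<long, long> lastTick;   // page number -> tick last referenced

    lastTick[42] = 1000;
    lastTick[7] = 1250;
    lastTick[42] = 1300;             // overwrites the earlier entry for page 42

    for (const auto& entry : lastTick)
        std::cout << "page " << entry.first
                  << " last used at tick " << entry.second << "\n";
    return 0;
}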

Real programmers have to make these compromises – and C++ struck the right note between language expressiveness and language power for me.


Passing by reference in C++ – a downside


Code (Photo credit: Riebart)

Once you realise what is happening this is obvious, but it took me a while…

I wanted to write some longs out as character strings in C++, so I wrote some code like this:

#include <fstream>
#include <sstream>

using namespace std;

void writeLongToFile(ofstream& xmlFile, long& value)
{
    stringstream stringy;          // format the long as characters
    stringy << value;
    xmlFile << stringy.rdbuf();    // push the buffered text into the file
}

But compile time error after compile time error followed as I was told that I could not use this code with unsigned long or with const long and so on … but surely these are simple conversions, I thought…

…Just me thinking like a C programmer again. Passing by (non-const) reference is just that: there is no copying and no implicit conversion available, so a reference to long simply cannot bind to an unsigned long or a const long.

The solution? Just pass by value – after all, the saving in passing a more or less primitive type like long by reference must be pretty minimal – passing the stream by reference is the real saving here (actually, it’s also practical – as we want the output to go to the real stream and not a copy: in C we’d use a pointer here).
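
For the record, a minimal sketch of the fix, with the same function now taking the long by value:

#include <fstream>
#include <sstream>

using namespace std;

// Taking the long by value: a plain long, an unsigned long or a const long
// all convert at the call site, while the stream, which we genuinely want
// to share rather than copy, stays a reference.
void writeLongToFile(ofstream& xmlFile, long value)
{
    stringstream stringy;
    stringy << value;
    xmlFile << stringy.rdbuf();
}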


Universal Credit: the fiasco continues


Cover from a leaflet published by the Liberal Publication Department (Liberal Party – UK) in 1911 promoting the future National Insurance Act. (Photo credit: Wikipedia)

Once upon a time Universal Credit (UC), the UK government’s plan for a comprehensive rebuild of the country’s social welfare payments system, was going to be an IT showcase.

It would use the increased power of networks to update people’s benefit payments in “real time” – aiding labour market flexibility as work patterns changed as well as being designed to take advantage of increased computing power and capacity to ensure that the benefit system was operated as an integrated whole and not a disjointed patchwork.

More even than all this, it would be built using “agile” methods to ensure that it was on time and to budget and that the client – namely the UK taxpayer – got the system it wanted as opposed to the system that could be bolted together most simply by the software writers.

None of this is now going to happen.

And to the government’s shame they have even debased their own, quite successful, project management system by pretending that UC is something other than a catastrophic failure.

Today, under the cover of a media focused on reporting a major set of domestic election results, the government has slipped out the annual report of the “major projects authority” (MPA) – the body it established to take a grip on the biggest pieces of government procurement.

The MPA has undoubtedly been a big success – as its report shows it has made a positive intervention in a number of projects and forced ministers to take firm action on weaker projects – even to the extent of scrapping them. But on UC the clarity of that process has been sacrificed to political considerations.

For UC has plainly and totally failed but, rather than admit it, the MPA has created a whole new assessment category – “reset” – and decided that UC is now a new project to be assessed from scratch.

What we should be getting instead is an honest appraisal of why the project failed and what lessons should be learned – especially in development of major software projects. But that would involve admitting that the government has consistently misled people about the nature of the development process, its timetable and its effectiveness. And that is just not going to be allowed to happen.


Time to improve the image of non-drinkers


Drinks (Photo credit: nathanmac87)

“Never trust a man who doesn’t drink,” runs the saying, and it is certainly true that I find myself thinking that people who come to boozy social occasions and don’t partake are slightly “odd”.

Or, at least, I used to. For the last two and a half years I have significantly cut down my drinking – mainly because I have found that a large volume of exercise takes away the need to “self-medicate” in this way (e.g., by leaving me more relaxed).

As a result I now find myself sometimes turning down the opportunity to go to social events because I don’t want to drink (not least because it makes going for a run or to the gym more or less impossible – drunk running might not be an offence but it’s not fun).

There just are not enough alternatives to alcohol available – not least because sugar-filled soft drinks are not much of a step up in health terms.

And then there is the social stigma – why be the one you “don’t trust”?

Dominic Conroy and Richard De Visser at the University of Sussex have written about the social stigma of not drinking alcohol and have found that changing attitudes could make a significant difference in young people’s alcohol consumption.

So maybe all those public service ads that emphasise the negative consequences of drinking need to be balanced by both new drinks and new images for “non-drinkers”?

Though, it has to be said, a cold beer on a hot summer’s night is hard to beat.


Integration by parts


Saw this referred to in A Most Incomprehensible Thing: Notes Towards a Very Gentle Introduction to the Mathematics of Relativity and it made me shudder – as I always seemed to struggle with it at ‘A’ level maths – so here, for my own benefit, is a quick proof/explanation of the method.

The general rule is:

\int v(x) \frac{du}{dx} dx = u(x)v(x) - \int u(x) \frac{dv}{dx} dx

And why is this so?

From the product rule we know:

\frac{d}{dx} (u(x)v(x)) = \frac{du}{dx} v(x) + \frac{dv}{dx}u(x)

So \frac{du}{dx} v(x) = \frac{d}{dx}(u(x)v(x)) - \frac{dv}{dx}u(x)

And, integrate both sides and we have:

\int v(x) \frac{du}{dx} dx = u(x)v(x) - \int u(x) \frac{dv}{dx} dx

Of course, we still have to apply this sensibly to make our problem easier to integrate!
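
For example (my own quick check, not one taken from the book), take \int x e^x dx and choose v(x) = x and \frac{du}{dx} = e^x , so that u(x) = e^x and \frac{dv}{dx} = 1 :

\int x e^x dx = x e^x - \int e^x \cdot 1 \, dx = x e^x - e^x + C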


Deconstructing Max Tegmark’s argument against a simulated universe


In the end Max Tegmark‘s Our Mathematical Universe: My Quest for the Ultimate Nature of Reality has proved to be something of a disappointment – somewhere along the way the science got lost and was replaced by a lot of metaphysical speculation.

Max Tegmark (Photo credit: Wikipedia)

I haven’t quite finished it yet – so I’ll do a fuller review when I do (and there were good bits too), but what I want to take issue with here is his case (or, perhaps more accurately, the cases he quotes with approval) against the idea that we live in some sort of great computer simulation.

I am not arguing in favour of such a view of our universe – but it certainly has its strengths – if you think computer power will keep on growing then it is pretty difficult, if we apply the basic “Copernican” principle that we are nothing special, to avoid the conclusion that we are in such a universe.

Tegmark uses two major arguments against this idea that I want to take issue with.

The first, I think, is not an argument against it at all – namely that we are more likely to be a simulation within a simulation if we accept this basic thought. Well, maybe – but this is completely untestable/unfalsifiable and so beyond science. (In contrast, the basic idea that we are in a simulated universe is testable – for instance, if we find that our universe has a “precision limit” that would be a strong pointer.)

The second is the degree of complexity of simulating a many worlds quantum multiverse. But, of course, the simulator does not need to actually “run” all those other quantum worlds at all – because it’s not a physical reality, merely a simulation. All it has to do is leave the signs (e.g., the traces of superposition we can detect) in our environment that such alternate universes exist, but once “decoherence” takes place those alternate universes go straight to your garbage collection routines. So too for anything much beyond the solar system – all the simulation has to do is provide us with the signals – it doesn’t have to actually, for instance, “run” a supernova explosion in a distant galaxy.


Why is this not treated as a scandal?


The British Health Secretary, Jeremy Hunt, ordered the Chief Medical Officer for England to conduct a study into the medical effectiveness of two homeopathic products.

Hunt has a record of pro-quackery which, in a rational world, would lead to questions not just about his appointment as Health Secretary, but about his holding any senior role in government.


If you are feeling flush


You could always buy a copy of this: Computing Handbook, Third Edition: Two-Volume Set – which I think is due out this coming week (although Amazon says early June) and verify that it does, indeed, reference my MSc project on working set page replacement policies.

Personally I think it might be a vanity purchase too far at that price, but it would still be nice to know!