## Not even an April Fool

I can remember my first encounter with the metric system very clearly…
In the Summer of 1972 the British Army mobilised in massive numbers to end the “no go areas” that had sprung up in nationalist areas of Belfast and elsewhere over the previous two and a half years. Operation Motorman was to be the biggest ever “peacetime” military operation in the UK since the end of the Second World War.


One of those areas was in Lenadoon where my school, Blessed Oliver Plunkett, was based.  As the Army took over the two tower blocks on the estate as observation posts, the residents left and bedded down in the only place available – the school.

The result was that, to all intents and purposes, the school was closed and I – like hundreds of others – started the next academic year somewhere else – in my case at Holy Child Primary School (in consequence of “Olly Plunkett’s” closure, and the rapid expansion of the Catholic population of West Belfast, Holy Child that year became the largest ever school in the UK – with over 2700 pupils).

On my first day in Mrs MacManus’s class we had a maths lesson and I was confronted, for the first time, with these strange metric measurements, the centimetre, the gramme and the millilitre.

My point is this – in the UK, even in the bits of the UK that were least keen on being in the UK, we have been teaching our children in metric measurements since 1972 – 42 years.

Yet, along comes the Prime Minister, David Cameron, last night, and this happens:

“I think I’d still go for pounds and ounces, yes I do,” Cameron told BBC2’s Newsnight when asked which should be taught predominantly.

I admit, I am no fan of David Cameron to begin with. But where do you begin when faced with such idiocy?

I suppose you can start with the fact that Cameron is a graduate of PPE – politics, philosophy and economics – and plainly knows nothing, or next to nothing, about science and the fact that sciences have been taught, the world over, in some form of metric measurement since at least the 1950s. Teaching our children in imperial measurements would be to actively seek to disadvantage them.

Then again I could just recall that John Stuart Mill was moved to remark to the House of Commons: “What I stated was, that the Conservative party was, by the law of its constitution, necessarily the stupidest party. Now, I do not retract this assertion; but I did not mean that Conservatives are generally stupid; I meant, that stupid persons are generally Conservative. I believe that to be so obvious and undeniable a fact that I hardly think any hon. Gentleman will question it.” (My emphasis).

With Cameron one never quite knows of course – and this may just be the latest piece of his rather desperate efforts to appease the anti-Europeans in his party who want him to follow the lead of the hard-right United Kingdom Independence Party (UKIP).

UKIP’s platform is built on two propositions – that it would be better if we returned to the 1950s and that all that is bad in the UK emanates from the European Union. In metrication they find both enemies – modernity and Europe. At least in their minds they do – given that the UK did not join the then EEC until 1973 I am not sure Europe really is “to blame” for the end of the rood and the chain and the rise of the hectare and the metre.

Already the Tories have thrown a bone to the cave men of UKIP by bringing back rote learning of the “twelve times table” – once an essential for a country where twelve pennies made a shilling but an anachronism since “decimalisation day” – 15 February 1971. So maybe this is next.

Or, more likely, it is Cameron not having the guts to stand up to them in public, even on such an obviously rational issue as the use of the metric system.

## Pointers versus references

Some people don’t like pointers – and for that reason, I think, we have references in C++. But as a confirmed pointer person, I find references very hard going.

I had a piece of C++ code that did this:

```cpp
PartialPage& DoubleTree::oldestPage()
{
    PartialPage& pageToKill = pageTree.begin()->second;
    long timeToKill = pageTree.begin()->second.getTime();
    map<long, PartialPage>::iterator itOld;
    for (itOld = pageTree.begin(); itOld != pageTree.end(); itOld++) {
        if (itOld->second.getTime() < timeToKill) {
            timeToKill = itOld->second.getTime();
            pageToKill = itOld->second; // assigns through the reference – it does not rebind it
        }
    }
    return pageToKill;
}
```


This produced rubbish results – because assigning to the reference didn’t make it refer to a new element of the map: the assignment copied the new element’s value into the object the reference was already bound to (corrupting the first entry into the bargain). A reference in C++ cannot be reseated after it is initialised.

Switching to pointers fixed the problem though.

```cpp
PartialPage* DoubleTree::oldestPage()
{
    PartialPage* pageToKill = &(pageTree.begin()->second);
    long timeToKill = pageTree.begin()->second.getTime();
    map<long, PartialPage>::iterator itOld;
    for (itOld = pageTree.begin(); itOld != pageTree.end(); itOld++) {
        if (itOld->second.getTime() < timeToKill) {
            timeToKill = itOld->second.getTime();
            pageToKill = &(itOld->second); // a pointer, unlike a reference, can be repointed
        }
    }
    return pageToKill;
}
```


## Going atomic … or concurrency is hard

In my PhD world a year’s worth of software experimentation has proved what we all knew already … that systems using traditional memory models struggle in the Network-on-Chip environment and so I am now trying something slightly different.

My “model” (it’s all in software) is of a 16-core system, each core having a small amount of on-chip memory (32k); these local stores are combined to form a flat memory space. Memory in this space can be accessed quickly; memory outside it, in the next level up in the hierarchy, is roughly 100 times further away.

Using any form of traditional paging model (including Belady’s optimal page replacement algorithm) this system starts to thrash on even moderate loads – the cost of moving pages in and out of the local memory determines performance and so there is no benefit from adding additional processors (in fact it just slows the individual processors down).
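The arithmetic behind the thrashing is stark. Taking a local access as one time unit and the next level up as roughly 100 (as above), a fault rate of $f$ per access gives a mean access time of

$$t = (1 - f) \cdot 1 + f \cdot 100 \approx 1 + 99f$$

so even $f = 0.01$ – one access in a hundred going off-chip – roughly doubles the effective access time. This is my own back-of-envelope framing, not a measured result, but it shows why extra processors simply end up queuing behind the page traffic.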

Such an outcome makes any promise of improved performance from parallelism void – it does not really matter how efficiently you have parallelised the code (some corner cases excepted – eg if all chips were accessing the same memory at the same time), you are trapped by a memory I/O bound.

So now I want to look at alternatives beyond the usual 4k (or 2k) paging – but I have been struggling all week to get the locking semantics of my code right. Concurrency is hard.

The one thing that debugging parallel code and locks teaches you again and again is never to assume that some event will be so rare you don’t need to bother about it: because when you are executing millions of instructions a second, even rare events tend to happen.

It has also taught me to check return values – code that will “always” work in a single threaded environment may actually turn out to be quite a tricky customer when running in parallel with other instances of itself or when it is accessing shared memory.

But, finally, the main lesson this week has been about going atomic.

I have a tendency to think: “if I release that lock for a few lines of code it might improve overall performance, and I can just take it again a little later”. Beware of that thought.

If you need to make a series of actions atomic you need to hold the same lock across them all – releasing it for even a few lines breaks atomicity and will quite likely break your code.

## A probably entirely naïve question about the principle of relativity

Surely I can quite easily design an experiment that shows the relativity principle is false.

If I turn around on the spot, the principle, as I understand it, asserts that I cannot build an experiment that proves it was me that moved as opposed to everything else moving while I stayed still.

But the rest of the universe is very massive – possibly of infinite mass – and so to move it through $2\pi$ radians takes a hell of a lot more energy than moving me.