Running a half marathon


A year ago today I ran the Hackney Half Marathon – my first race at that distance (and actually only my second true race at any distance). I was fit – a week before I’d done the Finsbury Park Parkrun in 23 minutes and 17 seconds, a PB I have yet to beat. I felt great at the start and ran fast – too fast, as I knew even at the time, but I couldn’t get myself to slow down properly. I ran the first 10km in what was, until a month ago, my PB. Had I kept that up I’d have finished at around 1 hour 50 minutes.

By 15km I was slowing badly; by 17km I was desperate. By 19km I could run no more and was walking. I did run most of the last 1,000 metres and I certainly ran over the line, but I was in a terrible state and nearly fainted. The finishing time – 2 hours 15 minutes – was a real disappointment, but at least I had done it. But never again, surely.

My second run at this distance was the Royal Parks Half Marathon last October. For the first 10km I followed the 1 hour 55 minutes pacer but after that I couldn’t keep up – I had not prepared as well for this race as the Hackney half and that fundamental lack of fitness had let me down, but still I wasn’t doing too badly.

Coming into the final mile both my legs buckled and I knew I had to walk. After a few hundred metres I tried running again, only to suffer a very painful attack of cramp. I walked to about the 800-metres-to-go mark and started running again, slowly. I made it over the line. But whereas I’d got to 20km in 2 hours and a minute, it was 2 hours and 12 minutes before I finished.

And now I had really injured myself quite badly. Not badly as in get-to-hospital badly, but badly as in blisters on both feet (don’t rely on Nike’s running socks), bad chafing – something like this fixes that – and, most seriously of all, very painful Achilles tendons. I didn’t run again at all for two weeks and, effectively, my 2014 running season was over.

Roll around to 2015 and two big pieces of technology came into my life. First the Garmin Forerunner 10 – a simple but very easy-to-use runner’s watch which meant I could really judge my pace properly – and then, perhaps even more importantly, a Karrimor roller, which has worked wonders on my legs and hence my Achilles tendons.

So, last week I ran the St. Albans Half Marathon. I had a realistic target – a 5′ 50″ per kilometre pace – and a means to judge whether I was hitting it or not. That wouldn’t take me under two hours, but it would take me close and it was realistic and achievable on what was a very tough course. I prepared properly – tapering even when I wanted to run. And I did it: 2 hours, 3 minutes and 34 seconds – a 5′ 50″ pace.

I still made mistakes – going too fast (about a 5′ 40″ pace) for much of the start and running the end in a semi-zombified state due, fundamentally, to mental weakness. But it was good.

Even better – I’ve run 30km in the last week – so no injuries.

Interstellar


A 3D projection of a tesseract performing an isoclinic rotation. (Photo credit: Wikipedia)

I watched Interstellar last night. It’s rare that I don’t like a half-decent science fiction movie, so it gets a thumbs up, though it had its high points and low points.

It would be difficult to describe Interstellar as truly a “hard science” movie – but it makes quite a few nods in that direction, my favourite being its insistence that a wormhole, as an anomaly in three-dimensional space, should actually be a “worm sphere”.

The fundamental conceit of the film – that a hick farmer from the western US (or somewhere meant to look like the western US) was really a top-quality pilot – was difficult to buy into, while Michael Caine’s performance was dismal throughout.

And, of course, the overall plot feels like an attempt to reimagine 2001: A Space Odyssey – which, despite being nearly 50 years old now, remains unsurpassed as a filmic musing on humanity’s destiny in space.

Reaching a decision


Distributed Memory (Photo credit: Wikipedia)

A week further on and not much C++ has been written – and now I think I need to make a new start.

Up to this point I have been trying to write a software model of the hardware, and my thought was that I could put a software-modelling layer on top of that. But that simply is not going to work – it is just too complex.

Instead I am going to have to make some policy decisions in the software – essentially over how I model the local memory on the chip: each tile will process memory reads and writes and needs to know where that memory is – it could be in the global off-chip memory store or it could be on-chip.

The difference matters because, at least in theory, the on-chip memory is speedily accessible, while the off-chip memory is anywhere from 50 to 500 times “further away”. Because memory accesses exhibit locality, it makes sense to ship blocks of addressed memory from the global to the local store – but doing so takes time, and if there are a lot of memory movements then we get thrashing.

What I now have to do is think of what policy I will use to decide what memory gets stored locally (or, more likely, what policy I use to map the addresses). I’ll start by once again reviewing papers that propose some schemes for existing Networks-on-Chip.
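To make the idea concrete while I review those schemes, here is a minimal sketch in C++ of about the simplest mapping policy possible: statically interleaving memory blocks across tiles by block index, so every address has a fixed “home” tile. The tile count and block size below are entirely made up for illustration, not taken from the actual design.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative parameters only - not from the real hardware model.
constexpr uint64_t kTileCount = 16;   // tiles on the chip
constexpr uint64_t kBlockBits = 7;    // 128-byte memory blocks

// Static interleaving: strip the within-block bits, then use the
// block index modulo the tile count to pick the owning tile.
uint64_t homeTile(uint64_t address) {
    return (address >> kBlockBits) % kTileCount;
}
```

A static map like this avoids thrashing entirely (nothing ever moves), at the cost of ignoring locality – which is exactly the trade-off the published schemes try to improve on.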

In other news: I have had a paper (of which I am co-author and first named author) accepted by OSPERTS 15 – so I will be off to Sweden to get mauled by the audience there in early July. It will be an experience, and I am looking forward to it, but I also think it might be not so much a baptism as a destruction by fire.

Further thoughts on the simulation task


A Motorola 68451 MMU (for the 68010) (Photo credit: Wikipedia)

Lying in bed this morning and puzzling over what to do …

At first I thought what I should do is copy one of the existing operating system models for NoCs, but that simply would not be flexible enough.

What I have to do is model the hardware (including the modifications to the MMU I want to see) as, essentially, some form of black box, and build other layers – including the memory tree – above that. That means I need to separate the CPU/tiles from global memory: sounds simple in theory but implementing this is going to be very far from easy.
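A minimal sketch of that separation in C++ (every name here is hypothetical, not from the actual project): the tile is written against an abstract memory interface, so plain global memory, a local store, or the whole memory tree can sit behind it without the tile’s code changing.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// The "black box": tiles only ever see this interface.
class MemoryLevel {
public:
    virtual ~MemoryLevel() = default;
    virtual uint8_t read(uint64_t address) = 0;
    virtual void write(uint64_t address, uint8_t value) = 0;
};

// Simplest possible backing: a sparse global off-chip memory.
class GlobalMemory : public MemoryLevel {
    std::map<uint64_t, uint8_t> store_;
public:
    uint8_t read(uint64_t address) override { return store_[address]; }
    void write(uint64_t address, uint8_t value) override {
        store_[address] = value;
    }
};

// A tile holds a reference to the interface, not to any concrete
// memory, so the layers above can be swapped out later.
class Tile {
    MemoryLevel& memory_;
public:
    explicit Tile(MemoryLevel& memory) : memory_(memory) {}
    void store(uint64_t address, uint8_t value) { memory_.write(address, value); }
    uint8_t load(uint64_t address) { return memory_.read(address); }
};
```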

Struggling


Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging. (Photo credit: Wikipedia)

Been a while since I’ve written here – been avoiding writing about politics, which has obviously not been so great for me in the last couple of weeks… but now I have something else to ruminate on.

I have reached a milestone, or perhaps a basecamp, in my PhD research: I now have a model for memory management that needs further exploration. (Hopefully there may even be a paper on this soon.)

Some of that exploration will have to be in hardware, and that’s really beyond me, but I can and should build a software model to test how a many-core system built using this new model might operate.

So far I have been testing or proving concepts with OVPSim, but it does not allow me to build a truly asynchronous multi-core model, so I need to do that in software myself.

But where to begin? Here is a list of classes one might want in C++:

  • Computer – which would aggregate…
    • DRAM
    • Storage
    • NoC – which would aggregate…
      • Mesh
      • Tiles – which would aggregate…
        • CPU
        • Cache
        • Ports (to the Mesh)

I hope you can see how quickly this becomes complex – and all we are talking about here is a simple software framework to allow me to do the research (i.e., delivering the software, complex as it is, is only the very start).
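Purely as a sketch, that aggregation could start life as nothing more than a set of stub classes wired together by ownership (every class below is an empty placeholder that would grow real behaviour later):

```cpp
#include <cassert>
#include <vector>

// Stubs only - each will eventually carry real simulation state.
struct Cpu {};
struct Cache {};
struct Port {};                 // a link from a tile to the mesh

struct Tile {
    Cpu cpu;
    Cache cache;
    std::vector<Port> ports;    // connections to the mesh
};

struct Mesh {};

struct Noc {
    Mesh mesh;
    std::vector<Tile> tiles;
};

struct Dram {};
struct Storage {};

struct Computer {
    Dram dram;
    Storage storage;
    Noc noc;
};
```

Even empty, a skeleton like this settles the ownership questions early: tiles belong to the NoC, and the NoC, DRAM and storage belong to the computer.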

I am struggling to know where to begin – C++ seems like the logical choice for this, but it’s not proving to be much fun, particularly because my CPU class has to be able to “execute” some code. I thought about using a DSL but may stick to the XML output I got from my hacked Valgrind Lackey, as at least I can then use existing XML engines.

Should I build from the XML up – e.g., get a CPU class that can parse the XML and pass the requests up the chain (e.g., via the cache up to the Mesh and on to the DRAM), or what?
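One way that “pass the requests up the chain” shape might look in C++ is a chain of responsibility: each level services a read if it can, otherwise forwards it to the level above and keeps the result. This is only a sketch – the mesh level is omitted and all names are invented.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// A memory level either answers a read itself or defers upwards.
class Level {
protected:
    Level* next_;                      // the level above, if any
public:
    explicit Level(Level* next = nullptr) : next_(next) {}
    virtual ~Level() = default;
    virtual uint8_t read(uint64_t address) = 0;
};

// Top of the chain: DRAM always answers.
class Dram : public Level {
    std::unordered_map<uint64_t, uint8_t> cells_;
public:
    uint8_t read(uint64_t address) override { return cells_[address]; }
    void poke(uint64_t address, uint8_t value) { cells_[address] = value; }
};

// A cache answers on a hit; on a miss it reads upwards and fills.
class Cache : public Level {
    std::unordered_map<uint64_t, uint8_t> lines_;
public:
    using Level::Level;
    uint8_t read(uint64_t address) override {
        auto hit = lines_.find(address);
        if (hit != lines_.end()) return hit->second;  // hit locally
        uint8_t value = next_->read(address);         // miss: go up
        lines_[address] = value;                      // fill the line
        return value;
    }
};
```

The nice property of this shape is that the XML-driven CPU only ever talks to the bottom of the chain, and extra levels (the mesh, a local store) can be spliced in without touching it.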

The agony and the ecstasy of debugging


If you have ever written a computer program with any degree of seriousness then you will know the feeling: your heart sinking as you realise what you thought was a perfectly good piece of code has a bug somewhere less than obvious.

In my case this has happened twice in a week, and both times it has meant the work I had done as part of my PhD has had to start again (not all of it, obviously, but this most recent bit). Yesterday evening’s realisation was particularly annoying because it came after I had sent my supervisor an email suggesting I had some quite interesting and counter-intuitive results to share.

Since then I have spent quite a few hours trying to work out what on Earth was wrong – debugging assembly is not complex in the sense that most instructions do simple things – but it does remind you of the essential state-machine nature of a general computing device: there are lots of things to track.

Of course, that also brings pleasure – there is no point in denying that solving these problems is one of the truly engaging things about computing.

The job is done now and I am once again collecting results, hoping that I do not spot another flaw.

Take your injuries seriously


Muscles of lower extremity (Photo credit: Wikipedia)

I have a torn calf muscle. To say this is an inconvenience is something of an understatement – especially as the tear came just a few days after I had managed to run 10k for the first time in months and – more than that – did it in a time that suggested I could look forward to a decent season of long distance running.

Unfortunately I didn’t take the injury seriously for about three days and so probably made things worse. That means I may have added weeks to my recovery time. I hope not, but in any case March seems to be a write-off.