Further thoughts on the simulation task

A Motorola 68451 MMU (for the 68010) (Photo credit: Wikipedia)

Lying in bed this morning and puzzling over what to do …

At first I thought what I should do is copy one of the existing operating system models for NoCs, but that simply would not be flexible enough.

What I have to do is model the hardware (including the modifications to the MMU I want to see) as, essentially, some form of black box, and build other layers – including the memory tree – above that. That means I need to separate the CPU/tiles from global memory: sounds simple in theory but implementing this is going to be very far from easy.


Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging. (Photo credit: Wikipedia)

Been a while since I’ve written here – been avoiding writing about politics, which has obviously not been so great for me in the last couple of weeks… but now I have something else to ruminate on.

I have reached a milestone, or perhaps basecamp, in my PhD research: having a model for memory management that needs further exploration. (Hopefully there may even be a paper on this soon.)

Some of that exploration will have to be in hardware, and that’s really beyond me, but I can and should build a software model to test how a many-core system built on this new model might operate.

So far I have been testing or proving concepts with OVPSim, but it does not allow me to build a truly asynchronous multi-core model, so I need to do that in software myself.

But where to begin? I have a list of classes I might want in C++:

  • Computer – which would aggregate…
    • DRAM
    • Storage
    • NoC – which would aggregate…
      • Mesh
      • Tiles – which would aggregate…
        • CPU
        • Cache
        • Ports (to the Mesh)

I hope you can see how quickly this becomes complex – and all we are talking about here is a simple software framework to allow me to do the research (i.e., delivering the software, complex as it is, is only the very start).

I am struggling to know where to begin – C++ seems like the logical choice for this, but it’s not proving to be much fun. Particularly because my CPU class has to be able to “execute” some code – I thought about using a DSL but may stick to the XML output I got from my hacked Valgrind Lackey, as at least then I can use existing XML engines.

Should I build from the XML up – e.g., have a CPU class that parses the XML and passes the requests up the chain (via the cache to the Mesh and on to the DRAM, and so on) – or something else?

The agony and the ecstasy of debugging

If you have ever written a computer program with any degree of seriousness then you will know the feeling: your heart sinking as you realise what you thought was a perfectly good piece of code has a bug somewhere less than obvious.

In my case this has happened twice in a week, and both times it has meant the work I had done as part of my PhD has had to start again (not all of it, obviously, but this most recent part). Yesterday evening’s realisation was particularly annoying because it came after I had sent my supervisor an email suggesting I had some quite interesting and counter-intuitive results to share.

Since then I have spent quite a few hours trying to work out what on Earth was wrong. Debugging assembly is not complex in the sense that most instructions do simple things, but it reminds you of the essential state-machine nature of a general computing device: there are lots of things to track.

Of course, that also brings pleasure – there is no point in denying that solving these problems is one of the truly engaging things about computing.

The job is done now and I am once again collecting results and hoping that I do not spot another flaw.

Take your injuries seriously

Muscles of lower extremity (Photo credit: Wikipedia)

I have a torn calf muscle. To say this is an inconvenience is something of an understatement – especially as the tear came just a few days after I had managed to run 10k for the first time in months and – more than that – did it in a time that suggested I could look forward to a decent season of long distance running.

Unfortunately I didn’t take the injury seriously for about three days and so probably made things worse. That means I may have added weeks to my recovery time. I hope not, but in any case March seems to be a write-off.

Chase down that bug


If there are rules for software development, one of them should be never let a bug go unsquashed.

This has been demonstrated to me again this week – when I had a bug in some Microblaze interrupt code. I realised I no longer needed to use the code and then puzzled over whether to find out what was wrong anyway.

I got some sound advice: chase it down anyway.

And I am very glad I acted on it – not least because it seems my problem was actually caused by a small bug in the OVP model of the Microblaze (the simulator model allows interrupts to be taken while an exception is in progress, but a real processor would not) and, hopefully, tracking down that bug will benefit others in the future.

Struggling to return to form

Last October I ran in the Royal Parks Half Marathon – with the aim of finishing in under two hours. By the time I got to the final mile it was obvious I was not going to make that, but I was actually doing quite well – and then my legs buckled and I sort of walked, ran and hobbled my way to a 2 hours 12 minutes finish (I was at 20k at about 2 hours and 1 minute).

My injuries weren’t that serious, but I did have problems walking without painful calves for more than a few weeks, and my Achilles tendons were sore for longer. All in all, a bit of a disaster. Running all but stopped, and even though I was still going to the gym I managed to put on about 7kg in weight (not helped by Christmas excesses – and US portions).

I did not really help myself – I refused to rest the injury properly and kept trying to run – which usually resulted in poor performance and renewed inflammation.

But by the turn of this year I thought I was in a good enough place to really try again. And I am back running…


…but I am still struggling with it.

Last spring, in the run up to my first half marathon I was running close to 50k a week and I remember thinking I could manage running 10k every day. All that feels like a very long time ago.

Since 1 January I have gone out a few times with the intention of running 10k but have yet to manage it. The latest failure was today, when a blister (wrong socks) combined with a failure of willpower to make me stop at 7k. If I had worn the right socks maybe I would have made it – and stopping was the right thing to do, but it also felt like a good excuse.

To be sure, the cold weather (this winter is colder than last, though today, at last, had an air of spring about it) doesn’t help, but getting that extra 1, 2 or 3 k in seems like a very tall order just now.

I am turning in a decent pace, though – at least for training runs (I’m never going to trouble any leader board). But I haven’t made much progress in shifting the weight – I’ve knocked about 2 kilos off from the very worst, but I am still a fair bit heavier than a year ago.

It’s all a bit disappointing really. Perhaps I was too optimistic about how quickly I could recover and should just keep plugging away?


Hard fault count drives performance

Just to emphasise how strongly hard faults determine performance, here is a plot of hard faults versus page count for the same application mentioned in the previous post.

(Plot: hard faults versus page count.)

The pattern is very similar, though it should be noted that increasing the page count does still keep the fault count coming down at the far end – but not by enough to outweigh the negative effect of larger page tables, which take longer to search when checking for faults and hunting for the least recently used page and the like.

