Why I owe @slugonamission a (or several) large drink(s)


gdb icon, created for the Open Icon Library (Photo credit: Wikipedia)

I have been struggling with a problem with my Microblaze simulation on OVPsim all week.

I have been trying to make a start on implementing a simple demand paging system and to trigger the hardware exception that the Microblaze CPU manual says should happen when you try to read from a mapped page that is not in the TLB (ie, a “minor” page fault) or from a page that has not been mapped at all (a “major” page fault).

My problem was that – despite having, I thought, specified all the correct parameters for the Microblaze, and knowing that my virtual memory mappings worked – the exception simply was not happening for me. Instead of the expected exception handling, the simulation reported an error and exited.

But Jamie Garside from the University of York’s computer science department saved my life by (correctly) pointing out that what I also needed to do was turn on ICM_ATTR_SIMEX: without that attribute the simulation will always exit on an exception (or halt if running under GDB).

I can see why that might make sense – at least the halting in the debugger bit: if an exception is about to be fired you can halt the machine while it is still executing in virtual, as opposed to real, mode and see what has caused it.

It was also a reminder to RTFM – in this case not just the Microblaze manual, but also the OVPsim manual. I cannot rely on Jamie doing that for me every time.

So to fix my problem I ORed in the missing attribute:

/* ICM_ATTR_SIMEX: simulate the exception inside the model rather than exiting the simulation (or halting in GDB) */
#define SIM_ATTRS (ICM_ATTR_DEFAULT|ICM_ATTR_SIMEX)

Time to write a signal handler?


Unix Creators at DEC PDP11 (Photo credit: PanelSwitchman)

I am trying to execute some self-written pieces of software that require a lot of wall clock time – around three weeks.

I run them on the University of York’s compute server, which is rebooted on the first Tuesday of every month, so the window for the software is limited. I have until about 7 am on 5 August before the next reboot.

To add to the complication the server runs Kerberos which does not seem to play well with the screen/NFS combination I am using.

And I keep killing the applications in error – this time, just half an hour ago, I assumed I was on a dead terminal session (ie an ssh login which had long since expired) and pressed ctrl-C, only to discover, to my horror, that it was a live screen (it had not responded to ctrl-A, ctrl-A for whatever reason).

Time to add a signal handler to catch ctrl-C to at least give me the option of changing my mind!
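
What I have in mind is something like the sketch below – only a sketch, in C, and the ten-second window, the message and the names are illustrative rather than what will end up in the real code. The first ctrl-C just prints a warning; only a second one within ten seconds actually kills the run.

#include <signal.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

/* only ever touched inside the handler, so a plain volatile time_t will do */
static volatile time_t last_interrupt = 0;

static void handle_sigint(int sig)
{
    (void)sig;
    time_t now = time(NULL);
    if (last_interrupt != 0 && now - last_interrupt <= 10)
        _exit(1);                  /* second ctrl-C inside ten seconds: really quit */
    last_interrupt = now;
    /* write() is async-signal-safe, printf() is not */
    static const char msg[] =
        "Caught ctrl-C: press it again within 10 seconds if you really mean it\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;      /* don't let ctrl-C abort interrupted system calls */
    sigaction(SIGINT, &sa, NULL);

    /* ...the long-running computation goes here... */
    for (;;)
        pause();
}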

She’ll die soon, but everybody dies


Blade Runner 1982 (Photo credit: Dallas1200am)

If you recognise the line then chances are, like me, you saw the original cut of Blade Runner in the cinema (the line was changed in a subtle but very important way in later cuts).

The film is now over thirty years old but, like a true science fiction classic (2001: A Space Odyssey is the archetype), it seems almost timeless and every time you watch it you see something new.

And now the University of York is to consider the film alongside cave art and Shakespeare’s sonnets:

We chose these particular case studies for several reasons. We wanted examples of totally different art forms and media; we wanted a wide historical and cultural reach; we wanted artefacts that have already been subject to extensive debate (part of the interest is in the nature of those debates); and we wanted examples that might usefully reveal different aspects of the two principal kinds of values in our study. 

We are planning three intensive workshops on these case studies bringing together experts from different perspectives and disciplines: archaeologists and palaeontologists for the cave paintings, Shakespeare scholars and literary theorists for the Sonnets, film theorists and critics for the film. We are delighted, for example, that Jill Cook, who curated the highly successful exhibition on “Ice Age Art: Arrival of the Modern Mind” at the British Museum, will contribute to the Chauvet workshop. Throughout there will be an input also from aesthetics and philosophy of art. The interdisciplinary nature of the enquiry is crucial to it. 

The day-long seminar on Blade Runner is on 11 April. Plus, there’s a showing: it’s not often you get to see the classics back in a theatre, so that might be of interest.

On an Airbus 380, the toilet is a safety critical system


Airbus A380 (Photo credit: Wikipedia)

As pointed out at the University of York’s real time systems group meeting yesterday…

The Airbus A380 is the world’s largest passenger airliner and it flies long distances. As such its human waste management systems have to handle a large volume of material.

Of course, the material that ends up in the system was on the plane from the moment it took off, but at takeoff that weight is distributed throughout the plane; the longer the flight continues, the more of it gets concentrated in the waste management system.

More than that – the plane is getting lighter all the time, because it is burning fuel – so not only does the weight shift into a more confined region of the plane, it also becomes a relatively larger share of the aircraft’s total weight.

Hence the software on the A380 that manages the toilets is a safety critical system – and has to meet some quite exacting standards.

How slow is a fast computer?


Valgrind (Photo credit: Wikipedia)

I am working on a simulation environment for a NoC (network-on-chip). The idea is that we can take a memory reference string – generated by Valgrind – and then test how long it will take to execute with different memory models for the NoC. It’s at an early stage and still quite crude.

The data set it is working with is of the order of 200GB, though that covers 18 threads of execution and so, very roughly speaking, it is 18 data sets of just over 10GB each. I have written some parallel Groovy/Java code to handle it and the code seems to work, though there is a lot of work to be done.

I am running it on the University of York’s compute server – a beast with 32 cores and a lot of memory. But it is slow, slow, slow. My current estimate is that it would take about 10 weeks to crunch a whole dataset. The code is slow because we have to synchronise the threads to model the inherent parallelism of the NoC. The whole thing is a demonstration – with a vengeance – of Amdahl’s Law.
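
To put a rough number on that (the 50 per cent figure below is purely illustrative, not a measurement): Amdahl’s Law says the best speedup you can hope for on N cores is

speedup = 1 / ((1 - P) + P/N)

where P is the fraction of the work that can actually run in parallel. If the synchronisation needed to keep the simulated NoC honest meant that only half the work parallelised, even the server’s 32 cores would deliver a speedup of less than 2.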

Even in as long a project as a PhD I don’t have 10 weeks per dataset going free, so this is a problem!

Computer science in the UK: in the wrong direction?


Servers designed for Linux (Photo credit: Wikipedia)

Two big thoughts strike me as a result of the literature review I have just completed for my PhD:

  • Linux is not the centre of the universe, in fact it is a bit of an intellectual backwater;
  • The UK may have played as big a role in the invention of the electronic computer as the US, but these days it is hardly even in the game in many areas of computing research.

On the first point I am in danger of sounding like Andy “Linux is obsolete” Tanenbaum – but it is certainly the case that Linux is far from the cutting edge in operating system research. If massively parallel systems do break through to the desktop it is difficult to imagine they will be running Linux (or any monolithic operating system).

In fact the first generation may do – because nobody has anything else right now – but Linux will be a kludge in that case.

Doing my MSc, which did focus on a Linux-related problem, it seemed to me that we had come close to “the end of history” in operating system research – ie the issue now was fine-tuning the models we had settled on. The big issues had been dealt with in the late 60s, the 70s and the 80s.

Now I know different. Operating systems research is very vibrant and there are lots of new models competing for attention.

But along the way the UK dropped out of the picture. Read some papers on the development of virtual memory and it will not be long before you come across the seminal experiment conducted on EMAS – the Edinburgh Multi-Access System – which was still in use when I was there in 1984. Now you will struggle to find any UK university – with the limited exceptions of Cambridge and York (for real-time) – making any impact (at least that’s how it seems to me).

It’s not that the Americans run the show either – German, Swiss and Italian universities are leading centres of research into systems software.

I am not sure how or why the UK slipped behind, but it feels like a mistake to me – especially as I think new hardware models are going to drive a lot of software innovation in the next decade (well, I would say that, wouldn’t I?)

Spaceapps challenge at York


An email from the University which may be of interest:

 

Hi there,

On the weekend of 20th and 21st April 2013, the Department of
Computer Science, University of York will be hosting the International
Space Apps Challenge, a free 2-day event on developing apps as
solutions that address real-world critical challenges set by NASA.

Participants compete against teams globally to win prizes, with the
winning team receiving special attention from NASA including a support
package to further develop the winning app.

No programming experience is necessary, just an enthusiasm for solving
problems. However, if you are a programmer and want to develop an app
that’s great, you are also very welcome to sign up.

The International Space Apps Challenge is a unique codeathon event,
which is happening in 50 locations worldwide simultaneously across the
weekend. It is your chance to develop solutions to real world problems
set by NASA, and your solution could have an immediate impact.

At the event, you will form a team with others taking part in the
codeathon, and you and your team will be focused on solving a particular
challenge. You will compete with other teams from around the world and
you will be able to use publicly available data to create your own solution
to NASA’s global challenges.

If you’d like more details on the event, visit our website at
http://2013.spaceappschallenge.org/locations/york/

or visit the official NASA Space Apps Challenge website at
http://spaceappschallenge.org/

You can keep in touch via Facebook at
https://www.facebook.com/SpaceAppsYork

and follow us on twitter https://twitter.com/spaceappsyork

If you have any questions, please don’t hesitate to contact us on
Spaceappsyork@gmail.com

Best wishes

SpaceAppsChallenge-York Team

“Go and invent something”


This, in terms, is what my supervisor said to me this afternoon – his point being that while I had successfully presented my literature review, the time had come to start looking at real things that could be put on Network-on-Chip systems.

So, some thinking required over Christmas.

I do have my parallel filesystem idea to look further at, but he’s also suggested one or two other areas to look at.

Anyway, Kansas is going bye bye.

End of term feeling


Christmas in the post-War United States (Photo credit: Wikipedia)

Forty years ago about this time I had my nicest-ever educational experience when, to my complete surprise, I won the P3 Christmas story prize in Mrs MacManus’s class at Holy Child Primary School. I still remember the way disappointment at not coming third or second was turned into pure joy by actually winning, and being able to go home and tell my mother and show her the prize (some chocolate I think).

Twelve years later I remember the last Friday of my first (not very happy) term at Edinburgh University. I had no lectures to go to and a small amount of money to spend – my mother had sent me a cheque which was enough to allow me to get more money from the bank – and that too was a carefree and happy day.

Today is another last Friday of term. I have just presented my literature review seminar to the real time systems group here at York and it was deemed more than acceptable, so I have made it through the first stage of the PhD process.

But carefree? Not really, because it is absolutely bucketing with rain outside (I am sitting in the library) and the BBC forecast is that this will go on all day. Even the 1km between here and my supervisor’s office on the eastern campus is going to see me drenched.

Cannot win them all.

Similarity, difference and compression


Perl (Photo credit: Wikipedia)

I am in York this week, being a student and preparing for the literature review seminar I am due to give on Friday – the first staging post on the PhD route, at which I have to persuade the department I have been serious about reading around my subject.

Today I went to a departmental seminar, presented by Professor Ulrike Hahne of Birkbeck College (and formerly of Cardiff University). She spoke on the nature of “similarity” – as is the nature of these things it was a quick rattle through a complex subject, and if the summary that follows is inaccurate then I am to blame and not Professor Hahne.

Professor Hahne is a psychologist but she has worked with computer scientists and so her seminar did cut into computer science issues. She began by stating that it was fair to say that all things are equally the same (or different) – in the sense that one can find an infinite number of respects in which two things can be categorised in the same way (object A weighs less than 1kg, object B weighs less than 1kg, they both weigh less than 2kg and so on). I am not sure I accept this argument in its entirety – in what way is an object different from itself? But that is a side issue, because her real point was that similarity and difference are products of human cognition, which I can broadly accept.

So how do we measure similarity and difference? Well, the “simplest” way is to measure the “distance” between two stimuli in the standard geometric way – the root of the sum of the squares of the differences along each dimension – which is how we measure the difference between colours in a colour space (about which more later). This concept has even been developed into the “universal law of generalisation”. The idea has achieved much but has major deficiencies.
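
In symbols, for two points x and y in an n-dimensional space that is just the familiar Euclidean distance:

d(x, y) = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xn - yn)^2)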

Professor Hahne outlined some of the alternatives before describing her interest in (and defence of) the idea that the key to difference is the number of mental transformations required to change one thing into another – for instance, how different is a square from a triangle? Two transformations are required: first to think of the triangle and then to replace the square with it, and so on.

In a more sophisticated way, the issue is the Kolmogorov complexity of the transformation. The shorter the program we can write to make the transformation, the more similar the objects are.

This, it strikes me, has an important application in computer science, or at least it could have. To go back to the colour space issue: when I wrote the Perl module Image::Pngslimmer I had to write a lot of code that computed geometric distances between colour points – a task that Perl is very poor at, as maths is slow there. This was to implement the so-called “median cut” algorithm (pleased to say that the Wikipedia article on the median cut algorithm cites my code as an example, and it wasn’t even me who edited it to that, at least as far as I can remember!), where colours are quantised to those at the centre of “median cut” boxes in the colour space. Perhaps there is a much simpler way to make this transformation and so compress the PNG more quickly?
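
For what it’s worth, the splitting step at the heart of median cut is simple enough to sketch in a few lines of C – this is not the Image::Pngslimmer code, just an illustration, and the names (rgb, median_cut_split) are mine: find the colour channel with the widest range in a box of pixels, sort the box on that channel and split at the median.

#include <stdlib.h>

typedef struct { unsigned char r, g, b; } rgb;

static int sort_channel;   /* 0 = red, 1 = green, 2 = blue */

static unsigned char channel(const rgb *p, int c)
{
    return c == 0 ? p->r : c == 1 ? p->g : p->b;
}

static int cmp(const void *a, const void *b)
{
    return (int)channel((const rgb *)a, sort_channel)
         - (int)channel((const rgb *)b, sort_channel);
}

/* Split pixels[0..n-1] in place on the channel with the widest range;
   returns the index at which the second box starts. */
static size_t median_cut_split(rgb *pixels, size_t n)
{
    int widest = 0, range = -1;
    for (int c = 0; c < 3; c++) {
        unsigned char lo = 255, hi = 0;
        for (size_t i = 0; i < n; i++) {
            unsigned char v = channel(&pixels[i], c);
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        if (hi - lo > range) { range = hi - lo; widest = c; }
    }
    sort_channel = widest;
    qsort(pixels, n, sizeof(rgb), cmp);
    return n / 2;
}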

I asked Professor Hahne about this and she confirmed that her collaborator Professor Nick Chater of Warwick University is interested in this very question. When I have got this week out of the way I may have a look at his published papers and see if there is anything interesting there.