Making the case that #PartTimeMatters


As a Birkbeck alumnus I have received an email from the college master, David Latchman, asking me to help promote part time higher education – and I am more than happy to do just that.

Professor Latchman writes:

Please help Birkbeck and support an important national #PartTimeMatters campaign to protect part-time higher education. In Adult Learners’ Week (18-24 May), a cross-sector group including CBI, NUS, Universities UK, NIACE, WEA, UCU, Million Plus and of course Birkbeck and the OU is very keen to generate noise about some key messages:

• Part-time HE matters – there is a wealth of recent research that says that part-time HE matters for a whole variety of reasons.  It supports the skills agenda and economic growth, it allows employees to upskill and reskill while still working, employers really value it, it creates opportunity for social mobility and has a significant impact on the life of the individual student

• Part-time HE was the biggest casualty of the 2012 reforms with a 40% downturn in enrolments across England.  Part-time students make up a third of the undergraduate population and their future must be safeguarded

• Something must be done and part-time HE must be protected for future generations of adult and non-traditional learners

HOW YOU CAN HELP

The #PartTimeMatters campaign encourages you to act this week:

• Write to your MP – click here for a template letter that outlines the case for part-time HE and your MP’s contact details. An early day motion in the House of Commons and a question in the House of Lords will support this campaign
• Tell your story about why part-time HE matters to you – click here to record why #PartTimeMatters to you.  Whether you’re a student, a former student, a member of staff, a governor, a donor, a friend – please record your story 


Yes, go to Birkbeck


Shield from the arms of Birkbeck College (Photo credit: Wikipedia)

I know that, around this time of year, a lot of people in London are considering whether to press on with their application to go to Birkbeck college, so here I am hoping to pick up the passing Google traffic and urging you to go if:

  • You are prepared to do the work;
  • You want to realise your potential;
  • You want to change your life.

Going to Birkbeck will not create a new you – but it could allow the real you to escape from wherever it has been hidden these last few years.

And it is never too late to change.

So, go for it!

And so to York


UNIVERSITY OF YORK (Photo credit: CARLOS62)

I am just back from two intellectually intoxicating days at the University of York, where I made an early start to my PhD (technically I am not starting until July) in real-time operating systems.

In a few years this may all look like a mad escapade – perhaps I am old enough to know better. But right now I admit to a slightly giddy excitement about it all.

The peak moment was when I walked into the library, a building you could live in for the rest of your life. Walking up to the top floor, where the computer science texts were stored, I stopped off to read a few pages in a book about the correspondence between Mary McCarthy and Hannah Arendt. I don’t know much about either woman, but I do know I want to know more.

The library also reminded me of the elevated status of research students – apparently, when I am issued with a card, I can borrow up to 50 items (this seems so outlandish I wonder if I misheard ’15’) for as long as three months.

Roughly speaking, there are about three times as many undergraduates in the UK as there were when I was one, so York (a middle-sized institution) is about 50% “bigger” than Edinburgh was when I was there. But numbers are not the only change one notices – there are far more overseas students than a quarter of a century ago, sought by the universities for the fees they pay and attracted to the UK by our universities’ reputation for excellence (both factors make the current government’s attitude of distrust towards foreign students look seriously flawed).

Cannot wait till I go back!

(And thanks again to Birkbeck for making it all possible).

Mission accomplished


Licensed for reuse under Creative Commons – credit required, no commercial use permitted

Today was graduation day for my MSc and I have to say I rather enjoyed it.

The master of Birkbeck, David Latchman, gave a very good speech, I thought, emphasising the college’s commitment to its part-time students and to helping them get the funding to which they are entitled. We also got a speech from Gulam Noon – who was being made a fellow – and he made his views very clear when he quoted the Hadith “the ink of scholars outweighs the blood of martyrs”.

The master also made the point that Birkbeck’s graduates are its greatest recruiters and that we should do our bit to encourage applications – which I am more than happy to do.

So, onwards and upwards.

Working set heuristics and the Linux kernel: my MSc report


My MSc project was titled “Applying Working Set Heuristics to the Linux Kernel” and my aim was to test some local page replacement policies in Linux, which uses a global page replacement algorithm, based on the “2Q” principle.
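To give a flavour of what “global” means here: the kernel keeps reclaimable pages on shared lists and makes eviction decisions across all processes at once, with recently re-referenced pages promoted to a protected list in the spirit of 2Q. The fragment below is only a toy model of that promotion/demotion decision – an illustration of the principle, not kernel code, and all of the names in it are mine.

    /* Toy model of the two-list idea behind Linux's global reclaim.
     * New or cold pages sit on an "inactive" list; a further reference
     * promotes them; reclaim evicts from the cold end of the inactive
     * list. Illustration only - not the kernel's code. */
    #include <stdbool.h>

    enum lru_list { LRU_INACTIVE, LRU_ACTIVE };

    struct page {
            enum lru_list list;   /* which of the two queues it is on   */
            bool referenced;      /* touched since it was last scanned  */
    };

    /* Called when reclaim scans a page; returns true if it should be
     * evicted now. */
    static bool scan_page(struct page *p)
    {
            if (p->list == LRU_INACTIVE) {
                    if (p->referenced) {            /* referenced again:   */
                            p->referenced = false;
                            p->list = LRU_ACTIVE;   /* promote and keep it */
                            return false;
                    }
                    return true;                    /* cold: evict         */
            }
            /* Active page being scanned: demote it if it has gone cold,
             * giving it one more pass through the inactive list. */
            if (!p->referenced)
                    p->list = LRU_INACTIVE;
            p->referenced = false;
            return false;
    }

That global scan is what any local-policy experiment has to graft onto.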

There is a precedent for this: the so-called “swap token” is a local page replacement policy that has been used in the Linux kernel for some years.

My aim was to see if a local replacement policy graft could help tackle “thrashing” (when a computer spends so much time trying to manage memory resources – generally swapping pages back and forth to disk – it makes little or no progress with the task itself).

The full report (uncorrected – the typos have made me shudder all the same) is linked at the end; what follows is a relatively brief and simplified summary.

Fundamentally I tried two approaches: acting on large processes when the number of free pages fell to one of the watermark levels used in the kernel and acting on the process last run or most likely to run next.

For the first, my thinking – backed by some empirical evidence – was that the largest process tended to consume much more memory than even the second largest. For the second, the thought was that making the process next to run more memory efficient would make the system as a whole run faster and that, in any case, the process next to run was also quite likely (and again some empirical evidence supported this) to be the biggest consumer of memory in the system.

To begin I reviewed the theory that underlies the claims for the superiority of the working set approach to memory management – particularly that it can run optimally with lower resource use than an LRU (least recently used) policy.

Peter Denning, the discoverer of the “working set” method and its chief promoter, argued that programs in execution do not smoothly and slowly change their fields of locality, but transition from region to region rapidly and frequently.

The evidence I collected – using the Valgrind program and some software I wrote to interpret its output – showed that Denning’s arguments appear valid for today’s programs.
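For anyone curious how such a trace can be turned into working set figures, the sketch below shows the general idea. It is not the software I actually used (the report describes that): it assumes a Valgrind “lackey” memory trace on standard input (valgrind --tool=lackey --trace-mem=yes), 4KiB pages, and it counts the distinct pages touched in the last TAU references – the textbook definition of working set size.

    /* Rough stand-in for a trace-analysis tool: reads a lackey-style
     * memory trace on stdin and prints the working set size
     * W(t, TAU) = distinct pages touched in the last TAU references,
     * sampled every SAMPLE references. Assumes 64-bit Linux and far
     * fewer distinct pages than TABLE_SIZE. */
    #include <stdio.h>

    #define PAGE_SHIFT 12u              /* 4 KiB pages                   */
    #define TABLE_SIZE (1ul << 20)      /* open-addressed table of pages */
    #define TAU        100000ul         /* window length, in references  */
    #define SAMPLE     100000ul         /* reporting interval            */

    static struct { unsigned long page, last_ref; int used; } table[TABLE_SIZE];

    static void touch(unsigned long page, unsigned long ref)
    {
            unsigned long i = (page * 11400714819323198485ul) & (TABLE_SIZE - 1);
            while (table[i].used && table[i].page != page)
                    i = (i + 1) & (TABLE_SIZE - 1);
            table[i].used = 1;
            table[i].page = page;
            table[i].last_ref = ref;
    }

    static unsigned long working_set_size(unsigned long now)
    {
            unsigned long i, w = 0;
            for (i = 0; i < TABLE_SIZE; i++)
                    if (table[i].used && now - table[i].last_ref < TAU)
                            w++;
            return w;
    }

    int main(void)
    {
            char line[128];
            unsigned long addr, ref = 0;

            while (fgets(line, sizeof line, stdin)) {
                    /* data lines look like " L 04e8f9c4,8" (load/store/modify);
                     * instruction-fetch lines start with 'I' and are skipped */
                    if (line[0] != ' ' || sscanf(line + 3, "%lx", &addr) != 1)
                            continue;
                    touch(addr >> PAGE_SHIFT, ++ref);
                    if (ref % SAMPLE == 0)
                            printf("%lu %lu\n", ref, working_set_size(ref));
            }
            return 0;
    }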

Here, for instance, is the memory access pattern of Mozilla Firefox:

Mozilla Firefox memory usage

Working set size can therefore vary rapidly, as this graph shows:

Working set size for Mozilla Firefox

It can be seen that peaks of working set size often occur at the point of phase transition – as the process will be accessing memory from the two phases at the same time or in rapid succession.

Denning’s argument is that the local policy suggested by the working set method allows for this rapid change of locality – as the memory space allocated to a given program is free to go up and down (subject to the overall constraint on resources, of course).

He also argued that the working set method will – at least in theory – deliver a better space-time product (a measure of overall memory use) than a local LRU policy. Again my results confirmed his earlier findings in that they showed that, for a given average size of a set of pages in memory, the working set method will ensure longer times between page faults, compared to a local LRU policy – as shown in this graph:

Firefox lifetime curves

Here the red line marks the theoretical performance of a working set replacement policy and the blue line that of a local LRU policy. The y-axis marks the average number of instructions executed between page faults, the x-axis the average resident set size. The working set method clearly outperforms the LRU policy at low resident set values.

The ‘knee’ in either plot, where \frac{dy}{dx} is maximised, is also the point of lowest space-time product – and this occurs at a much lower value for the working set method than for local LRU.
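As an illustration of what “the knee” means in practice, the snippet below walks a set of sampled (resident set size, lifetime) points and reports where the slope is steepest. The numbers in it are invented purely to show the shape of the calculation – they are not the Firefox measurements from the graph.

    /* Find the "knee" of a lifetime curve: the segment where the slope
     * d(lifetime)/d(resident set size) is greatest. Sample values are
     * made up for illustration. */
    #include <stdio.h>

    int main(void)
    {
            static const double rss[]      = { 500, 1000, 1500,  2000,  2500,  3000 };
            static const double lifetime[] = { 2e4,  9e4,  6e5, 2.2e6, 2.5e6, 2.6e6 };
            const int n = sizeof rss / sizeof rss[0];
            double best_slope = 0.0;
            int knee = 0;

            for (int i = 0; i + 1 < n; i++) {
                    double slope = (lifetime[i + 1] - lifetime[i]) /
                                   (rss[i + 1] - rss[i]);
                    if (slope > best_slope) {
                            best_slope = slope;
                            knee = i;
                    }
            }
            printf("knee between %.0f and %.0f resident pages\n",
                   rss[knee], rss[knee + 1]);
            return 0;
    }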

So, if Denning’s claims for the working set method are valid, why is it that no mainstream operating system uses it? VMS and Windows NT (which share a common heritage) use a local page replacement policy, but both are closer to the page-fault-frequency replacement algorithm – which varies fixed allocations based on fault counts – than to a true working set-based replacement policy.
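For completeness, here is roughly what the page-fault-frequency idea looks like in code: sample each process’s fault rate, grow its allocation when faults come too fast, shrink it when they come too slowly. The constants and names below are invented for illustration – this is not how VMS or NT actually express it.

    /* Sketch of the page-fault-frequency idea: adjust a per-process
     * resident-set limit between two fault-rate thresholds. Purely
     * illustrative - constants and structure are invented. */
    #include <stddef.h>

    #define PFF_HIGH 50u   /* faults per sampling interval: allocation too small */
    #define PFF_LOW   5u   /* faults per sampling interval: allocation too big   */
    #define PFF_MIN  16u   /* never shrink below this many pages                 */

    struct proc_mem {
            size_t resident_limit;    /* pages this process may keep resident */
            size_t faults_this_tick;  /* reset at every sampling interval     */
    };

    static void pff_adjust(struct proc_mem *p)
    {
            if (p->faults_this_tick > PFF_HIGH)
                    p->resident_limit += p->resident_limit / 4;       /* grow   */
            else if (p->faults_this_tick < PFF_LOW &&
                     p->resident_limit > PFF_MIN)
                    p->resident_limit -= p->resident_limit / 8;       /* shrink */
            p->faults_this_tick = 0;
    }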

The working set method is just too difficult to implement – pages need to be marked for the time they are used and, to really secure the space-time product benefit claimed, they also need to be evicted from memory at a specified time. Doing any of that would require specialised hardware or complex software or both, so approximations must be used.

“Clock pressure”

For my experiments I concentrated on manipulating the “CLOCK” element of the page replacement algorithm: this removes or downgrades pages if they have not been accessed in the time between alternate sweeps of an imaginary second hand of an equally imaginary clock. “Clock pressure” could be increased – i.e., pages made more vulnerable to eviction – by systematically marking them as unaccessed, while pages could be preserved in memory by marking them all as having been accessed.
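The toy model below shows what that means concretely: a clock hand walks a ring of frames, evicting a frame whose referenced bit is clear and clearing the bit otherwise; forcing a process’s bits clear (“more pressure”) or set (“less pressure”) biases which pages the next sweep will take. It is a schematic of the idea, not the kernel’s own data structures, and the function names are mine.

    /* Schematic CLOCK sweep plus the two "clock pressure" operations.
     * Not kernel code: frames here are just an array treated as a ring. */
    #include <stdbool.h>
    #include <stddef.h>

    struct frame {
            int owner_pid;      /* process the resident page belongs to */
            bool referenced;    /* accessed since the hand last passed  */
    };

    /* Advance the hand until a victim is found; clear bits on the way. */
    static size_t clock_sweep(struct frame *frames, size_t n, size_t *hand)
    {
            for (;;) {
                    size_t victim = *hand;
                    *hand = (*hand + 1) % n;
                    if (!frames[victim].referenced)
                            return victim;             /* cold since last pass: evict */
                    frames[victim].referenced = false; /* second chance               */
            }
    }

    /* Increase clock pressure on one process: make its pages look cold. */
    static void add_clock_pressure(struct frame *frames, size_t n, int pid)
    {
            for (size_t i = 0; i < n; i++)
                    if (frames[i].owner_pid == pid)
                            frames[i].referenced = false;
    }

    /* Decrease clock pressure: make its pages look recently used. */
    static void ease_clock_pressure(struct frame *frames, size_t n, int pid)
    {
            for (size_t i = 0; i < n; i++)
                    if (frames[i].owner_pid == pid)
                            frames[i].referenced = true;
    }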

The test environment was compiling the Linux kernel – and I showed that the time taken for this was highly dependent on the memory available in a system:

Compile time for the unpatched kernel

The red line suggests that, for all but the lowest memory, the compile time is proportional to M^{-4} where M is the system memory. I don’t claim this is a fundamental relationship, merely what was observed in this particular set-up (I have a gut feeling it is related to the number of active threads – this kernel was built using the -j3 switch, and at the low memory end the swapper was probably more active than the build, but again I have not explored this).

Watermarks

The first set of patches I tried was based on waiting for free memory in the system to sink to one of the “watermarks” the kernel uses to trigger page replacement. My patches looked for the largest process and then either looked to increase clock pressure – i.e., make the pages from this large process more likely to be removed – or to decrease it, i.e., make it more likely these pages would be preserved in memory.
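In outline, the watermark patches needed a way to pick out the biggest process before leaning on its pages. The fragment below is a much-simplified sketch of that selection step, written against the kernel interfaces I believe were current at the time (for_each_process, get_mm_rss); it is not the actual patch, and a real version has to worry about the returned task disappearing once the RCU read lock is dropped.

    /* Much-simplified sketch: pick the process with the largest resident
     * set as the candidate for extra (or reduced) clock pressure.
     * Kernel-style code, but not the patch as submitted. */
    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/rcupdate.h>

    static struct task_struct *largest_process(void)
    {
            struct task_struct *p, *biggest = NULL;
            unsigned long max_rss = 0;

            rcu_read_lock();
            for_each_process(p) {
                    unsigned long rss;

                    if (!p->mm)                /* skip kernel threads */
                            continue;
                    rss = get_mm_rss(p->mm);   /* resident pages      */
                    if (rss > max_rss) {
                            max_rss = rss;
                            biggest = p;
                    }
            }
            rcu_read_unlock();

            /* NB: real code must pin the task (get_task_struct) before
             * the RCU read lock is released. */
            return biggest;
    }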

In fact the result in either case was similar – at higher memory values there seemed to be a small but noticeable decline in performance, but at low memory values performance declined sharply – possibly because moving pages from one of the “queues” of cached pages involves locking (though, as later results showed, also likely because the approach simply is not optimal in its interaction with the existing mechanisms to keep or evict pages).

The graph below shows a typical result of an attempt to increase clock pressure – patched times are marked with a blue cross.

patched and unpatched compilation times

The second approach was to interact with the “completely fair scheduler” (CFS) and increase or decrease clock pressure on the process least likely to run or most likely to run.

The CFS orders processes in a red-black tree (a semi-balanced tree) and the rightmost node is the process least likely to run next and the leftmost the process most likely to run next (as it has run for the shortest amount of virtual time).
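For reference, locating those two processes is a small operation on the scheduler’s tree. The sketch below uses the field names of the kernels of that era (tasks_timeline on struct cfs_rq, run_node on struct sched_entity) and the generic rb_first/rb_last helpers; treat it as an outline rather than the exact patch, not least because struct cfs_rq lives in the scheduler’s private headers.

    /* Sketch: the entities at the two edges of the CFS timeline.
     * Leftmost = smallest virtual runtime (will run next);
     * rightmost = largest virtual runtime (least likely to run next). */
    #include <linux/rbtree.h>
    #include <linux/sched.h>

    static struct sched_entity *timeline_edge(struct cfs_rq *cfs_rq,
                                              bool leftmost)
    {
            struct rb_node *node = leftmost ?
                    rb_first(&cfs_rq->tasks_timeline) :
                    rb_last(&cfs_rq->tasks_timeline);

            return node ? rb_entry(node, struct sched_entity, run_node) : NULL;
    }

    /* For a plain task (no group scheduling) the owning task_struct is
     * recovered with container_of(se, struct task_struct, se). */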

As before the idea was to either free memory (increase clock pressure) or hold needed pages in memory (decrease clock pressure). The flowchart below illustrates the mechanism used for the leftmost process (and decreasing clock pressure):

decreasing clock pressure on the leftmost process

But again the results were broadly similar – a general decline in performance, and a sharp decline at low memory values.

(In fact, locking the leftmost process’s pages in memory actually had little effect – as shown below:)

promoting pages in the leftmost process in CFS tree

But when the same approach was taken to the rightmost process – i.e., the process that has run for the longest time (and presumably may also run for a long time in the future) – the result was a catastrophic decline in performance at small memory values:

locking pages in the rightmost process

And what is behind the slowdown? Using profiling tools, the biggest reason seems to be that the wrong pages are being pushed out of the caches and need to be fetched back in. At 40MB of free memory both patched and unpatched kernels show similar profiles, with most time spent scheduling and waiting for I/O requests – but the slowness of the patched kernel shows that this has to be done many more times there.

Profile of unpatched kernel at 40MB

Profile for patched kernel at 40MB

There is much more in the report itself – including an examination of Denning’s formulation of the space-time product, which I conclude is flawed as it disregards the time required to handle page replacement (update: in fairness to Peter Denning, who has pointed this out to me, this is as regards his approximation of the space-time product: Denning’s modelling in the 70s also accounted for the additional time that was required to manage the working set) – and the above is all a (necessary) simplification of what is in the report, so if you are interested please read that.

Applying working set heuristics to the Linux kernel

Master of Science


I am very pleased to report that I have been awarded the degree of Master of Science in Computer Science – with distinction.

So, a PhD next? Maybe, need to explore the options.

I think I will post up my project report in a few days for those of you who might be interested in Linux memory management and page reclaim.

 

 

Coming close to the end


Shows the kernel's role in a computer.
Image via Wikipedia

Suddenly, the writing phase of my MSc project feels close to the end.

It’s not over by any means – I have a whole substantial section to write on experimental results and then another stating why the two main sets of experiments had similar (negative) results. But the end is visible.

That is actually a bit scary. Because while the writing goes on it is possible to think “I can smooth that contradiction out later” – now later is getting close to now.

Actually, the remarkable thing so far is how coherent it all feels. Ten days ago, my results in the notebook seemed to be little more than a collection of random numbers produced by equally random software patches. But that impression was, it is now clear, mainly panic driven – actually I seem to have approached the experimental phase in a reasonably sensible and rational way.

It is quite reassuring to discover you are not an idiot after all!

Birkbeck


Birkbeck College
Image by Matt From London via Flickr

I get a lot of people coming to the site clearly considering whether it is worth their while taking up their offer of a place to do an MSc in Computer Science at Birkbeck. So, I want to say that they should, but they should think about:

  • Have they got the time – for a part-timer this can mean three evenings a week of three-hour lectures: you can get away with missing some (as I had to because I was out of the country) but you will suffer as a result.
  • Have they got the right mental attitude – I have to say I was a bit iffy on this in the first year (until the shock of the exams), as I thought it was all C++ (been there, done that) and a bit of ‘A’ level maths – it turned out to be a bit harder than that after all!

It’s quite hard work in the end – especially the very end, with the project – your friends and relations will think that because you have passed the exams it must be a doddle from this point on, when it actually gets worse. You can think of it as passing the exams meaning you have reached the level of a graduate; you then have to complete the project to earn the Masters – so three years in two years (or one if you are a full-timer) and then another year in three months!

But it’s also great, an intellectual stretch and a gateway to lots of other things.

If you do take it up, try and read up some stuff in advance – I really wish I had done that in my first year – I have a few recommendations here, but there are lots of other books to read.

Exams are over, project remains


Tux, the Linux penguin
Image via Wikipedia

Today saw my last exam for the MSc, though I still have to complete the project.

So, lots of posts about Linux page management to follow – I hope.

What’s next?


Students taking a test at the University of Vi...
Image via Wikipedia

My first exam in the second year of the (part-time) MSc is tomorrow and I guess I am writing this blog partly as a way of avoiding more revision, but partly also because if last year’s experience is any guide, that exam will knock the stuffing out of any optimism I have, so I shall write something now while I still have some hope.

The exams are not the end of the degree – if I pass them then technically I can claim a postgraduate diploma, but I already have one of those, in Journalism Studies from Westminster, and as was said to me at the time, “it’s just about worth the paper it is printed on”: I learnt a lot but nobody much else is impressed.

To get the degree I need to complete my project on memory management in the Linux kernel – it’s an ambitious project and time will be short so it may get frantic.

But when it’s over, what will I do? I don’t plan to work in IT: 45 seems quite an age to go from reasonable success and some prospects in one career to starting at the bottom in any case.

But nor do I want to abandon science for a second time. A part-time PhD? That really is a long term commitment, though.