Why choice in software matters … a story from the real world


Ten years ago today something happened that has had a significant impact on many millions of people across the world … Mozilla 1.0 was released.


Above all else Mozilla, and its leaner, fitter offspring, Mozilla Firefox, is the most important piece of free (as in freedom) software ever produced. For sure, it stood on the shoulders of giants to get there, but by giving the world a real choice in browsers the Mozilla Foundation changed the rules for the Internet, forced Microsoft to get its act together and crushed that company’s attempts to bind us all into a proprietary software future (remember ActiveX anyone?) online.

It is probably going too far to say that without Mozilla there would be no Arab Spring, for instance, but maybe not by much. Because Mozilla and Firefox also taught the public that there were alternatives out there and so the future did not have to be about whatever Baby Blue said it was. And that willingness to experiment online is helping power the mass adoption of smartphones, which are the weapons of choice for online revolutionaries.

It is easy to forget how bad it had got before Mozilla came along … Internet Explorer was a truly atrocious application that was not updated for several years. Microsoft had no interest in open standards because it had no competition. Mozilla changed all that. Not instantly, but the pressure began immediately.

Working set heuristics and the Linux kernel: my MSc report


My MSc project was titled “Applying Working Set Heuristics to the Linux Kernel” and my aim was to test some local page replacement policies in Linux, which uses a global page replacement algorithm, based on the “2Q” principle.

There is a precedent for this: the so-called “swap token” is a local page replacement policy that has been used in the Linux kernel for some years.

My aim was to see if a local replacement policy graft could help tackle “thrashing” (when a computer spends so much time trying to manage memory resources – generally swapping pages back and forth to disk – it makes little or no progress with the task itself).

The full report (uncorrected – the typos have made me shudder all the same) is linked at the end; what follows is a relatively brief and simplified summary.

Fundamentally I tried two approaches: acting on large processes when the number of free pages fell to one of the watermark levels used in the kernel, and acting on the process last run or most likely to run next.

For the first, my thinking – backed by some empirical evidence – was that the largest process tended to consume much more memory than even the second largest. For the second, the thought was that making the process next to run more memory efficient would make the system as a whole run faster, and that, in any case, the process next to run was also quite likely (again, some empirical evidence supported this) to be the biggest consumer of memory in the system.

To begin I reviewed the theory that underlies the claims for the superiority of the working set approach to memory management – particularly that it can run optimally with lower resource use than an LRU (least recently used) policy.

Peter Denning, the discoverer of the “working set” method and its chief promoter, argued that programs in execution do not smoothly and slowly change their fields of locality, but transition from region to region rapidly and frequently.
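Denning’s working set W(t, τ) is simply the set of distinct pages a process has referenced in its last τ references. A toy sketch (my illustration, not code from the report; the trace is invented) shows why its size spikes at a phase transition:

```python
# A minimal sketch of Denning's working set W(t, tau): the set of
# distinct pages referenced in the last tau references. The synthetic
# trace below has an abrupt phase transition, as Denning described.

def working_set_sizes(trace, tau):
    """Return |W(t, tau)| for each position t in a page-reference trace."""
    sizes = []
    for t in range(len(trace)):
        window = trace[max(0, t - tau + 1): t + 1]
        sizes.append(len(set(window)))
    return sizes

# Phase 1 touches pages 0-4; phase 2 touches pages 100-104.
trace = [t % 5 for t in range(50)] + [100 + t % 5 for t in range(50)]
sizes = working_set_sizes(trace, tau=20)

print(max(sizes[:40]))  # 5: steady state within phase 1
print(max(sizes))       # 10: peak at the transition, both phases in the window
```

The peak at the transition is exactly the effect described below: for a short period the window spans pages from both localities at once.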

The evidence I collected – using the Valgrind program and some software I wrote to interpret its output – showed that Denning’s arguments appear valid for today’s programs.

Here, for instance, is the memory access pattern of Mozilla Firefox:

Mozilla Firefox memory usage

Working set size can therefore vary rapidly, as this graph shows:

Working set size for Mozilla Firefox

It can be seen that peaks of working set size often occur at the point of phase transition – as the process will be accessing memory from the two phases at the same time or in rapid succession.

Denning’s argument is that the local policy suggested by the working set method allows for this rapid change of locality – as the memory space allocated to a given program is free to go up and down (subject to the overall constraint on resources, of course).

He also argued that the working set method will – at least in theory – deliver a better space-time product (a measure of overall memory use) than a local LRU policy. Again my results confirmed his earlier findings: they showed that, for a given average size of the set of pages held in memory, the working set method will ensure longer times between page faults than a local LRU policy – as shown in this graph:

Firefox lifetime curves

Here the red line marks the theoretical performance of a working set replacement policy and the blue line that of a local LRU policy. The y-axis marks the average number of instructions executed between page faults, the x-axis the average resident set size. The working set method clearly outperforms the LRU policy at low resident set values.
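The intuition behind those curves can be reproduced with a toy simulation (my illustration, not the report’s instrumentation; the trace and parameters are invented). A fixed-allocation local LRU thrashes whenever a phase’s locality exceeds its allocation, while a working-set policy with a comparable average resident set size takes only cold faults:

```python
from collections import OrderedDict

def lru_faults(trace, frames):
    """Fixed-allocation (local) LRU: count page faults with `frames` frames."""
    cache = OrderedDict()
    faults = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(cache) >= frames:
                cache.popitem(last=False)    # evict least recently used
            cache[page] = None
    return faults

def ws_faults(trace, tau):
    """Working-set policy: a page is resident iff referenced in the last
    tau references; return (faults, mean resident-set size)."""
    last_ref = {}
    faults = 0
    resident_total = 0
    for t, page in enumerate(trace):
        resident = {p for p, lr in last_ref.items() if t - lr <= tau}
        if page not in resident:
            faults += 1
        last_ref[page] = t
        resident_total += len(resident | {page})
    return faults, resident_total / len(trace)

# Phase 1 cycles through 8 pages; phase 2 cycles through 2 pages.
trace = [i % 8 for i in range(400)] + [100 + i % 2 for i in range(400)]

print(lru_faults(trace, frames=5))   # 402: LRU thrashes throughout phase 1
print(ws_faults(trace, tau=16)[0])   # 10: only cold faults; mean resident ~5
```

With five frames the LRU allocation is smaller than phase 1’s locality, so the cyclic pattern defeats it completely; the working-set policy simply grows to hold the phase and shrinks again afterwards, at a similar average cost in memory.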

The ‘knee’ in either plot, where \frac{dy}{dx} is maximised, is also the point of lowest space-time product – and this occurs at a much lower resident set size for the working set method than for local LRU.

So, if Denning’s claims for the working set method are valid, why is it that no mainstream operating system uses it? VMS and Windows NT (which share a common heritage) use a local page replacement policy, but both are closer to the page-fault-frequency replacement algorithm – which varies fixed allocations based on fault counts – than a true working set-based replacement policy.

The working set method is just too difficult to implement – pages need to be marked for the time they are used and to really secure the space-time product benefit claimed, they also need to be evicted from memory at a specified time. Doing any of that would require specialised hardware or complex software or both, so approximations must be used.

“Clock pressure”

For my experiments I concentrated on manipulating the “CLOCK” element of the page replacement algorithm: this removes or downgrades pages if they have not been accessed in the time between alternate sweeps of an imaginary second hand of an equally imaginary clock. “Clock pressure” could be increased – i.e., pages made more vulnerable to eviction – by systematically marking them as unaccessed, while pages could be preserved in memory by marking them all as having been accessed.
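As a rough illustration of what that means (a toy model, not kernel code; the classes and numbers are invented), consider a second-chance CLOCK sweep: clearing the referenced bits of one process’s pages makes them the next sweep’s victims.

```python
# Toy second-chance CLOCK: each page has a "referenced" bit; the sweeping
# hand clears set bits (a second chance) and evicts pages whose bit is
# already clear. "Increasing clock pressure" on a process amounts to
# clearing its pages' bits so the next sweep takes them first.

class Page:
    def __init__(self, pid):
        self.pid = pid
        self.referenced = True

def clock_sweep(pages, hand, need):
    """Evict `need` pages, starting the hand at index `hand`."""
    evicted = []
    n = len(pages)
    while len(evicted) < need:
        page = pages[hand % n]
        if page.referenced:
            page.referenced = False      # second chance: clear and move on
        elif page not in evicted:
            evicted.append(page)         # bit already clear: evict
        hand += 1
    return evicted

def increase_clock_pressure(pages, pid):
    """Mark one process's pages unaccessed, making them eviction candidates."""
    for p in pages:
        if p.pid == pid:
            p.referenced = False

pages = [Page(pid) for pid in (1, 1, 2, 2, 2)]
increase_clock_pressure(pages, pid=2)    # victimise the "large" process 2
victims = clock_sweep(pages, hand=0, need=2)
print([p.pid for p in victims])          # process 2's pages go first
```

Decreasing clock pressure is the mirror image: setting the bits so a sweep merely clears them, buying the pages another full rotation in memory.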

The test environment was compiling the Linux kernel – and I showed that the time taken for this was highly dependent on the memory available in a system:

Compile time for the unpatched kernel

The red line suggests that, for all but the lowest memory, the compile time is proportional to M^{-4} where M is the system memory. I don’t claim this is a fundamental relationship, merely what was observed in this particular set-up (I have a gut feeling it is related to the number of active threads – this kernel was built using the -j3 switch, and at the low memory end the swapper was probably more active than the build, but again I have not explored this).

Watermarks

The first set of patches I tried were based on waiting for free memory in the system to sink to one of the “watermarks” the kernel uses to trigger page replacement. My patches found the largest process and then either increased clock pressure – i.e., made the pages from this large process more likely to be removed – or decreased it, i.e., made it more likely these pages would be preserved in memory.
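The selection step can be sketched as follows (a hypothetical sketch: the names and threshold are invented, and the real kernel watermarks are per-zone values derived from system memory, not a single constant):

```python
# Hypothetical sketch of the watermark-triggered selection logic:
# when free pages dip below a watermark, pick the largest process as
# the one whose pages will have their clock pressure adjusted.

WMARK_LOW = 1000   # illustrative free-page threshold, not a real kernel value

def select_victim(free_pages, resident_pages):
    """resident_pages: dict of pid -> resident page count.
    Return the pid to act on, or None if there is no memory pressure."""
    if free_pages >= WMARK_LOW or not resident_pages:
        return None
    return max(resident_pages, key=resident_pages.get)   # largest process

print(select_victim(1500, {1: 200, 2: 900}))          # None: above watermark
print(select_victim(500, {1: 200, 2: 900, 3: 400}))   # 2: largest process
```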

In fact the result in either case was similar – at higher memory values there was a small but noticeable decline in performance, but at low memory values performance declined sharply – possibly because moving pages between the “queues” of cached pages involves locking (though, as later results showed, it is also likely that the approach simply does not interact optimally with the existing mechanisms for keeping or evicting pages).

The graph below shows a typical result of an attempt to increase clock pressure – patched times are marked with a blue cross.

patched and unpatched compilation times

The second approach was to interact with the “completely fair scheduler” (CFS) and increase or decrease clock pressure on the process least likely to run or most likely to run.

The CFS orders processes in a red-black tree (a self-balancing binary tree): the rightmost node is the process least likely to run next and the leftmost the process most likely to run next (as it has run for the shortest amount of virtual time).
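A minimal model of that ordering (not the kernel’s rbtree code; the task names and vruntime values are invented):

```python
# Simplified CFS ordering: runnable tasks keyed by virtual runtime.
# The smallest vruntime -- the tree's leftmost node -- runs next;
# the largest -- the rightmost node -- is least likely to run next.

tasks = {"editor": 120.0, "compiler": 45.5, "daemon": 300.2}  # name -> vruntime

ordered = sorted(tasks, key=tasks.get)
leftmost, rightmost = ordered[0], ordered[-1]

print(leftmost)    # compiler: shortest virtual runtime, runs next
print(rightmost)   # daemon: longest virtual runtime, runs last
```

The patches below act on exactly these two extremes of the tree.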

As before the idea was to either free memory (increase clock pressure) or hold needed pages in memory (decrease clock pressure). The flowchart below illustrates the mechanism used for the leftmost process (and decreasing clock pressure):

decreasing clock pressure on the leftmost process

But again the results were generally similar – a general decline, and a sharp decline at low memory values.

(In fact, locking in memory of the leftmost process actually had little effect – as shown below:)

promoting pages in the leftmost process in the CFS tree

But when the same approach was taken to the rightmost process – i.e., the process that has run for the longest time (and presumably may also run for a long time in the future) – the result was a catastrophic decline in performance at small memory values:

locking pages in the rightmost process

And what is behind the slowdown? Using profiling tools, the biggest reason seems to be that the wrong pages are being pushed out of the caches and need to be fetched back in. At 40MB of free memory both patched and unpatched kernels show similar profiles, with most time spent scheduling and waiting for I/O requests – but the slowness of the patched kernel shows that this has to be done many more times there.

Profile of unpatched kernel at 40MB

Profile for patched kernel at 40MB

There is much more in the report itself – including an examination of Denning’s formulation of the space-time product, which I conclude is flawed (update: in fairness to Peter Denning, who has pointed this out to me, this applies to his approximation of the space-time product: Denning’s modelling in the 70s also accounted for the additional time required to manage the working set) as it disregards the time required to handle page replacement. The above is all a (necessary) simplification of what is in the report – so if you are interested, please read that.

Applying working set heuristics to the Linux kernel

Some thoughts on the Linux page cache


I am still waiting for the result of the Birkbeck exam board to find out whether I have been awarded the MSc, but I think I have waited long enough before discussing my thoughts on some of the issues around page caching that my project raised.

Linux’s page caching is based on the “2Q” concept – pages in use by the system are stored in one of two queues (hence “2Q”), each page in one of two states – see the diagram below, though this also includes states for pages being flushed from the cache:

Page lifecycle

Typically a page enters the cache in the INACTIVE UNREF state. If the cache needs to be trimmed pages in this state are the most likely to go (I am simplifying this for ease of exposition). On reference the page will be promoted to the INACTIVE REF state and on a further reference could join the ACTIVE queue – again in the ACTIVE UNREF state.

A “clock” mechanism with two sweeping hands controls the demotion of pages: the first hand marks any page in the REF state as UNREF, and if a subsequent reference has not pushed the page back to the REF state then the page could be either removed from the ACTIVE queue or, if in the INACTIVE queue, removed from the cache completely (this is actually a bit more complicated: for instance, pages which are “dirty”, i.e. requiring write-back to disk, are less likely to be demoted than clean pages, which can simply be dropped, and so on).

The 2Q mechanism is designed to stop pages which are only referenced once (e.g. a page required on initialisation) from being kept in the cache in preference to a page that might be referenced more than once – essentially, frequently referenced pages get bumped up to the ACTIVE queue and so are less likely to be pushed out of the cache – e.g. a page in the ACTIVE REF state would face three calls to shrink_xxxx_list before being removed, and if another reference happens in that time (crudely, three sweeps of the clock) it will be preserved in the cache.
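The promotion path described above can be modelled in a few lines (a deliberate simplification, not the kernel implementation): a page referenced only once stays at the front of the reclaim order, while a repeatedly referenced page climbs towards the ACTIVE queue.

```python
# Toy model of the 2Q promotion path: a page enters INACTIVE/UNREF and
# each further reference bumps it one step; reclaim considers the
# INACTIVE queue before the ACTIVE one, and UNREF before REF.

PROMOTION = {
    ("INACTIVE", "UNREF"): ("INACTIVE", "REF"),
    ("INACTIVE", "REF"):   ("ACTIVE", "UNREF"),
    ("ACTIVE", "UNREF"):   ("ACTIVE", "REF"),
    ("ACTIVE", "REF"):     ("ACTIVE", "REF"),
}

class Cache:
    def __init__(self):
        self.state = {}   # page -> (queue, ref-state)

    def touch(self, page):
        if page in self.state:
            self.state[page] = PROMOTION[self.state[page]]
        else:
            self.state[page] = ("INACTIVE", "UNREF")   # new page

    def reclaim_order(self):
        """Pages in the order reclaim would consider evicting them."""
        rank = {("INACTIVE", "UNREF"): 0, ("INACTIVE", "REF"): 1,
                ("ACTIVE", "UNREF"): 2, ("ACTIVE", "REF"): 3}
        return sorted(self.state, key=lambda p: rank[self.state[p]])

c = Cache()
c.touch("init_page")             # referenced once: stays INACTIVE/UNREF
for _ in range(3):
    c.touch("hot_page")          # referenced repeatedly: climbs to ACTIVE
print(c.reclaim_order()[0])      # init_page would be evicted first
```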

This behaviour is generally thought to be a strength, but what if it were also a weakness?

The graphic below shows the memory access patterns of a simple Firefox open and close:

Firefox memory access mapping

It can immediately be seen that memory access falls into clear phases. The problem with 2Q is that it works well within those phases, but as the application moves from one phase of memory access to another, 2Q risks holding pages from the old phase in memory for too long – indeed, pages from the new phase could even face eviction in order to preserve pages from the old phase that remain on the ACTIVE list, if memory pressure were high. The old pages have a very high or even infinite reuse distance, but are privileged by 2Q.

How can this be fixed? That’s not so easy to answer – especially as the pages from the old phase may be in use elsewhere in the system (for instance if they came from a shared library). But I am convinced that finding an answer to this could have a big impact on the performance of low memory systems.

In your face, fan bois



There is something very annoying about Apple users: they pay twice as much as the rest of us for stuff and then think that they are the clever and cool ones. How does that work?

Well, here’s some bad news for them: their browser is almost certainly broken.

If you look at the graphs on this post, you can see that Safari seems to get worse with every iteration – the opposite to the way Internet Explorer and Firefox are going.

Ha ha.

That is all.

Using Firefox version 7



I am writing this using “Aurora”, version 7.0a2 of Firefox, served to me using X/Windows from a server in Germany (of course X11 aficionados will point out that the X server is on my laptop in North London).

Partly I am just doing it because I can and partly just to show off!

Still only manages 97/100 on the Acid 3 test.

Time to ditch Firefox?



Just about every internet user, even if they have never used Firefox, owes the Mozilla Foundation an enormous amount for the creation of Firefox.

Its injection of competition back into the mass browser market stimulated a new drive towards standards and speed that has made a huge difference to all users: Internet Explorer 6 came out in 2001 and languished, largely unimproved, until Firefox’s success finally prompted a new version of IE in 2006 – and since then competition has been fierce.

But is Firefox really up to it any more? I have just tried the JavaScript benchmarks at http://jsbenchmark.celtickane.com and my copy of Firefox (it is still my browser of choice at home) scored just 89 on average over 10 runs. Google Chrome managed 313 on the same box (a now quite old Pentium D).

And, of course, Chrome is more standards compliant. It doesn’t progress entirely smoothly to 100/100 on the Acid 3 test – but it does get there, while my version of Firefox only manages 94/100.