# Cambridge University and computer science

The University of Cambridge Computer Laboratory in West Cambridge, south of the Madingley Road. (Photo credit: Wikipedia)

Cambridge University has a stellar reputation for Computer Science in the UK.

The Computer Laboratory can trace its history back over more than 75 years (to a time when ‘computers’ were humans making calculations), while the wider University can claim Alan Turing as one of its own. And Sinclair Research, ARM, the Cambridge Ring – the list of companies and technical innovations associated with the University is a long one: it even had what was possibly the world’s first webcam.

But, according to today’s Guardian, it might need to work a bit harder with its undergraduates – the Guardian’s 2014 University Guide rates Cambridge as the best university in Britain overall but places it only 8th in computer science, and conspicuously gives it the worst rating (1/10) for “value added” – that is, the improvement from entry to degree for students.

Now, possibly this is because it is the toughest computer science course in the country to get into – the average successful applicant has more than 3 A* grades at A level (and 3 As at AS), compared with Imperial, the next place down, where 3 A*s would probably secure you a place – but there has to be more to it than that. It is even harder to get into biosciences at Cambridge, and yet they are rated 8/10 on the value added score.

Don’t get me wrong – I am sure Cambridge is fantastic at teaching computer science, but it is also given a lot of money on the basis that it is an elite institution, so it seems reasonable to ask for an explanation (from the Guardian too, of course!).

(Incidentally, it seems that Oxford teaches so few undergraduates computer science it cannot be rated at all.)

# More on P and NP

Euler diagram for P, NP, NP-Complete, and NP-Hard sets of problems. (Photo credit: Wikipedia)

From Frank Vega:

I wanted to answer one of your comments on your post “Even if P=NP we might see no benefit”, but I found I can no longer do so on that page, maybe due to a problem with my internet connection. I was the person who claimed a possible proof of the “P versus NP” problem in my paper “P versus UP”, which is published by the IEEE:

http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6272487

I have been working on that problem as a hobby since I graduated. I sent a preprint version to Arxiv with the intention of finding some kind of collaboration. I also tried to find help on other blogs. But finally I decided to send to an IEEE journal a new version, which differs from the Arxiv preprints, which I withdrew because they had some errors. Then I waited, and after revision the paper was published in an IEEE journal on 17th August 2012.

However, I wrote this paper in Spanish and I had the same version in English. So I decided to send it again to Arxiv, but they denied me that possibility; therefore, I used a pseudonym. I also uploaded other papers under that name which are not so serious but reflect the work that I’m doing right now, also as a hobby.

I love Computer Science and Math. I’m working right now on a project as important as P versus NP, but I do all this as a way of doing the things I like most, although my environment doesn’t really allow for it. I have also tried to work with other scientists who have invited me to work with them since I published my paper with the IEEE. Indeed, I don’t want to be annoying with my comments; I am just looking for an exchange with other people who have the capacity to understand my work, that’s all.

Good Luck

Frank’s website is here:
http://the-point-of-view-of-frank.blogspot.com/

# Similarity, difference and compression

Perl (Photo credit: Wikipedia)

I am in York this week, being a student and preparing for the literature review seminar I am due to give on Friday – the first staging post on the PhD route, at which I have to persuade the department I have been serious about reading around my subject.

Today I went to a departmental seminar, presented by Professor Ulrike Hahne of Birkbeck College (and formerly of Cardiff University). She spoke on the nature of “similarity” – as is the way of these things, it was a quick rattle through a complex subject, and if the summary that follows is inaccurate then I am to blame and not Professor Hahne.

Professor Hahne is a psychologist but she has worked with computer scientists, and so her seminar did cut into computer science issues. She began by stating that it is fair to say that all things are equally the same (or different) – in the sense that one can find an infinite number of dimensions along which two things can be categorised in the same way (object A weighs less than 1kg, object B weighs less than 1kg, they both weigh less than 2kg and so on). I am not sure I accept this argument in its entirety – in what way is an object different from itself? But that’s a side issue, because her real point was that similarity and difference are products of human cognition, which I can broadly accept.

So how do we measure similarity and difference? Well, the “simplest” way is to measure the “distance” between two stimuli in the standard geometric way – this is how we measure the difference between colours in a colour space (about which more later), i.e., the root of the sum of the squares of the differences along each dimension. This concept has even been developed into the “universal law of generalisation”. This idea has achieved much but has major deficiencies.

Professor Hahne outlined some of the alternatives before describing her interest in (and defence of) the idea that the key to difference is the number of mental transformations required to change one thing into another – for instance, how different is a square from a triangle? Two transformations are required: first to think of the triangle and then to replace the square with the triangle, and so on.

In a more sophisticated way, the issue is the Kolmogorov complexity of the transformation. The shorter the program we can write to make the transformation, the more similar the objects are.
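
Kolmogorov complexity itself is uncomputable, but there is a well-known practical stand-in: use a real compressor and compute the normalised compression distance. The sketch below is my own illustration in Python, nothing from the seminar (the function name `ncd` is invented for this example); it captures the idea that the better two objects compress together, relative to how they compress alone, the more similar they are:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: a computable approximation to
    Kolmogorov-style similarity. Near 0 means 'very similar', near 1
    means 'very different'."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"square square square" * 20, b"square square square" * 20))  # close to 0
print(ncd(b"square square square" * 20, bytes(range(256)) * 3))         # much nearer 1
```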

This, it strikes me, has an important application in computer science, or at least it could have. To go back to the colour space issue: when I wrote the Perl module Image::Pngslimmer I had to write a lot of code that computed geometric distances between colour points – a task Perl is very poor at, as maths is slow there. This was to implement the so-called “median cut” algorithm (I am pleased to say the Wikipedia article on the median cut algorithm cites my code as an example, and it wasn’t even me who edited it to that, at least as far as I can remember!), in which colours are quantised to those at the centre of “median cut boxes” in the colour space. Perhaps there is a much simpler way to make this transformation and so compress the PNG more quickly?
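
For what it is worth, here is a rough Python sketch of the two pieces involved – the geometric colour distance and a bare-bones median cut. It is purely illustrative (not the Image::Pngslimmer code, which is Perl, and the helper names are invented for this post), and it makes no attempt to be fast:

```python
def nearest(colour, palette):
    """Closest palette entry by the 'root of the sum of the squares' distance
    (the root itself is not needed just to compare)."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(colour, p)))

def median_cut(pixels, n_boxes):
    """Bare-bones median cut: repeatedly split the box with the widest channel
    range at its median, then use each box's mean colour as a palette entry.
    Assumes plenty of distinct pixels (no empty-box handling)."""
    boxes = [list(pixels)]
    spread = lambda b, i: max(c[i] for c in b) - min(c[i] for c in b)
    while len(boxes) < n_boxes:
        box = max(boxes, key=lambda b: max(spread(b, i) for i in range(3)))
        boxes.remove(box)
        ch = max(range(3), key=lambda i: spread(box, i))   # widest channel
        box.sort(key=lambda c: c[ch])
        mid = len(box) // 2
        boxes += [box[:mid], box[mid:]]
    return [tuple(round(sum(c[i] for c in b) / len(b)) for i in range(3))
            for b in boxes]

palette = median_cut([(250, 10, 10), (240, 20, 5), (10, 10, 250), (5, 30, 240)], 2)
print(palette, nearest((245, 0, 0), palette))
```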

I asked Professor Hahne about this and she confirmed that her collaborator Professor Nick Chater of Warwick University is interested in this very question. When I have got this week out of the way I may have a look at his published papers and see if there is anything interesting there.

# Venn diagrams for 11 sets

Ask most people about set theory and you will get a blank look, but ask them about a Venn diagram and they are much more likely to understand: indeed Venn diagrams are so well grasped that Mitt Romney’s campaign for the US Presidency recently attempted to make use of them (though I am not sure it was much of a success, but that’s another story…)

So-called 2-Venn (two circles) and 3-Venn diagrams are very familiar. But Venn diagrams for larger numbers of sets that are (relatively) easy to grasp (I’ll explain what I mean by that below) are actually difficult to produce – and until last month nobody had managed to get beyond 7.

So, let’s state a few basic properties of any Venn diagram (here is a good general survey of Venn diagrams). Firstly, each region (face) is unique – there is only one region where the bottom curve intersects with the right curve alone, only one where it intersects with the left curve alone, only one where all three curves intersect (the grey region), and so on.

This image (taken from that survey, apologies) shows a series of set intersections which are not a Venn diagram:

For instance, we can see the two shaded areas both represent intersections of the ‘blue’ and ‘red’ sets.

A second point is that there is a finite number of intersections. In other words, segments of curves cannot lie on top of one another (in fact this rule means the intersections must be Eulerian points of zero length – as, following on from the last post about Aristotle’s Wheel Paradox, any segment of a curve is continuous and has an uncountably infinite number of points).

The 3-Venn example above illustrates the key points about easier-to-understand Venn diagrams: firstly it is simple – no intersection is of more than two curves – and secondly it is symmetric. In fact, if we are willing to ignore these properties we can draw Venn diagrams of any number of sets, each with less intelligibility than the last.

Drawing simple and symmetric Venn diagrams for larger numbers of sets is exceptionally difficult, and it has been proved that such symmetric $n$-Venn diagrams can only exist when $n$ is prime.
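
A quick way to see why primality matters is the standard counting argument: in an $n$-fold rotationally symmetric $n$-Venn diagram, the $\binom{n}{k}$ regions covered by exactly $k$ sets must be shared equally among the $n$ rotations, so $n$ must divide $\binom{n}{k}$ for every $0 < k < n$ – and that happens exactly when $n$ is prime. A few lines of Python (an illustrative check, with a made-up helper name) confirm the condition:

```python
from math import comb

def symmetry_possible(n: int) -> bool:
    # Necessary condition for an n-fold symmetric n-Venn diagram:
    # n must divide C(n, k) for all 0 < k < n (true exactly when n is prime).
    return all(comb(n, k) % n == 0 for k in range(1, n))

for n in range(2, 14):
    print(n, symmetry_possible(n))   # True only for 2, 3, 5, 7, 11 and 13
```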

So we have 2-Venns and 3-Venns, and mathematicians have managed 5-Venns and 7-Venns.

But, until now, simple symmetric 11-Venns have been elusive. Certainly 11-Venns have been around – as the example below shows:

This example is symmetric but it is not simple.

Now, though, a breakthrough has been made. Named newroz – the Kurdish name for the new year – the first simple, symmetric 11-Venn has come from Khalegh Mamakani and Frank Ruskey, both of the Department of Computer Science at the University of Victoria, Canada.

And it is beautiful:

That said, I don’t think it will be featuring in any presidential campaigns just yet – by definition there are $2^{11} - 1 = 2047$ intersecting regions, probably a bit more than even the keenest voter would care for.

Update: hello to students from F. W. Buchholz High School, hope you find the page useful.

# OLPC “fails in Peru”: Economist

Image via CrunchBase

The One Laptop Per Child (OLPC) project has failed to live up to its sponsors’ expectations in Peru, reports The Economist.

Part of the problem is that students learn faster than many of their teachers, according to Lily Miranda, who runs a computer lab at a state school in San Borja, a middle-class area of Lima. Sandro Marcone, who is in charge of educational technologies at the ministry, agrees. “If teachers are telling kids to turn on computers and copy what is being written on the blackboard, then we have invested in expensive notebooks,” he said. It certainly looks like that.

I was working for the Labour Party when, in his 1995 conference speech, Tony Blair made a pledge to deliver laptops to kids in schools (the exact details escape my memory over this distance but it was not quite at the OLPC level of provision). Even then I was a bit dubious – a computer needs to be for something – but the pledge was also extremely popular.

The problem in Britain – and I suspect in Peru also – was that computers were handed out to people to write documents, spreadsheets and presentations. But if you cannot write good English, or understand percentages, then having a new wordprocessor or spreadsheet is not going to help.

In Britain we have created a culture where computer science has been neglected in favour of teaching children how to use (as in type in) wordprocessors. It bores kids of all abilities and no wonder.

Computers need to be used as educational tools aligned with the core curriculum subjects if they are going to make a difference. This is why teaching some programming would be far more useful than teaching how to manipulate the last-but-one version of Microsoft Powerpoint.

I sent off my order for the Raspberry Pi today – the device which many hope will lead to a revival of computer science (as opposed to ECDL-type teaching) in British schools. I registered for it close to three months ago but was only given the option to “pre-order” it this week, so huge has the demand been. It reminds me of the “28 days for delivery” of the Sinclair era – though I am sure the Raspberry Pi’s makers are not making their money by banking the cash in the meantime, given today’s interest rates.

# Computer science in English schools: the debate rages on

World cup England (Photo credit: @Doug88888)

In recent months a new consensus has emerged about teaching ICT (information and communications technology) in England’s schools: namely that it has been sent up a blind alley where kids are taught little more than how to manipulate Microsoft’s “Office” products.

That recognition is a good thing, though the way in which the government were finally roused into action – by a speech from a Google bigwig – was not so edifying. If the previous Labour government had a distressing and disappointing attitude of worshipping the ground Bill Gates trod upon, the Conservative wing of the coalition seems mesmerised by Google (not least because of some very strong personal and financial ties between Google and leading Conservatives).

But recognising there is a problem and fixing it are two very different things. The proposals from the Education Secretary, Michael Gove, seem contradictory at best: on the one hand he’s said we need a new curriculum, on the other he’s seemingly refused to do anything to establish one. The revelation last week that he’s axed the bit of his department that might create such a curriculum did not inspire confidence.

But the pressure for change is still mounting. In tomorrow’s Observer John Naughton, author of the celebrated A Brief History of the Future: Origins of the Internet, launches his manifesto for ICT (as it’s a manifesto I have copied it in full, but you should really also read his article here):

1. We welcome the clear signs that the government is alert to the deficiencies in the teaching of information and communications technology (ICT) in the national curriculum, and the indications you and your ministerial colleagues have made that it will be withdrawn and reviewed. We welcome your willingness to institute a public consultation on this matter and the various responses you have already made to submissions from a wide spectrum of interested parties.

2. However, we are concerned that the various rationales currently being offered for radical overhaul of the ICT curriculum are short-sighted and limited. They give too much emphasis to the special pleading of particular institutions and industries (universities and software companies, for example), or frame the need for better teaching in purely economic terms as being good for “UK plc”. These are significant reasons, but they are not the most important justification, which is that in a world shaped and dependent on networking technology, an understanding of computing is essential for informed citizenship.

3. We believe every child should have the opportunity to learn computer science, from primary school up to and including further education. We teach elementary physics to every child, not primarily to train physicists but because each of them lives in a world governed by physical systems. In the same way, every child should learn some computer science from an early age because they live in a world in which computation is ubiquitous. A crucial minority will go on to become the engineers and entrepreneurs who drive the digital economy, so there is a complementary economic motivation for transforming the curriculum.

4. Our emphasis on computer science implies a recognition that this is a serious academic discipline in its own right and not (as many people mistakenly believe) merely acquiring skills in the use of constantly outdated information appliances and shrink-wrapped software. Your BETT speech makes this point clearly, but the message has not yet been received by many headteachers.

5. We welcome your declaration that the Department for Education will henceforth not attempt to “micro-manage” curricula from Whitehall but instead will encourage universities and other institutions to develop high-quality qualifications and curricula in this area.

6. We believe the proper role of government in this context is to frame high-level policy goals in such a way that a wide variety of providers and concerned institutions are incentivised to do what is in the long-term interests of our children and the society they will inherit. An excellent precedent for this has in fact been set by your department in the preface to the National Plan for Music Education, which states: “High-quality music education enables lifelong participation in, and enjoyment of, music, as well as underpinning excellence and professionalism for those who choose not to pursue a career in music. Children from all backgrounds and every part of the UK should have the opportunity to learn a musical instrument; to make music with others; to learn to sing; and to have the opportunity to progress to the next level of excellence.” Substituting “computing” for “music” in this declaration would provide a good illustration of what we have in mind as a goal for transforming the teaching of computing in schools. Without clear leadership of this sort, there is a danger schools will see the withdrawal of the programme of study for ICT in England as a reason for their school to withdraw from the subject in favour of English baccalaureate subjects.

7. Like you, we are encouraged by the astonishing level of public interest in the Raspberry Pi project, which can bring affordable, programmable computers within the reach of every child. But understanding how an individual machine works is only part of the story. We are rapidly moving from a world where the PC was the computer to one where “the network is the computer”. The evolution of “cloud computing” means that the world wide web is morphing into the “world wide computer” and the teaching of computer science needs to take that on board.

8. In considering how the transformation of the curriculum can be achieved, we urge you to harness a resource that has hitherto been relatively under-utilised – school governors. It would be very helpful if you could put the government’s weight behind the strategic information pack on Teaching Computer Science in Schools prepared by the Computing at School group, which has been sent to every head teacher of a state-maintained secondary school in England to ensure that this document is shared with the governors of these schools.

9. We recognise that a key obstacle to achieving the necessary transformation of the computing curriculum is the shortage of skilled and enthusiastic teachers. The government has already recognised an analogous problem with regard to mathematics teachers and we recommend similar initiatives be undertaken with respect to computer science. We need to a) encourage more qualified professionals to become ICT teachers and b) offer a national programme of continuing professional development (CPD) to enhance the teachers’ skills. It is unreasonable to expect a national CPD programme to appear out of thin air from “the community”: your department must have a role in resourcing it.

10. We recognise that teaching of computer science will inevitably start from a very low base in most UK schools. To incentivise them to adopt a rigorous discipline, computer science GCSEs must be added to the English baccalaureate. Without such incentives, take-up of a new subject whose GCSE grades will be more maths-like than ICT-like will be low. Like it or not, headteachers are driven by the measures that you create.

11. In summary, we have a once-in-a-lifetime opportunity to prepare our children to play a full part in the world they will inherit. Doing so will yield economic and social benefits – and ensure they will be on the right side of the “program or be programmed” choice that faces every citizen in a networked world.

# Red Plenty

One thing has occupied my free time more than anything else these last few days – Francis Spufford’s marvellous work of history and imagination, Red Plenty.

The book is a marvel, joining linear programming, economics, mathematics, cybernetics, computing, chemistry, textiles, politics, sociology, popular music, genetics and history in one long fabric. It is not quite a novel but nor is it history; the author himself calls it a “fairy tale”.

The ground on which it works is the Soviet Union between Stalin’s death in 1953 and what might be considered the cementing in of what was later called the “era of stagnation” in 1970. The main characters are the scientists and engineers who saw, in that time, a new hope for the USSR in Khrushchev’s claim that “this generation will know communism” – with a 1980 deadline – and in his willingness to indulge their hopes of a rational, mathematical reshaping of the Soviet system.

Novelisations of actual events and the actions of real and fictional people are interwoven with passages of historical and scientific commentary. The effect is that we can sympathise with the hopes and dreams of the scientists while knowing that they are destined for heart-breaking (for many at least) failure, as the essentially gangsterised and cynical nature of the state created by Stalin crushes hopes which, in any case, were always naive at best.

But along the way we get to understand why the Soviet Union excelled at maths (and to a lesser extent computing science) – it was both free of the pollutant of Marxism-Leninism and valued by the Marxist-Leninists – why it scared the west with epic economic growth in the 1950s, and why it nonetheless failed its citizens economically: nobody lost their job or reputation by failing the consumer, but if you failed to deliver a capital good you risked both.

We also get a portrait of a society that is much more granulated than the simple riffs of anti-communism would have us believe.

At the top we see Khrushchev as a fool who had done many evil things but who also hoped to make amends, and Brezhnev and Kosygin as the champions of a new wave of repression and stultification but also as men frightened by how earlier reforms had led to massacres and desperate not to see that return.

But most of all we see the scientists and their hopes get ground down. They all begin as believers and have only three choices in the end: to rebel and lose every physical thing, to compromise and lose hope, or to opt out of the real world and choose only science.

The bitterness of their defeat, and that of all those who hoped for a better world after Stalin, is summed up in the words of a (real) song, sung in the book by Alexander Galich – a writer of popular songs turned underground critic who, after the shock of the public performance of his satirical works recounted here, was forced into exile.

We’ve called ourselves adults for ages

We don’t try to pretend we’re still young

We’ve given up digging for treasure

Far away in the storybook sun.

(As a companion work I’d also recommend Khrushchev: The Man and His Era)

# Computer scientists’ lousy citation style

Image via Wikipedia

I am reading this book: Soft Real-Time Systems: Predictability vs. Efficiency, and I am struck, once again, by the truly lousy style of publication reference that seems to be preferred by so many computer scientists.
The style used in the book appears to be that favoured by the American Mathematical Society – the so-called “authorship trigraph” – with references made up of letters from the author’s name followed by the last two digits of the year of original publication, e.g., [Bak91], which refers to this entry in the bibliography:

[Bak91]        T.P. Baker. Stack-based scheduling of real-time processes. Journal of Real Time Systems, 3, 1991.

Now it is possible that, were I an expert in the field, I might recognise this reference, but it is far from helpful. When referencing papers written by multiple authors the system is hopeless – it uses the first letters of the first three authors’ names and ignores the rest: e.g., $[DGK^+02]$ is a real reference in the book to a paper with eight authors. I really doubt many people would get that straight away.

But at least this reference system contains slightly more information than the IEEE‘s citation system, which demands that papers are merely referenced by a bracketed number in the text, e.g., [1].

These reference systems are so widely used that I worried my own use of the Chicago system – which specifies author and year, e.g., (Denning, 1970) – would be frowned upon at Birkbeck, but a re-read of the regulations showed their demand was simply for a consistent and well-recognised system.

The ACM, too, promotes a sensible citation format, e.g., [Denning 1970].

Does this matter? Yes. I am sure many readers of these books and papers are students who are conducting literature reviews or similar exercises. Reading the original reference may often be important, and having to flick back and forth to a bibliography to check the meaning of an incomprehensible reference is not calculated to add to the sum of human happiness.

(I don’t have any real complaints about the book though – except that the translation is plainly a bit stilted – for instance, the first sentence of the book refers to real time systems being investigated “in the last years” – a fairly common mistake in syntax from non-English speakers and one that the editors really ought to have corrected. But the occasional infelicity of language does not detract from the book’s overall appeal.)

# Working set heuristics and the Linux kernel: my MSc report

My MSc project was titled “Applying Working Set Heuristics to the Linux Kernel” and my aim was to test some local page replacement policies in Linux, which uses a global page replacement algorithm, based on the “2Q” principle.
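
As a very rough picture of what that global, two-list reclaim looks like (this is only a toy Python sketch in the spirit of the “2Q” idea, with invented names, and nothing like the kernel’s actual code): pages sit on an inactive list until a second reference promotes them to an active list, and victims are taken from the cold end of the inactive list, which is refilled by demoting active pages.

```python
from collections import OrderedDict

class TwoListCache:
    """Toy two-list reclaim: new pages are on probation on the inactive list,
    a second reference promotes them to the active list, and eviction always
    takes the coldest inactive page (demoting an active page first if the
    inactive list is empty)."""
    def __init__(self, nframes):
        self.nframes = nframes
        self.active = OrderedDict()     # hot pages, most recently used last
        self.inactive = OrderedDict()   # cold / once-referenced pages

    def access(self, page):
        if page in self.active:
            self.active.move_to_end(page)
        elif page in self.inactive:           # referenced again: promote
            del self.inactive[page]
            self.active[page] = True
        else:                                 # fault: make room, then add
            if len(self.active) + len(self.inactive) >= self.nframes:
                if not self.inactive:         # refill inactive from active
                    old, _ = self.active.popitem(last=False)
                    self.inactive[old] = True
                self.inactive.popitem(last=False)   # evict the coldest page
            self.inactive[page] = True
```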

There is a precedent for a local element in Linux: the so-called “swap token” is a local page replacement policy that has been part of the kernel for some years.

My aim was to see if a local replacement policy graft could help tackle “thrashing” (when a computer spends so much time trying to manage memory resources – generally swapping pages back and forth to disk – it makes little or no progress with the task itself).

The full report (uncorrected – the typos have made me shudder all the same) is linked at the end; what follows is a relatively brief and simplified summary.

Fundamentally I tried two approaches: acting on large processes when the number of free pages fell to one of the watermark levels used in the kernel and acting on the process last run or most likely to run next.

For the first, my thinking – backed by some empirical evidence – was that the largest process tended to consume much more memory than even the second largest. For the second, the thought was that making the process next to run more memory-efficient would make the system as a whole run faster, and that, in any case, the process next to run was also quite likely (and again some empirical evidence supported this) to be the biggest consumer of memory in the system.

To begin I reviewed the theory that underlies the claims for the superiority of the working set approach to memory management – particularly that it can run optimally with lower resource use than an LRU (least recently used) policy.

Peter Denning, the discoverer of the “working set” method and its chief promoter, argued that programs in execution do not smoothly and slowly change their fields of locality, but transition from region to region rapidly and frequently.

The evidence I collected – using the Valgrind program and some software I wrote to interpret its output – showed that Denning’s arguments appear valid for today’s programs.

Here, for instance, is the memory access pattern of Mozilla Firefox:

Working set size can therefore vary rapidly, as this graph shows:

It can be seen that peaks of working set size often occur at the point of phase transition – as the process will be accessing memory from the two phases at the same time or in rapid succession.
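
The working set itself has a simple definition: $W(t, \tau)$ is the set of distinct pages a process has referenced in the last $\tau$ references. A toy trace (my own illustration here, not the Valgrind data) shows the phase-transition peak described above:

```python
def working_set_sizes(trace, window):
    """|W(t, tau)|: the number of distinct pages referenced in the last
    `window` references, at each point t of the reference trace."""
    return [len(set(trace[max(0, t - window + 1): t + 1]))
            for t in range(len(trace))]

# Toy two-phase trace: pages 0-3 for a while, then pages 10-13.
trace = [t % 4 for t in range(40)] + [10 + t % 4 for t in range(40)]
sizes = working_set_sizes(trace, window=20)
print(max(sizes))   # 8: just after the phase change, pages from both phases
                    # are still inside the window, then the size falls back to 4
```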

Denning’s argument is that the local policy suggested by the working set method allows for this rapid change of locality – as the memory space allocated to a given program is free to go up and down (subject to the overall constraint on resources, of course).

He also argued that the working set method will – at least in theory – deliver a better space-time product (a measure of overall memory use) than a local LRU policy. Again my results confirmed his earlier findings, in that they showed that, for a given average size of the set of pages in memory, the working set method will ensure longer times between page faults than a local LRU policy – as shown in this graph:

Here the red line marks the theoretical performance of a working set replacement policy and the blue line that of a local LRU policy. The y-axis marks the average number of instructions executed between page faults, the x-axis the average resident set size. The working set method clearly outperforms the LRU policy at low resident set values.

The ‘knee’ in either plot, where $\frac{dy}{dx}$ is maximised, is also the point of lowest space-time product – and this occurs at a much lower resident set size for the working set method than for local LRU.

So, if Denning’s claims for the working set method are valid, why is it that no mainstream operating system uses it? VMS and Windows NT (which share a common heritage) use a local page replacement policy, but both are closer to the page-fault-frequency replacement algorithm – which varies fixed allocations based on fault counts – than a true working set-based replacement policy.

The working set method is just too difficult to implement – pages need to be timestamped when they are used and, to really secure the claimed space-time product benefit, they also need to be evicted from memory at a specified time. Doing any of that would require specialised hardware or complex software or both, so approximations must be used.

“Clock pressure”

For my experiments I concentrated on manipulating the “CLOCK” element of the page replacement algorithm: this removes or downgrades pages if they have not been accessed in the time between alternate sweeps of an imaginary second hand of an equally imaginary clock. “Clock pressure” could be increased – i.e., pages made more vulnerable to eviction – by systematically marking them as unaccessed, while pages could be preserved in memory by marking them all as having been accessed.
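
For readers who have not met it, a minimal version of the clock (second-chance) idea looks something like this – a toy Python sketch rather than the kernel’s code, but it shows what “clock pressure” means in practice: clearing reference bits makes pages easier to evict, while setting them buys pages another sweep.

```python
class Clock:
    """Toy CLOCK / second-chance replacement over a fixed set of frames."""
    def __init__(self, nframes):
        self.pages = [None] * nframes    # page held in each frame
        self.ref = [0] * nframes         # reference ('accessed') bits
        self.hand = 0

    def access(self, page):
        if page in self.pages:                        # hit: mark as accessed
            self.ref[self.pages.index(page)] = 1
            return
        while self.ref[self.hand]:                    # miss: sweep for a victim
            self.ref[self.hand] = 0                   # second chance used up
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page                  # evict victim, load page
        self.ref[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)

    def pressure(self, target_pages, increase=True):
        """'Clock pressure' in the sense used above: mark a set of pages as
        unaccessed (easier to evict) or as accessed (safe for one more sweep)."""
        for i, p in enumerate(self.pages):
            if p in target_pages:
                self.ref[i] = 0 if increase else 1
```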

The test environment was compiling the Linux kernel – and I showed that the time taken for this was highly dependent on the memory available in a system:

The red line suggests that, for all but the lowest memory, the compile time is proportional to $M^{-4}$, where $M$ is the system memory. I don’t claim this is a fundamental relationship, merely what was observed in this particular set-up (I have a gut feeling it is related to the number of active threads – this kernel was built using the -j3 switch, and at the low-memory end the swapper was probably more active than the build, but again I have not explored this).

Watermarks

The first set of patches I tried were based on waiting for free memory in the system to sink to one of the “watermarks” the kernel uses to trigger page replacement. My patches looked for the largest process then either looked to increase clock pressure – ie., make the pages from this large process more likely to be removed – or to decrease it, ie., to make it more likely these pages would be preserved in memory.

In fact the result in either case was similar – at higher memory values there seemed to be a small but noticeable decline in performance, but at low memory values performance declined sharply – possibly because moving pages between the “queues” of cached pages involves locking (though, as later results showed, also likely because the approach simply does not interact optimally with the existing mechanisms for keeping or evicting pages).

The graph below shows a typical result of an attempt to increase clock pressure – patched times are marked with a blue cross.

The second approach was to interact with the “completely fair scheduler” (CFS) and increase or decrease clock pressure on the process least likely to run or most likely to run next.

The CFS orders processes in a red-black tree (a semi-balanced tree) and the rightmost node is the process least likely to run next and the leftmost the process most likely to run next (as it has run for the shortest amount of virtual time).
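
To make the leftmost/rightmost terminology concrete, here is a toy model (the kernel uses an actual red-black tree keyed on vruntime, not a Python heap, and these task names are made up):

```python
import heapq

# Runnable tasks keyed by vruntime: the smallest vruntime is the 'leftmost'
# node and runs next; the largest is the 'rightmost' and is least likely to
# run soon.
runqueue = [(12.5, "firefox"), (3.1, "make"), (48.9, "updatedb")]
heapq.heapify(runqueue)

print("next to run (leftmost):", runqueue[0][1])                  # make
print("least likely to run soon (rightmost):", max(runqueue)[1])  # updatedb
```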

As before the idea was to either free memory (increase clock pressure) or hold needed pages in memory (decrease clock pressure). The flowchart below illustrates the mechanism used for the leftmost process (and decreasing clock pressure):

But again the results were generally similar – a general decline, and a sharp decline at low memory values.

(In fact, locking the leftmost process’s pages in memory actually had little effect – as shown below:)

But when the same approach was taken to the rightmost process – i.e., the process that has run for the longest time (and presumably may also run for a long time in the future) – the result was a catastrophic decline in performance at small memory values:

And what is behind the slowdown? Using profiling tools, the biggest reason seems to be that the wrong pages are being pushed out of the caches and need to be fetched back in. At 40MB of free memory both patched and unpatched kernels show similar profiles, with most time spent scheduling and waiting for I/O requests – but the slowness of the patched kernel shows that this has to be done many more times there.

There is much more in the report itself – including an examination of Denning’s formulation of the space-time product, which I conclude is flawed (update: in fairness to Peter Denning, who has pointed this out to me, this is as regards his approximation of the space-time product – Denning’s modelling in the 1970s also accounted for the additional time required to manage the working set) as it disregards the time required to handle page replacement – and the above is all a (necessary) simplification of what is in the report, so if you are interested please read that.

Applying working set heuristics to the Linux kernel

# Master of Science

I am very pleased to report that I have been awarded the degree of Master of Science in Computer Science – with distinction.

So, a PhD next? Maybe, need to explore the options.

I think I will post up my project report in a few days for those of you who might be interested in Linux memory management and page reclaim.