# Better algorithm found

Late last night, in a state of some despair, I pleaded for computing power help with my MSc project. Today, having thought about it long and hard, I found a better algorithm and sped the whole thing up by a factor of about 50.

Like, I am sure, many people previously in my position, my inspiration was the classic Programming Pearls.

This is a Birkbeck set text, but it's also one that I did not read when I should have last year – or rather, I read only a small part of it. Luckily I am now going through it systematically, and yesterday I was reading about how much of a difference a better algorithm can make – not that I did not know this, but it helped provide the determination to find one.

So what did I do? Firstly, I realised that there is a specialist implementation of the Java Map interface that, as its Javadoc explicitly says, is well suited to building LRU caches (essentially what I am seeking to model): LinkedHashMap.
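A minimal sketch of what the Javadoc describes: LinkedHashMap's three-argument constructor takes a final boolean that switches the map from insertion order to access order, so iteration runs from least-recently to most-recently accessed entry. The cache contents below are illustrative, not my project's actual data.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    // Build an access-ordered map: the third constructor argument (true)
    // selects access order rather than the default insertion order.
    public static Map<String, Long> accessOrderedCache() {
        return new LinkedHashMap<>(16, 0.75f, true);
    }

    public static void main(String[] args) {
        Map<String, Long> cache = accessOrderedCache();
        cache.put("a", 1L);
        cache.put("b", 2L);
        cache.put("c", 3L);
        cache.get("a"); // touching "a" moves it to the most-recent end
        // Iteration order is now least-recently to most-recently accessed.
        System.out.println(cache.keySet()); // prints [b, c, a]
    }
}
```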

This sets up a last-accessed order, so it is no longer necessary to search through the whole Map to find the out-of-date pages. Using an Iterator over the entry set, I only need to advance as far as the first page that is not out of date and then stop.
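The eviction step might look something like the following sketch. It assumes the map stores each page's last reference time as its value (so access order and timestamp order coincide, provided the value is refreshed on every access); the names and types are illustrative, not the project's actual code.

```java
import java.util.Iterator;
import java.util.Map;

public class ExpirySweep {
    // Remove expired entries from the front of an access-ordered map.
    // Because iteration runs least- to most-recently used, we can stop
    // at the first entry whose reference time is still within the window:
    // every later entry is at least as recent.
    public static int evictOlderThan(Map<Integer, Long> cache, long cutoff) {
        int evicted = 0;
        Iterator<Map.Entry<Integer, Long>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Integer, Long> entry = it.next();
            if (entry.getValue() >= cutoff) {
                break; // first fresh entry: no need to look any further
            }
            it.remove(); // safe removal during iteration
            evicted++;
        }
        return evicted;
    }
}
```

Note the use of Iterator.remove() rather than Map.remove() inside the loop: removing through the iterator avoids a ConcurrentModificationException.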

While checking that this worked, I noticed that temporal locality meant that in many cases the LRU page was still inside the time frame I was seeking to check – in other words, literally billions of checks for outdated pages were taking place needlessly. As pages in the cache cannot get "older" (i.e. their reference time cannot go backwards), at time $\tau + \delta$ the oldest page cannot be any older than it was at $\tau$ – hence if we do not check again until we have reached the point where pages with time $\tau$ are due to expire, we will not miss any needed evictions.
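One way to sketch that shortcut: cache a lower bound on the LRU page's reference time after each sweep, and skip any sweep whose cutoff has not yet passed that bound. The class and method names here are hypothetical, invented for illustration.

```java
public class LazySweep {
    // Cached lower bound on the oldest (LRU) entry's reference time.
    // Reference times only move forward, so this bound stays valid
    // until a sweep actually runs and updates it.
    private long oldestSeen = Long.MIN_VALUE; // conservative: always sweep at first

    // A sweep can only evict something if the cutoff has moved past
    // the oldest entry's reference time.
    public boolean sweepNeeded(long cutoff) {
        return cutoff > oldestSeen;
    }

    // After a sweep, record the reference time of the surviving LRU entry.
    public void recordOldest(long t) {
        oldestSeen = t;
    }
}
```

With this guard in place, the billions of needless front-of-map checks collapse into a single comparison per time step, and the full sweep runs only when it can actually evict something.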

The result of these two changes is a massive speed-up in the code – by a factor of 40 to 50.