The next few posts here will essentially be about me rediscovering some A-level maths probability theory and writing it down as an aid to memory.

All of this relates to whether the length of time pages spend in the working set is governed by a stochastic (probabilistic) process or a deterministic process. Why does it matter? Well, if the process were stochastic then in low-memory situations a first-in, first-out approach, or a simple single-queue LRU approach, to page replacement might work well in comparison to the 2Q LRU approach currently in use. It is an idea worth a little exploring, anyway.

So, now the first maths aide-mémoire – simple random/probabilistic processes are binomial – something happens or it does not. If the probability of it happening in a unit time period is $p$ then the probability it will not happen is $1 - p$. For instance this might be the probability that an atom of Uranium-235 shows α-particle decay (the probability that one U-235 atom will decay is given by its half-life of 700 million years, ie., about $2.2 \times 10^{16}$ seconds, or a probability, if my maths is correct, of a given individual atom decaying in any particular second of approximately $3 \times 10^{-17}$).
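As a quick check of that arithmetic, here is a short calculation assuming the standard exponential-decay relationship, where the per-second decay probability for one atom is effectively the decay constant λ = ln 2 / t½ (a fine approximation when the probability is this tiny):

```python
import math

# Half-life of U-235: roughly 700 million years, converted to seconds
HALF_LIFE_YEARS = 700e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600
half_life_s = HALF_LIFE_YEARS * SECONDS_PER_YEAR  # ~2.2e16 seconds

# Decay constant: for p << 1 this is, to excellent approximation,
# the probability that a given atom decays in any particular second
decay_constant = math.log(2) / half_life_s

print(f"half-life: {half_life_s:.2e} s")                      # ~2.21e16
print(f"per-second decay probability: {decay_constant:.2e}")  # ~3.1e-17
```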

*(In operating systems terms my thinking is that if the time pages spent in a working set were governed by similar processes then there would be a half-life for every page that is read in. If we discarded pages after they had been in the system for such a half-life, or better yet some multiple of it, then we could have a simpler page replacement system – we would not need to use a CLOCK algorithm, just record the time a page entered the system, stick it in a FIFO queue and discard it when the entry time was more than a half-life ago.*

*An even simpler case might be to just discard pages once the number stored rose above a certain ‘half-life’ limit. Crude, certainly, but maybe the simplicity would compensate for the lack of sophistication.*
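That cruder count-based variant amounts to a bounded FIFO: no timestamps at all, just evict the oldest page whenever the resident count would exceed a fixed limit. A sketch (function name and interface invented for illustration):

```python
import collections

def page_in_bounded(queue, page_id, limit):
    """Crude count-based variant: if bringing in this page would push
    the resident count over the limit, evict the oldest page first.
    Returns the evicted page id, or None if nothing was evicted."""
    victim = None
    if len(queue) >= limit:
        victim = queue.popleft()  # oldest page goes first
    queue.append(page_id)
    return victim
```

For example, with `limit=3`, paging in "a", "b", "c", "d" in turn evicts "a" on the fourth call, leaving "b", "c", "d" resident.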

*Such a system would not work very well for a general/desktop operating system – as the graph for the MySQL daemon referred to in the previous blog shows, even one application can appear to show different distributions of working set sizes. But what if you had a specialist system where the OS only ran one application – then tuning might work: perhaps that could even apply to mass-market electronic devices, such as Android phones – after all, the Android (Dalvik) VM is what is being run each time.)*

###### Related articles

- What are four requirements for binomial distribution (wiki.answers.com)
- Is the time pages are in the working set stochastic? (cartesianproduct.wordpress.com)
- The foundations of Statistics: a simulation-based approach (r-bloggers.com)
- Using math symbols in wordpress pages and posts – LaTex (1) (etidhor.wordpress.com)
- Using the binomial GLM instead of the Poisson for spike data (xcorr.net)
- Diagram for a Bernoulli process (using R) (r-bloggers.com)
- LaTeX (climbinggecko.wordpress.com)
- How to understand segmented binomial heaps described in (stackoverflow.com)
