Trying to find a way to calculate the factorials of large (very large) numbers, so as to at least work through my example for Uranium 235 that I considered when working out, for my own benefit, how the binomial distribution worked.
(Part 1 is here – these notes are to assist me, rather than contain any real news!)
So, if the probability that an event will happen to a single entity in a unit of time is p and the probability it will not happen is q = 1 − p, what is the probability that a given number of events, k, will take place?
Considering the Uranium-235 example again, let's say there are a very large number, N, of atoms: how many will decay and emit an α-particle in a given second?
Well, we know what the mean number of decays we should expect is: Np. But this is a random process and so it will not always deliver that result; instead there will be a distribution of results around this mean.
To show where the pmf comes from, let's look at a much simpler example – tossing a coin four times and seeing if we get exactly one head, ie k = 1.
One way we could get this is like this: HTTT. The probability of that happening is (1/2)^4 = 1/16. But that is not the only way exactly one head could be delivered: obviously there are four ways: HTTT, THTT, TTHT, TTTH and so the probability of exactly one head is 4 × 1/16 = 1/4.
(For two heads we have six ways of arranging the outcome: HHTT, HTHT, HTTH, THHT, THTH, TTHH and so the probability is 6 × 1/16 = 3/8. For three heads the probability is the same as for three tails (ie for one head), 1/4, and the probabilities for all heads and all tails are both 1/16. Cumulatively this covers all the possibilities and adds up to 1/4 + 3/8 + 1/4 + 1/16 + 1/16 = 1.)
The generalisation of this gives us a pmf thus: P(k) = C(N, k) p^k q^(N−k), where C(N, k) = N!/(k!(N−k)!) is the binomial coefficient, can be spoken as "N choose k", and is the number of ways of distributing k successes from N trials.
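The coin-toss arithmetic above generalises directly into a few lines of code – a minimal sketch (the function name `binomial_pmf` is mine, not from any particular library):

```python
import math

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p: C(n, k) * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# The coin-toss example: four tosses, p = 1/2.
print(binomial_pmf(1, 4, 0.5))  # exactly one head -> 0.25
print(binomial_pmf(2, 4, 0.5))  # exactly two heads -> 0.375
```

Summing the pmf over k = 0..4 gives 1, matching the cumulative check above.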
There are approximately 2.6 × 10^21 Uranium atoms in a gramme of the substance and calculating factorials of such large numbers efficiently requires an awful lot of computing power – my GNU calculator has been at it for some time now, maxing out one CPU on this box for the last 14 minutes, so I guess I am going to have to pass on my hopes of showing you some of the odds.
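In fact one never needs to form N! itself: working with log-factorials (via the log-gamma function) keeps everything in ordinary floating point, and for huge N with tiny p the Poisson limit with mean Np is a standard, cheaper approximation. A sketch, assuming nothing beyond the standard library (the function names are mine):

```python
import math

def log_binomial_pmf(k, n, p):
    """Natural log of the binomial pmf, using lgamma so that n can be
    astronomically large without ever computing n! directly."""
    log_choose = (math.lgamma(n + 1) - math.lgamma(k + 1)
                  - math.lgamma(n - k + 1))
    return log_choose + k * math.log(p) + (n - k) * math.log1p(-p)

def poisson_pmf(k, mean):
    """Poisson limit of the binomial for large n, small p, mean = n*p."""
    return math.exp(k * math.log(mean) - mean - math.lgamma(k + 1)) if k > 0 \
        else math.exp(-mean)
```

For the small coin-toss case this agrees with direct computation; for N of the order 10^21 it returns in microseconds where the factorial approach grinds for ever.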
I think there are now going to be a few posts here which essentially are about me rediscovering some A level maths probability theory and writing it down as an aid to memory.
All of this is related to whether the length of time pages are part of the working set is governed by a stochastic (probabilistic) process or a deterministic process. Why does it matter? Well, if the process was stochastic then in low memory situations a first-in, first-out approach, or simple single queue LRU approach to page replacement might work well in comparison to the 2Q LRU approach currently in use. It is an idea that is worth a little exploring, anyway.
So, now the first maths aide memoire – simple random/probabilistic processes are binomial – something happens or it does not. If the probability of it happening in a unit time period is p then the probability it will not happen is q = 1 − p. For instance this might be the probability that an atom of Uranium 235 shows α-particle decay (the probability that one U 235 atom will decay is given by its half-life of 700 million years ie., approximately 2.2 × 10^16 seconds, or a probability, if my maths is correct, of a given individual atom decaying in any particular second of approximately 3 × 10^−17).
(In operating systems terms my thinking is that if the time pages spent in a working set were governed by similar processes then there would be a half life for every page that is read in. If we discarded pages once they had been in the system for such a half life, or better yet some multiple of the half life, then we could have a simpler page replacement system – we would not need to use a CLOCK algorithm, just record the time a page entered the system, stick it in a FIFO queue and discard it when the entry time was more than a half life ago.
An even simpler case might be to just discard pages once the number stored rose above a certain 'half life' limit. Crude, certainly, but maybe the simplicity would compensate for the lack of sophistication.
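The timestamped-FIFO idea can be sketched as a toy model – everything here (class name, the half-life value, the clock injection) is my invention purely for illustration, not anything from the kernel:

```python
from collections import deque
import time

class HalfLifeFIFO:
    """Toy model of the scheme above: pages queue up in arrival order
    and are evicted once resident longer than the half-life.
    No CLOCK bits, no reference tracking - just a timestamped FIFO."""

    def __init__(self, half_life=5.0, clock=time.monotonic):
        self.half_life = half_life
        self.clock = clock
        self.queue = deque()          # (entry_time, page), oldest first

    def admit(self, page):
        self.queue.append((self.clock(), page))

    def reclaim(self):
        """Evict and return every page older than the half-life."""
        evicted = []
        now = self.clock()
        while self.queue and now - self.queue[0][0] > self.half_life:
            evicted.append(self.queue.popleft()[1])
        return evicted
```

Because only the head of the queue is ever inspected, reclaim cost is proportional to the number of pages actually evicted – which is the whole attraction over scanning reference bits.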
Such a system would not work very well for a general/desktop operating system – as the graph for the MySQL daemon referred to in the previous blog shows, even one application could seem to show different distributions of working set sizes. But what if you had a specialist system where the OS only ran one application – then tuning might work: perhaps that could even apply to mass electronics devices, such as Android phones – after all the Android (Dalvik) VM is what is being run each time.)
Reading about the Monte Carlo method has set me thinking about this and how, if at all, it might be applied to page reclaim in the Linux kernel.
In my MSc report I show that working set size is not normally distributed – despite occasional claims to the contrary in computer science textbooks. But it is possible that a series of normal distributions are overlaid – see the graphic below:
The first question is: how do I design an experiment to verify that these are, indeed, a series of normal distributions?
(I may find out how I have done in the degree in the next week or so – wish me luck)
I imagine in Michael Gove‘s world, this has been a good week. The UK’s secretary of state for education has been in the news a lot this week, and that seems to be the key metric for him – after all his qualifications for the job essentially seem to be that he was once a journalist (and a militant and active trade unionist – a friend who worked with him at the BBC once told me he was deployed to ensure that “the Tories all came out” during disputes at the Corporation in 1994.)
The two equal pinnacles of Mr Gove’s week would appear to be his writing a preface (!) to the Bible that he is sending to all schools (he doesn’t seem to understand that Catholic schools – of which there are rather a lot – will not use the text he is sending them, never mind the questions of what the state-maintained Jewish and Muslim schools will think) and a speech he gave to Cambridge University earlier in the week where he waxed lyrical about high literature but seemed to have nothing or next-to-nothing to say about engineering, maths and science.
Email is no longer the killer application of the internet, certainly. Designed by scientists to send messages to other scientists and so built around the notion that all users were acting in good faith, it is weak and that is, no doubt, contributing to the rise of social media as an alternative means of communication.
But there is no reason why we should replace one broken security model – that of email – with another – a reliance on proprietary software (Facebook is proprietary after all).
Email will last because it is open. But maybe someone could and should write a better email.
Here is a complete graph of the memory use and fault count for a run of ls -lia.
As you can see there are only soft/minor faults here – as one would expect (this was a machine with a lot of memory), as the C library that ls links against will already be loaded in memory (presumably the filesystem information was also in memory).
But there are a lot of soft faults – and these too have a cost, even if nothing like the cost of a hard fault. For a start each soft page fault almost certainly indicates a miss in the processor cache.
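On Linux you can watch these minor-fault counts for yourself from the `getrusage` interface – a small sketch (the page-touching loop is just my way of provoking faults; exact counts will vary by allocator and kernel):

```python
import resource

def minor_faults():
    # ru_minflt is the cumulative soft/minor fault count for this process
    return resource.getrusage(resource.RUSAGE_SELF).ru_minflt

before = minor_faults()
data = bytearray(50 * 4096)       # allocate and touch ~50 fresh pages
for i in range(0, len(data), 4096):
    data[i] = 1
after = minor_faults()
print(after - before)
```

Each of those faults is cheap next to a disk read, but none of them is free – the fault handler runs, and the relevant cache lines are cold.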
The paper linked here also gives a lot of information about Linux’s generation of soft/minor faults and their performance impact – it seems that the kernel is designed to deliver system wide flexibility at the expense of performance.
So, the book is discussing how to serialize classes using Android's Parcelable interface, and makes this correct point about serializing an enum type:
“Be sure, though, to think about future changes to data when picking the serialized representation. Certainly it would have been much easier in this example to represent state as an int whose value was obtained by calling state.ordinal. Doing so, however, would make it much harder to maintain forward compatibility for the object. Suppose it becomes necessary at some point to add a new state … this trivial change makes new versions of the object completely incompatible with earlier versions.”
But then discussing de-serialization, the book states, without comment:
“The idiomatic way to do this is to read each piece of state from the Parcel in the exact same order it was written in writeToParcel (again, this is important), and then to call a constructor with the unmarshaled [sic] state.”
Now, technically, these passages are not in disagreement – but it is clearly the case that the de/serialization process is highly coupled to the data design – something that ought to be pointed out, especially if we are going to make a big deal of it in the serialization phase.
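The book's point is language-independent, so a sketch in Python rather than the book's Java makes it concrete (the names `marshal`/`unmarshal` are mine, not Android's): encode by name and old data survives new states; encode by ordinal and it does not.

```python
from enum import Enum

class State(Enum):
    CREATED = "CREATED"
    RUNNING = "RUNNING"
    # A later version can add new states here without breaking data
    # already serialized by name - not true of ordinal encoding.

def marshal(state):
    return state.name                 # stable across reorderings/additions

def marshal_fragile(state):
    # ordinal-style encoding: breaks as soon as declaration order changes
    return list(State).index(state)

def unmarshal(name):
    return State[name]

assert unmarshal(marshal(State.RUNNING)) is State.RUNNING
```

If a `PAUSED` state were later inserted before `RUNNING`, the ordinal-style value 1 would silently decode to the wrong state, while the name `"RUNNING"` would still round-trip correctly – exactly the forward-compatibility hazard the book warns about.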
Back in about 1997 I bought a book about this new programming environment – it seemed something bigger than a language but smaller than an operating system – called Java.
Back then the idea seemed great – write once, run anywhere – but there was a lot of scepticism and, of course, Microsoft tried to poison the well through the tactics of “embrace and extend” with their J++ offering. All of that made it look as though Java was going nowhere.
I wrote a couple of applets – one was a “countdown” timer for Trevor Philips‘s mayoral election website in 1999, another was a SAX based parser for the largely Perl-based content management system I wrote for the Scottish Labour Party the following year, ahead of the 2001 election. But no one seemed to like applets much – it seems ridiculous now, but the 90K download needed for the SAX parser really slowed down the Scottish party’s site, even though I was pretty proud of the little newsticker it delivered (along with annoying teletype noises as it went). I forgot about Java.
But, of course, that was wrong. Java is the programming language du jour these days, and Microsoft's responses to the success of Java and the failure of J++ – C# and .NET – are also big.
Android is, of course, Java's most prominent offering these days – literally millions of people will be running Android apps even as I write this and thousands of Android phones are being bought across the world daily. Time to get reacquainted, especially as my new job is once more about political communications.
But, as I discovered with C++ when I came back to it after over a decade for my MSc, Java has moved on a fair bit in that time and, unlike with C++, I cannot say all the progress seems to be positive. Indeed Java seems to thrive on a particularly ugly idiom, with developers encouraged to write constructors of anonymous classes inline in method call arguments – ugh.
I can certainly see the beauty of Groovy more clearly than ever, too. Though being an old time Perl hacker makes me resent Java’s heavy duty static typing in any case.
To help me through all this I have been reading O’Reilly‘s Programming Android: Java Programming for the New Generation of Mobile Devices. Now, usually O’Reilly’s books are all but guaranteed to be the best or close to the best of any on offer, but I have my doubts that is the case with this one – it seems to be sloppily edited (eg at different times it is difficult to follow whether one is being advised to use the Android SDK or the Eclipse editor) and falls between being a comprehensive introduction to Android programming and a guide for Java hackers to get on with it. It feels less than ordered, to be honest.
Now, maybe this is a function of the language and the complexity of the environment, I don’t know. But I would welcome any alternative recommendations if anyone has some.