Very small Turing machines


[Image: state diagram of a Turing machine, by Center for Image in Science and Art – UL, via Flickr]

I am pretty busy with work now, so one of the things I had planned to do – write a simple Turing Machine in Groovy – will have to wait.

In the meantime here are some very small Turing machines to wonder over.
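
Since the Groovy version will have to wait, here is a minimal sketch in Java of one of the smallest machines of all – the two-state, two-symbol “busy beaver”, which writes four 1s and halts after six steps. The transition table is the standard published one; the class and names are my own, and it assumes a recent JDK (for records):

    import java.util.HashMap;
    import java.util.Map;

    public class TinyTuringMachine {
        // One transition: the symbol to write, the head movement
        // (+1 right, -1 left) and the next state.
        record Rule(int write, int move, String next) {}

        public static void main(String[] args) {
            Map<String, Rule> rules = new HashMap<>();
            rules.put("A0", new Rule(1, +1, "B"));
            rules.put("A1", new Rule(1, -1, "B"));
            rules.put("B0", new Rule(1, -1, "A"));
            rules.put("B1", new Rule(1, +1, "HALT"));

            Map<Integer, Integer> tape = new HashMap<>(); // sparse tape; blanks are 0
            int head = 0, steps = 0;
            String state = "A";

            while (!state.equals("HALT")) {
                Rule r = rules.get(state + tape.getOrDefault(head, 0));
                tape.put(head, r.write());
                head += r.move();
                state = r.next();
                steps++;
            }
            long ones = tape.values().stream().filter(v -> v == 1).count();
            System.out.println("Halted after " + steps + " steps with "
                    + ones + " ones on the tape");
        }
    }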


Stirling’s approximation


I am trying to find a way to calculate the factorials of very large numbers, so as to at least work through the Uranium-235 example I considered when working out, for my own benefit, how the binomial distribution works.

Got this via Twitter:

https://twitter.com/#!/Our_Frank/status/141032793961529344

In fact, Wikipedia states the approximation in a slightly less approximate way:

n! \sim \sqrt{2\pi n} (\frac{n}{e})^n
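
The practical trick for very large n is to work with \ln n! rather than n! itself, which never overflows. A quick Java sketch of my own (the names are mine; the exact sum of logs is only there as a check for small n):

    public class Stirling {
        // Stirling: ln(n!) ~ n ln n - n + (1/2) ln(2*pi*n).
        static double lnFactorialStirling(double n) {
            return n * Math.log(n) - n + 0.5 * Math.log(2 * Math.PI * n);
        }

        // Exact ln(n!) by summing logs, for checking modest n.
        static double lnFactorialExact(int n) {
            double sum = 0;
            for (int i = 2; i <= n; i++) sum += Math.log(i);
            return sum;
        }

        public static void main(String[] args) {
            for (int n : new int[] {10, 100, 1000}) {
                System.out.printf("n=%4d exact=%12.4f stirling=%12.4f%n",
                        n, lnFactorialExact(n), lnFactorialStirling(n));
            }
            // Far beyond anything computable exactly:
            System.out.printf("ln(1e21!) is roughly %.4e%n",
                    lnFactorialStirling(1e21));
        }
    }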

The binomial distribution, part 2


[Image: binomial probability mass function, via Wikipedia]

(Part 1 is here – these notes are to assist me, rather than contain any real news!)

So, if the probability that an event will happen to a single entity in a unit of time is p and the probability it will not happen is 1 - p = q, what is the probability that exactly k events will take place across a large number of entities?

Considering the Uranium-235 example again, let’s say there is a very large number, N, of atoms: how many will decay and emit an \alpha-particle in a given second?

Well, we know what the mean number of decays we should expect is: Np. But this is a random process and so it will not always deliver that result; instead there will be a distribution of results around this mean.

What does this distribution look like – i.e., what is its probability mass function (pmf)?

For exactly k decays let’s call this f(k; N,p) .

To show where the pmf comes from, let’s look at a much simpler example – tossing a coin four times and seeing if we get exactly one head, i.e., k=1, N=4 (and, for a fair coin, p = q = \frac{1}{2}).

One way we could get this is HTTT. The probability of that happening is p^kq^{N-k} = pq^3 = \frac{1}{16} . But that is not the only way exactly one head could be delivered: there are obviously four ways – HTTT, THTT, TTHT, TTTH – and so the probability of exactly one head is \frac{4}{16} .

(For two heads we have six ways of arranging the outcome: HHTT, HTHT, HTTH, THHT, THTH, TTHH, and so the probability is \frac{6}{16} . For three heads the probability is the same as for three tails (i.e., as for one head), and the probabilities for all heads and all tails are both \frac{1}{16} . Cumulatively this covers all the possibilities and adds up to 1 .)

The generalisation of this gives us the pmf: f(k; N,p) = \binom{N}{k} p^k q^{N-k} , where \binom{N}{k} is the binomial coefficient, spoken as “N choose k” – the number of ways of distributing k successes among N trials:

\binom{N}{k} = \frac{N!}{k!(N-k)!}
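
For modest N the coefficient is easy to compute exactly – for example in Java with BigInteger, multiplying and dividing term by term so the intermediate values stay as small as possible (a sketch of mine; the division at each step is always exact because each prefix is itself a binomial coefficient):

    import java.math.BigInteger;

    public class Binomial {
        // C(N, k) built up incrementally: C(N, i) = C(N, i-1) * (N-i+1) / i.
        static BigInteger choose(int n, int k) {
            BigInteger result = BigInteger.ONE;
            for (int i = 1; i <= k; i++) {
                result = result.multiply(BigInteger.valueOf(n - i + 1))
                               .divide(BigInteger.valueOf(i));
            }
            return result;
        }

        public static void main(String[] args) {
            System.out.println(choose(4, 1)); // 4 ways -- the one-head case above
            System.out.println(choose(4, 2)); // 6 ways -- the two-head case
        }
    }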

There are approximately 10^{21} Uranium atoms in a gramme of the substance, and calculating factorials of such large numbers efficiently requires an awful lot of computing power – my GNU calculator has been at it for some time now, maxing out one CPU on this box for the last 14 minutes, so I guess I am going to have to pass on my hopes of showing you some of the odds.
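
One way round this – rather than computing any factorial in full – is to evaluate the pmf entirely in log space, with \ln n! supplied by Stirling’s approximation (the subject of the post above). A sketch, with two assumptions I should flag: p is the per-second decay probability estimated in part 1, and k is taken to be tiny compared with N, so that \ln(N!/(N-k)!) \approx k \ln N , which avoids subtracting two astronomically large logarithms:

    public class HugePmf {
        // Stirling: ln(n!) ~ n ln n - n + (1/2) ln(2*pi*n).
        static double lnFact(double n) {
            if (n < 2) return 0;
            return n * Math.log(n) - n + 0.5 * Math.log(2 * Math.PI * n);
        }

        // ln f(k; N, p) for k << N, with ln(N!/(N-k)!) taken as k ln N.
        // Math.log1p is essential: 1 - p rounds to exactly 1.0 in doubles.
        static double lnPmf(double k, double n, double p) {
            return k * Math.log(n) - lnFact(k)
                    + k * Math.log(p)
                    + (n - k) * Math.log1p(-p);
        }

        public static void main(String[] args) {
            double n = 1e21;      // roughly a gramme's worth of atoms
            double p = 3.1e-17;   // per-second decay probability, from part 1
            double mean = n * p;  // about 31,000 decays per second
            System.out.printf("f(mean; N, p) is about %.3g%n",
                    Math.exp(lnPmf(mean, n, p)));
        }
    }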

The binomial distribution, part 1


[Image: lognormal distribution, via Wikipedia]

I think there are now going to be a few posts here which essentially are about me rediscovering some A level maths probability theory and writing it down as an aid to memory.

All of this relates to whether the length of time pages spend in the working set is governed by a stochastic (probabilistic) process or a deterministic one. Why does it matter? Well, if the process were stochastic then in low-memory situations a first-in, first-out approach, or a simple single-queue LRU approach, to page replacement might work well in comparison to the 2Q LRU approach currently in use. It is an idea that is worth a little exploring, anyway.

So, now the first maths aide-memoire – the simplest random/probabilistic processes are Bernoulli trials: something happens or it does not, and counting such events over many trials gives the binomial distribution. If the probability of it happening in a unit time period is p, then the probability it will not happen is 1 - p = q . For instance, this might be the probability that an atom of Uranium-235 undergoes \alpha-particle decay: the decay rate of a U-235 atom is set by its half-life of about 700 million years, i.e., 2.21\times10^{16} seconds, which gives, if my maths is correct, a probability of a given individual atom decaying in any particular second of approximately 3.1\times10^{-17} .
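
Spelling that last step out: the per-second decay probability is essentially the decay constant,

\lambda = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{2.21\times10^{16}\,\mathrm{s}} \approx 3.1\times10^{-17}\,\mathrm{s}^{-1}

and because \lambda t is so tiny for t = 1 second, p = 1 - e^{-\lambda t} \approx \lambda t is an excellent approximation.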

(In operating-systems terms, my thinking is that if the time pages spend in a working set were governed by similar processes, then there would be a half-life for every page that is read in. If we discarded pages once they had been in the system for such a half-life, or better yet some multiple of the half-life, then we could have a simpler page replacement system – we would not need to use a CLOCK algorithm, just record the time a page entered the system, stick it in a FIFO queue and discard it when the entry time was more than a half-life ago; I sketch this in code after this aside.

An even simpler case might be to just discard pages once the number held rose above a certain ‘half-life’ limit. Crude, certainly, but maybe the simplicity would compensate for the lack of sophistication.

Such a system would not work very well for a general/desktop operating system – as the graph for the MySQL daemon referred to in the previous post shows, even one application can exhibit different distributions of working set sizes. But what if you had a specialist system where the OS ran only one application – then tuning might work: perhaps that could even apply to mass-market electronic devices, such as Android phones – after all, the Android (Dalvik) VM is what is being run each time.)
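
To make that aside concrete, here is a toy, user-space sketch in Java of the half-life idea – plainly nothing like real kernel code, and every name is my own invention:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Toy model: pages enter a FIFO queue stamped with their arrival
    // time; reclaim evicts from the front anything older than some
    // multiple of the supposed half-life. No reference bits, no CLOCK.
    public class HalfLifeReclaim {
        record Page(long id, long loadedAt) {}

        private final Deque<Page> fifo = new ArrayDeque<>();
        private final long maxAgeMillis;

        HalfLifeReclaim(long halfLifeMillis, int multiple) {
            this.maxAgeMillis = halfLifeMillis * multiple;
        }

        void pageIn(long id) {
            fifo.addLast(new Page(id, System.currentTimeMillis()));
        }

        void reclaim() {
            long now = System.currentTimeMillis();
            while (!fifo.isEmpty()
                    && now - fifo.peekFirst().loadedAt() > maxAgeMillis) {
                System.out.println("evicting page " + fifo.removeFirst().id());
            }
        }
    }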

Is the time pages are in the working set stochastic?


Reading about the Monte Carlo method has set me thinking about this and how, if at all, it might be applied to page reclaim in the Linux kernel.

In my MSc report I show that working set size is not normally distributed – despite occasional claims to the contrary in computer science textbooks. But it is possible that a series of normal distributions are overlaid – see the graphic below:

[Image: working set size for the MySQL daemon]

The first question is: how do I design an experiment to verify that these are, indeed, a series of normal distributions?

(I may find out how I have done in the degree in the next week or so – wish me luck)

The tyranny of the arts graduates continues


[Image: Michael Gove, via Wikipedia]

I imagine in Michael Gove’s world, this has been a good week. The UK’s secretary of state for education has been in the news a lot this week, and that seems to be the key metric for him – after all, his qualifications for the job essentially seem to be that he was once a journalist (and a militant and active trade unionist – a friend who worked with him at the BBC once told me he was deployed to ensure that “the Tories all came out” during disputes at the Corporation in 1994).

The two equal pinnacles of Mr Gove’s week would appear to be his writing a preface (!) to the Bible he is sending to all schools (he doesn’t seem to understand that Catholic schools – of which there are rather a lot – will not use the text he is sending them, never mind the question of what the state-maintained Jewish and Muslim schools will think) and a speech he gave at Cambridge University earlier in the week, in which he waxed lyrical about high literature but seemed to have nothing, or next to nothing, to say about engineering, maths and science.

Matt Pearson puts it so much better than I ever could:

Gove rarely talks of skills which can be used in the modern economy, he does not mention collaboration and teamwork, communication skills and the ability to use a range of technologies to get a job done. He does not talk of creativity and entrepreneurship, of engaging with the information society and introducing young people to the rigours of engineering or computer programming. Presumably as his own education did not cover these elements, and Jane Austen wrote very little in JavaScript, these disciplines have not entered his purview.

Email is ill, but is not dying


[Image: Facebook mobile interface, via Wikipedia]

The BBC’s website has an interesting article on the prospects for email, a subject I have written on here a few times.

Email is no longer the killer application of the internet, certainly. Designed by scientists to send messages to other scientists, and so built around the notion that all users were acting in good faith, it is weak against abuse – and that is, no doubt, contributing to the rise of social media as an alternative means of communication.

But there is no reason why we should replace one broken security model – that of email – with another – a reliance on proprietary software (Facebook is proprietary after all).

Email will last because it is open. But maybe someone could and should write a better email.

The cost of soft/minor faults


Here is a complete graph of the memory use and fault count for a run of ls -lia.

[Image: ls -lia memory use]

[Image: fault count for ls -lia]

As you can see, there are only soft/minor faults here – as one would expect (this was a machine with a lot of memory): the shared C library that ls relies on will already be loaded in memory, and presumably the filesystem information was also cached.

But there are a lot of soft faults – faults where the kernel can fix up the mapping without reading from disk – and these too have a cost, even if nothing like the cost of a hard fault. For a start, each soft page fault almost certainly indicates a miss in the processor cache.
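
On Linux you can watch the minor/major split for yourself: minflt and majflt are fields 10 and 12 of /proc/self/stat (see proc(5)). A quick, Linux-only Java sketch of mine – touching 64MB of freshly allocated pages should show up almost entirely as minor faults:

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FaultCount {
        // Read this process's minor and major fault counts. We split
        // after the parenthesised command name, so field 10 (minflt)
        // lands at index 7 and field 12 (majflt) at index 9.
        static long[] faults() throws Exception {
            String stat = new String(Files.readAllBytes(Paths.get("/proc/self/stat")));
            String[] f = stat.substring(stat.lastIndexOf(')') + 2).split(" ");
            return new long[] { Long.parseLong(f[7]), Long.parseLong(f[9]) };
        }

        public static void main(String[] args) throws Exception {
            long[] before = faults();
            byte[] big = new byte[64 * 1024 * 1024];
            for (int i = 0; i < big.length; i += 4096) big[i] = 1; // touch each page
            long[] after = faults();
            System.out.printf("minor: %d, major: %d%n",
                    after[0] - before[0], after[1] - before[1]);
        }
    }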

The paper linked here also gives a lot of information about Linux’s generation of soft/minor faults and their performance impact – it seems that the kernel is designed to deliver system-wide flexibility at the expense of performance.

An example of the poor editing in O’Reilly’s “Programming Android”



OK, I don’t really want to sound like I am bashing this book – Programming Android: Java Programming for the New Generation of Mobile Devices – because writing a technical book is, by its very nature, highly demanding in terms of accuracy, and I see no signs of any mistakes – just what I think is poor editing. See if you agree…

So, the book is discussing how to serialize classes using Android’s Parcelable interface, and makes this correct point about serializing an enum type:

“Be sure, though, to think about future changes to data when picking the serialized representation. Certainly it would have been much easier in this example to represent state as an int whose value was obtained by calling state.ordinal. Doing so, however, would make it much harder to maintain forward compatibility for the object. Suppose it becomes necessary at some point to add a new state … this trivial change makes new versions of the object completely incompatible with earlier versions.”

But then discussing de-serialization, the book states, without comment:

“The idiomatic way to do this is to read each piece of state from the Parcel in the exact same order it was written in writeToParcel (again, this is important), and then to call a constructor with the unmarshaled [sic] state.”

Now, technically, these passages are not in disagreement – but it is clearly the case that the de/serialization process is tightly coupled to the data design – something that ought to be pointed out, especially if the book is going to make a big deal of it when discussing serialization.
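
To illustrate the coupling, here is a hypothetical Parcelable class (my own example, not the book’s) that follows both pieces of advice: the enum is written by name rather than ordinal, and the creator reads fields back in exactly the order writeToParcel wrote them:

    import android.os.Parcel;
    import android.os.Parcelable;

    public class Job implements Parcelable {
        public enum State { QUEUED, RUNNING, DONE }

        private final State state;

        public Job(State state) { this.state = state; }

        @Override
        public void writeToParcel(Parcel out, int flags) {
            // By name, not ordinal: adding a new constant later
            // will not break previously serialized objects.
            out.writeString(state.name());
        }

        public static final Parcelable.Creator<Job> CREATOR =
                new Parcelable.Creator<Job>() {
            @Override
            public Job createFromParcel(Parcel in) {
                // Read back in exactly the order written above.
                return new Job(State.valueOf(in.readString()));
            }
            @Override
            public Job[] newArray(int size) { return new Job[size]; }
        };

        @Override
        public int describeContents() { return 0; }
    }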

Making sense of Android’s complex development process


[Image: Android, via CrunchBase]

Back in about 1997 I bought a book about this new programming environment – it seemed something bigger than a language but smaller than an operating system – called Java.

Back then the idea seemed great – write once, run anywhere – but there was a lot of scepticism and, of course, Microsoft tried to poison the well through the tactics of “embrace and extend” with their J++ offering. All of that made it look as though Java was going nowhere.

I wrote a couple of applets – one was a “countdown” timer for Trevor Phillips’s mayoral election website in 1999, another was a SAX-based parser for the largely Perl-based content management system I wrote for the Scottish Labour Party the following year, ahead of the 2001 election. But no one seemed to like applets much – it seems ridiculous now, but the 90K download needed for the SAX parser really slowed down the Scottish party’s site, even though I was pretty proud of the little newsticker it delivered (along with annoying teletype noises as it went). I forgot about Java.

But, of course, that was wrong. Java is the programming language du jour these days, though Microsoft’s responses to the success of Java and the failure of J++ – C# and .NET – are also big.

Android is, of course, Java’s most prominent offering these days – literally millions of people will be running Android apps even as I write this, and thousands of Android phones are being bought across the world daily. Time to get reacquainted, especially as my new job is once more about political communications.

But, as I discovered with C++ when I came back to it after over a decade for my MSc, Java has moved on a fair bit in that time and, unlike C++, I cannot say all the progress seems to be positive. Indeed, Java seems to thrive on a particularly ugly idiom, with developers encouraged to construct anonymous classes inline, in the argument lists of other calls – ugh.
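
For the avoidance of doubt, this is the sort of construction I mean – a made-up Android example, with the anonymous listener class constructed right inside the argument list:

    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;
    import android.view.View;
    import android.widget.Button;

    public class DemoActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Button button = new Button(this);
            // The idiom in question: an anonymous class, constructed
            // inline, inside another call's argument list.
            button.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    Log.i("DemoActivity", "button clicked");
                }
            });
            setContentView(button);
        }
    }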

I can certainly see the beauty of Groovy more clearly than ever, too – though being an old-time Perl hacker makes me resent Java’s heavy-duty static typing in any case.

To help me through all this I have been reading O’Reilly’s Programming Android: Java Programming for the New Generation of Mobile Devices. Now, usually O’Reilly’s books are all but guaranteed to be the best, or close to the best, of any on offer, but I have my doubts that is the case with this one – it seems sloppily edited (e.g., at different times it is difficult to follow whether one is being advised to use the Android SDK tools or the Eclipse editor) and it falls between being a comprehensive introduction to Android programming and a guide for Java hackers to get on with it. It feels less than ordered, to be honest.

Now, maybe this is a function of the language and the complexity of the environment, I don’t know. But I would welcome any alternative recommendations if anyone has some.