My (computer science) book of the year



It wasn’t published in 2011 but The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine is without doubt my computer science book of the year.

If books can change your life then The Annotated Turing changed mine – because it showed me just how strong the link between maths and computer science is and how fundamental maths is to an understanding of the nature of reality and truth. The world, in my eyes, has not been the same since I read this book last January.

If you are a computer science student then you must read this book!

And finally, Happy New Year to all.


Soviet computer science: once ahead of the west?



In my old blog I made a point that I still believe – that the key factor in the downfall of the Soviet Union was not pressure from the arms race but the complete failure of the system: once the country was led by leaders, like Mikhail Gorbachev and his team, who were open to learning from the west, the Soviet system was doomed, because why would any enlightened leader want to keep a system that was obviously a fiasco? (I recommend Archie Brown’s The Rise and Fall of Communism if you want to know more.)

But the collapse of the Soviet Union was just that – a collapse. There was no “transition” as there was in the case of central Europe – this obituary of Vaclav Havel in The Economist is a moving and powerful reminder of those times – the place fell apart, life expectancies and incomes crashed and scientists fled to anywhere they could make a living – something that continues to worry those who are concerned about nuclear and germ warfare proliferation.

Science in the USSR was dominated by the demands of weapons production and even the most brilliant scientists and mathematicians were not exempted from repression, as Andrei Sakharov’s case showed – but there was also a very strong culture of maths for maths’ sake – as this article in The Wall Street Journal relates.

From an early age those who showed promise in maths were hot-housed in special schools (The Honors Class: Hilbert’s Problems and Their Solvers relates how this authoritarian system shaped the life of Yuri Matiyasevich, one of the co-solvers of Hilbert’s tenth problem) – pupils who were both privileged and tightly controlled.

Why am I writing all this now? Because I have just obtained a copy of the out-of-print Stochastic Analysis of Computer Storage (I think I got the last cheap-ish copy) by the Soviet computer scientists O. I. Aven and Y. A. Kogan, and AT&T’s E. G. Coffman, and was rather shocked to discover it contained what looks to me like an outline of the LRU-K page replacement algorithm a full six years ahead of O’Neil, O’Neil and Weikum’s 1993 paper “The LRU-K page replacement algorithm for database disk buffering”. (NB: I am not for an instant suggesting that the 1993 paper was cribbed from the earlier work.)

Even more remarkably, the book refers to papers written in Russian and published in Minsk in 1977 as its source.
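
For those who have not met it, the LRU-K idea (with K = 2 the common case) is to evict the page whose K-th most recent reference lies furthest in the past, rather than plain LRU’s single most recent reference. Here is a minimal C sketch of victim selection for K = 2 – entirely my own illustration, with made-up names, not code from either the book or the 1993 paper, and ignoring that paper’s refinements such as correlated-reference periods and retained history:

/* A minimal, illustrative sketch of LRU-K victim selection (K = 2).
 * Hypothetical structures: not taken from Aven/Coffman/Kogan or the
 * O'Neil/O'Neil/Weikum paper. */
#include <stdio.h>
#include <string.h>

#define K 2
#define NFRAMES 4

struct frame {
    int page;              /* page number held */
    unsigned long hist[K]; /* hist[0] = most recent ref time, hist[K-1] = K-th */
};

static unsigned long now; /* logical clock, ticks on every reference */

/* Record a reference: push the new time onto the page's history. */
static void touch(struct frame *f)
{
    memmove(&f->hist[1], &f->hist[0], (K - 1) * sizeof f->hist[0]);
    f->hist[0] = ++now;
}

/* Choose a victim: the frame whose K-th most recent reference is oldest.
 * A page with fewer than K references (hist[K-1] == 0) has infinite
 * backward K-distance and is preferred for eviction. */
static struct frame *victim(struct frame frames[])
{
    struct frame *v = &frames[0];
    for (int i = 1; i < NFRAMES; i++)
        if (frames[i].hist[K - 1] < v->hist[K - 1])
            v = &frames[i];
    return v;
}

static void reference(struct frame frames[], int page)
{
    for (int i = 0; i < NFRAMES; i++)
        if (frames[i].page == page) { touch(&frames[i]); return; }
    struct frame *v = victim(frames); /* miss: evict and reload */
    printf("evict page %d for page %d\n", v->page, page);
    memset(v->hist, 0, sizeof v->hist);
    v->page = page;
    touch(v);
}

int main(void)
{
    struct frame frames[NFRAMES];
    for (int i = 0; i < NFRAMES; i++) {
        frames[i].page = i;
        memset(frames[i].hist, 0, sizeof frames[i].hist);
        touch(&frames[i]);
    }
    int refs[] = { 0, 1, 0, 2, 3, 0, 4, 1 };
    for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
        reference(frames, refs[i]);
    return 0;
}

The point of tracking the penultimate reference is that a page touched twice recently is distinguished from one touched once in a burst – which is exactly what makes LRU-K attractive for database buffer pools.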

A little bit more on the Taylor expansion


Yesterday finished with:

F(x) = P(x) + \theta(x)(x -a) + \theta^\prime(x)(x - a)^2

Can we expand this further?

Well, applying the quotient rule: \theta^\prime(x) = \frac{F^\prime(x) - P^\prime(x)}{x-a} - \frac{F(x) - P(x)}{(x-a)^2}

And we know F(x) - P(x) = \theta(x)(x - a) and F^\prime(x) - P^\prime(x) = \theta^\prime(x)(x -a) + \theta(x)

So, \theta^\prime(x) = \frac{\theta^\prime(x)(x - a) + \theta(x)}{x - a} - \frac{\theta(x)(x -a)}{(x -a)^2}

Applying L’Hospital’s rule again:

\theta^\prime(x) = \theta^\prime(x) +\theta^{\prime\prime}(x)(x - a) + \theta^\prime(x) - \frac{\theta(x) + \theta^\prime(x)(x - a)}{2(x - a)}

And \theta^\prime(x)(x - a)^2 = 2\theta^\prime(x)(x - a)^2 + \theta^{\prime\prime}(x)(x - a)^3 - \frac{\theta(x)(x - a) + \theta^\prime(x)(x - a)^2}{2}

This process can continue – delivering higher order derivatives on each iteration. It even starts to look like we might generate the factorial terms, as now we can see:

F(x) = P(x) + \frac{1}{2!}\theta(x)(x - a) + \theta^\prime(x)(x-a)^2 + \theta^{\prime\prime}(x)(x-a)^3

Taylor series expansions and L’Hospital’s rule


A brief diversion on to Taylor series expansions – partly based on the Wikipedia article on Taylor’s theorem. I have been working on this for a few days now (the Fields Medal is always going to elude me, I’m afraid!) – and I have still not fully worked it all out, but it is close enough for me to post it here and hope that someone might explain the bits I have missed…

We have a function F(x) which we want to approximate near a point a by a Taylor expansion:

F(x) = F(a) + \frac{F^{\prime}(a)}{1!}(x - a) + \frac{F^{\prime\prime}(a)}{2!}(x - a)^2 + \cdots + \frac{F^{(k)}(a)}{k!}(x - a)^k + \cdots
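
As a quick sanity check on that formula, taking F(x) = e^x and a = 0 – where every derivative of e^x is 1 – gives the familiar series:

e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots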

Simplifying this to the first few terms:

F(x) = F(a) + F^{\prime}(a)(x - a) + \theta(x)(x - a) with \lim_{x \to a} \theta(x) = 0

It is fairly obvious, I hope, why the zeroth term is F(a) and the first term is F^\prime(a)(x - a): just the starting point and the distance along the tangent F^\prime. Then \theta(x) becomes the correction for the ‘error’ this crude approximation will have. (The first two terms together make up the linear approximation.)

Essentially, we can generate a Taylor expansion (or in my case something that starts to look very similar) with repeated application of L’Hospital’s rule. Here goes…

F(x) = F(a) +F^\prime(a)(x - a) +\theta(x)(x-a)

P(x) = F(a) + F^\prime(a)(x -a)

F(x) = P(x) + \theta(x)(x-a)

\theta(x) =\frac{F(x) - P(x)}{x -a}

Applying L’Hospital’s rule \theta(x) = F^\prime(x) - P^\prime(x)

F(x) = P(x) + (F^\prime(x) - P^\prime(x))(x - a)

P^\prime(x) = F^\prime(a)

F^\prime(x) = P^\prime(x) + \theta^\prime(x)(x - a) + \theta(x)

F(x) = P(x) + (\theta(x) + \theta^\prime(x)(x - a))(x-a)

F(x) = P(x) + \theta(x)(x -a) + \theta^\prime(x)(x - a)^2

Starting to look like a Taylor expansion now…

Update: Professor Rubin comments (I have moved this up here both because he knows what he is talking about and because I can get the LaTeX to work):

In the second formula, I’m pretty sure you want x \rightarrow a rather than a \rightarrow x. Unfortunately, I think you went off the rails around “Applying L’Hospital’s rule”: F^\prime(x) - P^\prime(x) would be \lim_{x \rightarrow a} \theta(x), which (assuming continuity of \theta) would be \theta(a), not \theta(x). If you go back to the first line after “Here goes…” and differentiate (we’ll assume \theta is arbitrarily smooth), you get F^\prime(x) - P^\prime(x) = \theta(x) + \theta^\prime(x)(x-a).

Filesystem works


Not much of a surprise really, as the old code worked too, but the updated VMUFAT filesystem code works with 3.2.0-rc7.

Just have to clean it up so it is of an acceptable standard.

Update: In the unlikely event that someone is testing the code I should warn you that the above was overly optimistic. On unmounting the VMUFAT volume the whole kernel crashed – a bug in the evict_inode function, I think.
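
For anyone following along: in kernels of the 3.2 era the usual shape of an evict_inode implementation was roughly as below. This is only a sketch of the common pattern, not my actual code – the vmufat_truncate and vmufat_free_inode helpers are hypothetical stand-ins for the real on-disk cleanup:

/* A sketch of the common evict_inode pattern circa kernel 3.2 --
 * the vmufat_* names are hypothetical stand-ins, not the real code. */
#include <linux/fs.h>

static void vmufat_truncate(struct inode *inode);   /* hypothetical helper */
static void vmufat_free_inode(struct inode *inode); /* hypothetical helper */

static void vmufat_evict_inode(struct inode *inode)
{
	/* Throw away any page cache pages still attached to the inode */
	truncate_inode_pages(&inode->i_data, 0);

	if (!inode->i_nlink) {
		/* Inode is actually being deleted: free its on-disk blocks */
		inode->i_size = 0;
		vmufat_truncate(inode);
	}
	invalidate_inode_buffers(inode);

	/* Mandatory in 3.2: tells the VFS the inode is clean and finished
	 * with (renamed clear_inode() in later kernels) */
	end_writeback(inode);

	if (!inode->i_nlink)
		vmufat_free_inode(inode);
}

Getting one of those steps wrong – or touching the inode’s mapping after end_writeback() – is exactly the sort of bug that shows up as a crash when the last inode references are dropped at unmount time.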

Taylor expansion of a probability density function


To further improve my understanding of probabilities I am reading Data Analysis: A Bayesian Tutorial – which I thoroughly recommend for anyone with basic mathematical knowledge but wanting to grasp the key concepts in this area.

But I have a query about one of the concepts it introduces: a Taylor expansion of a probability density function around its maximum to give a confidence interval (this stuff matters for the day job as well as the computer science).

We have a probability density function P(x) (in this context this is a posterior pdf from Bayes’s equation, but that is not important, as far as I can tell, to the maths).

The maximum of P(x) is P(x_0), so \frac{d}{dx}P\big|_{x_0} = 0.

To get a smoother function we look at L = \log_e(P). As the logarithm is a strictly monotonic function, \frac{d}{dx}L\big|_{x_0} = 0 too.
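
(The reason the derivative of L vanishes at the same point is just the chain rule:

\frac{dL}{dx} = \frac{1}{P(x)}\frac{dP}{dx}

and since P(x_0) > 0 at the maximum, the left hand side is zero exactly where \frac{dP}{dx} is.)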

So expanding L, according to the book gives this:

L = L(x_0) + \frac{1}{2}\frac{d^2L}{dx^2}\bigg|_{x_0}(x - x_0)^2 + \ldots (we ignore the higher terms).

I am not sure why the derivatives are the coefficients of the expansion (I can read up on that later), but given that, I understand why there is no \frac{dL}{dx} term: \frac{dL}{dx}\big|_{x_0} = 0.
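
(For what it is worth, the standard answer is that if L(x) = \sum_k c_k(x - x_0)^k then differentiating k times and setting x = x_0 kills every term except the k-th, leaving:

\frac{d^kL}{dx^k}\bigg|_{x_0} = k!\,c_k, so c_k = \frac{1}{k!}\frac{d^kL}{dx^k}\bigg|_{x_0}.)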

OK … well this is the power of blogging as a means of clarifying thought: because just as I was about to ask my question – why isn’t the first term dependent on (x - x_0)? – I realised the answer. The first term is, in fact, the zeroth term of the expansion and so the dependency on (x - x_0) is in fact a dependency on (x - x_0)^0 = 1.
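
One more observation while I am at it: exponentiating the truncated expansion shows where the confidence interval comes from – near its peak the posterior is approximated by a Gaussian:

P(x) \approx P(x_0)\exp\left(\frac{1}{2}\frac{d^2L}{dx^2}\bigg|_{x_0}(x - x_0)^2\right)

with standard deviation \sigma = \left(-\frac{d^2L}{dx^2}\big|_{x_0}\right)^{-1/2} (the second derivative at a maximum being negative).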

Working on filesystem code



Nearly a decade ago I wrote a crude, but working, filesystem for the Sega Dreamcast VMU on Linux. I then ported a very simple web server to the Dreamcast and got the whole thing on Slashdot.

I never managed to get the thing into mainline – indeed the battering I got last time I tried, in 2009, more or less made me give up writing anything for the kernel and the Dreamcast was put away.

I am not pretending my code was pretty or even particularly good but it is no wonder people get put off from contributing when you get pig-ignorant comments like these:

Everything about the on-disk format is tied to the VMU. Until that sinks in, don’t bother sending me email, thanks.

This was someone who ought to have known better claiming that it was not possible to have a free-standing filesystem for this at all – though they were making their incorrect claim in the manner seen all too frequently on the Linux Kernel Mailing List.

No comments.  Really.  There must be some limits on the language one is willing to use on public maillist, after all.

As you can tell this person – a top-flight Linux hacker – did not like my code. And, looking back, I can hardly blame him: it was pretty ugly. But as help in fixing it this is of next to no use – it only serves to demotivate. Nasty computer scientists, again.

Ok, so I have got that off my chest. And I am once more placing myself in the firing line.

The filesystem code, a work in progress (so check where the code has got to once you click the link), can be found here. A filesystem image that you should be able to mount under loopback can be found here. All testers welcomed with open arms.

Death of a bridge



It looks as though my wireless bridge, which relied on equipment more than a decade old – a Ricoh PCI-PCMCIA bridge and an 802.11b D-Link card – has died. The network interface won’t come up on booting the system and lspci just lists the device as an unrecognised non-VGA card (interestingly, the BIOS still sees it as a network controller though).

So, I need to replace it. But with what? My understanding, last year at least, was that Linux bridging software won’t work with anything more modern (this card had an old Prism II chipset). Has that changed?

A broadband commons and economic prosperity



Much internet traffic in recent weeks has been devoted to efforts by US lawmakers to give the executive the power to “shut down the internet”, but other news shows that not all the ideas from the regulators over there are bad ones:

FCC Chairman Julius Genachowski said, “With today’s approval of the first TV white spaces database and device, we are taking an important step towards enabling a new wave of wireless innovation. Unleashing white spaces spectrum has the potential to exceed even the many billions of dollars in economic benefit from Wi-Fi, the last significant release of unlicensed spectrum, and drive private investment and job creation.”

Unused spectrum between TV stations – known as “white spaces” – represents a valuable opportunity for provision of broadband data services in our changing wireless landscape. This unused TV spectrum provides a major new platform for innovation and delivery of service, with potential for both research and commercial applications. Development of unlicensed radio transmitting devices has already led to a wave of new consumer technologies, including Wi-Fi and other innovations like digital cordless phones and in-home video distribution systems that have generated billions of dollars of economic growth. This new technology will build on that track record and provide even more benefits to the U.S. economy.

How random is random?



What is a truly random event?

We are used to the idea that flipping a coin is likely to generate a random sequence of heads or tails but, of course, it is perfectly possible to predict, using the rules of classical mechanics, the outcome of a series of coin tosses if we know the values of a not very long list of parameters. In other words, the outcome of flipping a coin is entirely deterministic; it is just that humans are unlikely to be able to faithfully replicate the same flick over and over again.
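
(To see just how deterministic: in the simplest classical model – a coin launched heads up with vertical speed v and spin rate \omega, ignoring air resistance and bounce – the flight time is 2v/g and the face that lands up is fixed by the parity of the number of half-turns completed:

\left\lfloor \frac{2\omega v}{\pi g} \right\rfloor \bmod 2

so measuring v and \omega precisely enough decides the toss in advance.)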

Quantum events – such as \alpha particle decay – are, as far as our knowledge today tells us, truly random, in the sense that they have a probability of occurring in a given time period but we have no way of knowing whether a given nucleus will decay at any given time.
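
To be concrete: a nucleus with decay constant \lambda survives to time t with probability

e^{-\lambda t}

and that probability is the same whether the nucleus was created a second ago or a million years ago – the process is memoryless, with no hidden clock ticking inside the nucleus.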

This is really a very profound finding – it implies that two physical objects, in this case atomic nuclei, behave in completely different ways despite all the physical parameters describing their existence being the same. That sounds like the exact opposite of everything that science has taught us about the nature of the universe.

Thinking about this, one can quickly come to agree with Einstein that it must be based on a flawed understanding of physical reality as “God does not play dice”. But it is also the best explanation we have for that physical reality.

But why would a nucleus decay in one time period and not another? Can this really be an event without specific cause? Just a ‘randomly’ chosen moment? But chosen by what?

Of course, some will say by “God” but that really is metaphysics – a completely untestable and unverifiable proposition that merely kicks the physical puzzle into a domain beyond physics.