If books can change your life, then The Annotated Turing changed mine – it showed me just how strong the link between maths and computer science is, and how fundamental maths is to an understanding of the nature of reality and truth. The world, in my eyes, has not been the same since I read this book last January.
If you are a computer science student then you must read this book!
In my old blog I made a point that I still believe – that the key factor in the downfall of the Soviet Union was not pressure from the arms race but the complete failure of the system: once the country was led by leaders, like Mikhail Gorbachev and his team, who were open to learning from the West, the Soviet system was doomed – because why would any enlightened leader want to keep a system that was obviously a fiasco? (I recommend Archie Brown’s The Rise and Fall of Communism if you want to know more.)
But the collapse of the Soviet Union was just that – a collapse. There was no “transition” as there was in central Europe (this obituary of Vaclav Havel in The Economist is a moving and powerful reminder of those times). The place fell apart: life expectancies and incomes crashed, and scientists fled to anywhere they could make a living – something that continues to worry those concerned about nuclear and germ warfare proliferation.
A brief diversion on to Taylor series expansions – partly based on the Wikipedia article on Taylor’s theorem. I have been working on this for a few days now (a Fields Medal is always going to elude me, I’m afraid!) – and I still have not fully worked it all out, but it is close enough for me to post it here and hope that someone might explain the bits I have missed…
We have a function $F$ which we want to approximate at a point $a$ by a Taylor expansion:

$$F(x) = \sum_{n=0}^{\infty} \frac{F^{(n)}(a)}{n!}(x-a)^n$$

Simplifying this to the first few terms:

$$F(x) \approx F(a) + F'(a)(x-a) + R(x)$$

with $P(x) = F(a) + F'(a)(x-a)$, so the remainder $R(x) = F(x) - P(x)$ should be small as $x \to a$.
It is fairly obvious, I hope, why the zeroth term is $F(a)$ and the first term is $F'(a)(x-a)$: just the starting point and the distance along the tangent at $a$. Then $R(x)$ becomes the correction for the ‘error’ this crude approximation will have. (The first two terms together are the linear approximation.)
Essentially, we can generate a Taylor expansion (or in my case something that starts to look very similar) with repeated application of L’Hospital’s rule. Here goes…

$$\lim_{x \to a}\frac{R(x)}{(x-a)^2} = \lim_{x \to a}\frac{F(x) - F(a) - F'(a)(x-a)}{(x-a)^2}$$

Applying L’Hospital’s rule (both numerator and denominator go to zero as $x \to a$):

$$= \lim_{x \to a}\frac{F'(x) - F'(a)}{2(x-a)} = \frac{F''(a)}{2}$$

Starting to look like a Taylor expansion now…

$$R(x) \approx \frac{F''(a)}{2}(x-a)^2$$
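The limit can also be checked numerically. A quick sketch in Python (my own illustration – it uses $F = \exp$ and $a = 0$, so the ratio should tend to $F''(0)/2 = 0.5$):

```python
import math

def remainder_ratio(x, a=0.0):
    """R(x)/(x-a)^2 with R(x) = F(x) - F(a) - F'(a)(x-a), taking F = exp."""
    r = math.exp(x) - math.exp(a) - math.exp(a) * (x - a)
    return r / (x - a) ** 2

# The ratio approaches F''(0)/2 = 0.5 as x -> 0.
for x in (0.1, 0.01, 0.001):
    print(x, remainder_ratio(x))
```

Running this, the ratio visibly closes in on 0.5 as $x$ shrinks.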
Update: Professor Rubin comments (I have moved this up here both because he knows what he is talking about and because I can get the LaTeX to work):
In the second formula, I’m pretty sure you want $F'(a)(x-a)$ rather than $F'(x)(x-a)$. Unfortunately, I think you went off the rails around “Applying L’Hospital’s rule”: $F'(x) - P'(x)$ would be the derivative of the error, which (assuming continuity of $F'$) would be $F'(x) - F'(a)$, not $R(x)$. If you go back to the first line after “Here goes…” and differentiate (we’ll assume $F$ is arbitrarily smooth), you get $\lim_{x \to a}\frac{F'(x) - F'(a)}{2(x-a)} = \frac{F''(a)}{2}$.
Not much of a surprise really, as the old code worked too, but the updated VMUFAT filesystem code works with kernel 3.2.0-rc7.
Just have to clean it up so it is of an acceptable standard.
Update: In the unlikely event that someone is testing the code I should warn you that the above was overly optimistic. On unmounting the VMUFAT volume the whole kernel crashed – a bug in the evict_inode function I think.
I am not sure why the derivatives are the coefficients of the expansion – I can read up on that later – but given that, I understand why there is no $(x-a)$ factor in the first element of the expansion, as $(x-a)^0 = 1$.
OK … well this is the power of blogging as a means of clarifying thought: because just as I was about to ask my question – why isn’t the first term dependent on $(x-a)$ – I realised the answer. The first term is, in fact, the zeroth term of the expansion and so the dependency on $(x-a)$ is in fact a dependency on $(x-a)^0 = 1$.
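For the record, the standard argument for why the derivatives supply the coefficients is just term-by-term differentiation (assuming the series exists and may be differentiated term by term):

```latex
F(x) = \sum_{n=0}^{\infty} a_n (x-a)^n
\quad\Longrightarrow\quad
F^{(k)}(x) = \sum_{n=k}^{\infty} \frac{n!}{(n-k)!}\, a_n (x-a)^{n-k}.

\text{Setting } x = a \text{ kills every term except } n = k:
\quad F^{(k)}(a) = k!\, a_k,
\quad\text{so}\quad a_k = \frac{F^{(k)}(a)}{k!}.
```

So each coefficient is pinned down by one derivative at the expansion point.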
I never managed to get the thing into mainline – indeed the battering I got last time I tried, in 2009, more or less made me give up writing anything for the kernel and the Dreamcast was put away.
I am not pretending my code was pretty or even particularly good but it is no wonder people get put off from contributing when you get pig ignorant comments like these:
Everything about the on-disk format is tied to the VMU. Until that sinks in, don’t bother sending me email, thanks.
This was someone who ought to have known better, claiming that it was not possible to have a free-standing filesystem for this at all – though they were making their incorrect claim in the manner seen all too frequently on the Linux Kernel Mailing List.
No comments. Really. There must be some limits on the language one is willing to use on public maillist, after all.
As you can tell this person – a top-flight Linux hacker – did not like my code. And, looking back, I can hardly blame him: it was pretty ugly. But as a help towards fixing it this is of next to no use – it only serves to demotivate. Nasty computer scientists, again.
Ok, so I have got that off my chest. And I am once more placing myself in the firing line.
The filesystem code, a work in progress (so check where the code has got to once you click the link), can be found here. A filesystem that you should be able to mount under loopback, can be found here. All testers welcomed with open arms.
It looks as though my wireless bridge, which relied on equipment more than a decade old – a Ricoh PCI-PCMCIA bridge and an 802.11b D-Link card – has died. The network interface won’t come up on booting the system and lspci just lists the device as an unrecognised non-VGA card (interestingly, the BIOS still sees it as a network controller though).
So, I need to replace it. But with what? My understanding, last year at least, was that the Linux bridging software won’t work with anything more modern (this card had an old Prism II chipset).
Much internet traffic in recent weeks has been devoted to efforts by US lawmakers to give the executive the power to “shut down the internet”, but other news shows that not all the ideas from the regulators over there are bad ones:
FCC Chairman Julius Genachowski said, “With today’s approval of the first TV white spaces database and device, we are taking an important step towards enabling a new wave of wireless innovation. Unleashing white spaces spectrum has the potential to exceed even the many billions of dollars in economic benefit from Wi-Fi, the last significant release of unlicensed spectrum, and drive private investment and job creation.” Unused spectrum between TV stations – known as “white spaces” – represents a valuable opportunity for provision of broadband data services in our changing wireless landscape. This unused TV spectrum provides a major new platform for innovation and delivery of service, with potential for both research and commercial applications. Development of unlicensed radio transmitting devices has already led to a wave of new consumer technologies, including Wi-Fi and other innovations like digital cordless phones and in-home video distribution systems that have generated billions of dollars of economic growth. This new technology will build on that track record and provide even more benefits to the U.S. economy.
We are used to the idea that flipping a coin is likely to generate a random sequence of heads or tails but, of course, it is perfectly possible to predict, using the rules of classical mechanics, the outcome of a series of coin tosses if we know the values of a not very long list of parameters. In other words, the outcome of flipping a coin is entirely deterministic, it is just that humans are unlikely to be able to faithfully replicate the same flick over and over again.
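A toy model makes the point (my own illustration – the model and numbers are purely indicative, not real coin physics):

```python
import math

def coin_outcome(omega, t, start_heads=True):
    """Toy deterministic coin flip: a coin spinning at angular velocity
    omega (rad/s) for a flight time t (s) completes int(omega * t / pi)
    half-turns; an even count lands on the starting face."""
    half_turns = int(omega * t / math.pi)
    return start_heads if half_turns % 2 == 0 else not start_heads

# Identical parameters always reproduce the identical outcome...
print(coin_outcome(43.9, 0.5), coin_outcome(43.9, 0.5))
# ...but a tiny change in the flick is enough to flip the result.
print(coin_outcome(44.1, 0.5))
```

Given the parameters, the outcome is fixed; the apparent randomness is only our inability to repeat the flick exactly.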
Quantum events – such as α-particle decay – are, as far as our knowledge today tells us, truly random, in the sense that they have a probability of occurring in a given time period but we have no way of knowing whether a given nucleus will decay at any given time.
This is really a very profound finding – it implies that two physical objects, in this case atomic nuclei, behave in completely different ways despite all the physical parameters describing their existence being the same. That sounds like the exact opposite of everything that science has taught us about the nature of the universe.
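The contrast with the coin can be sketched in a few lines: every simulated nucleus below has exactly the same decay constant, yet which ones decay within a given period is irreducibly random (the half-life here is an arbitrary value chosen for illustration):

```python
import math
import random

# Radioactive decay as a memoryless random process.
HALF_LIFE = 10.0                        # arbitrary time units
DECAY_CONST = math.log(2) / HALF_LIFE   # lambda = ln(2) / half-life

def decays_within(t, rng):
    """True if a nucleus decays within time t: P = 1 - exp(-lambda * t)."""
    return rng.random() < 1.0 - math.exp(-DECAY_CONST * t)

rng = random.Random(42)
n = 100_000
# Every nucleus has identical parameters, yet outcomes differ:
decayed = sum(decays_within(HALF_LIFE, rng) for _ in range(n))
print(decayed / n)  # close to 0.5: about half decay in one half-life
```

Only the ensemble behaviour (roughly half gone after one half-life) is predictable; no parameter in the model distinguishes the nuclei that decayed from those that did not.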
Thinking about this, one can quickly come to agree with Einstein that it must be based on a flawed understanding of physical reality as “God does not play dice”. But it is also the best explanation we have for that physical reality.
But why would a nucleus decay in one time period and not another? Can this really be an event without specific cause? Just a ‘randomly’ chosen moment? But chosen by what?
Of course, some will say by “God”, but that really is metaphysics – a completely untestable and unverifiable proposition that merely kicks the physical puzzle into a domain beyond physics.