Incompleteness in the natural world

Gödel Incompleteness Theorem (Photo credit: janoma.cl)

A post inspired by Gödel, Escher, Bach; Complexity: A Guided Tour; an article in this week’s New Scientist about the clash between general relativity and quantum mechanics; and personal humiliation.

The everyday incompleteness: This is the personal humiliation bit. For the first time ever I went on a “Parkrun” today – the 5km Finsbury Park run – but I dropped out after 2.5km, at the top of a hill and about 250 metres from my front door. I simply thought: this is meant to be a leisure activity and I am not enjoying it one little bit. I can offer some excuses: it was really the first time I had ever run outdoors, so it was a bit silly to try a semi-competitive environment, and I had not warmed up properly, so the first 500 metres were about simply getting breathing and limbs into co-ordination – mais qui s’excuse, s’accuse (he who excuses himself, accuses himself).

But the sense of incompleteness I want to write about here is not that everyday incompleteness, but a more fundamental one – our inability to fully describe the universe, or rather, a necessary fuzziness in our description.

Let’s begin with three great mathematical or scientific discoveries:

The diagonalisation method and the “incompleteness” of the real numbers: In 1891 Georg Cantor published one of the most beautiful, important and accessible arguments in set theory – the diagonalisation argument, which proved that the infinity of the real numbers is qualitatively different from, and greater than, the infinity of the counting numbers.

The infinity of the counting numbers is just what it sounds like – start at one and keep going and you go on infinitely. This is the smallest infinity – called aleph null ($\aleph_0$).

Real numbers include the irrationals – those which cannot be expressed as fractions of counting numbers (Pythagoras shocked himself by discovering that $\sqrt 2$ was such a number). So the reals are all the numbers along a number line – every single infinitesimal point along that line.

Few would disagree that there are, say, an infinite number of points between 0 and 1 on such a line. But Cantor showed that the number was uncountably infinite – i.e., we cannot just start counting from the first point and keep going. Here’s a brief proof…

Imagine we start to list all the points between 0 and 1 (in binary) – and we number each point, so…

1 is 0.00000000…..
2 is 0.100000000…..
3 is 0.010000000……
4 is 0.0010000000….
n is 0.{n – 2 0s}1{000……}

You can see this can go on for a countably infinite number of steps….

and so on. Now we decide to ‘flip’ the 0 or 1 at the index number, so we get:

1 is 0.1000000….
2 is 0.1100000….
3 is 0.0110000….
4 is 0.00110000….

And so on. But although we have already used up all the counting numbers we are now generating new numbers which we have not been able to count – this means we have more than $\aleph_0$ numbers in the reals, surely? “But”, you argue, “let’s just interleave these new numbers into our list”, like so….

1 is 0.0000000….
2 is 0.1000000…..
3 is 0.0100000….
4 is 0.1100000….
5 is 0.0010000….
6 is 0.0110000….

And so on. “This is just another countably infinite set”, you argue. But, Cantor responds, do the ‘diagonalisation’ trick again and you get…

1 is 0.100000…..
2 is 0.110000….
3 is 0.0110000….
4 is 0.1101000…
5 is 0.00101000…
6 is 0.0110010….

And again we have new numbers, busting the countability of the set. And the point is this: no matter how many times you add the new numbers produced by diagonalisation into your counting list, diagonalisation will produce numbers you have not yet accounted for. From set theory you can show that while the counting numbers have cardinality (analogous to size) $\aleph_0$, the reals have cardinality $2^{\aleph_0}$ – a far, far bigger number; literally an uncountably bigger one.
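
The diagonal trick is mechanical enough to sketch in code. Here is a minimal illustration (truncating each expansion to finitely many digits, which the real argument of course does not do): flipping the $i$-th digit of the $i$-th listed number yields a number that differs from every entry in the list.

```python
def diagonalise(listing):
    """Given (a finite prefix of) a listing of binary expansions,
    return a new expansion differing from the i-th entry at digit i."""
    return ''.join('1' if row[i] == '0' else '0'
                   for i, row in enumerate(listing))

# The first four entries above: 0.0000..., 0.1000..., 0.0100..., 0.0010...
listing = ['0000', '1000', '0100', '0010']
new_number = diagonalise(listing)
print(new_number)  # '1111' -- not equal to any listed entry
assert all(new_number[i] != row[i] for i, row in enumerate(listing))
```

However many entries you append – including diagonal numbers already produced – running `diagonalise` again manufactures another number the list has missed, which is the heart of Cantor’s argument.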

Gödel’s Incompleteness Theorems: These are not amenable to a blog-post-length demonstration, but amount to this – we can state mathematical truths that we know to be true, yet we cannot build a self-contained proof system that proves them all. The analogy with diagonalisation is that we know how to write out any real number between 0 and 1, but we cannot design a system (such as a computer program) that will write them all out – we have to keep ‘breaking’ the system by diagonalising it to find the missing numbers our rules will not generate for us. Gödel’s demonstration of this in 1931 was profoundly shocking to mathematicians, as it appeared to many of them to completely undermine the very purpose of maths.

Turing’s Halting Problem: Very closely related to both Gödel’s incompleteness theorems and Cantor’s diagonalisation proof is Alan Turing’s formulation of the ‘halting problem’. Turing proposed a basic model of a computer – what we now refer to as a Turing machine – as an infinite paper tape and a reader (of the tape) and writer (to the tape). The tape’s contents can be interpreted as instructions to move, to write to the tape or to change the machine’s internal state (and that state can determine how the instructions are interpreted).

Now such a machine can easily be made to go into an infinite loop, e.g.:

• The machine begins in the ‘start’ state and reads the tape. If it reads a 0 or 1 it moves to the right and changes its state to ‘even’.
• If the machine is in the state ‘even’ it reads the tape. If it reads a 0 or 1 it moves to the left and changes its state to ‘start’.

You can see that if the tape is marked with two 0s or two 1s or any combination of 0 or 1 in the first two places the machine will loop for ever.
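
A simulator for this little machine is a handful of lines. The sketch below is an illustration, with a step cap standing in for “forever”:

```python
def run(tape, max_steps=20):
    """Simulate the two-rule machine above.  In 'start' it moves right
    and enters 'even'; in 'even' it moves left and re-enters 'start'.
    Returns True if it halts (runs off the marked tape) within max_steps."""
    state, pos = 'start', 0
    for _ in range(max_steps):
        symbol = tape[pos] if 0 <= pos < len(tape) else None
        if symbol not in ('0', '1'):
            return True           # nothing to read: the machine halts
        if state == 'start':
            pos, state = pos + 1, 'even'
        else:                     # state 'even'
            pos, state = pos - 1, 'start'
    return False                  # still bouncing: an infinite loop

print(run('00'))  # False -- two marked cells trap it forever
print(run('0'))   # True  -- it steps off the tape and halts
```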

The halting problem is this – can we design a Turing machine that will tell us whether a given machine, with its instructions, will fall into an infinite loop? Turing proved we cannot – and his proof needs no assumptions about how such a machine might work… here’s my attempt to recreate it:

We can model any other Turing machine through a set of instructions on the tape, so if we have machine $T$ we can have it model machine $M$ with instructions $I$: i.e., $T(M, I)$.

Let us say $T$ can tell whether $M$ will halt or loop forever with instructions $I$ – we don’t need to understand how it does it, just suppose that it does. So if $(M, I)$ will halt $T$ writes ‘yes’, otherwise it writes ‘no’.

Now let us design another machine $T^\prime$ that takes $T(M,I)$ as its input, but here $T^\prime$ loops forever if $T$ writes ‘yes’ and halts if $T$ writes ‘no’.

Then we have:

$M(I)$ halts – $T(M, I)$ halts, writing ‘yes’ – $T^\prime(T(M,I))$ loops forever.

But what if we feed $T^\prime$ the input $T^\prime(T(M, I))$?

$M(I)$ halts or loops – $T(M, I)$ halts – $T^\prime(T(M,I))$ loops forever – $T^\prime(T^\prime(T(M,I)))$ – ??

Because if the second $T^\prime$ – the one applied to $T^\prime(T(M,I))$ – halted, that would imply that the first had halted, but the first is meant to loop forever, and so on…

As with Gödel we have reached a contradiction and so we cannot go further and must conclude that we cannot build a Turing machine (computer) that can solve the halting problem.
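
The contradiction can be caricatured in a few lines of Python. The oracle below is hypothetical – the argument’s whole point is that no correct one can exist – and the sketch simply shows that a candidate oracle answering “loops” is immediately refuted by the program built against it (an oracle answering “halts” would instead be refuted by a genuine infinite loop):

```python
def make_paradox(halts):
    """Build T', the self-defeating program: loop forever if the
    claimed oracle says we halt, halt at once if it says we loop."""
    def paradox():
        if halts(paradox):
            while True:   # oracle said 'halts' -- so loop forever
                pass
        # oracle said 'loops forever' -- so halt immediately
    return paradox

def oracle_says_loops(program):
    return False          # a (wrong) candidate: claims everything loops

p = make_paradox(oracle_says_loops)
p()   # returns at once -- contradicting the oracle's verdict of 'loops'
```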

Quantum mechanics: The classic, Copenhagen, formulation of quantum mechanics states that the uncertainty of the theory collapses when we observe the world, but the “many worlds” theory suggests that actually the various outcomes do all take place and we are just experiencing one of them at any given time. The claimed experimental backup for the many worlds theory comes from quantum ‘double-slit’ experiments, which suggest particles leave traces of their multiple states in every ‘world’.

What intrigues me: What if our limiting theories – the halting problem, Gödel’s incompleteness theorems, the uncountably infinite – were actually the equivalents of the Copenhagen formulation and, in fact, maths was also a “many worlds” domain where the incompleteness of the theories reflected the deeper reality – in other words, the Turing machine can both loop forever and halt? This is probably, almost certainly, a very naïve analogy between the different theories but, lying in the bath and contemplating my humiliation via incompleteness this morning, it struck me as worth exploring at least.

Schrödinger’s cat: for real

Quantum Mechanics is, along with General Relativity, the foundation stone of modern physics and few explanations of its importance are more famous than the “Schrödinger’s cat” thought experiment.

This seeks to explain the way “uncertainty” operates at the heart of the theory. Imagine a cat in a box with a poison gas capsule. The capsule is set off if a radioactive decay takes place. But radioactivity is governed by quantum mechanics – we can posit statistical theories about how likely the radioactive decay is to take place, but we cannot be certain – unless we observe. Therefore the best statement we can make of the physical state of the cat – so long as it remains unobserved – is to say it is both alive and dead.

Now physicists still argue about what happens next – the act of observing the cat. In the classical, or Copenhagen, view of quantum mechanics the wave equation “collapses” and observing forces the cat into a dead or alive state. In the increasingly influential “many worlds” interpretations, anything that can happen does, and an infinite number of yous sees an infinite number of dead or alive cats. But that particular mind-bender is not what we are about here. (NB: we should also note that in the real world the cat is “observed” almost instantaneously by the molecules in the box – this is a thought experiment, not a real one. Except… well, read on for that.)

The idea of the cat being alive and dead at once is what is known as “quantum superposition” – in other words both states exist at once, with the prevalence of one state over another being determined by statistics and nothing else.

Quantum superposition is very real and detectable. You may have heard of the famous interferometer experiments in which a single particle is sent through some sort of diffraction grating and yet the pattern detected is one of interference – as though the particle interfered with itself – in fact indicating that superposed states exist.
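
The arithmetic behind this is worth a short sketch. If the particle’s two paths contribute equal-amplitude terms to a single superposed state, the detected intensity is the squared magnitude of the *sum* of the amplitudes, which oscillates with the phase difference between the paths – whereas classical probabilities would simply add to a featureless constant:

```python
import cmath
import math

def two_slit_intensity(phase_difference):
    """|a1 + a2|^2 for two superposed unit-amplitude paths,
    with a1 = 1 and a2 = exp(i * phase_difference)."""
    return abs(1 + cmath.exp(1j * phase_difference)) ** 2

print(two_slit_intensity(0))        # 4.0 -- in phase: a bright fringe
print(two_slit_intensity(math.pi))  # ~0  -- out of phase: a dark fringe
```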

In fact the quantum theories suggest that superposition should apply not just to single particles but to everything and every collection of things in the universe. In other words cats could, and should, be alive and dead at the same time. If we can find a sufficiently large object for which superposition does not work then we would actually have to rethink the quantum theories and equations which have stood us in such good stead (for instance, making possible the computer you are reading this on).

And Stefan Nimmrichter of the Vienna Centre for Quantum Science and Technology and Klaus Hornberger of the University of Duisburg-Essen have proposed we use this measurement – how far up the scale of superposition we have got – as a way of determining just how successful quantum mechanics’ laws are (you can read their paper here).

They propose a logarithmic scale (see graph) based on the size of the object showing superposition – so the advance from the early-60s score of about 5 to today’s of about 12 might mean we can be some ten million times more confident in quantum theory’s universal application. (A SQUID is a very sensitive magnetometer which relies on superconductivity.)

And they say that having a 4kg ‘house cat’ superposed in two states 10cm apart (which might be taken for a good example of lying dead versus prowling around) would require a score of about 57 – in other words about $10^{45}$ times more experimental power than currently available.
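
Since the proposed scale is logarithmic (base 10), the gap between two scores converts straight into a power of ten of experimental reach – a quick sanity check of the numbers quoted above:

```python
def improvement_factor(score_now, score_target):
    """On a base-10 logarithmic scale, a gap in scores is an exponent of ten."""
    return 10 ** (score_target - score_now)

# Today's score of about 12 versus the house cat's 57:
assert improvement_factor(12, 57) == 10 ** 45
# The early-60s score of about 5 versus today's 12:
assert improvement_factor(5, 12) == 10 ** 7
```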

That probably means no one reading this is ever likely to see a successful demonstration that Schrödinger’s cat is rather more than a thought experiment, but it does give us a target to aim at!

Time’s arrow

English: Lee Smolin at Harvard University (Photo credit: Wikipedia)

The forward march of time is possibly the most basic and shared human experience. Whatever else may happen in our lives, none of us can make time run backwards. (The title of this post recalls Martin Amis’s brilliant novel premised on this idea – time running backwards – if you’ve read it you will understand why we are never likely to see it filmed, as 90 minutes of backwards time would be just too much to take.)

Yet, as Lee Smolin points out in this week’s New Scientist, our most fundamental theories of physics – quantum mechanics and general relativity – are time-free: they work just as well if time runs the other way round. Physicists square this circle by insisting on only time-forward solutions and by imposing special conditions on our universe. We have even invented a physical category – which has no material existence per se – called entropy, and demanded that it always increase.

The accepted physics leaves us in the difficult position of believing that “the future” is not the future at all – it exists and has always existed, but we are barred from getting there “ahead of time”. It’s a deep contradiction, though whether this is a flaw in the theories or in human comprehension is what the debate (such as it exists – those who challenge QM and GR are very much in the minority) is all about.

In Smolin’s view (or perhaps my interpretation of it) all of this violates the “Copernican principle” – that we observers are nothing special – which has guided much of physics’s advance over the last five centuries. So what if it is actually telling us that our theories are wrong and that, as Newtonian gravity is to general relativity, they are merely approximations?

Smolin’s argument is just this. He says we should base our theories on the fundamental observation that time flows in only one direction and so find deeper, truer theories based on unidirectional time.

One possible way the Higgs boson might be produced at the Large Hadron Collider. Similar images at: http://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/Conferences/2003/aspen-03_dam.ppt (Photo credit: Wikipedia)

Scientific American reports that the mass of the Higgs boson indicates that our universe is merely meta-stable and that, via quantum tunnelling, it is possible our universe could transition to a different, lower-energy state: in other words the universe (as we know it) would end.

The half-life of our current meta-stable state is reckoned to be many, many billions of years and so, we are assured, the chances of this actually happening to us in any given time are essentially zero.
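
For a half-life $T$, the chance of decay within a time $t$ is $1 - 2^{-t/T}$. With illustrative numbers of my own (not from the article – say a half-life of $10^{100}$ years against a human-scale century), a sketch shows why the risk is “essentially zero”:

```python
import math

def decay_probability(t, half_life):
    """P(decay within t) = 1 - 2**(-t/half_life), computed via expm1
    so that astronomically small probabilities don't round to zero."""
    return -math.expm1(-t / half_life * math.log(2))

print(decay_probability(100, 1e100))  # ~7e-99 -- negligible on any human timescale
```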

But surely, if we accept the “many worlds” interpretation of quantum mechanics, this means that our current universe has already decayed (and is, indeed, decaying all the time). We just hope and believe (based on the evidence) that we happen to live in a typical one of those many worlds, and so the chances of us seeing our universe decay are negligible. But what if that were wrong?

Patenting reality

(I was about to post something about this when I noticed the Stephen Fry nomination of Turing’s Universal Machine as a great British “innovation” and decided to write about that first … but the two dovetail as I hope you can see.)


I was alerted to this by an article in the latest edition of the New Scientist (subscription link) on whether scientific discoveries should be patentable. The New Scientist piece by Stephen Ornes argues strongly and persuasively that the maths at the heart of software should be protected from patents. But having now read the original article Ornes is replying to, I think he has missed the full and horrific scale of what is being proposed by David Edwards, a retired associate professor of maths at the University of Georgia in Athens.

Of course I am not suggesting that Edwards himself is evil, but his proposal certainly is: he writes, in the current issue of the Notices of the American Mathematical Society (“Platonism is the Law of the Land”), that not just mathematical discoveries but all scientific discoveries should be patentable: indeed he explicitly cites general relativity as an idea that could have been covered by a patent.

Edwards is direct in stating his aim:

Up until recently, the economic consequences of these restrictions in intellectual property rights have probably been quite slight. Similarly, the economic consequences of allowing patents for new inventions were also probably quite slight up to about 1800. Until then, patents were mainly import franchises. After 1800 the economic consequences of allowing patents for new inventions became immense as our society moved from a predominately agricultural stage into a predominately industrial stage. Since the end of World War II, our society has been moving into an information stage, and it is becoming more and more important to have property rights appropriate to this stage. We believe that this would best be accomplished by Congress amending the patent laws to allow anything not previously known to man to be patented.

Part of me almost wants this idea to be enacted because, like the failure of the prohibition of alcohol, it would teach an unforgettable lesson. But to someone who cares about science, and the good that science could do for humanity, it is deeply chilling.
For instance, it is generally accepted that there is some flaw in our theories of gravity (general relativity) and quantum mechanics in that they do not sit happily beside one another. Making them work together is a great task for physicists. And if we do it – if we find some new theory that links these two children of the 20th century – perhaps it will be as technologically important as it will be scientifically significant (after all, quantum mechanics gave us the transistor and general relativity the global positioning system). But if that theory was locked inside some sort of corporate prison for twenty or twenty-five years it could be that the technological breakthroughs would be delayed just as long.

The nine billion names of God

English: A GIF animation about the summary of quantum mechanics. Schrödinger equation, the potential of a “particle in a box”, uncertainty principle and double slit experiment. (Photo credit: Wikipedia)

If you are an easily offended religious fundamentalist you should probably stop reading this now.

“The nine billion names of God” is a famous science fiction short story by Arthur C. Clarke. In essence the plot is that some researchers complete a piece of work and suddenly notice that the world is being switched off.

A piece of whimsy, obviously. But what if it were something that could really happen (I am now risking a listing under “questions to which the answer is no” by John Rentoul)? If your scientific experiment reached such a conclusion, would you just let it run on, or switch it off (or maybe wait till your paper was accepted and then switch it off)?

The issue here is the question of whether or not the universe, as we see it, is in fact all just a gigantic computer simulation. As I have written before, if we accept that computing power will continue to grow without limit we are almost bound to accept that it is much more likely we are inside a simulated universe than a real one. Of course, if the universe were confirmed as a simulation it would make no physical difference to us (though I suspect the psychological blow to humanity would be profound) – so long as nobody turned the simulation off.

Testing whether the universe is simulated requires finding a fundamental minimal size beyond which we cannot further explore the universe: this is because computing a simulation relies on the fundamentally digital nature of a computer – you cannot get below one bit, however the bits are physically realised. Now, chance, God or the simulators (take your pick) have made this quite difficult via the Heisenberg uncertainty principle:

$\sigma_x\sigma_p \geq \frac{\hbar}{2}$

where $\sigma_x$ is the uncertainty in a particle’s position, $\sigma_p$ the uncertainty in its momentum and $\hbar$ a very small number – $1.055 \times 10^{-34}$ Joule-seconds. In most situations the very smallness of $\hbar$ means the uncertainty principle is of no concern, but once we start to reduce $\sigma_x$ (i.e., look at extremely small regions of space) then $\sigma_p$ starts to soar, and the amount of energy needed to conduct experiments also flies through the roof.
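
To see how quickly this bites, here is the bound worked through numerically. The Planck-scale figure is my own illustrative choice, not something from the text:

```python
HBAR = 1.055e-34  # Joule-seconds, as above

def min_momentum_uncertainty(sigma_x):
    """Lower bound on sigma_p from sigma_x * sigma_p >= hbar / 2."""
    return HBAR / (2 * sigma_x)

print(min_momentum_uncertainty(1e-10))   # ~5.3e-25 kg m/s at atomic scale
print(min_momentum_uncertainty(1.6e-35)) # ~3.3 kg m/s near the Planck length --
                                         # an absurd momentum spread for one particle
```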

But nature also gives us extreme energies for free in the form of cosmic rays and these could hold the clue as to whether the universe is grainy (hence a simulation) or smooth (at least at currently detectable sizes).

Footnote: the fundamental weakness in the argument seems to me to be that computing science increasingly suggests an unlimited growth in computing power is unlikely. But if you want to know more about this I really do recommend Brian Greene’s The Hidden Reality.

A question for a cosmologist about brane death


The “string theory revolution” began in 1984 and I graduated with my astrophysics degree in 1987 having, perhaps unsurprisingly, been taught nothing about it at all.

But now, reading The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos (a good book), I discover that we may all be living on a brane – a three-dimensional slab of reality “floating” inside ten-dimensional space. And indeed there may be many of these branes, perhaps just millimetres away from all of us, each of which might appear to its inhabitants (if its physical laws allow for any inhabitants, of course) as a fully dressed universe in its own right.

Now, so the theory goes, photons and indeed all particles of the electroweak or grand unified force (assuming it exists) cannot move between the branes, but gravitons – the theoretical (and so far undetected) quantum messengers of the gravitational force – can. Indeed this ability of gravitons to stray into other dimensions is what is believed to make the gravitational force seem so weak to us.

But what if a highly massive object in another brane were to pass close by us? Such an object could have a very strong gravitational field that we would feel in this universe/brane, and which could have drastic effects – perhaps putting us all at risk of “brane death”. Couldn’t it?

Well, I suspect I have misunderstood the mathematics of this. The fact that we don’t see galaxies ripped to pieces by the supermassive black holes at the centres of galaxies in other branes is rather more likely to lead me to believe that I have missed a point about how this works than to conclude the theory is that easily disproved.

Perhaps a reader might enlighten me?

How random is random?


What is a truly random event?

We are used to the idea that flipping a coin is likely to generate a random sequence of heads or tails, but of course it is perfectly possible to predict, using the rules of classical mechanics, the outcome of a series of coin tosses if we know the values of a not-very-long list of parameters. In other words, the outcome of flipping a coin is entirely deterministic; it is just that humans are unlikely to be able to faithfully replicate the same flick over and over again.
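
A toy model makes the point. The “physics” below is deliberately crude (a coin launched straight up, rotating at a fixed rate – an illustration, not a serious mechanical model), but it captures the essential fact: the outcome is a pure function of the initial parameters.

```python
G = 9.81  # m/s^2

def coin_flip(launch_speed, spin_rate):
    """Crude deterministic toss: the coin rises and falls under gravity
    while rotating at spin_rate revolutions per second; the face showing
    on landing is fixed entirely by the launch parameters."""
    flight_time = 2 * launch_speed / G           # up and back down
    half_turns = int(2 * spin_rate * flight_time)
    return 'heads' if half_turns % 2 == 0 else 'tails'

# Identical parameters, identical outcome -- every time:
assert coin_flip(2.0, 10.3) == coin_flip(2.0, 10.3)
# A small change in spin flips the result -- the 'randomness' is just
# our inability to reproduce the flick exactly:
print(coin_flip(2.0, 10.3), coin_flip(2.0, 11.1))  # heads tails
```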

Quantum events – such as $\alpha$-particle decay – are, as far as our knowledge today tells us, truly random, in the sense that they have a probability of occurring in a given time period but we have no way of knowing whether a given nucleus will decay at any given moment.

This is really a very profound finding – it implies that two physical objects, in this case atomic nuclei, behave in completely different ways despite all the physical parameters describing their existence being the same. That sounds like the exact opposite of everything that science has taught us about the nature of the universe.
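
A sketch of that strangeness: give $n$ “nuclei” exactly the same half-life – the same physics, with nothing to distinguish one from another – and each still decays at its own moment. (The pseudo-random generator here only stands in for whatever quantum randomness “is”; that substitution is of course the whole mystery.)

```python
import math
import random

def decay_times(n, half_life, seed=42):
    """Sample decay times for n identical nuclei: exponentially
    distributed with mean lifetime half_life / ln 2."""
    rng = random.Random(seed)
    tau = half_life / math.log(2)
    return [rng.expovariate(1 / tau) for _ in range(n)]

print(decay_times(5, half_life=10.0))  # five different moments, identical nuclei
```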

Thinking about this, one can quickly come to agree with Einstein that it must be based on a flawed understanding of physical reality as “God does not play dice”. But it is also the best explanation we have for that physical reality.

But why would a nucleus decay in one time period and not another? Can this really be an event without specific cause? Just a ‘randomly‘ chosen moment? But chosen by what?

Of course, some will say by “God”, but that really is metaphysics – a completely untestable and unverifiable proposition that merely kicks the physical puzzle into a domain beyond physics.