A while ago I read Max Tegmark's “Our Mathematical Universe” (Amazon link), which introduced me to the concept of “quantum suicide” and the idea that, if the “many worlds” interpretation of quantum physics is correct and death is the result of quantum processes (e.g., does this particular atomic nucleus decay, releasing radiation, causing a mutation, leading to cancer and so on), then we can actually expect to live forever – in the sense that our consciousness would continue on in the universe where all the quantum randomness was for the best.

It’s a powerful, if quite mind-bending idea, and it had quite a profound effect on me.

Until, that is, the end of January, when I slipped on a London street, smashed my face on the pavement and swallowed the broken piece of tooth. Three months later the pain in my upper left arm – with which I tried to break my fall – is a constant reminder that maybe Niels Bohr and the Copenhagen interpretation were right after all.

There is a fascinating article in this week’s New Scientist about the idea that quantum mechanics and general relativity could be linked via the idea of the “wormhole” – a fold in spacetime that links what appear to be two very distant parts of the universe.

The article – as is generally the case in a popular science magazine – is more hand-wavy than scientific, but the concepts involved don’t seem difficult to grasp and they might answer some of the more mysterious aspects of quantum mechanics – especially the problem of “spooky action at a distance”: quantum entanglement.

Quantum entanglement is the seeming evidence that two particles separated by distance appear to exchange information instantaneously: for when one particle changes state (or rather, when its state is observed), the other does too. The suggestion is that, actually, these two particles are not separated by distance but are linked by a wormhole.

Sounds like a piece of Hollywood science, for sure, but it is an idea based on our understanding of black holes – a prediction of Einstein’s general relativity that we have lots of (indirect) evidence for: these would seem to be surrounded by entangled particles – the so-called quantum firewall.

This is another one of those bizarre thoughts that cosmology throws up which manages to be both simple and profound.

Imagine the wave function for the whole universe.

By its nature the universe cannot change its quantum state: it’s the ultimate closed system. Of course there is a probabilistic distribution of energy inside the system but the total energy of the system does not change and therefore its quantum state cannot change either.

So, in quantum terms the universe is unchanging over time.

So, let us conduct a thought experiment that might suggest “you” can live forever.

In this world we assume that you don’t do anything dangerous – such as commuting to work. The only factors that could kill you are the normal processes of human ageing (and related factors such as cancer): your fate is completely determined by chemical processes in your body.

And we accept the “many worlds” view of quantum mechanics – in other words all the possible quantum states exist and so “the universe” is constantly multiplying as more and more of these worlds are created.

Now, if we accept that the chemical processes are, in the end, driven by what appears to us as stochastic (random) quantum effects – in other words chemicals react because atoms/electrons/molecules are in a particular range of energies governed by the quantum wave equation – then it must surely be the case that in one of the many worlds the nasty (to our health) reactions never happen because “randomly” it transpires that the would-be reactants are never in the right energy state at the right time.

To us in the everyday world our experience is that chemical reactions “just happen”, but in the end that is a statistically driven thing: there are billions of carbon atoms in the piece of wood we set fire to and their state is changing all the time so eventually they have the energy needed to “catch fire”. But what if, in just one quantum world of many trillions, the wood refuses to light?
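The statistics here can be made concrete with a toy calculation. A minimal sketch, assuming each of N independent reaction “attempts” succeeds with probability p – both numbers purely illustrative, nothing like real chemistry: the chance that none succeed, i.e. the wood never lights, is (1 − p)^N.

```python
import math

# Purely illustrative assumptions -- not real chemical kinetics:
p = 1e-3   # assumed chance a single reaction "attempt" succeeds
N = 10**6  # assumed number of attempts

# Work with the log-probability to avoid floating-point underflow:
# log[(1 - p)^N] = N * log(1 - p)
log_p_no_reaction = N * math.log1p(-p)
print(f"P(no reaction at all) ~ 10^{log_p_no_reaction / math.log(10):.0f}")
```

Even for these modest toy numbers the no-fire branch has a probability of roughly 10^-435 – rare beyond imagining, but strictly non-zero, which is all the many-worlds argument needs.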

So, too, for us humans: in one world the bad genetic mutations that cause ageing or cancer just don’t happen, and so “you” (one of many trillions of “you”s) stays young forever.

The obvious counter-argument is: where are these forever-young people? The 300-year-olds, the 3,000-year-olds? Leaving aside Biblical literalism, there is no evidence that such people have ever lived.

But that is surely just because this is so very, very rare that you could not possibly expect to meet such a person. After all, around 70–100 billion humans have ever been born and each of them has around 37 trillion cells, which live for an average of a few days (probably) – so in a year something like 10^24 cell-division events – each of which could spawn a new quantum universe – take place. That means the chances of you being in the same universe as one of the immortals are pretty slim.
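The arithmetic can be checked on the back of an envelope; a quick sketch using the post’s own rough figures (every constant below is an assumption from the text, not data):

```python
# Back-of-envelope check of the branching-event count, using the
# post's own (assumed) figures rather than measured data.
humans_ever_born = 100e9   # upper end of "70-100 billion"
cells_per_human = 37e12    # "around 37 trillion cells"

# Counting just one division per cell gives the order of magnitude:
events = humans_ever_born * cells_per_human
print(f"{events:.1e} cell-division events")  # order 10^24
```

If each cell also divides many times a year (they “live for a few days”), the true count is higher still – which only strengthens the claim that any given immortal branch is vanishingly unlikely to be yours.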

Yet, on the other hand, we all know someone who seems to never age as quickly as we do…

…I’d be really interested in hearing arguments against the hypothesis from within the many worlds view of quantum physics.

Dr Johnson famously settled an argument on the existence or non-existence of the physical universe by kicking a heavy stone and saying “I refute it thus”.

But when it comes to the science of the micro-universe, the quantum world, no heavy stones seem to be around to be kicked.

I have been thinking about this since I posted a blog about Antony Valentini’s idea that we could use some very rare particles to communicate faster than the speed of light.

The fundamental difficulty with quantum mechanics is that it says we cannot know, with total accuracy, both the position and momentum of a particle. This “uncertainty” is what creates the “spooky action at a distance” – because if we measure the momentum of one paired particle then uncertainty and energy conservation laws “appear” to make the other particle assume a certain state instantaneously.

In the “Copenhagen interpretation” we are essentially asked to accept that this is due to an instantaneous “collapse” of the wave function we have been using to describe the system up to that point. It’s as if our quantum rules are just a window on to a “real” physical world and our poking shakes up what is going on behind the scenes in ways we cannot hope to understand.

That’s not very convincing, though (even if it “works”).

So, what are the alternatives?

Valentini is reviving an idea of Louis de Broglie that was rejected in favour of the “Copenhagen interpretation”: namely that our paired particles remain linked by a “pilot wave” that communicates the state change instantly.

That, though, appears to offend against the physics of the world of general relativity – we are conditioned to think such instant communication is impossible because our physics tells us that we need infinite energy to move a massive body at the speed of light: hence making that an unbreakable speed limit.

And then there is the “many worlds” interpretation – namely that all that might happen does happen and so there are an infinite number of those paired particles and “our universe” is just one of an infinite number.

None of them really seem that satisfactory an explanation.

Well, the answer is pretty plain: Einstein’s theory of general relativity – which even in the last month has added to its already impressive list of predictive successes – tells us that to travel at the speed of light a massive body would require an infinite amount of propulsive energy. In other words, things are too far away and travel too slowly for us to ever hope to meet aliens.

But what if – and it’s a very big if – we could communicate with them instantaneously? GR tells us massive bodies cannot travel at the speed of light – or rather along a null worldline, which is what really matters if you want to be alive when you arrive at your destination – but information has no mass as such.

Intriguingly, an article in the current edition of the New Scientist looks at ways in which quantum entanglement could be used to pass information – instantaneously – across any distance at all. Quantum entanglement is one of the stranger things we can see and measure today – Einstein dismissed it as “spooky action at a distance” – and essentially means that we can take two paired particles and, by measuring the state of one, instantaneously see the other member of the pair fall into a particular state (e.g., if the paired particles are electrons and we measure one’s quantum spin, the other is instantly seen to have the opposite spin – no matter how far away it is at the time).
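The anticorrelation described here can be sketched as a toy simulation. This models only the bookkeeping of same-axis spin measurements on a singlet pair – it says nothing about the mechanism, and measurements along different axes (where Bell’s theorem rules out this kind of simple local model) are deliberately not modelled:

```python
import random

def measure_singlet_pair():
    # For a singlet pair measured along one shared axis, each outcome
    # is individually random but the two are always opposite.
    a = random.choice(["up", "down"])
    return a, ("down" if a == "up" else "up")

pairs = [measure_singlet_pair() for _ in range(10_000)]
assert all(a != b for a, b in pairs)  # perfectly anticorrelated
```

Each side sees pure coin-flip randomness on its own; only comparing the two records reveals the correlation – which is also why entanglement alone carries no usable message.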

Entanglement does not allow us to transmit information though, because of what the cosmologist Antony Valentini calls, in an analogy with thermodynamic “heat death”, the “quantum death” of the universe – in essence, he says that in the instants following the Big Bang physical particles dropped into a state in which – say – all electron spins were completely evenly distributed, meaning that we cannot find electrons with which to send information – just random noise.

But – he also suggests – inflation, the super-rapid expansion of the very early universe, may also have left us with a very small proportion of particles that escaped “quantum death” – just as inflation meant that the universe is not completely smooth, because it pushed things apart at such a rate that random quantum fluctuations were left as a permanent imprint.

If we could find such particles we could use them to send messages across the universe at infinite speed.

Perhaps we are already surrounded by such “messages”: those who theorise about intelligent life elsewhere in the universe are puzzled that we have not yet detected any signs of it, despite now knowing that planets are extremely common. That might suggest either intelligent life is very rare, or very short-lived or that – by looking at the electromagnetic spectrum – we are simply barking up the wrong tree.

Before we get too excited I have to add a few caveats:

While Valentini is a serious and credible scientist and has published papers which show, he says, the predictive power of his theory (NB he’s not the one speculating about alien communication – that’s just me) – such as the observed characteristics of the cosmic microwave background (an “echo” of the big bang) – his views are far from the scientific consensus.

To test the theories we would have to either be incredibly lucky or detect the decay products of a particle – the gravitino – we have little evidence for beyond a pleasing theoretical symmetry between what we know about “standard” particle physics and theories of quantum gravity.

Even if we did detect and capture such particles, they alone would not allow us to escape the confines of general relativity – they are massive, so while they could in theory allow two parties to communicate instantly, the parties themselves would still be confined by GR’s spacetime. Communicating with aliens would require us and them in some way to use such particles that were already out there, and perhaps have been whizzing about since the big bang itself.

But we can dream!

Update: You may want to read Andy Lutomirski’s comment which, I think it’s fair to say, is a one-paragraph statement of the consensus physics. I am not qualified to say he’s wrong and I’m not trying to – merely looking at an interesting theory. And I have tracked down Antony Valentini’s 2001 paper on this too.

Feynman argues that there is no radiation without absorption: in other words a tree that falls in an empty forest does indeed make no sound (if we imagine the sound is transmitted by photons, that is).

This sounds like a gross violation of all common sense – how could a photon know when it leaves a radiating body that it is to be absorbed?

But then relativity comes to our rescue – because along the photon’s (null) path the journey from radiator to absorber takes no proper time at all: from the photon’s point of view it is instantaneous.

But how can a body that exists for no time at all, exist at all?

Then again, my assumption in asking this question is that time is in some sense privileged as a dimension of spacetime. This is a pretty deep controversy in theoretical physics these days and I am not qualified to shed much light on it. But let us assume that a body can exist with a zero dimension in time but real dimensions in space: can we then have bodies with zero dimensions in space but a real dimension in time? If so, what are they?

My one problem with it was its explanation of “stimulated emission”. Now, as an undergraduate, I remember I understood this quite well – it came up in a discussion of MASERs (intense microwave sources in deep space) as opposed to the more familiar LASERs, if I remember correctly. But that’s a long time ago.

A post inspired by Gödel, Escher, Bach; Complexity: A Guided Tour; an article in this week’s New Scientist about the clash between general relativity and quantum mechanics; and personal humiliation.

The everyday incompleteness: This is the personal humiliation bit. For the first time ever I went on a “Parkrun” today – the 5km Finsbury Park run – but I dropped out after 2km, at the top of a hill and about 250 metres from my front door: I simply thought this is meant to be a leisure activity and I am not enjoying it one little bit. I can offer some excuses – it was really the first time I had ever run outdoors, so it was a bit silly to try a semi-competitive environment for that, and I had not warmed up properly, so the first 500 metres were about simply getting breathing and limbs in co-ordination – mais qui s’excuse, s’accuse (he who excuses himself, accuses himself).

But the sense of incompleteness I want to write about here is not that everyday incompleteness, but a more fundamental one – our inability to fully describe the universe, or rather, a necessary fuzziness in our description.

Let’s begin with three great mathematical or scientific discoveries:

The diagonalisation method and the “incompleteness” of the real numbers: In 1891 Georg Cantor published one of the most beautiful, important and accessible arguments in set theory – the diagonalisation argument – which proved that the infinity of the real numbers is qualitatively different from, and greater than, the infinity of the counting numbers.

The infinity of the counting numbers is just what it sounds like – start at one and keep going and you go on infinitely. This is the smallest infinity – called aleph null (ℵ₀).

Real numbers include the irrationals – those which cannot be expressed as fractions of counting numbers (Pythagoras shocked himself by discovering that √2 was such a number). So the reals are all the numbers along a counting line – every single infinitesimal point along that line.

Few would disagree that there are, say, an infinite number of points between 0 and 1 on such a line. But Cantor showed that the number was uncountably infinite – i.e., we cannot just start counting from the first point and keep going. Here’s a brief proof…

Imagine we start to list all the points between 0 and 1 (in binary) – and we number each point, so…

1 is 0.00000000…..
2 is 0.100000000…..
3 is 0.010000000……
4 is 0.0010000000….
n is 0.{n − 2 zeros}1{000……} (for n ≥ 2)

You can see this can go on for an infinitely countable number of times….

and so on. Now we decide to ‘flip’ the 0 or 1 at the digit position matching each number’s index, so we get:

1 is 0.1000000….
2 is 0.1100000….
3 is 0.0110000….
4 is 0.00110000….

And so on. But although we have already used up all the counting numbers we are now generating new numbers which we have not been able to count – this means we have more than ℵ₀ numbers in the reals, surely? But, you argue, let’s just interleave these new numbers into our list like so….

1 is 0.0000000….
2 is 0.1000000…..
3 is 0.0100000….
4 is 0.1100000….
5 is 0.0010000….
6 is 0.0110000….

And so on. This is just another countably infinite set you argue. But, Cantor responds, do the ‘diagonalisation’ trick again and you get…

1 is 0.100000…..
2 is 0.110000….
3 is 0.0110000….
4 is 0.1101000…
5 is 0.00101000…
6 is 0.0110010….

And again we have new numbers, busting the countability of the set. And the point is this: no matter how many times you add the new numbers produced by diagonalisation into your counting list, diagonalisation will produce numbers you have not yet accounted for. From set theory you can show that while the counting numbers are of order (analogous to size) ℵ₀, the reals are of order 2^ℵ₀ – a far, far bigger number – literally an uncountably bigger number.
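The digit-flipping step is mechanical enough to run. A finite illustration (toy data only – the real argument of course needs the full infinite list): flip the n-th digit of the n-th expansion to build a number guaranteed to differ from every entry in at least one position.

```python
def diagonalise(listing):
    # Flip the n-th digit of the n-th number: the result differs
    # from listing[n] at position n, so it is in none of the rows.
    return "".join("1" if row[n] == "0" else "0"
                   for n, row in enumerate(listing))

listing = ["0000", "1000", "0100", "0010"]  # digits after "0."
new_number = diagonalise(listing)
print(new_number)
assert new_number not in listing  # always escapes the list
```

However many new rows you append, rerunning diagonalise on the extended list manufactures yet another escapee – which is the whole point of the proof.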

Gödel’s Incompleteness Theorems: These are not amenable to a blog post length demonstration, but amount to this – we can state mathematical statements we know to be true but we cannot design a complete proof system that incorporates them – or we can state mathematical truths but we cannot build a self-contained system that proves they are true. The analogy with diagonalisation is that we know how to write out any real number between 0 and 1, but we cannot design a system (such as a computer program) that will write them all out – we have to keep ‘breaking’ the system by diagonalising it to find the missing numbers our rules will not generate for us. Gödel’s demonstration of this in 1931 was profoundly shocking to mathematicians as it appeared to many of them to completely undermine the very purpose of maths.

Turing’s Halting Problem: Very closely related to both Gödel’s incompleteness theorems and Cantor’s diagonalisation proof is Alan Turing’s formulation of the ‘halting problem’. Turing proposed a basic model of a computer – what we now refer to as a Turing machine – as an infinite paper tape and a reader (of the tape) and writer (to the tape). The tape’s contents can be interpreted as instructions to move, to write to the tape or to change the machine’s internal state (and that state can determine how the instructions are interpreted).

Now such a machine can easily be made to go into an infinite loop, e.g.:

The machine begins in the ‘start’ state and reads the tape. If it reads a 0 or 1 it moves to the right and changes its state to ‘even’.

If the machine is in the state ‘even’ it reads the tape. If it reads a 0 or 1 it moves to the left and changes its state to ‘start’.

You can see that if the tape is marked with two 0s or two 1s or any combination of 0 or 1 in the first two places the machine will loop for ever.
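The little two-state machine above is simple to simulate. A minimal sketch, bounding the run at a fixed number of steps since we cannot wait forever:

```python
# Simulate the two-state machine described above: 'start' moves right,
# 'even' moves left, on reading a 0 or 1; anything else halts.
def run(tape, max_steps=20):
    state, pos = "start", 0
    for step in range(max_steps):
        symbol = tape[pos]
        if state == "start" and symbol in "01":
            pos, state = pos + 1, "even"   # move right
        elif state == "even" and symbol in "01":
            pos, state = pos - 1, "start"  # move left
        else:
            return step  # halted at this step
    return None  # step budget exhausted: looping on this input

assert run("00") is None  # bounces between the first two cells forever
assert run("0 ") == 1     # a blank second cell makes it halt at once
```

Of course the bounded simulation only ever shows “still running after N steps” – deciding that a machine will *never* halt is exactly what the halting problem says no program can do in general.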

The halting problem is this – can we design a Turing machine that will tell us whether a given machine with its instructions will fall into an infinite loop? Turing proved we cannot, without having to discuss any particular methodology. Here’s my attempt to recreate his proof:

We can model any other Turing machine through a set of instructions on the tape, so if we have a machine T we can have it model a machine M running with instructions I: i.e., T(M, I).

Let us say T can tell whether M will halt or loop forever when run with instructions I – we don’t need to understand how it does it, just suppose that it does. So if M(I) will halt, T writes ‘yes’; otherwise it writes ‘no’.

Now let us design another machine, T′, that takes the same input but loops forever if T writes ‘yes’ and halts if T writes ‘no’.

Then we have:

M(I) halts – T(M, I) writes ‘yes’ – T′(M, I) loops forever; M(I) loops – T(M, I) writes ‘no’ – T′(M, I) halts.

But what if we feed T′ the input of T′ itself?

T′(T′) halts – T(T′, T′) writes ‘yes’ – T′(T′) loops forever – ??

Because if T′(T′) loops forever then T should have written ‘no’ – which would make T′(T′) halt – but then T should have written ‘yes’, and so on…

As with Gödel we have reached a contradiction and so we cannot go further and must conclude that we cannot build a Turing machine (computer) that can solve the halting problem.
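The shape of the contradiction can be sketched in code. Here `halts()` stands for the hypothetical oracle the proof assumes and then demolishes – the stub below is an assumption for illustration, not a real function (none can exist):

```python
def halts(program, data):
    """Hypothetical oracle: True iff program(data) halts.
    The proof above shows no such oracle can be built."""
    raise NotImplementedError("no such oracle exists")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about us:
    if halts(program, program):  # oracle says it halts...
        while True:              # ...so loop forever
            pass
    # oracle says it loops forever, so halt immediately

# Feeding troublemaker to itself is the self-reference step:
# whatever halts() answers about troublemaker(troublemaker) is wrong.
```

The `while True` branch is never actually reached here – the stub raises instead of answering – which is fitting, since the whole point is that no implementation of `halts()` could ever answer consistently.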

Quantum mechanics: The classic, Copenhagen, formulation of quantum mechanics states that the uncertainty of the theory collapses when we observe the world, but the “quantum worlds” theory suggests that actually the various outcomes do take place and we are just experiencing one of them at any given time. The experimental backup for the many worlds theory comes from quantum ‘double-slit’ experiments which suggest particles leave traces of their multiple states in every ‘world’.

What intrigues me: What if our limiting theories – the halting problem, Gödel’s incompleteness theorem, the uncountable infinite, were actually the equivalents of the Copenhagen formulation and, in fact, maths was also a “many world” domain where the incompleteness of the theories was actually the deeper reality – in other words the Turing machine can both loop forever and halt? This is probably, almost certainly, a very naïve analogy between the different theories but, lying in the bath and contemplating my humiliation via incompleteness this morning, it struck me as worth exploring at least.

Quantum Mechanics is, along with General Relativity, the foundation stone of modern physics and few explanations of its importance are more famous than the “Schrödinger’s cat” thought experiment.

This seeks to explain the way “uncertainty” operates at the heart of the theory. Imagine a cat in a box with a poison gas capsule. The capsule is set off if a radioactive decay takes place. But radioactivity is governed by quantum mechanics – we can posit statistical theories about how likely the radioactive decay is to take place but we cannot be certain – unless we observe. Therefore the best description we can give of the physical state of the cat – so long as it remains unobserved – is to say it is both alive and dead.
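The “statistical theory” for the decay is the exponential law. A small sketch of the probability that the capsule has been triggered by time t, with an illustrative, made-up half-life:

```python
import math

half_life = 1.0                # assumed: one half-life unit (illustrative)
lam = math.log(2) / half_life  # decay constant

def p_decayed(t):
    # Probability that at least one decay (capsule triggered) by time t
    return 1 - math.exp(-lam * t)

# After exactly one half-life the capsule is a 50/50 proposition --
# which is why the thought experiment is usually posed at that moment:
assert abs(p_decayed(1.0) - 0.5) < 1e-9
```

Quantum mechanics gives this curve exactly, but refuses to say *which* branch any particular box is on until it is observed – that is the whole puzzle.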

Now physicists still argue about what happens next – the act of observing the cat. In the classical, or Copenhagen, view of quantum mechanics the “wave equation collapses”: observing forces the cat into a dead or alive state. In the increasingly influential “many worlds” interpretations anything that can happen does, and an infinite number of yous see an infinite number of dead or alive cats. But that particular mind bender is not what we are about here. (NB we should also note that in the real world the cat is “observed” almost instantaneously by the molecules in the box – this is a thought experiment, not a real one, except… well read on for that.)

The idea of the cat being alive and dead at once is what is known as “quantum superposition” – in other words both states exist at once, with the prevalence of one state over another being determined by statistics and nothing else.

Quantum superposition is very real and detectable. You may have heard of the famous interferometer experiments where a single particle is sent through some sort of diffraction grating and yet the pattern detected is one of interference – as though the particle interfered with itself – in fact this indicates that superposed states exist.

In fact the quantum theories suggest that superposition should apply not just to single particles but to everything and every collection of things in the universe. In other words cats could and should be alive and dead at the same time. If we can find a sufficiently large object for which superposition does not work then we would actually have to rethink the quantum theories and equations which have stood us in such good stead (for instance making the computer you are reading this on possible).

And Stefan Nimmrichter of the Vienna Centre for Quantum Science and Technology and Klaus Hornberger of the University of Duisburg-Essen have proposed we use this measurement – how far up the scale of superposition we have got – as a way of determining just how successful the laws of quantum mechanics are (you can read their paper here).

They propose a logarithmic scale (see graph) based on the size of the object showing superposition – so the advance from the early 60s score of about 5 to today’s of about 12 might mean we can be one million times more confident in quantum theory’s universal application. (A SQUID is a very sensitive magnetometer which relies on superconductivity.)

And they say that having a 4kg ‘house cat’ be superposed in two states 10cm apart (which might be taken for a good example of lying dead versus prowling around) would require a score of about 57 – in other words vastly more experimental power than currently available.

That probably means no one reading this is ever likely to see a successful demonstration that Schrödinger’s cat is rather more than a thought experiment, but it does give us a target to aim at!