The nine billion names of God


A GIF animation summarising quantum mechanics: the Schrödinger equation, the potential of a “particle in a box”, the uncertainty principle and the double-slit experiment. (Photo credit: Wikipedia)

If you are an easily offended religious fundamentalist you should probably stop reading this now.

“The nine billion names of God” is a famous science fiction short story by Arthur C. Clarke. In essence the plot is that some researchers complete a piece of work and suddenly notice that the world is being switched off.

A piece of whimsy, obviously. But what if it were something that could really happen (I am now risking a listing under “questions to which the answer is no” by John Rentoul)? If your scientific experiment reached a conclusion, would you just let it run on, or switch it off (or maybe wait till your paper was accepted and then switch it off)?

The issue here is the question of whether or not the universe, as we see it, is in fact all just a gigantic computer simulation. As I have written before, if we accept that computing power will continue to grow without limit, we are almost bound to accept that it is much more likely we are inside a simulated universe than a real one. Of course, if the universe were confirmed as a simulation it would make no physical difference to us (though I suspect the psychological blow to humanity would be profound), so long as nobody turned the simulation off.

Testing whether the universe is simulated requires finding a fundamental minimal size beyond which we cannot explore it any further: this is because computing a simulation relies on the fundamentally digital nature of a computer – you cannot get below one bit, however the bits are scaled. Now, chance, God, the simulators (take your pick) have made this quite difficult via the Heisenberg Uncertainty Principle:

\sigma_x\sigma_p \geq \frac{\hbar}{2}

where \sigma_x is the uncertainty in a particle’s position, \sigma_p the uncertainty in its momentum and \hbar a very small number – 1.055 x 10^{-34} joule-seconds. In most situations the very smallness of \hbar means the uncertainty principle is of no concern, but once we start to reduce \sigma_x (ie look at extremely small parts of space) then \sigma_p starts to soar, and the amount of energy needed to conduct experiments flies through the roof.
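
To see how quickly the bound bites, here is a toy calculation of my own (the probe scales below are my illustrative choices, not figures from the post or the book):

```python
# Toy calculation: the minimum momentum uncertainty implied by the
# Heisenberg bound sigma_x * sigma_p >= hbar / 2, for ever smaller
# position uncertainties. Purely illustrative numbers.

HBAR = 1.055e-34  # reduced Planck constant, joule-seconds

# metres: roughly an atom, a nucleus, far below, and near the Planck scale
for sigma_x in (1e-10, 1e-15, 1e-20, 1e-35):
    sigma_p = HBAR / (2 * sigma_x)  # minimum momentum uncertainty, kg m/s
    print(f"sigma_x = {sigma_x:.0e} m  ->  sigma_p >= {sigma_p:.2e} kg m/s")
```

Every factor of ten gained in position costs a factor of ten in momentum, and hence in the energy any probe of that scale would need.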

But nature also gives us extreme energies for free in the form of cosmic rays and these could hold the clue as to whether the universe is grainy (hence a simulation) or smooth (at least at currently detectable sizes).

Footnote: the fundamental weakness in the argument seems to me to be that computer science increasingly suggests an unlimited growth in computing power is unlikely. But if you want to know more about this I really do recommend Brian Greene’s The Hidden Reality.

A (final) speculation from “The Hidden Reality”


I have just finished Brian Greene’s The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos, so I want to take this opportunity to once again endorse it and also to restate or reprise one of his speculations in the book: this time one grounded firmly in computing rather than string theory or conventional cosmology.

The key question at the heart of this speculation is this: given that we have seen computational power double every 18–24 months or so for around a century now, do you think we will reach the point where we can convince computer-based agents that they are in a physically real world when they are, in fact, in a computer-based simulation?

And if you answer yes to that question, do you realise that you are more or less forced to accept that you are one such agent and what you see around you as real is, in fact, a computer-generated “virtual reality”?

The reasoning is simple: once the technology to build a sufficiently convincing virtual world exists, such worlds are likely to grow rapidly in number, and there will be far more ‘people’ who are simulated agents than people who are physically real. By simple averaging, you and I are far more likely to be such agents than to be real.
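
The averaging can be made vivid with a toy count (the numbers below are entirely my own, chosen purely for illustration):

```python
# Toy counting argument: if the one "real" world eventually runs many
# convincing simulations, each containing many conscious agents, what
# fraction of all observers are the physically real ones?
# All numbers are arbitrary assumptions for illustration.

real_people = 1e10            # observers in the one physical world
simulations = 1000            # simulations run by that world
agents_per_simulation = 1e10  # simulated observers in each

total_observers = real_people + simulations * agents_per_simulation
print(f"Chance a randomly chosen observer is real: {real_people / total_observers:.4%}")
# ~0.1% with these numbers -- and it only shrinks as simulations multiply.
```

Whatever figures you plug in, once simulated observers outnumber real ones the odds tip the same way.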

It’s a truly mind-bending thought.

Saying ‘no’ to the question requires putting a hard limit on human progress in the field of computation. How can that accord with our experience of ever faster, more powerful and ubiquitous computing?

Well, there are some reasons – but even they are open to attack.

Reality appears smooth and continuous, but smooth and continuous numbers are not computable. Anything a computer outputs is of finite extent – otherwise it would never finish computing. For instance, no computer can ever write out \pi in full, only render an approximation. Indeed, most real numbers cannot be computed at all: the computable numbers form only a countable sliver of the continuum.
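
To make the \pi point concrete, here is a minimal sketch of my own (not from the book) of a program grinding out ever better finite approximations without ever finishing the job:

```python
# A computer can only ever emit a finite approximation to pi.
# The Leibniz series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ... converges (slowly),
# but however many terms we take, the program halts at finite precision.

def leibniz_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 100_000):
    print(f"{n:>7} terms: {leibniz_pi(n):.10f}")
# 3.1415926535... is never reached exactly; every run stops at an approximation.
```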

But this argument – which sounds like a sure-fire winner for knocking down the idea that our reality is, in fact, a simulation – has a weakness: we cannot measure smoothness either. Our measurements are discrete, and while it appears to us that the world is smooth and continuous, maybe it is not – perhaps the computed approximation is simply beyond our present ability to measure.

If, at some point in the future, we discovered a finite limit to measurement that was beyond physical explanation, it would surely confirm we were in a simulation.

Why isn’t the universe of infinite density?


Brian Greene at the World Science Festival launch press conference (Photo credit: Wikipedia)

Another speculation produced by Brian Greene’s The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos.

Imagine the universe were infinite (along the lines of the “quilted multiverse” – namely that it stretched on and on and we could only see a part of it). Assuming the “cosmological principle” applied – that one bit of the universe looks much like any other – that would imply there were an infinite number of hydrogen atoms out there.

So, why is the universe not of infinite density? Because surely Schrödinger’s equation means that there is a finite probability that electrons could be in any given region of space? (Doesn’t it?)

For any given electron the probability in “most” regions of space is zero in any measurable sense. But if there are an infinite number of electrons then the probability at a given point that there is an electron there is infinite, isn’t it?
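
One way to make that worry precise (this is my formulation, not the post’s or the book’s): for a single electron the chance of finding it in a fixed region R of space is finite,

P_i(R) = \int_R |\psi_i(\mathbf{x})|^2 \, d^3x \leq 1

and summing over all the electrons gives the expected number found in R,

\langle N_R \rangle = \sum_{i=1}^{\infty} P_i(R)

The question is then whether this sum must diverge once the number of electrons is infinite – which is the infinite-density worry above.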

OK, I have obviously got something wrong here because nobody is dismissing the “quilted multiverse” idea so simply – but could someone explain what it is I have got wrong?

Update: Is this because space-time is a continuum and the number of electrons a countable infinity?

Cosmologists’ problems with aleph-null and the multiverse


Cyclic progressions of the universe (Photo credit: Wikipedia)

This is another insight from Brian Greene’s book The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos – well worth reading.

Aleph-null (\aleph_0) is the order (size) of any countably infinite set. The counting numbers are the obvious example: one can start from one and keep on going. But any infinite set whose members can be numbered off in this way has order \aleph_0. (There are other infinities – eg that of the continuum – which are of a different, larger size.)

It is in the nature of \aleph_0 that proportions of it are also infinite with the same order. So 1% of a set with order \aleph_0 is also of order \aleph_0. To understand why, think of the counting numbers. If we took a set of 1% of them, the first member would be 1, the second 101, the third 201 and so on. It would seem this set is \frac{1}{100}^{th} the size of the counting numbers, but because the counting numbers are infinite with order \aleph_0, the 1% set must also be infinite with the same order. In other words, paradoxical as it seems, the two sets are of the same order (size) – \aleph_0.
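
Here is a quick sketch of the pairing behind that argument – each counting number matched with one member of the “1% set”, so neither set runs out before the other (the function name is just mine, for illustration):

```python
# Pair each counting number n with the n-th member of the "1% set"
# (1, 101, 201, ...). Every counting number gets exactly one partner and
# vice versa, which is what it means for both sets to share order aleph-null.

def nth_member_of_one_percent_set(n: int) -> int:
    return 100 * (n - 1) + 1

for n in range(1, 6):
    print(n, "<->", nth_member_of_one_percent_set(n))
# 1 <-> 1, 2 <-> 101, 3 <-> 201, 4 <-> 301, 5 <-> 401
```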

The problem for cosmologists comes when considering whether we can use observations of our “universe” to point to the experimental provability of theories of an infinite number of universes – the multiverse.

The argument runs like this: we have a theory that predicts a multiverse. Such a theory also predicts that certain types of universe are more typical, perhaps much more typical than others. Applying the Copernican argument we would expect that we, bog-standard observers of the universe – nothing special in other words – are likely to be in one of those typical universes. If we were in a universe that was atypical it would weaken the case for the theory of the multiverse.

But what if there were an infinite number of universes in the multiverse? Then, no matter how atypical any particular universe was (as measured by the values of various physical constants), there would be an infinite number of such atypical universes. It would hardly weaken the case for the multiverse theory if it turned out we were stuck inside one of these highly atypical universes: because there would be an infinite number of them.

This “measure problem” is a big difficulty for cosmologists who, assuming we cannot build particle accelerators much bigger than the Large Hadron Collider, are stuck with only one other “experiment” to observe – the universe. If all results of that experiment are as likely as any other, it is quite difficult to draw conclusions.

Greene seems quite confident that the measure problem can be overcome. I am not qualified to pass judgement on that, though it is not going to stop me from saying it seems quite difficult to imagine how.

The Copernican principle and the multiverse


Brian Greene (Photo credit: Marjorie Lipan)

Thinking about this leaves my mind in a bit of a twist, but it is worth exploring.

I am still reading Brian Greene’s The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos: a great book (just enough maths in the footnotes to make me feel I haven’t completely lost touch yet, with a clear narrative in plain English in the body).

In the book there is, understandably enough, a fair bit of discussion of the “cosmological constant” – the anti-gravitational push that is driving the accelerating expansion of the universe.

It turns out that this force is just about the right value to allow galaxies to form (if it were too high, gravity would not be able to overcome it; if it were too low, gravity might just throw everything into one lump or a black hole). Without galaxies, goes the reasoning (after Steven Weinberg), there would be no life – galaxies allow the mixing of the various elements (eg everything on Earth that comes higher in the periodic table than iron was manufactured in a supernova, while everything heavier than helium surely got here in the same explosive way – we are not so much what stars are made of as made of stars).

But there are about 10^{124} different values of the cosmological constant that could have a measurable effect on our universe’s physical laws, argues Brian Greene, and, via the Copernican Principle (that humans are not at the centre of the universe), he essentially demands that there be approximately (in fact, rather more than) that number of universes out there, to show that our universe, with its physical laws (or, more accurately, its physical constants – the laws being immutable), is just another typical drop-off point.

And, happily for Greene, he points out that string theory allows for about 10^{500} universes, so it is perfectly possible for this one, with its particular cosmological constant, to be just typical.
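
On Greene’s own numbers the arithmetic is roomy – the following is just my back-of-the-envelope, assuming (crudely) that the possible values were spread evenly across the landscape:

\frac{10^{500}}{10^{124}} = 10^{376}

so each distinguishable value of the cosmological constant could, on that rough assumption, be realised in something like 10^{376} universes – ours included.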

But, while I understand this argument and, of course, it has a beauty and is perhaps the ultimate vindication of Doctor Copernicus, it also seems to me to be flawed. There seems to be no need to demand these additional universes, because we can only observe the universe we are in. Were there only one universe (I know the term is technically a tautology, but I hope you understand the point) and it had different physical characteristics, we simply would not be around to see it.

The fact that our universe has a particular set of characteristics and we can see it seems to me to prove or demand nothing very much (ie I am not making some argument in favour of a “grand designer” either) – other than that we have “won” a physical lottery. We exist because of the physical characteristics of the universe, not the other way round – which, it seems to me, is quite close to what Greene demands.