## “Ultimate L” explained

Well, that is what I wanted to write about – Hugh Woodin’s theories of infinity, and indeed I found a blog that discussed a lot of this – http://caicedoteaching.wordpress.com/2010/10/19/luminy-hugh-woodin-ultimate-l-i/

But that is somewhat beyond me! And there does not appear to be a Wikipedia article on this, so you may have to wait some time before you see me try to take it on.

Interestingly, Woodin’s Wikipedia entry, accurately, suggests that his earlier work was sceptical as to the truth of Cantor’s continuum hypothesis – something that his latest work turns on its head.


## Gödel’s Incompleteness theorem surpassed?

Gödel’s Incompleteness Theorems are among the cornerstones of modern mathematical thought, but they are also a major blot on the mathematical landscape – as they establish an inherent limit on the ability of mathematicians to describe the mathematical world: the first theorem (often thought of as the theorem) states that no consistent (ie free of internal contradiction) axiomatic system is capable of describing all the facts about the natural numbers.

To today’s physical scientists – used to concepts such as relativity and quantum uncertainty – the broad idea that there could be an uncertainty at the heart of mathematics is perhaps not so difficult to accept, but it is fair to say it broke a lot of mathematical hearts in the 1930s when first promulgated. (This book – Gödel’s Proof – offers an excellent introduction for the non-mathematician who is mathematically competent – ie people like me!)

Gödel thought at the time that this kink in mathematical reality could be smoothed out by a better understanding of infinities in mathematics – and, according to the cover article by Richard Elwes in this week’s New Scientist (seemingly available online only to subscribers), Hugh Woodin of UC Berkeley is now claiming to have shown just that.

Along the way, this new hypothesis of “Ultimate L” also demonstrates that Cantor’s continuum hypothesis is correct. I do not claim to understand “Ultimate L”, and in any case, as is their style, the New Scientist do not print the proof – they just describe it in layman’s terms. I do have a basic understanding of the continuum hypothesis, though, and so can sketch the essential points that “Ultimate L” claims to settle.

Georg Cantor showed that there are multiple infinities, the first of which, so-called $\aleph_0$ (aleph null), is the infinity of the countable numbers – eg 1, 2, 3… and so on. Any infinite set whose elements can be paired off with the natural numbers in this way has a cardinality of $\aleph_0$. (And, as the New Scientist point out, this is the smallest infinity – eg if you thought that, say, there must be half as many even numbers as there are natural numbers, you would be wrong – both sets have cardinality $\aleph_0$: 2 is element 1 of the set of even numbers, 4 is element 2 and so on, ie a natural number can be assigned to every even number, so the set of even numbers also has cardinality $\aleph_0$.)
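To make this pairing concrete, here is a minimal sketch (my own illustration, not from the article) of the bijection $n \mapsto 2n$ between the natural numbers and the even numbers:

```python
def even(n):
    """Pair the natural number n with the n-th positive even number."""
    return 2 * n

# Each natural number is matched with exactly one even number, and every
# even number m is hit exactly once (by n = m // 2), so nothing is left
# over on either side: the two sets have the same cardinality.
pairs = [(n, even(n)) for n in range(1, 6)]
# pairs == [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
```

The point is that the pairing never runs out in either direction, which is exactly what "same cardinality" means for infinite sets.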

The continuum hypothesis concerns what might be $\aleph_1$ – the next biggest infinity. Cantor’s hypothesis is that $\aleph_1$ is the cardinality of the real numbers (the continuum): I discuss why this set is infinite, and a different infinity from the natural numbers, here.

We can show that this set has cardinality $2^{\aleph_0}$ – strictly bigger than $\aleph_0$. But is there another infinity in between?
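Why $2^{\aleph_0} > \aleph_0$ comes down to Cantor’s diagonal argument. Here is a finite sketch of it (my own illustration, not from the article): given any list of binary sequences, flipping the entries along the diagonal produces a sequence that differs from every row in the list – so no list indexed by the natural numbers, however long, can contain them all.

```python
def diagonal_escape(rows):
    """Given a square table of 0/1 rows, build a row that differs from
    row i in position i – so it cannot equal any row in the table."""
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
escape = diagonal_escape(rows)
# escape == [1, 0, 1, 0]: it disagrees with every row on the diagonal,
# so it appears nowhere in the list.
```

The same construction applied to an infinite list of infinite binary sequences shows that the sequences – and hence the reals – cannot be counted off against the naturals.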

Mathematicians have concentrated on looking at whether any projections (a word familiar to me now from relational algebra) of the set of reals have a cardinality strictly between $\aleph_0$ and $2^{\aleph_0}$ – if one did, then the reals clearly could not have the cardinality of $\aleph_1$, but some other, higher, $\aleph$.

No projections with such an intermediate cardinality have been found, but that is not the same as a proof that none exist. If Woodin’s theory is correct, though, none do.

(Just one more chance to plug the brilliant Annotated Turing: if you are interested in computer science you should really read it! This is the book that first got me interested in all this.)

## Gödel, Turing and decidability

I am still wandering around in the world opened to me by The Annotated Turing – perhaps a little lost, but I have been reading some guide books.

The most recent of these has been Gödel’s Proof which gives a gentle(ish) introduction to Gödel’s incompleteness theorem.

This is a short but fascinating book that I repeatedly made the mistake of trying to read when tired or unable to give it my full attention (eg on a flight from London to Budapest last week when three days of hard work had really taken it out of me). But I finally managed to finish it last night.

I think the really startling point it makes – which admittedly Charles Petzold’s book also makes but I didn’t fully grasp at the time – is about the nature of the human mind.

Gödel’s theorem states that no consistent system of axioms whose theorems can be listed by an “effective procedure” (such as a computer program) is capable of proving all facts about the natural numbers.

Before I read the book I had not really thought about what this meant: after all, Turing and Church had shown that most numbers/mathematical problems are not computable, and so this seemed of a piece with that conclusion.
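The counting argument behind that conclusion (my own summary, not from the book) rests on the same cardinality gap discussed in the earlier post:

```latex
% Programs are finite strings over a finite alphabet, so they can be
% listed off against the natural numbers: there are only countably many.
% A yes/no problem, by contrast, is a subset of the natural numbers,
% and by Cantor's theorem there are uncountably many of those:
\[
  |\{\text{programs}\}| = \aleph_0
  \qquad\text{but}\qquad
  |\{\text{yes/no problems}\}| = |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0} > \aleph_0 .
\]
% So almost every problem has no program that solves it.
```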

But the key thing here is that metamathematical reasoning can show the truth of statements that proofs within the axiomatic system cannot. In other words – I think – computer programs are actually a poor model of our ability to solve mathematical problems.