Turing versus Rosenberg


Perhaps this would be better on my book review site, but it’s really a question of science, prompted by reading the challenging The Atheist’s Guide to Reality: Enjoying Life without Illusions.

My issue with the book is not atheism but the essential claim of the author – Alex Rosenberg – that human beings cannot reason about anything, can exercise no choice, have no free will and live completely determined lives.

Rosenberg grounds this in the claim that humans cannot have thoughts “about” anything – how can, he asks, your neurons be “about Paris” (or anything else) when they are merely electrical connections? And, he adds, our sense of free will, of conscious decision, is an illusion as demonstrated by multiple experiments that show we have “taken” any decision before we consciously “decide” to take it.

Plaque marking Alan Turing’s former home in Wilmslow, Cheshire (Photo credit: Wikipedia)

In the end I just think this argument defeats itself. How can the words on a page be “about Paris” either, when they are just black ink? We end up abolishing the category of “about” altogether if we follow this argument. Nothing is about anything else.

And how do humans advance their knowledge and understanding if they cannot reason, cannot decide? Knowledge cannot be immanent in experience, surely? Newton did not formulate gravity because being hit on the head by the mythical apple was a form of “percussive engineering” on his neural circuits – he reasoned about the question and yes, that reasoning helped reshape the neural connections, but it was not pre-destined.

And anyone who has read Godel, Escher, Bach will surely see conscious and unconscious decision making closely linked in any case – this is what a “strange loop” is all about.

Ultimately I find myself thinking of Turing’s idea of the “imitation game” and the more general idea that intelligence is what looks like intelligence. Computers have no free will, but they are not necessarily fully deterministic either – we can build a random number generator which is powered by nuclear decay events which, we must believe, are fully stochastic. Such a system could be made to appear as exercising choice in a completely non-deterministic way and look fully human within the bounds of Turing’s game. And when I say it is being “made to appear” to be exercising choice, I think it will be exercising choice in just the same way as we do – because there is no way that we could tell it apart from a human.
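As a sketch of how such a non-deterministic source might be used (this is my own illustration, not a real detector – I simulate the decay inter-arrival times, and use von Neumann’s classic trick to strip bias from the raw bits):

```python
import random

def bits_from_decay_times(intervals):
    """Turn inter-arrival times between (simulated) decay events into raw
    bits by comparing each interval with the next one."""
    return [1 if a > b else 0 for a, b in zip(intervals, intervals[1:]) if a != b]

def von_neumann_extract(bits):
    """Debias a stream of possibly biased bits: read them in pairs,
    emit the first bit of each unequal pair, drop the '00'/'11' pairs."""
    return [b1 for b1, b2 in zip(bits[::2], bits[1::2]) if b1 != b2]

# Stand-in for a real decay detector: exponentially distributed gaps.
rng = random.Random(42)
intervals = [rng.expovariate(1.0) for _ in range(10_000)]
random_bits = von_neumann_extract(bits_from_decay_times(intervals))
```

Fed by genuine decay timings rather than a pseudo-random simulation, the output would be stochastic in exactly the sense the argument needs.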

Or to take another example – if we build a genetic algorithm to find a heuristic solution to the travelling salesman problem in what sense has the computer not thought “about” the problem in developing its solution?
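For the curious, here is a minimal sketch of what such a genetic algorithm might look like – a toy illustration of the general technique, not any particular published implementation. Tours are permutations of city indices, bred by order crossover and swap mutation:

```python
import random, math

def tour_length(tour, pts):
    # Total length of the closed tour through the given points.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def evolve_tsp(pts, pop_size=60, generations=200, seed=0):
    """Evolve permutations of city indices towards short tours."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, pts))
        next_pop = pop[:10]  # elitism: keep the 10 best tours
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:30], 2)
            # order crossover: copy a slice of p1, fill the rest from p2
            i, j = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[i:j] = p1[i:j]
            fill = [c for c in p2 if c not in child]
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            if rng.random() < 0.3:  # swap mutation
                a, b = rng.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda t: tour_length(t, pts))
```

Nothing in the selection-and-breeding loop “knows” the travelling salesman problem in advance, yet the population converges on good tours – which is the sense in which the computer has thought “about” the problem.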

 

A (partial) answer to my Goedelian conundrum?


Douglas Hofstadter in Bologna, Italy (Photo credit: Wikipedia)

Last week I puzzled over what seemed to me to be the hand-waving dismissal, by both Alan Turing and Douglas Hofstadter, of what I saw as the problem of humans being able to write true statements that the formal systems employed by computers could not determine – the problem thrown up by Goedel’s Incompleteness Theorems.

Well, Douglas Hofstadter has now come to his own (partial) rescue as I continue to read on through Godel, Escher, Bach – as he describes Tarski’s Theorem, which essentially states that humans cannot determine all such statements either (unless we posit that the Church-Turing thesis is wrong, of course, and that there is some inner human computational magic we have yet to describe).

I am now going to run quickly through Hofstadter’s exposition – it might not mean too much to those of you not familiar with GEB, but if so, and if you are interested in computation (and genetics and music) and you want to improve your mind this summer, you could always think about buying the book. I don’t promise it’s an easy read – the style can vary from the nerdy to the deeply frustrating – but it is still a rewarding one.

So here goes:

We imagine we have a procedure that can determine the truth of any number-theoretical statement, i.e.:

TRUE\{a\} can tell us whether the number-theoretical statement with Goedel number a is true.

So now we posit T, with Goedel number t:

\exists a:<\sim TRUE\{a\} \wedge ARITHMOQUINE\{a^{\prime\prime},a\}>

Now, if you have read GEB you will know that to “arithmoquine” a number-theoretical statement is to replace the free variable – in this case a^{\prime\prime} – with the Goedel number of the statement itself…

\exists a:<\sim TRUE\{a\} \wedge ARITHMOQUINE\{SSS \ldots SSS0,a\}>

Which we can state as “the arithmoquinification of t is the Goedel number of a false statement”.

But the arithmoquinification of t is T’s own number, so this statement is equivalent to saying “this statement is false”: just another version of the famous Epimenides Paradox, but one that is decidedly not hand-waving in form: it is about natural numbers.

The outcome is that TRUE\{a\} cannot exist without our whole idea of natural numbers collapsing, and we are forced to conclude there is no formal way of deciding what is a true statement of theoretical number theory using only theoretical number theory. So humans are no better off than computers in this regard: we use concepts from outside the formal theory to establish truth, and we could surely program our computers to do the same. Turing’s “imitation game” conception of intelligent machines thus survives.
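The diagonal trick can be illustrated, loosely, in code – an analogy only, with Python predicates standing in for number-theoretical statements. Whatever a claimed total truth oracle answers about its own diagonal sentence, the sentence evaluates to the opposite:

```python
def make_diagonal(oracle):
    """Build the predicate that asserts its own falsehood under the
    given 'truth oracle' - the arithmoquinification step, in miniature."""
    def t():
        return not oracle(t)
    return t

def check(oracle):
    """Compare what the oracle claims about its diagonal sentence with
    what that sentence actually evaluates to: always the opposite."""
    t = make_diagonal(oracle)
    claimed = oracle(t)   # what the oracle says about t
    actual = t()          # what t really is: not claimed
    return claimed, actual

# Two candidate total oracles; both are wrong on their diagonal sentence.
print(check(lambda p: True))
print(check(lambda p: False))
```

However the oracle is defined, claimed and actual always disagree, which is the shape of the contradiction that rules TRUE\{a\} out.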

At least I think so!

Turing, Hofstadter, Bach – with some Cherla thrown in


The imitation game, as described by Alan Turing in “Computing Machinery and Intelligence”. Player C, through a series of written questions, attempts to determine which of the two other players is the man, and which is the woman. Player A – the man – tries to trick player C into making the wrong decision, while player B tries to help player C. Turing uses this game as the basis for his test for intelligence. (Photo credit: Wikipedia)

A few weeks ago I attended the morning (I had to go back to work in the afternoon) of the BCS doctoral consortium in Covent Garden in London – watching various PhD students present their work to an audience of peers.

The presentation which most interested me was that of Srikanth Cherla, who is researching connectionist models for music analysis and prediction, and the use of generative models to produce short passages of music in a style similar to the passages his systems learn from.

It’s not a field that I have any expertise in, or indeed much knowledge of, but in essence (I hope I get this right) a specialised form of neural network is used to analyse musical passages (Bach’s chorale works were highlighted) and from there it is possible to get the computer to play passages it has composed in the style it has learnt.

Srikanth emphasised that it is not a case of applying a rigid rule that guesses or picks the next note – there is a semi-random, stochastic element attributable to certain musical patterns in the works of the great composers, and capturing that is important.
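To illustrate that stochastic element with a far cruder model than Srikanth’s connectionist one – this is just a first-order Markov chain over a made-up toy corpus, nothing like his actual system:

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Count note-to-note transitions across a corpus of melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def sample_melody(counts, start, length, seed=None):
    """Generate notes by sampling each successor in proportion to how
    often it followed the current note - the stochastic element, rather
    than a rigid 'always pick the commonest next note' rule."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        successors = counts[note]
        if not successors:
            break
        note = rng.choices(list(successors), weights=successors.values())[0]
        out.append(note)
    return out

# Tiny toy corpus of pitch names (a real system would use MIDI pitches
# and a far richer model).
corpus = [["C", "D", "E", "C", "G", "E", "D", "C"],
          ["C", "E", "G", "E", "D", "C", "D", "E"]]
model = train_markov(corpus)
print(sample_melody(model, "C", 8, seed=1))
```

Run twice with different seeds it produces different, but similarly “shaped”, melodies – the same idea, writ small, as generating new passages in a learnt style.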

And the music he played at the end – while plainly not matching Bach – did certainly sound like Bach.

As I am (still! It’s a big book) reading Godel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter my thoughts were immediately turned to the idea of artificial intelligence and then on to Alan Turing’s idea of the “Imitation Game” and the concept that anything that looks like it is thinking should, in fact, be thought of as thinking.

Today, prior to writing this blog I read through Turing’s October 1950 paper “Computing Machinery and Intelligence”, from which we get the idea of a “Turing Test” (though obviously he doesn’t call it that).

The paper begins:

I propose to consider the question, ‘Can machines think?’

It goes on to discuss ways in which it might be possible “by the end of the century” to have machines which could fool a remote observer – one able only to read typed answers to questions – into believing that a digital computer was in fact a person.

The paper is not, for Turing at least, in a completely different field to “On computable numbers”: Turing’s essential point is that anything a human computer can do, a digital computer can do, and he goes on to explicitly call humans machines.

The idea that great works of art, such as the “next” set of Bach chorales, might in the future be composed by computer no doubt horrifies many readers, as it plainly did in Turing’s day too – he deals specifically with what he calls “the theological objection”, an extreme objection based on the idea that “God gives an immortal soul to every man and woman, but not to any other animal or machine”:

I am unable to accept any part of this… I am not very impressed with theological arguments whatever they may be used to support

But in any case, arguing from within the theological paradigm, he dismisses it as a human imposition on what is meant to be an unlimited Godly power:

It appears to me the argument quoted above implies a serious restriction on the omnipotence of the Almighty

…before going on to swat aside Biblical literalism as an argument by citing how it was used against Galileo (maybe there are still fundamentalists out there who believe in the literal truth of Psalm 104 and an unmoving Earth but if so they keep quiet about it).

Then he deals with the argument that machines could not appear human because they have no consciousness – essentially by asking what consciousness is anyway, and how we can prove others have it – before going on to deal with “various disabilities”, such as computers being unable to appreciate the taste of strawberries and cream:

The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.

This passage is worth quoting both because it suggests that Turing is far from the 100% progressive superhero later admirers are tempted to paint him as (he beat the Nazis and was persecuted as a gay man and therefore can do no wrong: in fact he was a man of his times, with all that implies) and because I find it less than fully satisfying as an answer.

In context I think the point he is seeking to make is that we could make a machine that liked “eating” strawberries and could be friends with its fellows (so long as they had the same skin colour – let’s not get too radical!), but why would we bother? Yet it is not totally clear.

Similarly he, like Hofstadter, deals less than satisfactorily with the so-called Goedelisation argument: this states that we humans can state true statements about numbers that machines cannot determine (i.e. we know they are true but the machine cannot decide whether they are true or false). Hence we could, in the imitation game, pose a Goedel-number-type puzzle that the computer could never answer.

Actually, of course, the computer could guess, as humans often do! But the more general point – that humans can do something machines cannot, and so we are not truly Turing Machines – seems to me not fully answered by Turing’s and Hofstadter’s shared response: that we can also find questions humans cannot determine if we make them complex enough.

Perhaps an expert would care to comment?

Update: Following some feedback from Srikanth I have edited the passages referring to his work slightly – I haven’t changed the sense, I think, just made it a bit clearer. I also updated the Psalm number, as I had mistaken Turing’s reference to line 5 of the Psalm for the Psalm number itself.

Some notes on Hofstadter’s Typographical Number Theory


In Godel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter outlines his axiomatic “Typographical Number Theory” and sets certain problems for the reader – one is to write a formula that says b (a variable) is a power of 2.

These are my notes in trying to demonstrate this (hopefully understandable to anyone who knows a bit of symbolic logic and has a passing knowledge of Peano’s axioms).

My reasoning runs like this: if a number is a power of two then it has no prime factors other than 2 (and 1, if we count that as a prime) – see the earlier entry on the fundamental theorem of arithmetic.

A prime (a) in TNT:

\exists a: \sim \exists a^{\prime}: \exists a^{\prime\prime}: < a^{\prime}.a^{\prime\prime}=a \wedge \sim a^{\prime} = 0 \wedge \sim a^{\prime} = S0 \wedge \sim a^{\prime\prime} = 0 \wedge \sim a^{\prime\prime} = S0>

First Update: I now realise there is a simpler way of stating that a is prime in TNT – namely that there are no two numbers bigger than one whose product is the number we seek:

\exists a: \sim \exists a^{\prime}: \exists a^{\prime\prime}: SSa^{\prime}.SSa^{\prime\prime} = a

Second update:

So, we have a definition for a prime, a, and we can set a to a prime other than 2.

\exists d: \exists a: \sim \exists a^{\prime}: \exists a^{\prime\prime}: <SSa^{\prime}.SSa^{\prime\prime} = a \wedge \sim a = SS0>

Then

\exists b: \exists c: \sim \exists a: \sim \exists a^{\prime}: \exists a^{\prime\prime}: <SSa^{\prime}.SSa^{\prime\prime} = a \wedge \sim a = SS0 \wedge \sim b = a.c>

I think this is a sufficient definition for “b is a power of 2”, as we know that all numbers are products of primes and we have ruled out any prime other than 2 being a factor of b. Is that right?

On further thought I don’t think that’s enough, because we need to specify that b is a multiple of 2.

So what about this?

\exists b: \exists c: \exists d: \sim \exists a: \sim \exists a^{\prime}: \exists a^{\prime\prime}: <SSa^{\prime}.SSa^{\prime\prime} = a \wedge \sim a = SS0 \wedge \sim b = a.c \wedge b = SS0.d>

So has that nailed it?
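One way to gain confidence is a numerical sanity check. The Python below mirrors the intended definition – b is a multiple of 2, as the conjunct b = SS0.d requires, and no prime other than 2 divides b – using the simpler “no two factors bigger than one” notion of primality:

```python
def is_prime_tnt(n):
    """Mirror of the simpler TNT formula: n is prime iff there are no
    two numbers >= 2 whose product is n (so 0 and 1 come out non-prime)."""
    return n >= 2 and not any(n % d == 0 for d in range(2, n))

def is_power_of_two_candidate(b):
    """Mirror of the proposed definition: b is a multiple of 2 and no
    prime other than 2 divides b."""
    if b < 2 or b % 2 != 0:
        return False
    return not any(b % p == 0 for p in range(3, b + 1) if is_prime_tnt(p))

powers = [b for b in range(1, 70) if is_power_of_two_candidate(b)]
print(powers)  # 2, 4, 8, 16, 32, 64
```

Note that the “multiple of 2” conjunct means 1 (= 2^0) is excluded, which may or may not be what the exercise intends.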

How I discovered the fundamental theorem of arithmetic by chance


Cover of Gödel, Escher, Bach: An Eternal Golden Braid (via Amazon)

Actually, of course, I rediscovered it.

I have been attempting to read, for the third time, Douglas Hofstadter‘s celebrated Godel, Escher, Bach. I bought a copy in Washington DC in 2009 and loved it (though didn’t get very far before I put it down for some reason), but I have always struggled to get deeply into it: it’s plainly a great book – though having read more on Turing these days I am not sure I’d subscribe to what appears to be its core thesis – but it’s not an easy read.

(Though Hofstadter’s preface did inspire me three years ago to read the brilliantly compact Godel’s Proof – buy, steal or borrow a copy if you are at all interested in maths.)

This time round I am feeling more confident I’ll make progress; it just seems a bit easier to grasp – possibly because I am feeling more confident about maths these days (not that there is any higher maths in it, but grasping some of the more complex stuff does seem to help with all of it).

So I got to the part known as the “MU puzzle”: can you go from the axiom MI to the theorem MU following the rules of the formal system he sets out? I thought: oh yes, easy, just find a power of two that is also a multiple of three – and in my head I started going 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, a list I have been familiar with for thirty-three years or so. But none of them are multiples of three.

Now, presumably anyone who knows anything about prime numbers has by now said “of course they are not, haven’t you heard of the fundamental theorem of arithmetic?” – well it seems the answer to that is “err, no”.

Or, if I had, I never really grasped its meaning. Each number can be factorised into a unique set of primes (if a prime, then just itself, of course). That precludes any power of two having as a factor any other prime number (of which three is, of course, one). It’s really epically important in terms of understanding numbers and I am slightly shocked and puzzled that I have never really understood it before now.
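The theorem is easy to check mechanically for the powers of two that tripped me up – plain trial division, nothing clever:

```python
def prime_factors(n):
    """Factorise n into its unique multiset of primes (trial division)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Every power of two factorises as 2 * 2 * ... * 2 alone, so by the
# uniqueness of the factorisation 3 can never divide one - the fact
# behind my failed search through 4, 8, 16, ...
for k in range(1, 13):
    p = 2 ** k
    assert prime_factors(p) == [2] * k
    assert p % 3 != 0
print("no power of two up to 2^12 is a multiple of three")
```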