I’m not all that interested in sex dolls, actually. But what I am interested in is the reactions they provoke from people when they consider the nature of intelligence.

My view – pretty much that followed by Alan Turing in his pioneering paper “Computing Machinery and Intelligence”, from which we get the “imitation game”, aka the Turing Test – is that intelligence is whatever looks like intelligence.

The relevance of this to sex dolls is that the BBC’s technology correspondent Jane Wakefield has put together a series of reports on the subject – the first was on “From Our Own Correspondent” last Saturday, there’s a web piece – here – and there is a report for the BBC’s World Service yet to come.

Wakefield argues that while the sex doll “Harmony” can say things which sound like intimate small talk, the doll can never know the feelings behind the words.

But what does that mean? At the most basic level none of us can live inside the head of another – we can never “feel” what it’s like to be that other person, because we cannot be them.

Or, to paraphrase Turing, you might like strawberry ice cream and I might hate it: but we are both tasting the same thing, so what does this feeling of “love” or “hate” correspond to? How could you know how I “feel” about the ice cream, when you “feel” differently?

It’s an entirely subjective thing, so how can you assert that the machine “feels” nothing?

Of course, the human brain – and human experience generally – appears to be a massively parallel thing, and we simply cannot, yet, replicate that in a machine. But if we could, are we seriously suggesting that human consciousness transcends the material? That simply doesn’t make any sense to me.

A computer program – AlphaGo – has now beaten the world’s greatest Go player and another threshold for AI has been passed.

As a result a lot of media commentary is focusing on threats – threats to jobs, threats to human security and so on.

But what about the opportunities? Actually, eliminating the necessity of human labour is a good thing – if we ensure the benefits of that are fairly distributed.

The issue is not artificial intelligence – a computer is just another machine and there is nothing magic about “intelligence” in any case (I very much follow Alan Turing in this). The issue is how humans organise their societies.

I am no Turing expert – I’ve read On Computable Numbers (via the quite brilliant The Annotated Turing), Computing Machinery and Intelligence and listened to the audiobook of Alan Turing: The Enigma (now subtitled “the book that inspired the film The Imitation Game”). But I know enough to doubt that there really was a late-night, post-boozing moment when the bombe machine started to work (this appears to be an attempt to lump the success of the bombe and Turing’s insight into German naval codes into one gloriously cinematic moment), and I certainly know that Turing did not spend – as the film implies, if not explicitly states – the whole of the war at Bletchley Park.

And the film does Turing’s co-workers a great disservice when it implies that Turing alone wrote to Churchill and that Turing then used the letter’s success to dominate his co-workers. The letter was written collectively and, what is more, after the first bombes were working, not as a device to get a bombe built.

Nor is the film fair on Turing in the sense that he is portrayed as – and the title implies he was adept at – hiding his sexuality. If anything it was Turing’s unwillingness to hide that caused him so much trouble. He was not ashamed to be gay (a word he used) even if some simulation just might have helped him dodge his indecency conviction.

No matter, though, the spirit of the film is correct and Cumberbatch is excellent in the lead role. Though it was Keira Knightley, playing Joan Clarke, who, if anything, impressed me more (though her accent seemed to swing back and forth between very posh and modern classlessness). Perhaps that is because Joan is now rather more of an enigma than Alan.

Reading this morning’s papers about the film it was implied that Turing’s work was not given due credit until recently because he was gay. I am not sure that is true.

We should not really expect the majority of the public to have heard of the “Church-Turing thesis” or to have grasped the basics of a Turing Machine, though the increased pervasiveness of computing devices does mean that Turing’s name as a key founder of the theoretical basis of electronic computing has become more widely known regardless of attitudes towards homosexuality. The Ultra decryption effort was kept hidden until the late 70s and the full details took some time to come out, but the ACM’s Turing Award – the highest honour a computer scientist can hope for – has been in existence since 1966: computer science did not disavow him.

But his story is a reminder of how bigotry damaged so many lives – even those to whom we owe so much.

At least, that is the claim being made by the University of Reading, and it seems to have some credibility – a computer program entered into their annual “Turing Test” appears to have passed, convincing a third of the judges that it was a human and not a machine.

This definition of intelligence relies on Turing’s own – in his famous 1950 paper “Computing Machinery and Intelligence” (well worth reading, and no particular knowledge of computing is required) – a definition I like to think of as being summarised in the idea that “if something looks intelligent it is intelligent”: hence if you can make a computer fool you into thinking it is as intelligent as a 13-year-old boy (as in the Reading University case), then it is as intelligent as a 13-year-old boy.

Of course, that is not to say it has self-awareness in the same way as a 13-year-old. But given that we are struggling to come up with an agreed scientific consensus on what such self-awareness consists of, that question is, to at least a degree, moot.

Alan Turing was not ashamed of being gay and made little or no effort to hide it. In today’s parlance he was “out” – if not to the world then certainly to a large number of people.

I wonder if he would ever have asked for a ‘pardon’ – because his view was certainly that he had done nothing that required a pardon.

The other factor, of course, is that thousands of people – many still alive – were prosecuted using the same repressive law under which Turing was victimised. Are they too to be pardoned? Or is it just that a high profile case, involving someone who cannot say anything that causes discomfort in response, is a handy pre-Christmas news sponge?

Update: Andrew Hodges makes the point much better than I can:

“Alan Turing suffered appalling treatment 60 years ago and there has been a very well intended and deeply felt campaign to remedy it in some way. Unfortunately, I cannot feel that such a ‘pardon’ embodies any good legal principle. If anything, it suggests that a sufficiently valuable individual should be above the law which applies to everyone else.

“It’s far more important that in the 30 years since I brought the story to public attention, LGBT rights movements have succeeded with a complete change in the law – for all. So, for me, this symbolic action adds nothing.

“A more substantial action would be the release of files on Turing’s secret work for GCHQ in the cold war. Loss of security clearance, state distrust and surveillance may have been crucial factors in the two years leading up to his death in 1954.”

My issue with the book is not atheism but the essential claim of the author – Alex Rosenberg – that human beings cannot reason about anything, can exercise no choice, have no free will and live a completely determined life.

Rosenberg grounds this in the claim that humans cannot have thoughts “about” anything – how can, he asks, your neurons be “about Paris” (or anything else) when they are merely electrical connections? And, he adds, our sense of free will, of conscious decision, is an illusion as demonstrated by multiple experiments that show we have “taken” any decision before we consciously “decide” to take it.

In the end I just think this is a tautology. How can the words on a page be “about Paris” either when they are just black ink? We end up abolishing the category of “about” if we follow this argument. Nothing is about anything else.

And how do humans advance their knowledge and understanding if they cannot reason, cannot decide? Knowledge cannot be immanent in experience, surely? Newton did not formulate gravity because being hit on the head by the mythical apple was a form of “percussive engineering” on his neural circuits – he reasoned about the question and yes, that reasoning helped reshape the neural connections, but it was not pre-destined.

And anyone who has read Godel, Escher, Bach will surely see conscious and unconscious decision making closely linked in any case – this is what a “strange loop” is all about.

Ultimately I find myself thinking of Turing’s idea of the “imitation game” and the more general idea that intelligence is what looks like intelligence. Computers have no free will, but they are not necessarily fully deterministic either – we can build a random number generator which is powered by nuclear decay events which, we must believe, are fully stochastic. Such a system could be made to appear as exercising choice in a completely non-deterministic way and look fully human within the bounds of Turing’s game. And when I say it is being “made to appear” to be exercising choice, I think it will be exercising choice in just the same way as we do – because there is no way that we could tell it apart from a human.

Or to take another example – if we build a genetic algorithm to find a heuristic solution to the travelling salesman problem in what sense has the computer not thought “about” the problem in developing its solution?
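To make that concrete, here is a minimal, toy evolutionary sketch of my own (an illustration of the idea, not any particular published genetic algorithm): it “evolves” a tour by mutating it – reversing random segments – and selecting the shorter result.

```python
import math
import random

def tour_length(points, tour):
    """Total length of the round trip visiting points in 'tour' order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def evolve(points, generations=2000, seed=0):
    """Mutate by reversing a random segment; keep the child only if it is fitter."""
    rng = random.Random(seed)
    tour = list(range(len(points)))
    best = tour_length(points, tour)
    for _ in range(generations):
        i, j = sorted(rng.sample(range(len(points)), 2))
        child = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # mutation
        length = tour_length(points, child)
        if length < best:  # selection pressure: the fitter tour survives
            tour, best = child, length
    return tour, best

points = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour, best = evolve(points)
print(tour, best)
```

In what sense has this loop not “thought about” the problem? It searches, evaluates and prefers – and the tour it returns is a product of that process, not of any rule we wrote down for the route itself.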

A post inspired by Godel, Escher, Bach, Complexity: A Guided Tour, an article in this week’s New Scientist about the clash between general relativity and quantum mechanics and personal humiliation.

The everyday incompleteness: This is the personal humiliation bit. For the first time ever I went on a “Parkrun” today – the 5km Finsbury Park run – but I dropped out after 2.5km, at the top of a hill and about 250 metres from my front door: I simply thought this is meant to be a leisure activity and I am not enjoying it one little bit. I can offer some excuses – it was really the first time ever I had run outdoors and so it was a bit silly to try a semi-competitive environment for that, and I had not warmed up properly, so the first 500 metres were about simply getting breathing and limbs into co-ordination – mais qui s’excuse, s’accuse (he who excuses himself, accuses himself).

But the sense of incompleteness I want to write about here is not that everyday incompleteness, but a more fundamental one – our inability to fully describe the universe, or rather, a necessary fuzziness in our description.

Let’s begin with three great mathematical or scientific discoveries:

The diagonalisation method and the “incompleteness” of the real numbers: In 1891 Georg Cantor published one of the most beautiful, important and accessible arguments in set theory – his diagonalisation argument, which proved that the infinity of the real numbers was qualitatively different from, and greater than, the infinity of the counting numbers.

The infinity of the counting numbers is just what it sounds like – start at one and keep going and you go on infinitely. This is the smallest infinity – called aleph null (ℵ₀).

Real numbers include the irrationals – those which cannot be expressed as fractions of counting numbers (Pythagoras shocked himself by discovering that √2 was such a number). So the reals are all the numbers along a number line – every single infinitesimal point along that line.

Few would disagree that there are, say, an infinite number of points between 0 and 1 on such a line. But Cantor showed that the number was uncountably infinite – i.e., we cannot just start counting from the first point and keep going. Here’s a brief proof…

Imagine we start to list all the points between 0 and 1 (in binary) – and we number each point, so…

1 is 0.00000000…..
2 is 0.100000000…..
3 is 0.010000000……
4 is 0.0010000000….
n is 0.{n – 2 0s}1{000……}

You can see this can go on for an infinitely countable number of times….

and so on. Now we decide to ‘flip’ the 0 or 1 at the index number, so we get:

1 is 0.1000000….
2 is 0.1100000….
3 is 0.0110000….
4 is 0.00110000….

And so on. But although we have already used up all the counting numbers we are now generating new numbers which we have not been able to count – this means we have more than ℵ₀ numbers in the reals, surely? But, you argue, let’s just interleave these new numbers into our list like so….

1 is 0.0000000….
2 is 0.1000000…..
3 is 0.0100000….
4 is 0.1100000….
5 is 0.0010000….
6 is 0.0110000….

And so on. This is just another countably infinite set, you argue. But, Cantor responds, do the ‘diagonalisation’ trick again and you get…

1 is 0.100000…..
2 is 0.110000….
3 is 0.0110000….
4 is 0.1101000…
5 is 0.00101000…
6 is 0.0110010….

And again we have new numbers, busting the countability of the set. And the point is this: no matter how many times you add the new numbers produced by diagonalisation into your counting list, diagonalisation will produce numbers you have not yet accounted for. From set theory you can show that while the counting numbers are of order (analogous to size) ℵ₀, the reals are of order 2^ℵ₀, a far, far bigger number – literally an uncountably bigger number.
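The flipping trick is mechanical enough to sketch in a few lines of Python (my own illustration): flip the i-th digit of the i-th expansion, and the result differs from every entry in the list at some position.

```python
def diagonalise(rows):
    """Flip the i-th digit of the i-th expansion: the result is on none of them."""
    return ''.join('1' if rows[i][i] == '0' else '0' for i in range(len(rows)))

rows = ['0000000', '1000000', '0100000', '0010000']
d = diagonalise(rows)
print(d)  # '1111' – disagrees with row i at digit i, so it is not any row
```

However many rows you feed it – or interleave back into the list – the function always hands back a digit string that cannot be any of them.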

Gödel’s Incompleteness Theorems: These are not amenable to a blog post length demonstration, but amount to this – we can state mathematical statements we know to be true but we cannot design a complete proof system that incorporates them – or we can state mathematical truths but we cannot build a self-contained system that proves they are true. The analogy with diagonalisation is that we know how to write out any real number between 0 and 1, but we cannot design a system (such as a computer program) that will write them all out – we have to keep ‘breaking’ the system by diagonalising it to find the missing numbers our rules will not generate for us. Gödel’s demonstration of this in 1931 was profoundly shocking to mathematicians as it appeared to many of them to completely undermine the very purpose of maths.

Turing’s Halting Problem: Very closely related to both Gödel’s incompleteness theorems and Cantor’s diagonalisation proof is Alan Turing’s formulation of the ‘halting problem’. Turing proposed a basic model of a computer – what we now refer to as a Turing machine – as an infinite paper tape and a reader (of the tape) and writer (to the tape). The tape’s contents can be interpreted as instructions to move, to write to the tape or to change the machine’s internal state (and that state can determine how the instructions are interpreted).

Now such a machine can easily be made to go into an infinite loop, e.g.:

The machine begins in the ‘start’ state and reads the tape. If it reads a 0 or 1 it moves to the right and changes its state to ‘even’.

If the machine is in the state ‘even’ it reads the tape. If it reads a 0 or 1 it moves to the left and changes its state to ‘start’

You can see that if the tape is marked with two 0s or two 1s or any combination of 0 or 1 in the first two places the machine will loop for ever.
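A toy simulation (my own sketch, not Turing’s formalism) shows the machine above shuttling between the first two cells forever:

```python
def run(tape, max_steps=20):
    """Simulate the two-state machine; return the halting step, or None if looping."""
    state, pos = 'start', 0
    for step in range(max_steps):
        cell = tape[pos]
        if state == 'start' and cell in '01':
            pos, state = pos + 1, 'even'
        elif state == 'even' and cell in '01':
            pos, state = pos - 1, 'start'
        else:
            return step  # nothing to do: the machine halts
    return None  # still shuttling after max_steps: an infinite loop

print(run('00'))  # None: two 0s in the first two cells loop forever
print(run('0 '))  # halts as soon as it reads the blank second cell
```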

The halting problem is this – can we design a Turing machine that will tell us if a given machine and its instructions will fall into an infinite loop? Turing proved we cannot without having to discuss any particular methodology … here’s my attempt to recreate his proof:

We can model any other Turing machine through a set of instructions on the tape, so if we have machine H we can have it model machine M with instructions I: i.e., H(M, I).

Let us say H can tell whether M will halt or loop forever with instructions I – we don’t need to understand how it does it, just suppose that it does. So if M(I) will halt, H writes ‘yes’; otherwise it writes ‘no’.

Now let us design another machine H′ that takes H’s input but here loops forever if H writes ‘yes’ and halts if H writes ‘no’.

Then we have:

M(I) halts – H(M, I) writes ‘yes’ – H′(M, I) loops forever.
M(I) loops – H(M, I) writes ‘no’ – H′(M, I) halts.

But what if we feed H′ itself as the input of H′?

H′(H′) halts – H(H′, H′) writes ‘yes’ – H′(H′) loops forever – ??

Because if H′(H′) loops forever then H must have written ‘no’ – which means H′(H′) halts; but if it halts, H writes ‘yes’ and H′(H′) loops forever, and so on…

As with Gödel we have reached a contradiction and so we cannot go further and must conclude that we cannot build a Turing machine (computer) that can solve the halting problem.
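The same argument can be sketched in a modern language (my own rendering of the standard construction, not Turing’s notation): hand me any claimed halts() oracle and I can build a function that does the opposite of whatever the oracle predicts for it.

```python
def make_contrary(halts):
    """Given a claimed halting oracle, build a program that defeats it."""
    def contrary():
        if halts(contrary):   # the oracle predicts we halt...
            while True:       # ...so loop forever
                pass
        # the oracle predicts we loop forever, so halt immediately
    return contrary

# A stub oracle shows the prediction coming out wrong:
c = make_contrary(lambda f: False)  # this oracle says "loops forever"
c()  # ...but the call returns immediately, so the oracle was wrong
```

Whatever halts() answers about the contrary program, it is mistaken – so no correct halts() can exist.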

Quantum mechanics: The classic, Copenhagen, formulation of quantum mechanics states that the uncertainty of the theory collapses when we observe the world, but the “many worlds” interpretation suggests that actually the various outcomes all take place and we are just experiencing one of them at any given time. Proponents of many worlds point to quantum ‘double-slit’ experiments, which they read as particles leaving traces of their multiple states in every ‘world’.

What intrigues me: What if our limiting theories – the halting problem, Gödel’s incompleteness theorems, the uncountably infinite – were actually the equivalents of the Copenhagen formulation and, in fact, maths was also a “many worlds” domain where the incompleteness of the theories was actually the deeper reality – in other words, the Turing machine can both loop forever and halt? This is probably, almost certainly, a very naïve analogy between the different theories but, lying in the bath and contemplating my humiliation via incompleteness this morning, it struck me as worth exploring at least.

Last week I puzzled over what seemed to me to be the hand-waving dismissal, by both Alan Turing and Douglas Hofstadter, of what I saw as the problem of humans being able to write true statements that the formal systems employed by computers could not determine – the problem thrown up by Goedel’s Incompleteness Theorems.

Well, Douglas Hofstadter has now come to his own (partial) rescue as I continue to read on through Godel, Escher, Bach – as he describes Tarski’s Theorem, which essentially states that humans cannot determine all such statements either (unless, of course, we posit that the Church-Turing thesis is wrong and there is some inner human computational magic we have yet to describe).

I am now going to quickly run through Hofstadter’s exposition – it might not mean too much to those of you not familiar with GEB, but if so, and if you are interested in computation (and genetics and music) and you want to improve your mind this summer, you could always think about buying the book. I don’t promise it’s an easy read – the style can vary from the nerdy to the deeply frustrating – but it is still a rewarding one.

So here goes:

We imagine we have a formula TRUE{a} that can determine the truth of a number-theoretical statement, i.e.:

TRUE{a} can tell us whether the number-theoretical statement with Goedel number a is true.

So now we posit this statement, with Goedel number u:

“the arithmoquinification of a is the Goedel number of a false statement”

Now, if you have read GEB you will know that to “arithmoquine” a number-theoretical statement is to replace the free variable – in this case a – with the Goedel number for the statement itself…

Which we can state as “the arithmoquinification of u is the Goedel number of a false statement”.

But the arithmoquinification of u is this statement’s own Goedel number, so the statement is the equivalent of saying “this statement is false”: just another version of the famous Epimenides Paradox, but one that is decidedly not hand-waving in form: it’s about natural numbers.

The outcome is that TRUE{a} cannot exist without our whole idea of natural numbers collapsing, and we are forced to conclude there is no formal way of deciding what is a true statement of theoretical number theory using only theoretical number theory – and so humans are no better off than computers in this regard: we use concepts from outside the formal theory to establish truth, and we could surely program our computers to do the same. Turing’s “imitation game” conception of intelligent machines thus survives.
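A string analogue (my own toy, closer to Quine’s paradox than to Hofstadter’s TNT) may make “arithmoquining” more concrete: substituting a phrase’s own quotation for its free variable is exactly how the self-reference gets manufactured.

```python
def quine(phrase):
    """Substitute the phrase's own quotation for its free variable X."""
    return phrase.replace('X', repr(phrase))

p = 'the quinification of X is false'
print(quine(p))
# -> the quinification of 'the quinification of X is false' is false
```

The result is a sentence asserting its own falsity – the same trick the arithmoquined statement plays with its own Goedel number.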

Cambridge University has a stellar reputation for Computer Science in the UK.

The Computer Laboratory can trace its history back over more than 75 years (to a time when ‘computers’ were humans making calculations), while the wider University can claim Alan Turing for one of its own. And Sinclair Research, ARM, the Cambridge Ring – the list of companies and technical innovations associated with the University is a long one: they even had what was possibly the world’s first webcam.

But, according to today’s Guardian, they might need to work a bit harder with their undergraduates – the Guardian’s 2014 University Guide rates Cambridge as the best University in Britain overall but slots it in only at 8th in computer science and conspicuously gives it the worst rating (1/10) for “value added” – namely the improvement from entry to degree for students.

Now, possibly this is because it is the toughest computer science course in the country to get a place in – the average student needs more than 3 A* grades at A level (and 3 As at AS) to get a place, compared to Imperial, the next place down where 3 A*s would probably set you right – but there has to be more to it than that. It is even harder to get into biosciences at Cambridge and yet they are rated 8/10 in the value added score.

Don’t get me wrong – I am sure Cambridge is fantastic at teaching computer science, but it is also given a lot of money on the basis that it is an elite institution and so it seems reasonable to ask for an explanation (from the Guardian too of course!)

(Incidentally, it seems that Oxford teaches so few undergraduates computer science it cannot be rated at all.)

A few weeks ago I attended the morning (I had to go back to work in the afternoon) of the BCS doctoral consortium in Covent Garden in London – watching various PhD students present their work to an audience of peers.

The presentation which most interested me was that of Srikanth Cherla, who is researching connectionist models for music analysis and prediction, and the use of generative models to produce short passages of music in a similar style to the passages his systems learn from.

It’s not a field that I have any expertise in or indeed much knowledge of, though in essence (I hope I get this right): a specialised form of neural network is used to analyse musical passages (Bach’s chorale works were highlighted) and from there it is possible to get the computer to play some passages it has composed based on the style it has learnt.

Srikanth emphasised that it was not a case of applying a rigid rule that guessed or picked the next note – there is a semi-random/stochastic element that can be attributed to certain musical patterns in the works of the great composers and capturing that is important.

And the music he played at the end – while plainly not matching Bach, did certainly sound like Bach.
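Cherla’s models are connectionist (neural networks), which I won’t attempt to reproduce here; but the stochastic flavour he emphasised can be illustrated with a much cruder first-order Markov sketch of my own: count which note tends to follow which, then sample – rather than deterministically pick – the next note.

```python
import random
from collections import defaultdict

def learn(melody):
    """Count, for each note, how often each other note follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(melody, melody[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample a melody: each next note is drawn from the learnt distribution."""
    rng = random.Random(seed)
    note, out = start, [start]
    for _ in range(length - 1):
        successors = counts[note]
        notes, weights = zip(*successors.items())
        note = rng.choices(notes, weights=weights)[0]  # stochastic, not rule-bound
        out.append(note)
    return out

melody = ['C', 'D', 'E', 'C', 'D', 'G', 'E', 'C', 'D', 'E']
print(generate(learn(melody), 'C', 8))
```

A real system conditions on far richer context than the single previous note, of course – but even this toy will produce passages that sound vaguely like their training material while never repeating it exactly.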

Today, prior to writing this blog, I read through Turing’s October 1950 paper “Computing Machinery and Intelligence”, from which we get the idea of a “Turing Test” (though obviously he doesn’t call it that).

The paper begins:

I propose to consider the question, ‘Can machines think?’

And goes on to discuss ways in which it might be possible “by the end of the century” to have machines which could fool a remote observer – able only to read typed answers to questions – into believing that a digital computer was in fact a person.

The paper is not, for Turing at least, in a completely different field to “On computable numbers”: Turing’s essential point is that anything a human computer can do, a digital computer can do, and he goes on to explicitly call humans machines.

The idea that great works of art, such as the “next” set of Bach chorales, might in the future be composed by computer no doubt horrifies many readers, as it plainly did in Turing’s day too – as he deals specifically with what he calls “the theological objection” – an extreme objection based on the idea that “God gives an immortal soul to every man and woman, but not to any other animal or machine”:

I am unable to accept any part of this… I am not very impressed with theological arguments whatever they may be used to support

But in any case, from within the theological paradigm, he dismisses it as a human imposition on what is meant to be an unlimited Godly power:

It appears to me the argument quoted above implies a serious restriction on the omnipotence of the Almighty

…before going on to swat aside Biblical literalism as an argument by citing how it was used against Galileo (maybe there are still fundamentalists out there who believe in the literal truth of Psalm 104 and an unmoving Earth but if so they keep quiet about it).

Then he deals with the argument that machines could not appear human because they have no consciousness by essentially asking what consciousness is anyway – and how we can prove others have it – and then goes on to deal with “various disabilities”, such as computers being unable to appreciate the taste of strawberries with cream:

The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one would be idiotic. What is important about this disability is that it contributes to some of the other disabilities e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.

This passage is worth quoting both because it suggests that Turing was far from the 100% progressive superhero later admirers are tempted to paint him as (he beat the Nazis and was persecuted as a gay man, and therefore can do no wrong; in fact he was a man of his times, with all that implies) and because I find it less than fully satisfying as an answer.

In context I think the point he is seeking to make is that we could make a machine that liked “eating” strawberries and could be friends with its fellows (so long as they had the same skin colour – don’t let’s get too radical!) but why would we bother… but it is not totally clear.

Similarly he, like Hofstadter, deals with the so-called Goedelisation argument less than satisfactorily: this states that we, humans, can state true statements about numbers that machines cannot determine (i.e. we know they are true but the machine cannot decide if they are true or false). Hence we could, in the imitation game, pose a Goedel Number type puzzle that the computer could never answer.

Actually, of course, the computer could guess, as humans often do! But the more general point – that humans can do something machines cannot and so we are not truly Turing machines – seems to me unanswered by both Turing’s and Hofstadter’s argument: that we can also find questions humans cannot determine if we make them complex enough.

Perhaps an expert would care to comment?

Update: Following some feedback from Srikanth I have edited the passages referring to his work slightly – I haven’t changed the sense, I think, just made it a bit clearer. I also updated the Psalm number, as I had misread Turing’s reference to line 5 of the Psalm as the Psalm itself.