Because I essentially got it wrong in the last post … it turns out that a fully connected network is, generally, not a great idea for image processing and that partial connections – through “convolution layers” – are likely to be more efficient.
And my practical experience backs this up: my first NN did, in effect, have two convolution layers (or filters), although somewhat eccentrically designed as 100 x 1 and 1 x 100 filters. And this network performs better than the single hidden layer fully connected alternative. That may just be because it takes an age to train the fully connected network and convergence of the error levels towards a low number is just taking forever (a convolution layer has many fewer connections and so can be trained much faster).
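To make the difference concrete, here is a rough back-of-the-envelope comparison of connection counts – the 100 x 100 input and the 100 hidden units are my own illustrative assumptions, not exact figures from the earlier network:

```python
# Rough comparison of connection counts: a fully connected hidden layer vs.
# the two "eccentric" filters. The 100 x 100 input and 100 hidden units are
# illustrative assumptions, not exact details of the network in the post.

input_pixels = 100 * 100          # assumed 100 x 100 input image
hidden_units = 100                # assumed size of a fully connected hidden layer

# Every pixel connects to every hidden unit (ignoring biases).
fully_connected_weights = input_pixels * hidden_units

# Each convolution filter is shared across the whole image, so its weight
# count is just the filter size, wherever it slides.
filter_weights = (100 * 1) + (1 * 100)   # one 100 x 1 filter plus one 1 x 100 filter

print(f"fully connected: {fully_connected_weights:,} weights")   # 1,000,000
print(f"two filters:     {filter_weights:,} weights")            # 200
```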
I’m not all that interested in sex dolls, actually. But what I am interested in is the reactions they provoke from people when they consider the nature of intelligence.
My view – pretty much that followed by Alan Turing in his pioneering paper “Computing Machinery and Intelligence” – from which we get the “imitation game” aka the Turing Test – is that intelligence is whatever looks like intelligence.
The relevance of this to sex dolls is that the BBC’s technology correspondent Jane Wakefield has put together a series of reports on the subject – the first was on “From Our Own Correspondent” last Saturday; there’s a web piece – here – and a report for the BBC’s World Service is yet to come.
Wakefield argues that while the sex doll “Harmony” can say things which sound like intimate small talk, the doll can never know the feelings behind the words.
But what does that mean? At the most basic level none of us can live inside the head of another – we can never “feel” what it’s like to be that other person, because we cannot be them.
Or, to paraphrase Turing, you might like strawberry ice cream and I might hate it: but we are both tasting the same thing, so what does this feeling of “love” or “hate” correspond to? How could you know how I “feel” about the ice cream, when you “feel” differently?
It’s an entirely subjective thing, so how can you assert that the machine “feels” nothing?
Of course, the human brain – and human experience generally – appears to be a massively parallel thing, and we simply cannot, yet, replicate that in a machine. But if we could, are we seriously suggesting that human consciousness transcends the material? That simply doesn’t make any sense to me.
A computer program – AlphaGo – has now beaten the world’s greatest Go player and another threshold for AI has been passed.
As a result a lot of media commentary is focusing on threats – threats to jobs, threats to human security and so on.
But what about the opportunities? Actually, eliminating the necessity of human labour is a good thing – if we ensure the benefits of that are fairly distributed.
The issue is not artificial intelligence – a computer is just another machine and there is nothing magic about “intelligence” in any case (I very much follow Alan Turing in this). The issue is how humans organise their societies.
This definition of intelligence relies on Turing’s own – in his famous 1950 paper “Computing Machinery and Intelligence” (well worth reading, and no particular knowledge of computing is required) – a definition I like to think of as being summarised in the idea that “if something looks intelligent it is intelligent”: hence if you can make a computer fool you into thinking it is as intelligent as a 13-year-old boy (as in the Reading University case), then it is as intelligent as a 13-year-old boy.
Of course, that is not to say it has self-awareness in the same way as a 13-year-old. But given that we are struggling to come up with an agreed scientific consensus on what such self-awareness consists of, that question is, to at least a degree, moot.
My issue with the book is not atheism but the essential claim of the author – Alex Rosenberg – that human beings cannot reason about anything, can exercise no choice and have no free will and live a completely determined life.
Rosenberg grounds this in the claim that humans cannot have thoughts “about” anything – how can, he asks, your neurons be “about Paris” (or anything else) when they are merely electrical connections? And, he adds, our sense of free will, of conscious decision, is an illusion as demonstrated by multiple experiments that show we have “taken” any decision before we consciously “decide” to take it.
In the end I just think this is a tautology. How can the words on a page be “about Paris” either when they are just black ink? We end up abolishing the category of “about” if we follow this argument. Nothing is about anything else.
And how do humans advance their knowledge and understanding if they cannot reason, cannot decide? Knowledge cannot be immanent in experience, surely? Newton did not formulate his theory of gravity because being hit on the head by the mythical apple was a form of “percussive engineering” on his neural circuits – he reasoned about the question and yes, that reasoning helped reshape the neural connections, but it was not pre-destined.
And anyone who has read Gödel, Escher, Bach will surely see conscious and unconscious decision making closely linked in any case – this is what a “strange loop” is all about.
Ultimately I find myself thinking of Turing’s idea of the “imitation game” and the more general idea that intelligence is what looks like intelligence. Computers have no free will, but they are not necessarily fully deterministic either – we can build a random number generator powered by nuclear decay events which, we must believe, are fully stochastic. Such a system could be made to appear to exercise choice in a completely non-deterministic way and look fully human within the bounds of Turing’s game. And when I say it is being “made to appear” to be exercising choice, I think it will be exercising choice in just the same way as we do – because there is no way that we could tell it apart from a human.
Or to take another example – if we build a genetic algorithm to find a heuristic solution to the travelling salesman problem, in what sense has the computer not thought “about” the problem in developing its solution?
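For what it’s worth, here is a toy sketch of the sort of genetic algorithm I have in mind – random cities, tour length as fitness, ordered crossover and swap mutation, all standard textbook choices rather than anything tied to a particular implementation:

```python
import math
import random

def tour_length(tour, cities):
    """Total length of a closed tour visiting every city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ordered_crossover(a, b):
    """Copy a slice of parent a, fill the remaining cities in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [c for c in b if c not in child[i:j]]
    k = 0
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest[k]
            k += 1
    return child

def mutate(tour, rate=0.02):
    """Swap two cities with a small probability per position."""
    tour = tour[:]
    for i in range(len(tour)):
        if random.random() < rate:
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]
    return tour

def evolve(cities, pop_size=100, generations=500):
    """Evolve a population of candidate tours, keeping the shorter ones."""
    n = len(cities)
    population = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda t: tour_length(t, cities))
        survivors = population[:pop_size // 2]
        children = [mutate(ordered_crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return min(population, key=lambda t: tour_length(t, cities))

cities = [(random.random(), random.random()) for _ in range(30)]
best = evolve(cities)
print(round(tour_length(best, cities), 3))
```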
Next week Unreal bots will battle human players at the IEEE Conference on Computational Intelligence and Games in Granada, Spain and if a bot can convince human players it is real then its developers could win $7000. In past years the bots have only won a maximum of $2000 – the money that goes to the best bot that is not convincing as a human.
This year, though, hopes seem high that one bot – ‘Neurobot’ – has a real crack at the $7000 prize (it came second to ICE-CIG amongst the bots last year, but Neurobot’s developers, from Imperial College in London, are hoping that the improvements they have made put it in pole position).
The interesting thing about Neurobot is the algorithm/concept being used – the bot doesn’t try to use computational power to fully absorb the scene and act on every piece of information, but instead discriminates using the principles of “global workspace theory” (GWT), which holds that the human brain only pushes a small number of things into the forefront of thought – the “global workspace”.
Neurobot models the brain’s GWT with about 20,000 simulated neurons as opposed to the estimated 120 billion in the human brain.
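Neurobot’s internals aren’t described in any detail in the reports, but the basic GWT idea – lots of competing signals, only a handful broadcast to the rest of the system – can be caricatured in a few lines. Everything below, names and numbers included, is my own illustrative sketch, not Neurobot’s code:

```python
# A cartoon of the global workspace idea: many specialised processes report
# how salient their current input is, but only a small number of winners are
# "broadcast" for the agent to act on. All names and numbers are illustrative.

WORKSPACE_SIZE = 3   # assumed: only a few items reach the "forefront of thought"

def global_workspace(percepts, size=WORKSPACE_SIZE):
    """Return the most salient percepts; everything else is ignored this tick."""
    ranked = sorted(percepts, key=lambda p: p["salience"], reverse=True)
    return ranked[:size]

# Hypothetical percepts a game bot might receive in one frame.
percepts = [
    {"what": "enemy in view",      "salience": 0.9},
    {"what": "low health",         "salience": 0.8},
    {"what": "ammo nearby",        "salience": 0.5},
    {"what": "distant footsteps",  "salience": 0.3},
    {"what": "wall texture",       "salience": 0.1},
]

for p in global_workspace(percepts):
    print("broadcast:", p["what"])
```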
Neurobot’s prospects for success might then suggest that the barrier to successful AI has not really been the inability of computers to match the computational power of the human brain, but the failure, thus far at least, of human AI researchers to model how the brain works. In other words – we are not really as clever as we like to think (a thought which dominated much of the later work of Alan Turing, as much discussed in Alan Turing: The Enigma, which I am still listening to – though I have got down to the final three hours of thirty).
Google’s director of research, Peter Norvig, has told New Scientist that one of the reasons they launched their audio service, Google Voice (not available in the UK, and perhaps not anywhere outside the US), is that they needed more human voice data to perfect their algorithms. (The article is not online for non-subscribers yet, but is on page 26 of the current print edition).
Norvig describes the general approach of Google to cracking some of the most difficult problems of artificial intelligence – “big data, simple algorithms”.
The example given is translation – as Norvig says, “in the past people had thought of this as being a linguistics problem” – whereas Google has taken the approach of simply amassing enough good translations to be able to ‘guess’ what an unseen text might mean by comparing it to the previous translations. You have to admire the beauty of that idea, and my experience is that while it is not perfect, Google Translate generally does enough to allow you to use your intelligence and understanding of the context to fill in the gaps (of course it also explains why they ask you to “contribute to a better translation” – you are literally doing that by typing in a better answer).
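To caricature the “big data, simple algorithms” idea – and this is my own toy illustration, not a description of how Google Translate actually works – with a large enough table of previously translated phrases, “translation” reduces to lookup plus a little stitching together:

```python
# A toy caricature of "big data, simple algorithms" translation: look each
# phrase up in a table of previous human translations and stitch the results
# together. The phrase table here is invented for illustration; a real system
# would have learned millions of entries from existing translations.

phrase_table = {
    "good morning": "bonjour",
    "the station": "la gare",
    "where is": "où est",
}

def translate(sentence, table=phrase_table):
    """Greedy longest-match lookup against the phrase table."""
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest phrase starting at position i first.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in table:
                out.append(table[phrase])
                i = j
                break
        else:
            out.append(words[i])   # unknown word: pass it through untranslated
            i += 1
    return " ".join(out)

print(translate("Good morning where is the station"))   # bonjour où est la gare
```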
The problem with voice appears to be two-fold: firstly, there still isn’t enough audio on the web, and secondly, the range of different vocal styles and tics means that the material that needs to be assembled to get a “big data, simple algorithm” approach to work is that much greater.
Norvig tells interviewer Peter Aldhous that nobody is actually listening to your voice when you leave a message with Google Voice (which then translates the voice into an email) – it’s all automated. But as Aldhous states, it is the sort of thing that has contributed to an unease about Google and its hunger for data.
Of course it is not as though a call on the phone network is particularly, or at all, secure.