Even more about neural networks


Because I essentially got it wrong in the last post … it turns out that a fully connected network is, generally, not a great idea for image processing, and that partial connections – through “convolution layers” – are likely to be more efficient.

And my practical experience backs this up: my first NN did, in effect, have two convolution layers (or filters), although somewhat eccentrically designed as 100 x 1 and 1 x 100 filters. And this network performs better than the single-hidden-layer fully connected alternative. That may just be because it takes an age to train the fully connected network and convergence of the error level towards a low number is taking forever (a convolution layer has many fewer connections and so can be trained much faster).
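To put some numbers on that, here is a back-of-the-envelope sketch – the layer sizes are merely illustrative, not the ones in my code – of how many weights each approach has to learn:

    #include <cstddef>
    #include <iostream>

    int main()
    {
        const std::size_t width = 100, height = 100; // one input block
        const std::size_t hidden = 100;              // hidden neurons

        // Fully connected: every hidden neuron has a weight to every pixel
        const std::size_t fullyConnected = width * height * hidden; // 1,000,000

        // A 100 x 1 convolution filter: one shared set of 100 weights,
        // reused at every position the filter is slid to
        const std::size_t filter = 100;

        std::cout << "fully connected: " << fullyConnected << " weights\n"
                  << "100 x 1 filter:  " << filter << " weights\n";
    }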


Learning more about neural networks


Cheap and accessible books on neural nets are not easy to find – so I bought “Practical Neural Network Recipes in C++” as a second-hand book on Amazon (link). According to Google Scholar this book – though now 24 years old – has over 2,000 citations, so it ought to be good, right?

Well, the C++ is really just C with a few simple classes – lots of pointers and not an STL class to be seen (but then I can hardly blame author Timothy Masters for not seeing into the future). The formatting is awful – it seems nobody even thought to put the source code in a different font from the body text. But, yes, it works.

Essentially I used it to build a simple neural network which worked, after a fashion – though I had to get additional help to understand back propagation, as the book’s explanation is garbled, mixing up summed outputs and outputs from activation functions, for instance.
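For anyone else stuck on the same point: the distinction the book muddles is between a neuron’s summed (net) input and the output of its activation function. A minimal sketch of the update for a single sigmoid output neuron – my own illustration, not the book’s code:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    double sigmoid(double z) { return 1.0 / (1.0 + std::exp(-z)); }

    // One back-propagation step for a single sigmoid output neuron,
    // minimising squared error against a target value
    void trainStep(std::vector<double>& w, const std::vector<double>& x,
                   double target, double eta)
    {
        // the summed (net) input: weights times inputs, BEFORE activation
        double z = 0.0;
        for (std::size_t i = 0; i < w.size(); ++i)
            z += w[i] * x[i];

        // the output: the activation function applied to the net input
        double a = sigmoid(z);

        // the error term uses the derivative of the activation at the
        // net input; for a sigmoid that derivative is a * (1 - a)
        double delta = (a - target) * a * (1.0 - a);

        // adjust each weight against the gradient
        for (std::size_t i = 0; i < w.size(); ++i)
            w[i] -= eta * delta * x[i];
    }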

(In fact I didn’t build a fully connected network, because the book didn’t say – anywhere that I could see, anyway – that you should. I have rectified that now and my network is much slower at learning but does seem, generally, to be delivering better results.)

But it seems that 24 years is a long time in the world of neural nets. I now know that “deep learning” isn’t just (or only) a faddish way of referring to neural networks, but a reflection of the idea that deep nets (i.e., with multiple hidden layers) are generally thought to be the best option, certainly for image classification tasks. Timothy Masters’s book essentially describes additional layers as a waste of computing resources: certainly anything above two hidden layers is expressly dismissed.

Luckily I have access to an electronic library and so haven’t had to buy a book like “Guide to Convolutional Neural Networks” (Amazon link) – but I have found it invaluable in learning what I need to do. But it’s complicated: if I build many different convolutional layers into my code the network will be slow(er) – and it will be time to break out the threads and go parallel. But now I have fallen into this rabbit hole, I might as well go further.
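When I do break out the threads, the job should at least parallelise naturally, since each filter is independent of the others. A rough sketch, where the convolution routine is a naive stand-in rather than my actual code:

    #include <cstddef>
    #include <thread>
    #include <vector>

    // A naive 1-D "valid" convolution, standing in for whatever filter
    // routine the real network uses (assumes the filter is no longer
    // than the input)
    std::vector<double> applyFilter(const std::vector<double>& filter,
                                    const std::vector<double>& input)
    {
        std::vector<double> out(input.size() - filter.size() + 1, 0.0);
        for (std::size_t i = 0; i < out.size(); ++i)
            for (std::size_t j = 0; j < filter.size(); ++j)
                out[i] += filter[j] * input[i + j];
        return out;
    }

    // Each filter's output depends only on the input, so each filter
    // can run on its own thread
    void convolveAll(const std::vector<std::vector<double>>& filters,
                     const std::vector<double>& input,
                     std::vector<std::vector<double>>& outputs)
    {
        outputs.resize(filters.size());
        std::vector<std::thread> workers;
        for (std::size_t f = 0; f < filters.size(); ++f)
            workers.emplace_back([&outputs, &filters, &input, f] {
                outputs[f] = applyFilter(filters[f], input);
            });
        for (auto& t : workers)
            t.join();
    }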


First results from the “musical” neural network


I am working on a project to see whether, using various “deep learning” methods, it is possible to take a photograph of some musical notation and play it back. (I was inspired to do this by having a copy of 1955’s Labour Party Songbook and wondering what many of the songs sounded like.)

The first task is to identify which parts of the page contain musical notation, and I have been working with a training set built from pictures of music chopped into 100 x 100 pixel blocks. Each block is labelled as containing or not containing musical notation, and the network is trained, using back propagation, to attempt to recognise these segments automatically.
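In outline the training set is just a list of labelled blocks, built by chopping up each page. A simplified sketch – the names and layout here are illustrative, the real code is on my Github:

    #include <cstddef>
    #include <vector>

    // One 100 x 100 block cut from a page image, plus its label
    struct TrainingExample {
        std::vector<double> pixels; // 10,000 grey levels, scaled to [0, 1]
        double label;               // 1.0 = contains notation, 0.0 = does not
    };

    // Chop a width x height grey-level page into non-overlapping blocks;
    // labels are filled in by hand afterwards
    std::vector<TrainingExample> chopPage(const std::vector<double>& page,
                                          int width, int height)
    {
        const int B = 100;
        std::vector<TrainingExample> blocks;
        for (int y = 0; y + B <= height; y += B)
            for (int x = 0; x + B <= width; x += B) {
                TrainingExample ex;
                ex.pixels.reserve(B * B);
                for (int r = 0; r < B; ++r)
                    for (int c = 0; c < B; ++c)
                        ex.pixels.push_back(page[(y + r) * width + (x + c)]);
                ex.label = 0.0; // set by hand when the block is reviewed
                blocks.push_back(ex);
            }
        return blocks;
    }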

Now I have tested it for the first time and the results are interesting – but a bit disappointing. In this image all that is plotted is the neural net’s output: the redder the image, the higher the output from the net’s single output neuron:

Neural network output
The brighter the image, the more likely it is that there is music

It’s a bit of a mystery to me why you can see the staves and notes in this sort of shadowy form: it means the network is rejecting them as musical notation even as it highlights the regions where they are found as the best places to look.

To make it all a bit clearer, here are the results with the blue/green pixels of the original image unchanged and the red pixels set according to the strength of the network’s output:

Blaydon Races filtered by neural net

It seems clear the network is, more or less, detecting where there is writing on the page – though with some bias towards musical staves.
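For anyone wondering how the overlay was produced, the idea is simple; a sketch, assuming 8-bit RGB pixels and a network output in [0, 1] (again, the real code is on my Github):

    #include <algorithm>
    #include <cstdint>

    struct Pixel { std::uint8_t r, g, b; };

    // Write the network's verdict into the red channel, leaving the
    // blue and green channels as they were in the original image.
    // netOutput is the single output neuron's value, nominally in
    // [0, 1], for the block containing this pixel.
    void overlay(Pixel& p, double netOutput)
    {
        const double clamped = std::min(1.0, std::max(0.0, netOutput));
        p.r = static_cast<std::uint8_t>(clamped * 255.0);
        // p.g and p.b are left untouched
    }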

I’m not too disappointed. My approach – based on stuff I read in a book almost 25 years old – was probably a bit naïve in any case. I came across a much more recent and what looks to be much more relevant text yesterday and that’s what I will be reading in the next few days.

(You can see the code behind all of this at my Github: https://github.com/mcmenaminadrian)

Back to neural networks


Neural networks have fascinated me for a long time, though I’ve never had much luck applying them.

Back in the early and mid 1990s the UK’s Department of Trade and Industry ran a public promotional programme about NNs for industry and I signed up; I even bought a book about how to build them in C++ (and I read it, though I have to confess my understanding was partial).

My dream then was to apply the idea to politics, as a more effective way of concentrating resources on key voters. My insight was that, when out doing what political parties then called “canvassing” and now – largely for less than fully honest legal reasons, as far as I can see – call “voter ID” (in electoral law “canvassing”, i.e., seeking to persuade someone to vote one way, is more highly regulated than simply “identifying” how they intend to vote), you could quite often tell how someone would respond to you even before they opened the door. There was some quality that told you that you were about to knock on a Labour voter’s door.

I still don’t know what that quality is – and after the recent election I’m not sure the insight applies any more, anyway – but the point was that if you could take a mass of data and have the NN find the function for you, then you could improve your chances of getting to your potential support and so on…

But I never wrote any code and NNs seemed to go out of fashion.

Now, renamed “machine learning”, they are back in fashion in a big way and my interest has been revived. But I am not trying to write any code that will work for politics.

Instead I am exploring whether I can write any code that will look at a music score and play it back. (This is probably a silly idea for a first NN project, as music is not easy to decipher in any way, as far as I can see.)

I have read another book on C++ and neural networks (and even understood it): an ancient tome from the previous time NNs were in fashion.

The first task is to identify which bits of scribbling on the page are musical notation at all. And I have written some code to build a training set for that – here. It might well be of use to you if you need to mark bits of a JPEG as “good” or “bad” for some other purpose, so please do get in touch if you need some help with that.