I am working on a project to see whether, using various “deep learning” methods, it is possible to take a photograph of some musical notation and play it back. (I was inspired to do this by having a copy of 1955’s Labour Party Songbook and wondering what many of the songs sounded like.)
The first task is to identify which parts of the page contain musical notation. I have been working with a training set built from pictures of music chopped into 100 x 100 pixel blocks: each block is labelled as either containing or not containing musical notation, and the network is trained, using back-propagation, to recognise these segments automatically.
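The real code is on my GitHub (linked at the end), but to give a rough idea of the shape of the approach, here is a minimal sketch: a single hidden layer, sigmoid units and squared-error loss, trained one block at a time by back-propagation. Everything beyond the single output neuron and the use of back-propagation is an illustrative assumption, not a description of my actual implementation.

```python
# Minimal sketch, not the code in the repository: a one-hidden-layer net
# with a single sigmoid output neuron, trained by back-propagation on
# flattened 100 x 100 pixel blocks. Sizes and hyper-parameters are
# illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BlockClassifier:
    def __init__(self, n_inputs=100 * 100, n_hidden=30, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.01, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.01, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.w2 + self.b2)  # single output neuron
        return self.y

    def backprop(self, x, target):
        y = self.forward(x)
        # squared-error loss; delta terms for sigmoid units
        delta_out = (y - target) * y * (1.0 - y)
        delta_hid = delta_out * self.w2 * self.h * (1.0 - self.h)
        # plain gradient-descent updates
        self.w2 -= self.lr * delta_out * self.h
        self.b2 -= self.lr * delta_out
        self.w1 -= self.lr * np.outer(x, delta_hid)
        self.b1 -= self.lr * delta_hid
        return y

# Training loop over labelled blocks: `blocks` holds flattened 100 x 100
# greyscale blocks scaled to [0, 1]; `labels` is 1 for "contains
# notation" and 0 otherwise.
def train(net, blocks, labels, epochs=10):
    for _ in range(epochs):
        for x, t in zip(blocks, labels):
            net.backprop(x, t)
```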
Now I have tested it for the first time and the results are interesting, if a bit disappointing. In the image below only the neural net's output is plotted: the redder a region, the higher the output from the net's single output neuron:

It is a bit of a mystery to me why the staves and notes show through in this shadowy form: it means the network is rejecting the notation itself as musical notation, even as it highlights the regions where it is found as the best places to look.
To make it all a bit clearer, here are the results with the blue and green pixels of the original image left unchanged and the red channel set according to the strength of the network's output:
It seems clear the network is, more or less, detecting where there is writing on the page – though with some bias towards musical staves.
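For completeness, here is a sketch of how both pictures above might be produced: score the page block by block with the trained net, then paint the scores into the red channel, either on a black background (the first picture) or over the original blue and green channels (the second). It assumes the toy classifier above, Pillow for image handling and non-overlapping blocks; none of these details is confirmed by the post.

```python
# Illustrative sketch only: visualise the net's output over a page.
import numpy as np
from PIL import Image

def block_scores(grey, net, block=100):
    """Run the net on each non-overlapping block; return scores in [0, 1]."""
    rows, cols = grey.shape[0] // block, grey.shape[1] // block
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = grey[i * block:(i + 1) * block, j * block:(j + 1) * block]
            scores[i, j] = net.forward(patch.flatten() / 255.0)
    return scores

def render(page_path, net, block=100):
    rgb = np.array(Image.open(page_path).convert("RGB"))
    grey = np.array(Image.open(page_path).convert("L"))

    # crop to a whole number of blocks so shapes line up
    h = (rgb.shape[0] // block) * block
    w = (rgb.shape[1] // block) * block
    rgb, grey = rgb[:h, :w], grey[:h, :w]

    scores = block_scores(grey, net, block)
    # expand one score per block back to pixel resolution
    red = (np.kron(scores, np.ones((block, block))) * 255).astype(np.uint8)

    # picture 1: the net's output alone; redder means higher output
    heat = np.zeros_like(rgb)
    heat[:, :, 0] = red

    # picture 2: blue/green channels of the original kept,
    # red channel replaced by the net's output
    overlay = rgb.copy()
    overlay[:, :, 0] = red

    Image.fromarray(heat).save("heatmap.png")
    Image.fromarray(overlay).save("overlay.png")
```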
I'm not too disappointed. My approach, based on stuff I read in a book almost 25 years old, was probably a bit naïve in any case. Yesterday I came across a much more recent text that looks far more relevant, and that is what I will be reading over the next few days.
(You can see the code behind all of this at my Github: https://github.com/mcmenaminadrian)