Could someone explain this contradiction to me?


Reading on in Julian Havil’s Gamma: Exploring Euler’s Constant, and inspired by his discussion of the harmonic series, I come across this:

\frac{1}{1-e^x} = 1 + e^x + e^{2x} + e^{3x} + ...

Havil calls this a “non-legitimate binomial expansion” and it seems to me it can be generalised:

(1 - r^x)^{-1} = 1 + r^x + r^{2x} + r^{3x} + ...

as 1 = (1 - r^x)(1 + r^x + r^{2x} + r^{3x} + ...) = 1 + r^x + r^{2x} + r^{3x} + ... - r^x - r^{2x} - r^{3x} - ...
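
Writing q = r^x, the finite version of that cancellation is just the standard partial-sum identity (spelled out here for clarity):

(1 - q)(1 + q + q^2 + ... + q^n) = 1 - q^{n+1}

with the infinite expansion read as the limit of this as n \to \infty.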

And indeed, if we take x = -1, r = 2, we get:

\frac{1}{1-2^{-1}} = 2 = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + ... in the limit.

But if we have x \geq 0 the series diverges and the identity, which seems algebraically sound to me, breaks down. E.g., r = 2, x = 2:

\frac{1}{1-4} = -\frac{1}{3} = 1 + 4 + 16 + 64 + ...

So what is the flaw in my logic?

In Pursuit of the Traveling Salesman


In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation

A representation of the relation among complexity classes, which are subsets of each other (Photo credit: Wikipedia)

This is a strange little book – and while I am happy to recommend it, I am not really clear in my own mind what sort of a book it is meant to be.

A popular description of a famous problem in the “NP” space and a gentle introduction to the whole issue of computational complexity and complexity classes? Not really: it assumes a bit too much knowledge for that.

So, a canter round the basic maths of the TSP and complexity? Not really that either, as it is light on mathematical specifics.

Still, it’s a decent book.

If you want bad advice, ask a London taxi driver


Steve McNamara, general secretary of the Licensed Taxi Drivers’ Association in London, quoted in the Guardian on the prospect of driverless buses in the capital:

“We don’t have a lot of confidence in anything that comes out of TfL [Transport for London], to be honest, and the fact that they’re suggesting it means it’s almost certainly likely not to happen.

“Who knows with technology, but some of the simplest things, they still can’t do. The best example is voice recognition technology.

“If you’ve got it on your car… it’s rubbish. If you’ve got it on your phone, it’s probably worse. They’re all crap, aren’t they? None of them work, and they can’t even get that right. And they expect people to get into driverless cars?”

Where do you begin with this?

Firstly, we should note that the Mayor’s office ran a million miles away from the suggestion – in their own paper – that at some point between now and 2050 driverless buses will be on London’s streets. To make it worse, they – plainly less than truthfully – tried to claim that references in their own paper to driverless vehicles were references to tube trains.

The disappointing thing is that instead of once again pioneering a public transport technology – London gave the world underground railways and once had the world’s most admired bus network too – London’s public administrators are not willing to lead.

Before anyone on the left says “what about the jobs”, my reply is “what about them?” Is not the left meant to be about freeing human creativity from the realm of necessity? The issue is the distribution of the opportunities freed by the removal of the need to drive buses – it cannot be about preserving relatively low-skilled jobs that are no longer required.

As for Steve McNamara, I am amused by the fact he thinks speech recognition is the “simplest thing”. Should we reply that three billion years of evolution have produced only one species that can speak, so it can’t be that simple? Or perhaps ask McNamara how many languages he can speak, given that speech recognition is so simple?

In fact, my guess would be that speech recognition is many times more difficult, computationally speaking, than driving a bus. However, the risk of human injury means that speech recognition software is socially more acceptable than driverless vehicles – for now. But I don’t expect that to last.

Was new maths really such a disaster?


Freeman Dyson (Photo credit: Wikipedia)

On holiday now, so I fill my time – as you do – by reading books on maths.

One of these is Julian Havil’s Gamma: Exploring Euler’s Constant.

The book begins with a foreword by Freeman Dyson, in which he discusses the general failure, as he sees it (and I’d be inclined to agree), of mathematical education. He first dismisses learning by rote and then condemns “New Mathematics” as – despite its efforts to break free of the failures of learning by rote – an even greater disaster.

Now, I was taught a “new maths” curriculum up to the age of 16 and I wonder if it really was such a disaster. The one thing I can say is that it didn’t capture the beauty of maths in the way the ‘A’ level (16 – 18) curriculum came close to doing. At times I really wondered what it was all about – at the age of 15, matrix maths seemed excessively abstract.

But many years later I can see what all that was about, and I do think my new maths education gave me quite a strong grounding in a lot of the fundamentals of computing and the applied maths it involves.

To the extent that this blog has an audience, I know it is read by those with an interest in maths and computing, and I would really welcome views on the strengths and weaknesses of the new maths approach.

A horror story with a happy ending (hopefully)


An LGM-25C Titan intercontinental ballistic missile in silo, ready to launch (Photo credit: Wikipedia)

Command and Control is not a piece of light reading – in any sense. But it is an absolutely essential book.

It tells the story of the United States’ nuclear weapons programme from the Manhattan Project to the present day, with an emphasis on safety management, foregrounding the story of a particular accident in a Titan II missile silo in 1980.

Finishing it, you are left wondering why we are still here at all – because it is surely more by luck than design that civilisation has managed to survive the nuclear age – particularly the forty-five years of the Cold War when, more or less, fundamentally unsafe weapons were handed out willy-nilly to military personnel who were not even vetted for mental illness.

We read of how politicians – Eisenhower, Kennedy, Nixon, Carter – all tried (to varying degrees – Eisenhower comes off worst, as a fundamentally weak man) to get some sort of grip on the nuclear colossus, and all essentially capitulated to a military more interested in ensuring its weapons would work when needed than that they were safe when not.

The good news is that the book has a relatively happy ending, in that the end of the Cold War and the persistent efforts of a few scientists and engineers, deep within the US nuclear weapons programme, eventually led to safety being given a greater priority. The chance of an accidental nuclear war is probably lower now than it has ever been – but it is not zero.

The book, perforce, does not give us much insight into the Soviet (or Chinese, or indeed French, British, Indian, Israeli or Pakistani) nuclear programmes – was the Soviet programme safer because state control was so much stricter (the fear of Bonapartism), or more dangerous because the Soviets were always running to catch up? The book suggests both at different points.

It’s brilliantly written too – so if you want a bit of chill to match the summer sun in your holiday reading, I do recommend it.

Forty-five years on


Archive: Apollo 11 Sees Earthrise (NASA, Marshall, 07/69) (Photo credit: NASA’s Marshall Space Flight Center)

I’m a day late here, but the sheer brilliance of the achievement of Apollo 11 means I have to write of it.

I was just three, but I remember the day well, watching the black and white images on the TV in the corner of the room in Donegal – where we were on holiday.

Apollo 11 has shaped my life in a very real way – a lifelong love of science.

Curses on ncurses


gdb icon, created for the Open Icon Library (Photo credit: Wikipedia)

Every programmer will be familiar with something like this…

A little while back I wrote a program that simulates – crudely but effectively – a multicore NoC (network-on-chip) device. I use it to model the execution times of different page replacement algorithms.

The input is XML generated via a step-by-step trace of a working program. The actual instructions being traced do not matter – what I care about are the memory access patterns.

To allow me to test more models more quickly I have now written some R code that generates a semi-random access pattern based, very loosely indeed, on the patterns seen in the real program. The advantage is I can test against a set number of memory accesses but with a range of pseudo-random access patterns, so although I am not running models against the “real” access pattern, neither am I taking three weeks per experiment.

But when I used the artificially generated access patterns, my program crashed with a segfault. Even more confusingly, when I ran the code in GDB, the GNU debugger, it worked if I stepped through it – yet if I simply ran it under the debugger, it crashed just as it did outside the debugger.

After a few hours I realised why: in my artificial patterns, the first thing the first thread does is spawn all the other threads to be used. In real-world code, of course, these spawns take place only after quite some code has been executed.

Every thread spawn causes the ncurses code I am using to update the screen. When using ‘real’ access patterns these updates take place comfortably after the ncurses environment has been set up (by a separate thread), but with the artificial patterns the thread updates are the first things posted to the screen, before ncurses has been set up at all – hence the crash.

If I step through the code, the ncurses thread runs ahead and sets up the screen before I hit the thread-update code, so again it works.

The solution? Use a condition variable and a mutex to ensure that nothing executes before the ncurses environment is fully established.
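
As a minimal sketch of that guard – using POSIX threads, with illustrative names rather than the simulator’s actual identifiers – it looks something like this:

#include <pthread.h>
#include <stdbool.h>

/* Guard state - names are illustrative, not from the actual simulator */
static pthread_mutex_t curses_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t curses_ready = PTHREAD_COND_INITIALIZER;
static bool curses_up = false;

/* Called by the display thread once initscr() and friends have finished */
void signal_curses_ready(void)
{
    pthread_mutex_lock(&curses_lock);
    curses_up = true;
    pthread_cond_broadcast(&curses_ready);
    pthread_mutex_unlock(&curses_lock);
}

/* Called by every worker thread before its first screen update */
void wait_for_curses(void)
{
    pthread_mutex_lock(&curses_lock);
    while (!curses_up)  /* loop because pthread_cond_wait can wake spuriously */
        pthread_cond_wait(&curses_ready, &curses_lock);
    pthread_mutex_unlock(&curses_lock);
}

With this in place, even a trace whose first action is to spawn every worker behaves: each worker blocks in wait_for_curses() until the display thread has called signal_curses_ready().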

Not a big deal – but perhaps, at some point in the future, someone struggling to understand why their code – which previously worked so well – has stopped processing what seems to be well-formed input will stumble across this. Hope it helps!
