A further suggestion that P!=NP


There is a class of mathematical problems known as “P”: these can be solved in “polynomial time” or, to cut to the chase, they are problems we know how to solve (algorithmically) when we see them. A trivial example: if you are given a number X, how do you calculate Y, which is X times 10? (If you don’t know the algorithm for multiplying a base 10 number by 10 then stop reading this now!)

There is another class of problems – the “NP” (non-deterministic polynomial time) problems – for which we know how to check whether a proposed solution is correct, but not how to find a solution efficiently in the first place. A typical example is producing the factors of a very large number created by multiplying two very large prime numbers together. If I gave you the factors you could quickly verify that their product was the larger number, but if I gave you only the larger number you might be here until the end of the universe trying to work out what the factors were, as the methods we know of (today) essentially amount to searching through the possible prime factors one by one.
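To make that asymmetry concrete, here is a minimal Python sketch – the primes and the trial-division search are purely illustrative, and real keys use primes hundreds of digits long – showing that checking a claimed factorisation is a single multiplication, while recovering the factors from the product means a long search:

    # A minimal sketch (purely illustrative, tiny numbers) of the asymmetry above:
    # checking a factorisation is one multiplication, recovering it is a search.
    import math

    p, q = 999983, 1000003        # two known primes; real keys use far larger ones
    n = p * q

    # Verifying a claimed solution: effectively instant.
    print(p * q == n)             # True

    # Solving from scratch: search candidate divisors up to sqrt(n).
    def trial_division(n):
        for d in range(2, math.isqrt(n) + 1):
            if n % d == 0:
                return d, n // d
        return None               # n is prime

    print(trial_division(n))      # (999983, 1000003), after roughly a million trial divisions
    # For a several-hundred-digit n the same kind of search is utterly infeasible –
    # hence "one way".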

But it may just be the case that “NP” problems are actually “P” problems after all, and we simply have yet to discover the algorithms that solve them. If we could show that NP=P in this way then we could do things like simply drawing up train and class timetables directly, instead of fiddling with expensive and approximate software tools to get them right.

But we’d also be able to crack internet encryption, which essentially relies on very large numbers produced by multiplying two very large primes together (the basis of public and private keys). These “one way functions” – bits of maths that are easy to do in one direction (multiply the two primes) but essentially impossible in the other (factor their product) – are at the core of internet encryption.

Encryption and mathematics are deeply entwined and so communications intelligence agencies like the NSA in the United States and GCHQ in the UK are also centres of mathematical excellence – trying to break codes and, we must assume, test whether P=NP after all.

So this fascinating story about the state of encryption software in the US might suggest to us that the NSA have not been able to prove P=NP (most mathematicians think P!=NP but that is not proved either).

(The story essentially suggests that the NSA paid to have a widely used piece of internet cryptography – the Dual_EC_DRBG random number generator – operate like an Enigma machine where the starting positions are always known. As with Enigma, once you had that, everything else would fall into place, and what looks like a random sequence of letters could quickly be converted to plain text.)

Of course it could all be paranoid nonsense, or even a cover to make us think that they haven’t cracked the P=NP problem (as, after all, you’d guard that secret with everything you’d got – except internet encryption!). Paying out a few million dollars to make people think you had doctored one-way functions because you could crack them no other way would be money very well spent!

The world’s fastest computers


Every six months or so a list of the world’s fastest computers is published in the “Top 500” list.

Except it is not. For surely the world’s fastest computers are in the hands of the NSA, GCHQ and whatever the French and Chinese equivalents are (I think we can probably discount the Russians for now)? At least I hope so – we taxpayers ought to be getting something in return for the money we are ploughing into these bodies.

The “revelation” that the NSA are seeking to build a quantum computer is most remarkable for the fact that it suggests they have not already got one. Just as the trouble the British authorities had, back in 2011, with rioters co-ordinating their actions through encrypted messages told us nobody at GCHQ had proved P=NP, what is most interesting about these stories is what is missing, not what is in there.

(Of course, there is another explanation on the riots: GCHQ have proved P=NP and are even now using the algorithm they developed from it to crack everybody else’s codes, and fundamentally a few wrecked city centres matter less than that. But I doubt this very much.)

Even if P=NP we might see no benefit


A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution. (Photo credit: Wikipedia)

Inspired by an article in the New Scientist I am returning to a favourite subject – whether P = NP and what the implications would be in the (unlikely) case that this were so.

Here’s a crude but quick explanation of P and NP: P problems are those that can be solved in a known time given by a polynomial (hence P) of the problem’s size – i.e., we know in advance how to solve the problem. NP (the N standing for non-deterministic) problems are those for which we can quickly (i.e., in polynomial time) verify that a solution is correct, but for which we don’t have an efficient algorithmic solution to hand – in other words we have to try all the possible candidate solutions in the hope of hitting the right one. Reversing one-way functions (used to encrypt internet commerce) is an NP problem – hence it is thought/hoped that internet commerce is secure. On the other hand, drawing up a school timetable is also an NP problem, so solving that would be a bonus. There is a set of problems, known as NP-complete: if any one of them were shown to be, in reality, a P problem then P = NP – in other words there would be no NP problems as such (we are ignoring NP-hard problems here).
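As a small illustration of that verify-versus-solve gap, here is a Python sketch using subset sum, one of the NP-complete problems – the numbers and target are invented purely for illustration. Checking a proposed answer is a one-line sum; the only general method we have for finding one is to try the exponentially many subsets.

    # Subset sum: given a list of integers, is there a subset adding up to a target?
    # Verifying a claimed subset is trivial; finding one, in general, is not.
    from itertools import combinations

    numbers = [267, -53, 912, 481, -622, 305, 88, -174, 730, -41]
    target = 1012                       # e.g. 267 + 481 + 305 - 41

    def verify(subset, target):
        """Polynomial-time check of a proposed certificate."""
        return sum(subset) == target

    def solve(numbers, target):
        """Brute force: try all 2^n subsets – exponential in len(numbers)."""
        for r in range(1, len(numbers) + 1):
            for subset in combinations(numbers, r):
                if verify(subset, target):
                    return subset
        return None

    print(solve(numbers, target))       # feasible for 10 numbers, hopeless for 10,000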

If it was shown we lived in a world where P=NP then we would inhabit ‘algorithmica’ – a land where computers could solve complex problems with, it is said, relative ease.

But what if, actually, we had polynomial-time solutions to these problems but they were too complex to be of much use? The New Scientist article – which examines the theoretical problems faced by users of the ‘simplex algorithm’ – points to just such a case.

The simplex algorithm aims to optimise a multiple-variable problem using linear programming – as in an example they suggest: how do you get bananas from 5 distribution centres, each with a different level of supply, to 200 shops with varying levels of demand – a 1000-dimensional problem.
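For anyone who wants to see what such a linear programme looks like in practice, here is a scaled-down sketch – two depots and three shops rather than five and two hundred, with made-up supplies, demands and costs – solved with SciPy’s general-purpose linprog routine (SciPy and NumPy are assumed to be installed):

    # A toy version of the banana example: 2 depots, 3 shops, 6 decision variables.
    import numpy as np
    from scipy.optimize import linprog

    supply = np.array([20, 30])           # crates available at each depot
    demand = np.array([15, 20, 15])       # crates wanted at each shop
    cost = np.array([[4, 6, 9],           # cost[i, j]: depot i -> shop j
                     [5, 3, 7]])

    c = cost.flatten()                    # variables x[i, j], flattened row-major

    # Supply constraints: each depot ships no more than it holds (A_ub @ x <= b_ub).
    A_ub = np.zeros((2, 6))
    A_ub[0, 0:3] = 1
    A_ub[1, 3:6] = 1

    # Demand constraints: each shop gets exactly what it needs (A_eq @ x == b_eq).
    A_eq = np.zeros((3, 6))
    for j in range(3):
        A_eq[j, [j, j + 3]] = 1

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None))
    print(res.x.reshape(2, 3))            # optimal shipping plan
    print(res.fun)                        # total cost

The real 5-by-200 version works exactly the same way, just with a 1000-element cost vector – which is where the geometry discussed below comes in.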

The simplex algorithm involves seeking the optimal vertex in the geometrical representation of this problem. This was thought to be rendered a problem in P via the ‘Hirsch conjecture’ – that the maximum number of edges we must traverse to get between any two corners on a polyhedron is never greater than the number of faces of the polyhedron minus the number of dimensions of the problem.

While this is true in the three-dimensional world, a paper presented in 2010 and published last month in the Annals of Mathematics – A counterexample to the Hirsch Conjecture by Francisco Santos – has knocked down its universal applicability. Santos found a 43-dimensional shape with 86 faces. If the Hirsch conjecture were valid then the maximum distance between two corners would be 43 steps, but he found a pair at least 44 steps apart.

That leaves another limit – devised by Gil Kalai of the Hebrew University of Jerusalem and Daniel Kleitman of MIT – but this, says the New Scientist, is “too big, in fact, to guarantee a reasonable running time for the simplex method“. Their two-page paper can be read here. They suggest the diameter (maximal number of steps) is bounded by n^{log(d+2)}, where n is the number of faces and d the number of dimensions. (The Hirsch conjecture’s bound is instead n-d.)

So for Santos’s shape we would have a maximal diameter of \approx 10488 (this is the upper limit, rather than the actual diameter) – a much bigger figure, even for a relatively low-dimensional problem. The paper also refers to a linear programming method that would require, in this case, a maximum of n^{4\sqrt d}\approx 10^{50} steps: not a practical proposition if the dimension count starts to rise. (NB: I am not suggesting these are the real limits for Santos’s shape, I am merely using the figures as an illustration of the many orders of magnitude of difference they suggest might apply.)
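As a quick sanity check on the orders of magnitude involved (with the same caveat that these are illustrative upper bounds, not actual diameters), a couple of lines of Python reproduce the n^{4\sqrt d} figure for Santos’s n = 86, d = 43 case, alongside the Hirsch bound:

    import math

    n, d = 86, 43                              # faces and dimensions of Santos's counterexample
    print(n - d)                               # Hirsch bound: 43 steps
    print(4 * math.sqrt(d) * math.log10(n))    # log10 of n^(4*sqrt(d)): about 50.7, i.e. ~10^50 steps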

I think these figures suggest that proving P = NP might not be enough even if it were possible. We might have algorithms in P, but the time required would be such that quicker, if somewhat less accurate, approximations (as often used today) would still be preferred.

Caveat: Some/much of the above is outside my maths comfort zone, so if you spot an error shout it out.

Have we reached “peak silicon” and what can we do about it?


Moore’s Law states that the number of transistors that can be squeezed into a given slice of silicon doubles every two years (or 18 months) – something I wrote about recently and where I declared “More transistors means greater speed, more and cheaper memory and so on … ”

Except, maybe not. As the graph below, shamelessly grabbed from Herb Sutter’s “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”, shows, while Moore’s Law (the green line) holds true, the other associated improvements that we have come to expect to parallel it, such as a similar increase in efficiency per watt (royal blue line) and clock speed (navy blue), have not. In short, we can build cheaper chips but they are not necessarily much faster.

Herb Sutter's graph of CPU performance

And, as this article recounts, we are now talking about “dark silicon” – bits of chips that have to remain unpowered while other parts are in use, so as to ensure the whole chip does not fry or fail due to too high a power consumption.

So, if we have reached the point of “peak silicon” what can we do about it?

The chip manufacturers have responded by packing more and more cores into their devices, and that works up to a point – we do not even need very parallel code outside the operating system to take some advantage of that on a typical multitasking desktop machine. But none of us is doubling the number of video renderers, MP3 decoders, database queries and spreadsheet calculations we run in parallel every 18 months, so the “Moore paradigm” of computing power doubling in that time will be lost.

A more fundamental alternative is to rewrite our software so that it is inherently designed to take advantage of multicore machines. Writing effective parallel software is not easy, but it can be done for lots of tasks. Even then there are limits: “Amdahl’s law” reminds us that parallelisation only speeds up the parts of a program that can be run in parallel. Say we had a piece of code that must run in serial and takes 5 seconds, plus some code that currently takes 55 seconds but could be made perfectly parallel. With 2 processors the job takes 5 seconds (the serial part) plus 27.5 seconds for the parallel part – doubling the processors does not quite halve the time, giving a 46% saving. Doubling the number of processors again (to 4) cuts total computing time to 18.75 seconds, but the saving relative to the previous step drops to 42%. In other words, the “Moore paradigm” also disappears.
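The arithmetic is easy to reproduce – here is a short Python sketch using the same made-up 5-second serial / 55-second parallel split, printing the run time and the saving from each doubling of processors:

    # Amdahl's law illustration: fixed 5 s serial portion, 55 s perfectly parallel portion.
    serial, parallel = 5.0, 55.0

    previous = serial + parallel          # 60 s on a single processor
    for cores in (2, 4, 8, 16):
        total = serial + parallel / cores
        saving = (previous - total) / previous
        print(f"{cores:2d} cores: {total:6.2f} s  ({saving:.0%} saving on the previous step)")
        previous = total
    # Output: 2 cores: 32.50 s (46%), 4 cores: 18.75 s (42%), 8 cores: 11.88 s (37%), ...

However many cores you add, the total time can never drop below the 5 seconds of serial work – which is the point of Amdahl’s law.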

The third thing we can do is look for better algorithms: the recent announcement of a vastly improved fast Fourier transform (FFT) algorithm shows what can be done here – algorithmic improvement can vastly outstrip hardware speed-up. But currently for many problems (those in NP space) there is no known algorithm beyond, in effect, dedicating computing power to working through all the possible candidate solutions looking for the one that works (we do not know what algorithm solves an NP problem, but once a solution is found we can verify it ‘easily’). Assuming, as most mathematicians are said to do, that P does not equal NP (i.e. there is no yet-to-be-discovered algorithm that cracks NP problems), this at least means that “peak silicon” will keep internet commerce safe for the foreseeable future, but it is bad news in lots of other ways.
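To give a feel for how much a better algorithm can matter, here is a small and entirely generic Python comparison of the textbook O(n^2) discrete Fourier transform against NumPy’s O(n log n) FFT – nothing to do with the specific new FFT result mentioned above, just an illustration of algorithmic versus hardware gains:

    import numpy as np

    def naive_dft(x):
        """Textbook O(n^2) discrete Fourier transform, for comparison only."""
        n = len(x)
        k = np.arange(n)
        w = np.exp(-2j * np.pi * np.outer(k, k) / n)   # full n-by-n DFT matrix
        return w @ x

    x = np.random.rand(1024)
    assert np.allclose(naive_dft(x), np.fft.fft(x))    # same answer...
    # ...but the FFT needs ~n log n operations against ~n^2 here: at n of about a
    # million that is roughly 20 million operations versus around a trillion.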

There is a fourth option, of course, which is to get better physics – either for silicon fabrication, quantum computing or some other physics-based innovation. Right now, though, these are probably still the least likely options, but as the links below show, lots of people are working on them.

Reflections on the riots: part one


AddRoundKey operation for AES
Image via Wikipedia

This is a blog about computing (along with some maths and science) – and not about politics, and having disciplined myself to stick to that for the last nine months, I intend to keep it that way, even as I write about the biggest political event of the year.

But I will allow myself two short political observations: firstly, that disrespect for the law and contempt for order are not new things in London. If you read Albion’s Fatal Tree you will see that there have long been many in the capital who have made at least part of their livelihoods from criminality and who celebrated their fellows. Pretending that recent events represent some terrible breakdown in an ancient respect for authority is ahistorical.

And, before people start to say it is the fault of rap music or other “alien” influences, do they remember this? Perhaps the Fast Show is the real cause of the disorder?

So, that over, what is the science point? Well, it was consistently reported during last week’s disturbances that the looters were sharing their intelligence through BlackBerry smart phones, specifically through “BlackBerry Messenger” (BBM). Given that the UK has one of the most sophisticated signals intelligence set-ups in the world at GCHQ, the fact that the police were clearly left in the halfpenny seats by the looters suggests to me that nobody there has yet proved that P=NP or developed an algorithm to crack the one-way functions used to encrypt the BBMs.

According to Wikipedia, BlackBerry encrypts everything with the “Advanced Encryption Standard” (AES). A brute force attack on this would, on average, require 2^{255} attempts (for the 256-bit encryption), so that is not a practical option (e.g. the universe is very roughly 4 \times 10^{17} seconds old).
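To put those two numbers side by side (the trillion-keys-per-second rate below is an invented, deliberately generous assumption, purely for scale):

    # How long would a brute-force search of a 256-bit key space take?
    attempts = 2**255                 # expected number of trials (half the key space)
    rate = 10**12                     # assumed keys tested per second - deliberately generous
    universe_age = 4.35e17            # age of the universe in seconds, roughly

    seconds_needed = attempts / rate
    print(f"{seconds_needed:.1e} seconds, or {seconds_needed / universe_age:.1e} universe lifetimes")
    # ~5.8e64 seconds - about 1.3e47 times the age of the universe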

Now, it could be that the US government has cracked this thing and just refuses to tell even its closest ally (I dare say the name Kim Philby is still spat out in various places), but my guess is that AES is safe, for now.

As I have said before that is probably a pity: while a world where P=NP would be one where internet commerce was broken, it would also be one with many compensatory benefits.

“Basically, you would be able to compute anything you wanted”


The quote that forms the title here comes from Lance Fortnow, a computer scientist at Northwestern University, in an article (here – subscription required) in the current edition of the New Scientist on the P = NP question.

It’s an odd statement for a computer scientist to make – almost all numbers are transcendental and, more to the point, uncomputable: the computable numbers are only countably infinite (\aleph_{0}, aleph null, the smallest of Cantor’s hypothesised infinities), while there are uncountably many real numbers between 0 and 1 (or between any pair of integers), so the overwhelming majority of them can never be computed at all.

But besides that oddity it is actually a very good article – calling the world where P = NP Algorithmica – “the computing nirvana”.

I have written before of how much I hope we do live in Algorithmica, though the consensus is we live in a world of NDAlgorithmica (ND for non-deterministic).

The article’s beauty is that it also examines the states between the two: what if, for instance, we discovered that the class of P problems were identical to the class of NP problems but that we could not find the P algorithm, or that the P algorithm was of such a degree of complexity it “would not amount to a hill of beans”?

Possibly the most important news you will read this year


An example of how a public and private key are used (image via Wikipedia)

Apparently P==NP. (So public key encryption – used for internet commerce – is broken and many more problems than we previously thought are quickly solvable).

At least that is the suggestion you can read here. Slashdot also has this here.

If it’s true then the revolution has just begun. If it’s false, well, tomorrow’s another day…