From Frank Vega:
I wanted to reply to one of your comments on your post “Even if P=NP we might see no benefit“, but I found I can no longer do so on that page, perhaps due to a problem with my internet connection. I was the person who claimed a possible resolution of the “P versus NP” problem in my paper “P versus UP”, which is published in IEEE.
I have been working on that problem as a hobby since I graduated. I sent a preprint version to Arxiv with the intention of finding some kind of collaboration. I also tried to find help on other blogs. But, finally, I decided to send to an IEEE journal a new version, which differs from the preprints on Arxiv that I withdrew because they had some errors. Then I waited and, after review, it was published in an IEEE journal on 17th August 2012.
However, I wrote this paper in Spanish and I had the same version in English. So I decided to send it again to Arxiv, but they denied me that possibility; therefore, I used a pseudonym. I also uploaded other papers under that name which are not so serious but reflect the work that I am doing right now, also as a hobby.
I love Computer Science and Math. I am working right now on a project as important as P versus NP, but I do all this as a way of doing the things that I like most, although my environment does not allow me much at all. I have also tried to work with other scientists, some of whom have invited me to work with them since I published my paper in IEEE. Indeed, I don’t want to be annoying with my comments; I am just seeking exchanges with other people who have the capacity to understand my work, that’s all.
Frank’s website is here: http://the-point-of-view-of-frank.blogspot.com/
First of all – a quick explanation of P and NP. The class of problems known as ‘P’ – for polynomial time – are those that can be solved by a known algorithm – a sequence of mechanical steps – in a time bounded by a polynomial in the size of the input. For instance, solving x – 2 = 0 (i.e., finding the value of x where the formula evaluates to zero) is a P class problem. NP (nondeterministic polynomial time – not, as is sometimes said, “not P”) problems are much more interesting – these are problems for which no polynomial time solving algorithm is known, but for which any proposed solution can be verified ‘simply’ (i.e., in polynomial time). In the worst case the only known way of solving such a problem may be to try candidate solutions until we find one that works. It is not known whether NP problems (such as drawing up school timetables or cracking internet public key encryption) are really P type problems after all and we simply have not found the algorithms, or whether we are permanently excluded from ‘simple’ (P) solutions. A class of NP problems called ‘NP complete‘ are those that, if any one of them were shown to really be a P class problem, would establish P=NP. Most, but not all, mathematicians think that, in fact, P!=NP.
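The asymmetry between finding and checking can be made concrete with a short sketch (the formula and names here are my own made-up illustration, not anything from a library): checking a proposed truth assignment for a Boolean formula takes time linear in the formula’s size, while the only known general way to find one is to try assignments – exponentially many in the worst case.

```python
from itertools import product

# A formula in conjunctive normal form: each clause is a list of
# (variable, wanted_value) pairs; a clause is satisfied if any pair matches.
# Hypothetical example: (x0 or not x1) and (x1 or x2)
formula = [[(0, True), (1, False)], [(1, True), (2, True)]]

def verify(assignment, formula):
    """Check an assignment against every clause -- polynomial time."""
    return all(any(assignment[var] == val for var, val in clause)
               for clause in formula)

def solve(formula, n_vars):
    """Brute-force search -- up to 2**n_vars candidates in the worst case."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits, formula):
            return bits
    return None

print(solve(formula, 3))  # → (False, False, True)
```

Here `verify` scans the formula once, but `solve` may examine all 2^n assignments – and nobody knows how to do fundamentally better in general.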
So, here’s the problem. It sounds simple, but as it is NP, it’s not! (I got this from Making Embedded Systems: Design Patterns for Great Software)
You have a microcontroller with a timer running at a fixed 4MHz frequency, two 8 bit registers – (a) a count register that counts ticks and (b) a match register that triggers an interrupt when the count register matches it – and a 16 bit prescaler (which scales the ticks, e.g. if set to 2 then twice as many ticks are required to trigger the interrupt).
So how can you set the match and prescaler to work for an arbitrary frequency? Sounds like it should be easily algorithmically managed, but it’s not.
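One pragmatic (if inelegant) approach is simply to search the whole setting space for the least error. This sketch assumes the interrupt rate is clock / (prescaler × match) – my reading of the hardware described above, not a statement about any particular chip – and the register limits given in the problem:

```python
CLOCK_HZ = 4_000_000  # fixed 4 MHz timer clock

def best_settings(target_hz):
    """Search 16-bit prescaler and 8-bit match values whose combined
    division gets the interrupt rate closest to target_hz.
    Assumes interrupt_rate = CLOCK_HZ / (prescaler * match)."""
    best = None
    for prescaler in range(1, 65536):
        # Ideal match value for this prescaler, then clamp to the 8-bit register
        match = round(CLOCK_HZ / (prescaler * target_hz))
        for m in (match, match + 1):  # check neighbours after rounding
            if not 1 <= m <= 255:
                continue
            actual = CLOCK_HZ / (prescaler * m)
            err = abs(actual - target_hz)
            if best is None or err < best[0]:
                best = (err, prescaler, m, actual)
    return best  # (error_hz, prescaler, match, actual_hz)

err, prescaler, match, actual = best_settings(300)  # e.g. a 300 Hz interrupt
```

The catch, of course, is that you cannot always hit the target exactly: you need integer divisors whose product is close to clock/frequency, which is essentially a factoring-flavoured problem – hence the exhaustive search.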
- how do you prove that SAT is NP-complete? (cs.stackexchange.com)
- Are there NP problems, not in P and not NP Complete? (cs.stackexchange.com)
- Rule of thumb to know if a problem could be NP-complete (cs.stackexchange.com)
This is a blog about computing (along with some maths and science) – and not about politics, and having disciplined myself to stick to that for the last nine months, I intend to keep it that way, even as I write about the biggest political event of the year.
But I will allow myself two short political observations: firstly, that disrespect for the law and contempt for order are not new things in London. If you read Albion’s Fatal Tree you will see that there have long been many in the capital who have made at least part of their livelihoods from criminality and who celebrated their fellows. Pretending that recent events represent some terrible breakdown in ancient respect for authority is ahistorical.
And, before people start to say it is the fault of rap music or other “alien” influences, do they remember this? Perhaps the Fast Show is the real cause of the disorder?
So, that over, what is the science point? Well, it was consistently reported during last week’s disturbances that the looters were sharing their intelligence through BlackBerry smart phones, specifically through “BlackBerry Messenger” (BBM). Given that the UK has one of the most sophisticated signals intelligence set-ups in the world at GCHQ, the fact that the police were clearly left in the halfpenny seats by the looters suggests to me that nobody there has yet proved that P=NP or developed an algorithm to crack the one way functions used to encrypt the BBMs.
According to Wikipedia, BlackBerry encrypts everything with the “Advanced Encryption Standard” (AES). A brute force attack on this would, on average, require 2^255 attempts (for the 256 bit encryption), so that is not a practical option (eg the universe is very roughly 4 × 10^17 seconds old).
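The arithmetic is worth spelling out. Even granting an attacker an implausibly generous trillion keys per second (my assumed figure, for illustration only), the average brute-force search dwarfs the age of the universe:

```python
# Back-of-envelope: average AES-256 brute-force cost vs the age of the universe
attempts = 2 ** 255               # on average, half of the 2**256 key space
rate = 10 ** 12                   # assume a very generous 1e12 keys/second
universe_seconds = 4.3 * 10 ** 17 # ~13.8 billion years in seconds

seconds_needed = attempts / rate
print(seconds_needed / universe_seconds)  # lifetimes of the universe required
```

The answer comes out above 10^46 universe-lifetimes – which is why nobody brute-forces AES-256.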
Now, it could be that the US government has cracked this thing and just refuses to tell even its closest ally (I dare say the name Kim Philby is still spat out in various places), but my guess is that AES is safe, for now.
As I have said before that is probably a pity: while a world where P=NP would be one where internet commerce was broken, it would also be one with many compensatory benefits.
- London riots: how BlackBerry Messenger has been used to plan two nights of looting (telegraph.co.uk)
- RIM to help police on BBM role in London riots (intomobile.com)
- London Riots: BlackBerry Messenger Used More than Facebook or Twitter (mashable.com)
The quote that forms the title here comes from Lance Fortnow, a computer scientist at Northwestern University, in an article (here – subscription required) in the current edition of the New Scientist on the question.
It’s an odd statement for a computer scientist to make – most numbers are transcendental numbers and so are fundamentally incomputable: for instance there are 2^ℵ0 transcendental numbers between 0 and 1 (strictly more than ℵ0 – aleph null, the smallest of Cantor’s hypothesised infinities) – or between any range of integers.
But besides that oddity it is actually a very good article – calling the world where P=NP “Algorithmica” – “the computing nirvana”.
I have written before of how much I hope we do live in Algorithmica, though the consensus is we live in a world of NDAlgorithmica (ND for non-deterministic).
The article’s beauty is that it also examines the states between the two: what if, for instance, we discovered that the class of NP problems were identical to the class of P problems but we could not find the algorithms, or the algorithms were of such a degree of complexity that they “would not amount to a hill of beans”?
I have to make a confession. When I wrote the post below I was genuinely puzzled by the issue and what seemed to be a mistake in P, NP, and NP-Completeness, but just as I was about to press “publish” I saw that several pages earlier Professor Goldreich had stated:
“We consider finite objects, each represented by a finite binary sequence called a string.”
Although 15 pages later he writes of “the set of all strings”, I am now assuming he means it in the sense he first used…
But I did not have the heart not to post my message, as I had worked on it for a little while.
- Levels of Infinity (xamuel.com)
I finished John Naughton‘s A Brief History of the Future: Origins of the Internet – an interesting diversion, to be sure, and a book worth reading (not least because it reminds you of how rapidly internet adoption has accelerated in the last decade).
By now I should be on to proper revision, but I indulged myself last week and bought a copy of P, NP, and NP-Completeness: The Basics of Computational Complexity which I am now attempting to read between dozes in the garden (the weather in London is fantastic). The book is written in a somewhat sparse style but even so is rather more approachable than many texts (the author, Professor Oded Goldreich, rules out using “non-deterministic Turing machines“, for instance, saying they cloud the explanation).
But in his discussion of (deterministic) Turing machines he states:
Is that right? (Serious question, please answer if you can).
Surely the set of all strings has the cardinality of the reals?
If we had a set of strings like this:
and so on…
Then surely the standard diagonalisation argument applies? (i.e. take the diagonal and switch the state of each element – the resulting string cannot be in the original set, as it is guaranteed to differ from the nth member of the set in the nth element.) (See blog on diagonalisation.)
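The construction can be sketched mechanically on finite prefixes (the strings below are my own toy example; the real argument needs infinitely many infinite strings, but the flipping step is the same):

```python
def diagonal(strings):
    """Given n binary strings each of length >= n, build a string that
    differs from the i-th string in its i-th position -- so the result
    cannot equal any string in the list."""
    return ''.join('1' if s[i] == '0' else '0'
                   for i, s in enumerate(strings))

strings = ['0000', '1111', '0101', '1010']
d = diagonal(strings)
print(d)  # → '1011', which differs from strings[i] at position i
assert d not in strings
```

Applied to any claimed enumeration of all infinite binary strings, this produces a string missing from the enumeration – Cantor’s argument that the set is uncountable.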
In Naughton’s book he makes (the very valid) point that students of the sciences are generally taught that when their results disagree with the paradigm, then their results are wrong and not the paradigm: so what have I got wrong here?
- Levels of Infinity (xamuel.com)
- A Brief History of the Future (cartesianproduct.wordpress.com)
- Random Axioms and Gödel Incompleteness (rjlipton.wordpress.com)
- State Machines – Basics of Computer Science (markwshead.com)
- Yogi Berra and Complexity Theory (rjlipton.wordpress.com)
- Understanding Computational Complexity (amitasuri.wordpress.com)
Paul A. Rubin – “apostate mathematician and Professor of Management Science at Michigan State University” – commented on the blog post below about P=NP and so naturally enough I had a look at his blog and the most recent item is something of a corker … “Math and Science Can Be Sexy“.
In it Professor Rubin lists three prominent actresses who are/were also scientists with serious achievements to their name and says that he already knew about one of them … Danica McKellar.
But even as I read it I realised that while I didn’t know about her, I did (vaguely) know about Hedy Lamarr – I think I may have read about her in Andrew Tanenbaum’s Computer Networks – which I read large chunks of earlier this academic year. As I recall, her proposal on frequency-hopping was not taken seriously by the US Navy at the time (during the war), which proved to be a pretty big mistake. It’s now widely used in all sorts of wireless communications.
Anyway, I responded to Professor Rubin’s challenge to me about the potential brokenness of public key encryption if P = NP before I realised he was likely someone with the intellectual firepower to destroy me in an instant – so I hope he will take this link back to his blog as a peace offering.
Plus, writing this is more interesting (what isn’t?) than constructing Unified Process use cases for my object orientated design and programming class…
Been an interesting week here on the blog … as I said before getting on Slashdot is great as you get access to the intellect of a lot of people, and that taught me quite a bit.
It’s not relevant to my course but I went to the Maths shelves of the Birkbeck library and borrowed The Language of Machines – one of the few books on computability in the library.
But before I get down to reading much of that I think I need to reply to some of the harsher remarks that were made about this blog on Slashdot – in particular the piece I wrote at the end of December called “What if P=NP?“
The opening line of that article is “I admit I now going slightly out of my depth” and it does contain a few errors, though nothing I think that justifies some of the really nasty things a couple of people wrote about it. Then again, some people also insisted that proving P=NP would not break public key encryption, so plainly there are people on /. whose ignorance is matched by the vehemence of their opinions. Maybe I shouldn’t let it get to me: especially as one of the critics insisted there were no algorithms to solve NP problems (so how are they solved? Fairy dust inside your computer?)
But I thought I’d try to rewrite my blog post in a better way….
I admit I am now going slightly out of my depth, but I will try to explain what this is about and why it is interesting.
It is known that computers can solve some problems in what is called “polynomial time“: that is to say, a finite time bounded by a polynomial in the size of the input. The key thing is that these problems are computable using known mechanical steps (ie algorithms). Often that means they are solvable in a way that is (sometimes euphemistically) described as “quickly”.
These can be simple problems – like what is the sum of the first ten integers – or more complex ones, such as creating self-balancing trees, sorting database records alphabetically and so on.
These are the so-called “P” (for polynomial time class) problems. Here’s a definition:
A problem is assigned to the P (polynomial time) class if there exists at least one algorithm to solve that problem, such that the number of steps of the algorithm is bounded by a polynomial in n, where n is the length of the input.
Then there is another class of problems which seem fiendishly difficult to solve but for which it is relatively simple to prove the correctness of any solution offered. These problems can also be solved (computed) in a finite time – and they can also be computed by a Turing machine (a simple model of a computer), so an algorithmic solution exists. It is just that no known algorithm solves them in polynomial time. In the worst case the only way – it is thought – that a solution can be found is through an exhaustive search of all candidate solutions – in other words a form of “brute force“. These are the NP (nondeterministic polynomial time class) problems.
Now most mathematicians think that NP does not equal P – ie there are no “simple” solutions to the NP problems – and that may or may not be a good thing, as much of our internet commerce relies on encryption whose security rests on such a problem: it is effectively impervious to brute force attacks, if properly done, due to the time it would take to factor the very large numbers – products of two large primes – at the heart of the process.
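The asymmetry that RSA-style encryption leans on is easy to see in miniature (the numbers below are a toy example, nothing like real key sizes): checking a claimed factorisation is a single multiplication, while finding the factors by trial division takes time exponential in the number of digits.

```python
def verify_factors(n, p, q):
    """Checking a claimed factorisation is one multiplication -- fast."""
    return p > 1 and q > 1 and p * q == n

def find_factor(n):
    """Trial division -- exponential in the number of digits of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime

n = 3233  # = 61 * 53, a toy modulus of the kind used in RSA walkthroughs
assert verify_factors(n, 61, 53)
print(find_factor(n))  # → 53
```

With a 617-digit real-world modulus, `verify_factors` would still be instant, but `find_factor` would run past the heat death of the universe – unless, of course, someone proves P=NP and finds something much cleverer.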
(In Afghanistan in 2001 US cryptanalysts seemingly brute forced a Taliban Windows NT box but it used much weaker encryption than most electronic commerce.)
But what if it were the case that all seemingly NP problems were actually P problems? There are a lot of people studying this issue – but according to the New Scientist (their Christmas edition, the first in my subscription and delivered this morning, inspired me to write this) we should expect to wait at least until 2024 for an answer (by which point the problem – first formulated in 1971 – will have reached the age at which 50% of major mathematical problems will have been solved).
This all matters: unlike, say, Fermat’s last theorem, proving P = NP is likely to have more or less immediate effects on the lives of many of us (how would you feel if you knew that it was possible, even if not likely, that someone had developed a way to quickly crack open all your internet credit card transactions?)
Of course, proving P = NP does not necessarily mean we will immediately have polynomial time solutions for all the current NP problems out there. But we should not expect that state to last for long. It may be that the algorithms turn out to be exceptionally complex, rendering them very slow even on the fastest computing equipment, but if history teaches us anything it is that computers get faster (yes, I know there are physical bounds on that).
And, actually, I think the benefits to humanity would be enormous. The most likely immediate effects would be improvements in computing/operating system efficiency: faster computers for less money and/or less energy consumption would be a huge benefit. From that alone many other benefits would flow in terms of information availability and universality.
But there could be many, many others in medicine and linguistics and even in getting the trains to run on time!
Last night I submitted a story on this blog – about P=NP – to Slashdot and today the submission was accepted and in a few hours I have had perhaps five times more visitors than in the previous two and a half months.
But, thanks to WordPress.com there was no “slashdotting” – instead the site has stayed up and running throughout.
Slashdot has a reputation for flame wars and ignorance, but actually the discussion has been great – I wish I could get a few more articles from the blog on to /. as then I would benefit from the clever people who hang out there: yes there is a lot of noise, but there’s also good signal.
The other thing it has reminded me of is the need to avoid imprecision in scientific writing. A few flames have been aimed at me and articles on the blog: one item was called horrible when, actually, what I think was really wrong was a few corners cut in an effort to make the exposition a little less cluttered.
- Another book for the wish list? (cartesianproduct.wordpress.com)
- Don’t Fear Being Fireballed or Slashdotted if on Blogger (louisgray.com)
I vowed not to write about politics here, so I won’t. But the Irish election proved an exciting distraction all weekend. Still, I did manage to get some things done:
- Fitted a new graphics card on the machine I am using now (the previous one seemed to simply die when I tried to access the “PowerMizer” facilities on the card – Linux users of the nvidia proprietary drivers be warned);
- Fitted a new heat sink on my kids’ Linux box – they will no longer have the excuse that they have to use this (my) computer because theirs has died from overheating (plus they have about 5000 more MIPS on their machine anyway);
- Wrote to the users of the Inkling prediction market I had started on Vladimir Romanov’s P=NP proposal to tell them it was dead (for now). I doubt many were surprised – the market rated the chances of his “proof” being correct as 0.09% when I cashed it out.
- Started reading Alone in Berlin
- And last, but by no means least, actually completed a draft of my MSc proposal. I’ll have to read through the text looking for spelling errors and fix up all the references (I discovered the ability to import BibTeX entries via Google Scholar this weekend, so no more typing by hand).
I am really pleased about that last one. The Irish election results weren’t so bad either.