Holy Freaking Cow… P != NP??

Very big, very exciting news in the theoretical comp sci world today!

A group at HP research has published a proof that, if correct, shows that the classic problem of computational complexity has been solved, once and for all. It’s still far from certain that it’s correct. But it’s the most credible attempt that I’ve ever seen, and it’s getting some preliminary favorable feedback from big names in complexity theory. In fact, one of the biggest names in complexity theory, Stephen Cook, has apparently said that it should be taken seriously. If Cook thinks it’s credible, then it’s damn-well credible. It might not be correct, but it’s not just crackpottery either.

For the non-CS folks out there, here’s my attempt to explain what the problem is, and what the solution means.

One of the most important areas of theoretical computer science is complexity theory. What complexity theory does is try to characterize, in an abstract way, how long it takes for an algorithm to solve a given problem. In some cases, we can do better than showing how long a single algorithm takes; we can actually analyze the problem, and show intrinsic properties of the problem that dictate a minimum amount of time to compute a solution.

We tend to describe the complexity in terms of functions on the size of an input. So, for example, suppose you’re given a list of N names, and you’d like to get them into sorted order. You can’t do that with fewer than roughly N lg_2(N) comparisons. So, speaking very roughly, we can say that sorting a list of N names has complexity O(N lg_2(N)) in number of comparisons.

One way of proving that is based on Kolmogorov-Chaitin complexity. Given a list of N names, there are N! possible orderings of that list. To be able to sort the list, essentially, we need to identify which ordering of the list we were given, so that we can select the right permutation to re-arrange it into sorted order. The particular permutation is one of the N! possible ones. To select one of those, we need to know the number that identifies that permutation in a list of all of them. So we need lg_2(N!) bits – which is O(N lg_2(N)) bits. A comparison generates exactly one bit of data. So we need to do O(N lg_2(N)) comparisons.
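
To see that bound in action, here’s a quick Python sketch (my own illustration, nothing to do with the paper): it counts the comparisons that a simple merge sort actually performs, and compares that against the lg_2(N!) information-theoretic floor.

```python
import math
import random

def merge_sort_count(xs):
    """Merge sort that also counts the comparisons it performs."""
    if len(xs) <= 1:
        return xs, 0
    mid = len(xs) // 2
    left, left_count = merge_sort_count(xs[:mid])
    right, right_count = merge_sort_count(xs[mid:])
    merged, comparisons = [], left_count + right_count
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1                    # one comparison = one bit of information
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

for n in (10, 100, 1000):
    _, used = merge_sort_count(random.sample(range(10 * n), n))
    floor = math.ceil(math.log2(math.factorial(n)))   # the lg_2(N!) floor
    print(f"N={n}: merge sort used {used} comparisons; the floor is {floor}")
```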

Moving on, we can broadly characterize the complexity of algorithms in terms of the functions that describe the complexity bounds. We can talk about constant-time algorithms; sub-linear time algorithms; linear time; quadratic time; etc.

The most important distinction in terms of complexity is polynomial time versus exponential time. A problem or algorithm is polynomial time if its complexity function grows no faster than some fixed polynomial in the size of the input – something like N^2 or N^3. We call the set of problems that can be solved in polynomial time P. It’s exponential time if its complexity grows at least as fast as something with the input size in the exponent – something like 2^N; we call the class of problems that require exponential time EXP.

In very general terms, we tend to say that something which is polynomial time is solvable in practical terms; whereas for something which takes exponential time, we can’t solve non-trivial instances before the universe comes to an end.
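
To put some very rough numbers on that, here’s a back-of-the-envelope sketch of my own, assuming (purely for illustration) a machine that does a billion steps per second:

```python
# Wall-clock time for N^3 steps versus 2^N steps at 10^9 steps per second.
STEPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for n in (20, 50, 100, 300):
    poly_seconds = n ** 3 / STEPS_PER_SECOND
    expo_years = (2.0 ** n / STEPS_PER_SECOND) / SECONDS_PER_YEAR
    print(f"N={n:3d}: N^3 takes {poly_seconds:.1e} seconds, 2^N takes {expo_years:.1e} years")
```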

Within the exponential-time domain, there’s a really important subset, which we care about, called NP. NP stands for nondeterministic polynomial time. Once again, speaking very loosely, problems in NP are problems where to the best of our knowledge, computing a solution to the problem is in EXP; but checking a solution for correctness is in P.

NP contains a lot of very classical and very important problems. For example, there’s the traveling salesman problem: given a set of N cities, what’s the shortest route which visits all of them? Actually getting the optimum answer for that is amazingly difficult! It’s an NP problem.
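
To get a feel for the gap between checking and finding, here’s a toy sketch (made-up city coordinates, nothing to do with the paper): computing the length of one proposed route is cheap, but finding the best route by brute force means trying every one of the (N-1)! possible orderings.

```python
import math
from itertools import permutations

def route_length(cities, route):
    """Length of a closed route; cities maps a name to (x, y) coordinates."""
    total = 0.0
    for a, b in zip(route, route[1:] + route[:1]):
        (x1, y1), (x2, y2) = cities[a], cities[b]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def brute_force_tsp(cities):
    """Try every ordering of the cities: fine for 8 of them, hopeless for 80."""
    names = list(cities)
    first, rest = names[0], names[1:]
    return min(((first,) + p for p in permutations(rest)),
               key=lambda route: route_length(cities, route))

# A hypothetical 4-city example, just for illustration.
cities = {"A": (0, 0), "B": (0, 3), "C": (4, 3), "D": (4, 0)}
best = brute_force_tsp(cities)
print(best, route_length(cities, best))   # ('A', 'B', 'C', 'D') 14.0
```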

One of the questions that has plagued computer scientists for decades is: does P = NP? In other words, does the fact that a solution is testable in polynomial time mean that the solution is computable in polynomial time? There are some good arguments either way – some people think that the fact that there’s a polynomial time test is simply irrelevant; others think that there’s a deep relationship between the testability of a solution, and the process of generating that solution. Without getting into some hairy math, it’s hard to explain the two arguments any more deeply – but many people a lot smarter than I am have spent years thinking about it.

If this paper is correct, then it shows, once and for all, that P != NP; that the problems that we know are NP have no polynomial time solutions.

I’m reading the paper, but it’s rough going. If there’s anything wrong with it, I doubt that I’ll be able to find it. But so far, it’s difficult but absolutely fascinating. They’ve taken a really interesting approach to analyzing the problem. If it holds up, they’re probably in line for a Fields medal.

65 thoughts on “Holy Freaking Cow… P != NP??”

  1. Dorothea Salo

    *clicks the stopwatch off* I was wondering how long it’d take you to pick this ‘un up…

  2. Christina Pikas

    Very cool, and understandable. I hear about P and NP and all that, but haven’t seen a good description. Thanks!

  3. James Sweet

    Wow, that is a really funny coincidence. Two days ago, i.e. Saturday, the P/NP problem came up in conversation (a friend of mine had an idea for a mobile app that turned out to rely on having a good solution to the traveling salesman problem — d’oh!) and I remarked to my friend and my wife that, “If you could prove P=NP, that’s instant Nobel Prize material. A proof P != NP would still bring you instant fame.” heh, what timing!

    1. Janne

      In practice you can get provably good approximations to the solution very quickly. If your friend’s app just needs a good solution as opposed to the single best one, then it’s perfectly doable.
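
      For instance, a plain greedy nearest-neighbour pass – not one of the provably good approximation algorithms, just the simplest possible “good enough, fast” sketch, with made-up coordinates – looks roughly like this:

      ```python
      import math

      def nearest_neighbour_tour(cities):
          """Greedy TSP heuristic: always hop to the closest unvisited city.
          Roughly quadratic time, usually decent, no optimality guarantee."""
          names = list(cities)
          tour = [names[0]]
          unvisited = set(names[1:])
          while unvisited:
              x, y = cities[tour[-1]]
              closest = min(unvisited,
                            key=lambda c: math.hypot(cities[c][0] - x, cities[c][1] - y))
              tour.append(closest)
              unvisited.remove(closest)
          return tour

      # Hypothetical coordinates, purely for illustration.
      print(nearest_neighbour_tour({"A": (0, 0), "B": (5, 1), "C": (1, 1), "D": (6, 0)}))
      # ['A', 'C', 'B', 'D']
      ```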

      1. James Sweet

        Oh yeah, I know, there are just fine heuristic solutions. It’s conceivable we could put the app together. It was just really funny, he was saying, “I wish Google Maps did yada-yada-yada” and I paused for a second and said, “That’s the Travelling Salesman problem!”

  4. jdkbrown

    Wow!

    I know enough to know that this is a big deal, but not nearly enough to follow the proof. Any chance you can do a post on the proof once you’ve gone through it? Pretty please!

    1. MarkCC

      I’d love to do a post on the proof… But to be honest, I doubt that I’ll be able to follow it well enough to explain it. I’ll see what I can do.

  5. Adam Malone

    If this paper is correct, which of the following are true:

    1) IF a problem has been proven to be NP, THEN there are no polynomial-time verifications possible

    2) IF there is a polynomial-time verification of a problem, THEN that problem must have a polynomial-time solution

    Thanks, and I am liking the new site!

    1. MarkCC

      @Adam:

      Neither.

      NP is the class of problems which have polynomial time verifications. What this proof shows is that there are problems with polynomial time verifications, but for which there are no polynomial time solutions.

      1. wayward

        So, as a layman, what I’m getting from this is the following: some problems exist that cannot be solved before the universe ends, but if you’re willing to take a guess at a solution then it is possible to have a computer evaluate that guess and tell you if it’s right or not. Am I doing well so far?

        If the above is true, my next question would be whether such an evaluation could give information about the ‘quality’ of your guess. If you’re wrong, then how close are you to being right? Traditionally close only counts in ‘horseshoes and hand grenades’, but in the real world sometimes you are actually tossing a horseshoe. :)

        1. MarkCC

          Yes, you’re doing very well.

          And unfortunately, no. If you knew how far away from the optimal solution you were, then you’d have a way of computing it quickly. In general, you can come up with solutions to NP-complete problems where you can show that for a broad selection of instances of the problem, you can get an answer that is either optimal or close to optimal. But for a given instance, you don’t know which, and you don’t know how far from optimum you actually are.

          In practice, it’s also rarely quite that bad. For a lot of the classic NP problems, the cases where it really takes a long time are well understood, and we’ve got very fast solutions for the normal cases.

          I was at a talk a few years ago given by a guy named Dan Jackson, who was talking about a software verification system. Describing how it worked, he said “the bad news is that the solution reduces to SAT, which is NP complete”. (SAT is one of the classic problems that are believed to require exponential time to solve.) “But”, he continued, “the good news is that the solution reduces to SAT, which we can solve really quickly!”. SAT is an NP-complete problem. (In fact, the version of it he was talking about was slightly worse; it was NP-hard.) But for most of the cases where we actually want to solve SAT, we can solve it much faster. It’s always *possible* that a particular instance will fall into one of the traps where it will end up taking an obscenely long time – but those are so unusual that we generally just don’t worry about them.
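
          To make the verify-versus-solve gap concrete, here’s a toy sketch of my own (nothing to do with Jackson’s tool): checking whether an assignment satisfies a CNF formula takes linear time, while the naive search tries all 2^n assignments. Real SAT solvers are enormously cleverer than this, which is exactly why “it reduces to SAT” is good news in practice.

          ```python
          from itertools import product

          # A CNF formula as a list of clauses; each literal is (variable, is_positive).
          # Made-up example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
          formula = [[("x1", True), ("x2", False)],
                     [("x2", True), ("x3", True)],
                     [("x1", False), ("x3", False)]]

          def satisfies(assignment, formula):
              """Polynomial-time check: does the assignment satisfy every clause?"""
              return all(any(assignment[var] == positive for var, positive in clause)
                         for clause in formula)

          def brute_force_sat(formula):
              """Exponential-time search: try all 2^n assignments until one checks out."""
              variables = sorted({var for clause in formula for var, _ in clause})
              for values in product([False, True], repeat=len(variables)):
                  assignment = dict(zip(variables, values))
                  if satisfies(assignment, formula):
                      return assignment
              return None

          print(brute_force_sat(formula))   # e.g. {'x1': False, 'x2': False, 'x3': True}
          ```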

          1. Antonio E. Porreca

            Describing how it worked, he said “the bad news is that the solution reduces to SAT, which is NP complete”.

            I suppose you mean that SAT reduces to the software verification problem? The converse is trivial, assuming the original problem is in NP, so not very informative.

          2. MarkCC

            No, he meant that it reduces to SAT.

            The software verification tool he was describing took a specification of a system expressed in a set-theoretic notation, and a collection of assertions about the specification. It would then perform an analysis that verified the assertions; if it couldn’t verify them, it would produce a specific counterexample demonstrating why the assertion failed.

            The way that it did that was by translating the original specification into an instance of SAT, using a SAT solver to generate a solution, and then translating the SAT solution back into the specification domain.

            The specification language was, essentially, an alternate syntax for SAT. Of course that does mean that theoretically, SAT was reducible to the specification problem – but the point he was getting at was that the key to solving it was SAT.

        2. Paul Murray

          A good example of this is factoring a large number (has anyone shown this to be NP-complete yet?). In general, N has a completely different set of factors to N+1.

          If this were not true, then you’d pick some factors at random, multiply them, and gradually narrow in on the number you are after by making substitutions in your set of factors and searching. But it just doesn’t work.

          NP problems are all a bit like that: a tiny change in the target means that the solution is wildly different. With travelling salesman, moving a city a tiny bit to the left *can* mean that the shortest path snaps to a completely different route.

          1. Paul Murray

            Actually … that’s an interesting problem. Given a set of points on the plane, for a given point, what’s the region in which you can move it such that the optimal travelling salesman path stays the same?

  6. Vicki

    Thanks for the link. I’ve passed it along to my partner the computer scientist/programmer, in case he hasn’t seen it.

  7. James

    Your description of big-O is actually a description of big-Omega… A better layman’s description of big-O might be that if something is O(f(x)), then we know that it can be done in at most f(x) time, but it might be done in less. Although if you want to describe the actual time taken by a function, you might as well just use big-theta.

    1. John Armstrong

      Another nitpick:

      Once again, speaking very loosely, problems in NP are problems where to the best of our knowledge, computing a solution to the problem is in EXP; but checking a solution for correctness is in P.

      A little too loosely. Every problem in P is also in NP, so the clause about “computing a solution to the problem is in EXP” is bogus. NP is just about checking a solution for correctness in polynomial time, rather than anything about calculating a solution.

    2. Jeremy Kun

      I agree. I think the important thing to mention about big-O is that it’s a worst case bound.

      Going back to his example of sorting a list of names, you might input a list of names which is partially sorted, and depending on the algorithm you use, it could take as little as N time, but it will never take longer than the big-O bound.
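
      For instance, plain insertion sort (a quick sketch of my own) does about N comparisons on an already-sorted list and about N^2/2 on a reversed one:

      ```python
      def insertion_sort_comparisons(xs):
          """Insertion sort; returns the number of comparisons it performed."""
          xs = list(xs)
          comparisons = 0
          for i in range(1, len(xs)):
              j = i
              while j > 0:
                  comparisons += 1
                  if xs[j - 1] <= xs[j]:
                      break                 # already in place: one cheap comparison
                  xs[j - 1], xs[j] = xs[j], xs[j - 1]
                  j -= 1
          return comparisons

      n = 1000
      print(insertion_sort_comparisons(range(n)))         # 999: nearly-sorted input is cheap
      print(insertion_sort_comparisons(range(n, 0, -1)))  # 499500: the worst case is ~N^2/2
      ```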

  8. Nick Johnson

    It’s not by a group at HP – Vinay, the author, explicitly stated in his original letter:

    “This work was pursued independently of my duties as a HP Labs researcher, and without the knowledge of others.”

  9. Matt Skalecki

    Wow. “Probably in line for a Fields Medal” is a huge understatement. I’m probably biased as a computer scientist, but in my opinion this is/was the second most important unproven conjecture after only the Riemann Hypothesis. It’s certainly much bigger than Fermat’s Last Theorem (although probably less known in casual circles).

    Wow, just, wow. I hope it stands up.

  10. jdkbrown

    Is this another way to describe the issue? P is the set of problems that can be solved by deterministic machines in polynomial time; NP is the set of problems that can be solved by non-deterministic machines in polynomial time. Thus, if the result that P != NP holds up, it shows that non-deterministic machines are fundamentally more powerful than deterministic machines — non-determinism adds real computing power and isn’t just a convenience.

    1. Peter

      No, that’s not a correct way to describe the problem. There’s no evidence suggesting that non-deterministic machines can solve NP problems in polynomial time, and experts tend to doubt that NP-complete problems have efficient solutions on quantum machines*–

      or is there such evidence? I’m researching a paper on quantum algorithms for a class, and if there is such evidence, it’s very recent.

      *there is a practical speed-up for solving such problems on a quantum machine, but it still takes exponential time.

      1. jdkbrown

        Um. I was under the impression that the definition (or, at least, *a* definition) of NP was the set of problems solvable in polynomial time by a non-deterministic Turing machine. What am I missing?

        1. MarkCC

          You’re right that the classical definition of NP is the set of problems solvable in polynomial time by a non-deterministic Turing machine. But the class of problems solvable in P-time on a non-deterministic Turing machine is exactly the same as the class of problems whose results can be tested in polynomial time on a deterministic Turing machine.

          And since I was trying to explain this for non-CS people, it’s a lot easier to understand “testing the solution in polynomial time” than it is to understand “solvable using a non-deterministic machine in polynomial time”.

          1. jdkbrown

            Yes–I wasn’t disputing the way you characterized NP. I was just perplexed by Peter’s claim that “There’s no evidence suggesting that non-deterministic machines can solve NP problems in polynomial time…”. Perhaps the trouble was that in my 2:31 comment to which he was responding, I sloppily elided ‘Turing’ and just wrote ‘machines’?

            In any event, given that the machine characterization of NP is o.k., is what I say above in my 2:31 comment correct? Or am I confused?

          2. Peter

            Oh, I didn’t really know what I was talking about. I see that a quantum machine, though indeterministic in a sense, isn’t non-deterministic in THAT sense.

  11. Jacob

    Another nitpick. You said:

    “So, for example, suppose you’re given a list of N names, and you’d like to get them into sorted order. You can’t do that with fewer than nlog(n) comparisons.”

    This is true, with the caveat that you’re doing a comparison-based sort. Never forget counting sort. Granted, unless you’re sorting strings of length 2 over an alphabet of size 2, you’re going to end up doing more than nlog(n) work with counting sorts. But god damnit if it isn’t fast for sorting integers.
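
    For anyone who hasn’t seen it, here’s a minimal counting-sort sketch for small non-negative integer keys: no comparisons at all, roughly O(n + k) time where k is the size of the key range.

    ```python
    def counting_sort(xs, key_range):
        """Sort small non-negative integers in O(n + key_range) time, with no comparisons."""
        counts = [0] * key_range
        for x in xs:
            counts[x] += 1              # tally how many times each value occurs
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result

    print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], key_range=10))
    # [1, 1, 2, 3, 4, 5, 6, 9]
    ```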

  12. Frank Quednau

    In which case, the only viable crackpottery left will be to refute proofs that a problem is NP.

    Say, does the NPness proof of travelling salesman actually consider Hyperspace?

  13. Charles Siegel

    Mark,

    I would say that “published” is the wrong word. That document was actually (as far as I’ve gathered) posted to Scribd without the initial consent of the author – I think one of the people he sent it to posted it as a way to share it with someone else – and he’s hoping to post a more polished version sometime in the near future (hopefully to the arXiv).

  14. Jonathan Thornburg

    @ James Sweet: You might point out to your friend that there are in fact heuristic branch-and-bound algorithms which can solve many practical travelling-salesman problems efficiently. (This doesn’t contradict NP-hardness, because these algorithms don’t get an *optimal* solution, just one that’s “good”. And they don’t work for *all* travelling-salesman problems, just “nice” ones, which turn out to be quite common in practice.)

  15. Jeremy Kun

    From the abstract it looks like he’s picking an NP-complete problem and proving that no algorithm in P can solve it. And since all NP problems are reducible to the one he picked, that’s enough to show that no polynomial-time algorithm can solve any NP-complete problem – i.e., that P != NP.

  16. Wouter Lievens

    A little embarrassing (since I saw all of this in my CS education) but I’m lost again on how the Traveling Salesman Problem is verifiable in polynomial time.

    Imagine 6 cities ABCDEF… if I present the solution FDABCE, how can you ever tell in polynomial time whether that solution is optimal?

    Or are you talking about the Decision Version of the TSP? I.e., the question “can you find a path that is less than length X”? If so, I can of course check that length(FDABCE) < X in linear time.

    If the latter, how come people mix up the decision version and the raw version of the TSP problem even though they're fundamentally different?

    1. Antonio E. Porreca

      Yes, it is the second version you mention that is polynomial-time verifiable. The version where you have to check whether the solution is optimal is conjectured to be harder.

      The confusion between the two versions might be due to the fact that decision version and search version of NP problems usually have the same complexity: i.e., checking whether this is a path of length ≤ k and actually finding a path of length ≤ k have the same complexity (modulo a polynomial). This is related to a property of problems called “self reducibility”. Then, it’s easy to mix up the search version and the optimisation version.

  17. jhm

    WARNING! Rank amateur alert.

    As important as this result is in a pure math sense, wouldn’t people interested in computing be more interested in determining which elements of NP are in P? or is my ignorance of the situation showing? What I mean to say is that it doesn’t seem very helpful to someone trying to compute solutions to know that some but not all elements of NP are in P. If this is true, is there any indication that this proof suggests a way to more easily find (compute?) the subset of NP which is also a subset of P?

    1. MarkCC

      The subset of NP that’s in P is really easy: it’s P. P is a subset of NP.

      Also, while I didn’t go into detail, the problems that we’re most interested in in NP are what we call NP-complete problems. A problem is NP-complete if there’s a computable, polynomial-time way of translating any other NP problem into it.

      For example, one of the classic examples that I mentioned in the post is the travelling salesman problem (TSP). You can take any other NP-complete problem, and in polynomial time, create an instance of the TSP problem; solving that instance of the TSP problem gives you a solution to the other problem. So you can take, for instance, a SAT problem, and in polynomial time convert it to a TSP problem. Then if you solve that instance of the TSP, you can convert the answer to it back to an answer to the SAT problem in P-time.

      In other words, the NP-complete problems form a sort of barrier, where if we can solve *any* problem in it in P-time, that means that *every* problem in it is solvable in P-time; if there’s a single NP-complete problem which isn’t solvable in P-time, then *none* of them are.

    2. Antti-Juhani Kaijanaho

      Determining which elements of NP are in P – or, more generally, determining which problems are in P – is being done, has been going on for decades, and is making steady progress.

      One of the reasons for P/=NP being so important is that it being false would have a major impact on both theory and practice of computing. For one example, certain practically important currently unbreakable encryption and digital signature algorithms would suddenly become breakable if we had a constructive proof of P=NP. Having a proof for P/=NP will confirm that that particular Pandora’s box will remain closed forever.

  18. lylebot

    Almost everyone believes P != NP. That doesn’t mean a proof wouldn’t be important, of course. But P = NP would be much more surprising (and have much more far-reaching implications—way beyond breaking modern encryption algorithms).

    I personally think an error in this proof will be discovered, though I do believe the result to be true. I don’t have any technical reason for thinking that; it’s just a hunch.

    Some interesting reading on P vs NP:
    Russell Impagliazzo’s “Possible Worlds” depending on the outcome of P vs NP: http://cseweb.ucsd.edu/~russell/average.ps

    Lance Fortnow’s essay on the status of P vs NP: http://cacm.acm.org/magazines/2009/9/38904-the-status-of-the-p-versus-np-problem/fulltext
    (and Fortnow’s blog was one of the first on computational complexity: http://blog.computationalcomplexity.org/)

    Dick Lipton (whose blog is worth reading) has identified some possible problems with this proof: http://rjlipton.wordpress.com/2010/08/09/issues-in-the-proof-that-p%E2%89%A0np/

    Scott Aaronson (whose blog is worth reading) promises to pay Vinay another $200,000 if he wins the Clay prize: http://scottaaronson.com/blog/?p=456

    1. Serge Shirokov

      I’m reading “Possible Worlds” right now. Can anyone explain Algorithmica to me – would P=NP actually give us a method for producing a solution to a problem from the method of recognizing a valid solution? Or would P = NP just prove that such a method _exists_?

      Thanks in advance.

      1. MarkCC

        That would depend on the proof.

        If P==NP were proved, the proof could just be a demonstration of principle – that is, something like a demonstration that there must be some kind of polynomial relationship between the complexity of recognizing a solution and the complexity of computing a solution to that problem. Theoretically, that could be proved without showing just how to get the polynomial time solution.

        But if P==NP, most likely, a proof would show how to construct a P-time solution.

        Most people believe that P!=NP, so it’s probably irrelevant. But there are some folks who think that there should be a polynomial relationship between verification complexity and solution complexity, and if they’re right, a proof would probably show a method.

        But even in that case – it’s not likely to be quite the panacea that a lot of people predict. As a shorthand, we tend to say that polynomial time complexity is tractable, and exponential time isn’t. But in reality, if the polynomial is big enough, it’s still not really doable in any meaningful sense. If we can show, for example, that you can find an optimal solution to 3SAT in P-time, but it’s O(n^80), that’s not going to mean that suddenly, optimal solutions of 3SAT are tractable.

        Realistically, for most problems, any solution whose complexity is worse than O(n^3) is effectively intractable for large instances. So again, realistically, even if P==NP, there’s a good chance that it doesn’t matter.

  20. AnyEdge

    I’m admitting confusion here: If we know that a problem is NP, am I correct in thinking that that means that we KNOW that there is no polynomial solution? If that statement is true, then how did we not already know that P != NP? if there is no polynomial solution, then it is NP but not P? I’m obviously missing something critical.

    As for the TSP, we can easily check if a certificate is a TOUR in polynomial time, but surely we can’t verify that it is an OPTIMAL TOUR?

    I feel like I need a little more in the way of explanation here. My training is in modeling the problems, and then unleashing heuristics. Not in the proofs.

    1. John Armstrong

      As for the TSP, we can easily check if a certificate is a TOUR in polynomial time, but surely we can’t verify that it is an OPTIMAL TOUR?

      The version of the TSP that gets used here is the “decision problem”. That is: given a graph and a total cost X, is there a tour with cost less than X? The verification is to check that a given tour does indeed cost less than X.

  21. Paul

    If we know that a problem is NP, am I correct in thinking that that means that we KNOW that there is no polynomial solution?

    No, P is a subset of NP. We could know a problem is in NP, but not yet know whether it is in P or is NP-complete. It is possible that an algorithm will be discovered demonstrating that a problem with a non-deterministic polynomial-time solution can be solved deterministically in polynomial time. And if this can be done for an NP-complete problem, that would demonstrate that P = NP. Or, as is supposedly contained in this paper: if you can prove that an NP-complete problem cannot be solved in polynomial time, then no NP-complete problems are members of P, and P != NP.

    if there is no polynomial solution, then it is NP but not P?

    That’s generally an open question. You don’t need to demonstrate that there is no polynomial solution to classify a problem as NP. NP is simply the class of problems for which a non-deterministic polynomial-time solution is known, whether or not a deterministic polynomial-time solution is also known (although in the case of NP-complete problems, either none of them has a deterministic polynomial-time solution or all of them do).

    Sorry for the muddled explanation, in retrospect I should have just pointed to Mark’s earlier post

  22. Anonymous

    it’s the most credible attempt that I’ve ever seen, and it’s getting some preliminary favorable feedback from big names in complexity theory. In fact, one of the biggest names in complexity theory, Stephen Cook, has apparently said that it should be taken seriously. If Cook thinks it’s credible, then it’s damn-well credible.

    It’s not actually getting any favorable feedback: I don’t know of any expert who has said anything better than a vague, non-committal comment publicly, and several (such as Scott Aaronson) have made very negative comments. In private, a bunch more people are very pessimistic, and nobody seems to be optimistic.

    As quoted by Greg Baker in his initial blog post that broke the story, Cook’s comment is that the proof “appears to be a relatively serious claim to have solved P vs. NP.” That’s hardly a ringing endorsement. This is part of what’s been driving me nuts: everyone keeps repeating the word “serious”, but it’s getting totally misinterpreted by non-experts. What you’re actually seeing is an awkward dance around the issue of who Deolalikar is. What “serious” means is “yeah, we never heard of him until this weekend either, but he seems to have an actual Ph.D. and some publications, and his paper doesn’t look like crackpot work.” That’s certainly better than the alternative, but it’s not evidence in favor of the proof’s validity, just an attempt to keep people from jumping to unfair negative conclusions. In this case, I think it is unfortunately making non-experts jump to unjustified positive conclusions.

  23. James Birkett

    A little nitpick: You say: “We can talk about constant-time algorithms; sub-linear time algorithms; linear time; quadratic time; etc.” but I think this is slightly misleading.

    If a problem can be solved in sub-linear time, then the result of that computation does not depend on the entire input, since reading the input takes linear time. This class of problems exists (e.g. the language of strings which begin with some fixed prefix) but it is not generally a very useful class.

    In contrast, constant space or log-space are much more interesting complexity classes, but you focused only on the running time in your article.

    1. MarkCC

      There are lots of sub-linear-time algorithms that are incredibly useful: in particular, think of the class O(lg n), which includes many search algorithms, from binary search to union-find.
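
      Binary search is the canonical example – a quick sketch: each comparison halves the remaining range, so a sorted list of a million items takes about 20 comparisons.

      ```python
      def binary_search(sorted_xs, target):
          """Return an index of target in sorted_xs, or None. Uses O(lg n) comparisons."""
          lo, hi = 0, len(sorted_xs) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if sorted_xs[mid] == target:
                  return mid
              elif sorted_xs[mid] < target:
                  lo = mid + 1            # target can only be in the upper half
              else:
                  hi = mid - 1            # target can only be in the lower half
          return None

      xs = list(range(0, 2_000_000, 2))   # a million sorted even numbers
      print(binary_search(xs, 1_337_336)) # 668668, found in about 20 probes
      print(binary_search(xs, 7))         # None: odd numbers aren't in the list
      ```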

      1. James Birkett

        My mistake; I’m thinking in terms of Turing machines, where you can’t skip over parts of the input that you’re not interested in – I retract my complaint.

        Incidentally, can you suggest a Turing-complete model of computation which is simple enough to understand as a whole (unlike real programming languages or machine architectures, which get a bit too messy to do maths with) but gives more natural complexity bounds? Writing algorithms in pseudocode seems a little too informal.

        1. Ørjan Johansen

          You can at least reduce to a very minimal machine architecture with a single instruction. One such instruction is SUBLEQ, which takes three operands:

          SUBLEQ a b c

          which means: subtract the value in address b from the value in address a, and store the result in a. If the result was less than or equal to zero (the “LEQ” part of the name), jump to address c. With this instruction you can do any computation, provided you allow self-modifying code.

          Also since there is only one instruction, you don’t actually need an instruction code in this machine language, just the operands.
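
          For concreteness, here’s a toy interpreter and a tiny made-up program for such a machine (just a sketch of the idea, not any particular real SUBLEQ implementation):

          ```python
          def run_subleq(mem, pc=0):
              """SUBLEQ machine: mem[a] -= mem[b]; if the result is <= 0, jump to c.
              The program lives in the same memory as its data, so it can modify itself.
              A negative program counter halts the machine."""
              while 0 <= pc < len(mem):
                  a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
                  mem[a] -= mem[b]
                  pc = c if mem[a] <= 0 else pc + 3
              return mem

          # Toy program: add the value at address 9 into address 10, then halt.
          mem = [
              11, 9, 3,     # Z -= X       (Z becomes -X; control reaches 3 either way)
              10, 11, 6,    # Y -= Z       (i.e. Y += X)
              11, 11, -1,   # Z -= Z = 0   (result <= 0, so jump to -1 and halt)
              7,            # address 9:  X
              35,           # address 10: Y
              0,            # address 11: Z (scratch)
          ]
          print(run_subleq(mem)[10])   # 42
          ```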

  24. Pingback: …:: Sentimenti Che Si Perdono ::… » Blog Archive » P != NP (o no?) forse siamo ad una Soluzione!!!

  25. Blaisorblade

    It seems that the proof has been refuted, and it has also been argued that the approach is unlikely to work:
    http://scottaaronson.com/blog/?p=458
    However, the ongoing review has produced a conjecture that this whole class of approaches probably can’t work – see the final point on the “philosophical barrier” described here, which is quite easy compared to all the rest:
    http://rjlipton.wordpress.com/2010/08/10/update-on-deolalikars-proof-that-p%E2%89%A0np/#comment-4885
    By easy, I mean that you can understand what’s written (average-case complexity being a different beast from worst-case complexity) after a standard course on complexity, even if the theorem implied there is of course much more complex.

  26. Pingback: Interesting links « life in progress
