Category Archives: Computational Complexity

Holy Freaking Cow… P != NP??

Very big, very exciting news in the theoretical comp sci world today!

A researcher at HP Labs has published a proof that, if correct, settles the classic open problem of computational complexity once and for all: P != NP. It’s still far from certain that it’s correct. But it’s the most credible attempt that I’ve ever seen, and it’s getting some preliminary favorable feedback from big names in complexity theory. In fact, one of the biggest names in the field, Stephen Cook, has apparently said that it should be taken seriously. If Cook thinks it’s credible, then it’s damn-well credible. It might not be correct, but it’s not just crackpottery either.

For the non-CS folks out there, here’s my attempt to explain what the problem is, and what the solution means.

Continue reading

Graph Searches and Disjoint Sets: the Union-Find Problem

Suppose you’ve got a huge graph – millions of vertices. And you know that it’s not connected – so the graph actually consists of some number of pieces, called the connected components of the graph. New vertices and edges are constantly being added to the graph, but nothing is ever removed. Some questions you might want to ask about this graph at a particular point in time are:

  • How many components are there in the graph?
  • Which component is vertex X in?
  • Are vertices X and Y in the same component?

All of these questions are variants of a classic computer science problem called
union-find, which comes up in an astonishing number of different contexts. The name
comes from the two basic operations in the standard solution: union and find. The
division of the graph into components is a partition of the vertices of the graph into
disjoint sets; union-find is the version of the disjoint-set problem where the sets can
change over time – sets can be merged, but never split.
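
To make that concrete, here’s a quick sketch of the textbook solution – a disjoint-set forest with path compression and union by rank. (This is my own illustration in Python, not code from the post itself.)

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank.
    Any sequence of m operations on n elements runs in O(m * alpha(n))
    time, where alpha is the inverse Ackermann function -- effectively
    constant for any graph you'll ever see."""

    def __init__(self):
        self.parent = {}
        self.rank = {}

    def add(self, x):
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0

    def find(self, x):
        # Walk up to the root, then compress the path behind us.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind()
for v in "abcd":
    uf.add(v)
uf.union("a", "b")
uf.union("c", "d")
print(uf.find("a") == uf.find("b"))  # True: same component
print(uf.find("a") == uf.find("c"))  # False: different components
```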

Continue reading

Amortized Complexity – a Tool for Graph Algorithms (among others)

There are a lot of very cool problems in computer science that can be solved by using
an appropriate data structure, and those data structures are often easiest to describe in terms
of graphs. One idea that comes up over and over in analyzing them is amortized algorithmic complexity. Amortized complexity is something which has been occupying my thoughts lately,
because it’s come up in several real problems, so I’m in the mood to write about it, and it’ll be
useful later.

The idea of amortized complexity is that for some structures, the worst-case cost
of a sequence of operations is very different from the sequence length times the worst-case cost of a single operation. In amortized analysis, you consider cases where some operation is inexpensive most of the time – but to keep it inexpensive most of the time, you need to periodically do something expensive.
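
The canonical example is a growable array that doubles its capacity whenever it fills up. Here’s a toy Python sketch (my own illustration, not something from the post): a single append is occasionally O(n), but a sequence of n appends costs O(n) total, so each append is amortized O(1).

```python
class DynamicArray:
    """Toy growable array. A single append can cost O(n) (when the
    backing store fills and must be copied into one twice as big),
    but n appends cost O(n) in total: the copies sum to
    1 + 2 + 4 + ... + n < 2n. So append is amortized O(1)."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.slots = [None]

    def append(self, value):
        if self.size == self.capacity:
            # The rare, expensive step that keeps every other append cheap.
            self.capacity *= 2
            new_slots = [None] * self.capacity
            new_slots[:self.size] = self.slots
            self.slots = new_slots
        self.slots[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(10):
    arr.append(i)
print(arr.slots[:arr.size])  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```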

Continue reading

Basics: Binary Search

For the basics, I wrote a bunch of stuff about sorting. It seems worth taking a moment
to talk about something related: binary search. Binary search is one of the most important
and fundamental algorithms, and it shows up in all sorts of places.

It also has the amazing property that despite being simple and ubiquitous, it’s virtually
always written wrong. There’s a bit of subtlety in implementing it correctly, and virtually
everyone manages to introduce off-by-one indexing errors into their implementations. (Including me; the last time I implemented a binary search, my first version included one of the classic errors.) The errors are so ubiquitous that even in a textbook that discusses the fact that most programmers get it wrong, the example code got it wrong!
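
For reference, here’s one careful version in Python (my own sketch, not the post’s code), with the loop invariant spelled out – the invariant is what keeps the +1/-1 updates honest:

```python
def binary_search(items, target):
    """Return an index of target in the sorted list items, or -1.
    Loop invariant: if target is anywhere in items, then it lies
    within items[low..high], inclusive."""
    low, high = 0, len(items) - 1
    while low <= high:
        # In Python this can't overflow; in fixed-width languages the
        # safe form is low + (high - low) // 2. Getting this line and
        # the +1/-1 updates right is where the classic bugs live.
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # target, if present, is right of mid
        else:
            high = mid - 1  # target, if present, is left of mid
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```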

Continue reading

Not Quite Basics: Sorting Algorithms

Multiple people have written to me, after seeing yesterday’s algorithms basics post, asking me to say more about sorting algorithms. I have to say that it’s not my favorite topic – sorting is one of those old bugaboos that you can’t avoid, but which gets really dull after a while. But there is a kernel of interest to it – sorting can be used to demonstrate a lot of interesting ideas about
computational complexity.
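
As a small taste of that: mergesort is the standard example of a divide-and-conquer algorithm whose O(n log n) running time matches the Ω(n log n) lower bound for any comparison-based sort. A minimal Python sketch (my own illustration, not the post’s code):

```python
def merge_sort(items):
    """Divide and conquer: O(n log n) comparisons, matching the
    Omega(n log n) lower bound for comparison-based sorting."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```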

Continue reading

Quantum Computation Complexity: BQP

What started me on this whole complexity theory series was a question in
the comments about the difference between quantum computers and classical
computers. Taking the broadest possible view, in theory, a quantum computer
is a kind of non-deterministic machine – so in the best possible case,
a quantum machine could solve NP-complete problems in polynomial time (whether
real quantum computers can actually do that is very much an open question, and
most people believe they can’t). The set of things computable on a quantum
machine is exactly the same as the set of things computable on a classical
machine – but the set of things that are tractable (solvable in a reasonable
amount of time) on a quantum computer may be different.
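
For a tiny concrete taste of where the quantum edge comes from (my own illustration, not something from the post): Deutsch’s algorithm decides whether a one-bit function f is constant or balanced with a single query to f, where any classical algorithm needs two. Since it only involves two qubits, a classical machine can simulate it with small matrices, which is exactly what this numpy sketch does:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    """Encode f: {0,1} -> {0,1} as the unitary U_f|x,y> = |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Decide constant-vs-balanced with a single oracle query."""
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # start in |0>|1>
    state = np.kron(H, H) @ state            # put both qubits in superposition
    state = oracle(f) @ state                # one query to f (phase kickback)
    state = np.kron(H, np.eye(2)) @ state    # interfere the first qubit
    p0 = state[0] ** 2 + state[1] ** 2       # P(first qubit measures 0)
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))  # constant
print(deutsch(lambda x: x))  # balanced
```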

Continue reading

Probabilistic Complexity

As I’ve mentioned in the past, complexity theory isn’t really one of my favorite topics. As a professional computer scientist, knowing and loving complexity up to the level of NP-completeness
is practically a requirement. But once you start to get beyond P and NP, there are hundreds of complexity classes that have been proposed for one reason or another. Really understanding all of them can get remarkably difficult – and what’s worse, it can feel like an utterly pointless exercise,
particularly if the writer or teacher you’re learning from isn’t very good.

I’m not going to write a long series of posts on the more esoteric complexity classes. But there
are a few that are interesting, and might even have some relevance to actual practical problems in
selecting algorithms for a software system.
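
The classic practical example is the Miller-Rabin primality test: if n is composite, each round catches it with probability at least 3/4, so the chance of a wrong “prime” answer after k rounds is at most 4^-k. That kind of bounded-error guarantee is the essence of classes like BPP. A Python sketch (my own illustration, not from the post):

```python
import random

def is_probably_prime(n, rounds=20):
    """Miller-Rabin test: a composite n survives a round with
    probability at most 1/4, so the error bound is 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # found a witness: n is definitely composite
    return True  # no witness found: n is almost certainly prime

print(is_probably_prime(10007))          # True: 10007 is prime
print(is_probably_prime(10007 * 10009))  # False: composite
```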

Continue reading

Basic Complexity Classes: P and NP

Now that we’ve gone through a very basic introduction to computational complexity, we’re ready
to take a high-level glimpse at some of the more interesting things that arise from it. The one
that you’ll hear about most often is “P vs NP”, which is what I’m going to write about today.
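
As a preview of the key asymmetry behind NP: for a problem like boolean satisfiability, checking a proposed solution is fast, while finding one seems to require searching an exponential space. A small Python sketch (my own illustration; the formula here is made up for the example):

```python
from itertools import product

# A toy CNF formula. Each literal is a pair (variable index, negated?);
# this encodes: (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
FORMULA = [[(0, False), (1, True)],
           [(1, False), (2, False)],
           [(0, True), (2, True)]]

def satisfied(formula, assignment):
    """The 'verifier': checks a candidate solution in polynomial time.
    This easy direction is what puts satisfiability in NP."""
    return all(any(assignment[var] != negated for var, negated in clause)
               for clause in formula)

def brute_force_sat(formula, num_vars):
    """Finding a solution: nothing essentially better is known than
    trying all 2**n assignments in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        if satisfied(formula, bits):
            return bits
    return None

print(brute_force_sat(FORMULA, 3))  # (False, False, True)
```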

Continue reading