# Quantum Computation Complexity: BQP

What started me on this whole complexity theory series was a question about quantum
computers. Taking the broadest possible view, in theory, a quantum computer
is a kind of non-deterministic machine – so in the best possible case,
a quantum machine can solve NP-complete problems in polynomial time. The set
of things computable on a quantum machine is no different from the set of
things computable on a classical machine – but the things that are *tractable*
(solvable in a reasonable amount of time) on a quantum computer may be different.
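One way to see why the set of computable things doesn't change is that a classical machine can always *simulate* a quantum one – just at a potentially exponential cost. Here's a minimal sketch of that idea in plain Python (a toy single-qubit simulation of my own, not anything from the post): we track the qubit's amplitudes explicitly, apply a Hadamard gate, and read off the measurement probabilities. An n-qubit register would need 2^n amplitudes, which is exactly where the tractability gap comes from.

```python
import math

# A qubit's state is a pair of amplitudes (alpha, beta) with
# alpha^2 + beta^2 = 1. Start in the classical state |0>.
state = [1.0, 0.0]

# Apply a Hadamard gate: it sends |0> to an equal superposition
# of |0> and |1>.
h = 1.0 / math.sqrt(2.0)
state = [h * state[0] + h * state[1],
         h * state[0] - h * state[1]]

# Measurement probabilities are the squared amplitudes:
# a 50/50 split between observing 0 and observing 1.
probs = [a * a for a in state]
```

The simulation is faithful but wasteful: simulating n qubits this way takes 2^n numbers, so the quantum machine computes nothing new, it just may compute some things much faster.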

# Basics: Normal Distributions

In general, when we gather data, we expect to see a particular pattern to
the data, called a normal distribution. A normal distribution is one
where the data is evenly distributed around the mean in a very regular way,
which when plotted as a
histogram will result in a bell curve. There are a lot of ways of
defining “normal distribution” formally, but the simple intuitive idea of it
is that in a normal distribution, things tend towards the mean – the closer a
value is to the mean, the more often you’ll see it; and at any particular
distance from the mean, the number of values on either side is the same.
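You can see both of those properties directly from the density function of a normal distribution. Here's a small sketch in Python (the function and parameter names are mine, for illustration): the density is symmetric about the mean, and it falls off as you move away from the mean.

```python
import math

# Probability density of a normal distribution with mean mu and
# standard deviation sigma.
def normal_pdf(x, mu=0.0, sigma=1.0):
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Symmetric: a value some distance below the mean is exactly as
# likely as the same distance above it.
assert math.isclose(normal_pdf(-1.5), normal_pdf(1.5))

# Values near the mean are more likely than values far from it,
# which is what gives the histogram its bell shape.
assert normal_pdf(0.0) > normal_pdf(1.0) > normal_pdf(2.0)
```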

# Basics: Mean, Median, and Mode

Statistics is something that surrounds us every day – we’re constantly
bombarded with statistics in the form of polls, tests, ratings, and so on.
Understanding those statistics can be important, but unfortunately, most
people have never been taught what statistics really mean, how they’re
computed, or how to distinguish between statistics used properly and
statistics misused to deceive.

The most basic concept in statistics is the idea of an average. An average is a single number which represents the idea of a typical value. There are three different numbers which can represent the idea of an average value, and it’s important to know which one is being used, and whether or not that is appropriate. The three values are the mean, the median, and the mode.
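The three averages are easy to compute by hand; here's a short sketch in plain Python (the data set is made up purely for illustration) showing that all three can come out different for the same data:

```python
# A small made-up data set.
data = [1, 2, 2, 3, 4, 7, 9]

# Mean: the sum of the values divided by how many there are.
mean = sum(data) / len(data)

# Median: the middle value when the data is sorted (for an even
# count, the average of the two middle values).
ordered = sorted(data)
n = len(ordered)
if n % 2 == 1:
    median = ordered[n // 2]
else:
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

# Mode: the value that occurs most often.
mode = max(set(data), key=data.count)

print(mean)    # 4.0
print(median)  # 3
print(mode)    # 2
```

All three are a legitimate "average" of this data, which is exactly why it matters which one a statistic is actually reporting.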

# Back to the Basics?

Here at ScienceBlogs, we’ve got our own back-channel forums for the bloggers to chat with each other. An idea that came up, which a bunch of us are interested in, is doing some posts about basic definitions and basic concepts.
There are many people who read various blogs around here who’ve had problems with definitions of some basic ideas. For example, there’s the word vector – there are at least two very different uses of the word vector around here at SB: there’s the form that people like me use (the mathematical vector), and there’s the form that epidemiologists/biologists use.
For another example, there are things like the logic and proofs – a lot of people just aren’t familiar with the concept of a proof, or how to tell whether an argument is a proper mathematical proof, or whether a conclusion follows logically from an argument.
So the question: what kinds of basic ideas or terms would you like to see a very basic-level introductory post about?

# GM/BM in the Card Catalog

Shelley started it, then PZ joined in. Who am I to stop it?

# Programs as Poetry: Fishy Programming in Homespring

I’m hitting on something deeply twisted this week. It’s called Homespring. Homespring is interesting for two reasons. First, it’s got a sort of reverse flavor to it: it consists of active agents in a passive structure. So, for example, you can’t write anything like an explicit control-flow construct – that would be a dynamic change in the behavior of the program; in Homespring, you need to trick the agents into doing what you want using the static, unchanging structure of the program. And second, it challenges you to write your program in the form of a poem! And the icing on the cake? The agents are salmon,
and the passive structure is a network of streams.

# Another Example Sheaf: Vector Fields on Manifolds

There’s another classic example of sheaves; this one is restricted to manifolds, rather than general topological spaces. But it provides the key to why we can do calculus on a manifold. For any manifold, there is a sheaf of vector fields over the manifold.
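As a sketch of the construction (standard notation, not taken from the post itself): to each open set of the manifold we assign the vector fields defined on it, and the restriction maps are just restriction of vector fields to smaller open sets.

```latex
% The sheaf of vector fields on a smooth manifold M assigns to each
% open set U \subseteq M the smooth vector fields defined on U:
\mathcal{X}(U) = \{\, X : U \to TM \mid X \text{ smooth},\ X(p) \in T_p M \,\}
% For open sets V \subseteq U, the restriction map of the sheaf is
% literally restriction of a vector field to the smaller set:
\rho_{UV} : \mathcal{X}(U) \to \mathcal{X}(V), \qquad \rho_{UV}(X) = X|_{V}
```

The gluing axiom holds because a vector field is defined pointwise: compatible vector fields on an open cover patch together into a unique vector field on the union.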

# Probabilistic Complexity

As I’ve mentioned in the past, complexity theory isn’t really one of my favorite topics. For a professional computer scientist, knowing and loving complexity up to the level of NP-completeness
is practically a requirement. But once you get beyond P and NP, there are thousands of complexity classes that have been proposed for one reason or another, and really understanding all of them can get remarkably difficult – and what’s worse, can feel like an utterly pointless exercise,
particularly if the writer or teacher you’re learning from isn’t very good.

I’m not going to write a long series of posts on the more esoteric complexity classes. But there
are a few that are interesting, and might even have some relevance to actual practical problems in
selecting algorithms for a software system.

# Basic Complexity Classes: P and NP

Now that we’ve gone through a very basic introduction to computational complexity, we’re ready
to take a high-level glimpse at some of the more interesting things that arise from it. The one
that you’ll hear about most often is “P vs NP”, which is what I’m going to write about today.

# Basic Computational Complexity

In the comments thread of the post on Turing Equivalent vs Turing Complete, there’s been a flurry of questions and comments about some stuff involving computational complexity. So I thought it would be interesting to write a couple of posts about computational complexity. I’m not going to get into a lot of detail, because I would need to dig out some books that I really don’t want to read again :-). But I can do the basics without them. It’ll take a few posts to get through this.