Both in comments, and via email, I’ve received numerous requests to take a look at
the work of Dembski and Marks, published through Professor Marks’s website. The site is
called the “Evolutionary Informatics Laboratory”. Before getting to the paper, it’s worth
taking just a moment to understand its provenance – there’s something deeply fishy about
the “laboratory” that published this work. It’s not a lab – it’s a website; it was funded
under very peculiar circumstances, and hired Dembski as a “post-doc”, despite his being a full-time professor at a different university. Marks claims that his work for
the EIL is all done on his own time, and has nothing to do with his faculty position at the university. It’s all quite bizarre. For details, see here.
On to the work. Marks and Dembski have submitted three papers. They’re all
in a very similar vein (as one would expect for three papers written in a short period
of time by collaborators – there’s nothing at all peculiar about the similarity). The
basic idea behind all of them is to look at search in the context of evolutionary
algorithms, and to analyze it using an information theoretic approach. I’ve
picked out the first one listed on their site: Conservation of Information in Search: Measuring the Cost of Success
There are two ways of looking at this work: on a purely technical level, and in terms of its presentation.
On a technical level, it’s not bad. Not great by any stretch, but it’s entirely reasonable. The idea
of it is actually pretty clever. They start with the “No Free Lunch” theorem (NFL). NFL says, roughly, that if you don’t know anything about the search space, you can’t select a search that will perform better than a random walk.
If we have a search for a given search space that does perform better than a random walk,
in information theoretic terms, we can say that the search encodes information
about the search space. How can we quantify the information encoded in a search algorithm
that allows it to perform as well as it does?
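One way to make that quantification concrete – a toy sketch with numbers I’ve made up, not an example from the paper – is to compare the per-query success probability of a blind random search against that of an informed search, and take the log of the ratio:

```python
import math

# Toy search space: 1024 candidates, exactly one of which is the target.
# A blind random query succeeds with probability p; suppose an informed
# search succeeds with probability q (both figures are invented here).
p = 1 / 1024   # blind search: one chance in 1024 per query
q = 1 / 2      # informed search: assumed to succeed half the time

# The information the informed search encodes about the space can be
# measured as the log-ratio of the two success probabilities, in bits.
bits_encoded = math.log2(q / p)
print(bits_encoded)  # 9.0
```

Nine bits, on this toy measure: the informed search behaves as though nine bits of knowledge about the space had been folded into it.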
So, for example, think about a search algorithm like Newton’s method. It generally homes in extremely
rapidly on the roots of a polynomial equation – dramatically better than one would expect in a random
walk. For example, if we look at something like y = x² – 2, starting with an approximation of a
zero at x=1, we can get to a very good approximation in just two iterations. What information is encoded
in Newton’s method? Among other things, it’s working in a Euclidean space on a continuous, differentiable
curve. That’s rather a lot of information. We can actually quantify that in information theoretic
terms by computing the average time to find a root in a random walk, compared to the average time
to find a root in Newton’s method.
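Here’s a rough sketch of that comparison, under assumptions of my own choosing (a uniform random guesser over [0, 2] versus Newton iteration starting from x = 1):

```python
import math
import random

def newton_sqrt2(x0, steps):
    """Newton's method for f(x) = x^2 - 2: repeat x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - (x * x - 2) / (2 * x)
    return x

# Two iterations from x = 1 already land within about 0.0025 of sqrt(2).
approx = newton_sqrt2(1.0, 2)
tol = abs(approx - math.sqrt(2))

def random_guesses_until(tol, seed=1):
    """Count uniform random guesses in [0, 2] until one lands within tol of sqrt(2)."""
    rng = random.Random(seed)
    tries = 0
    while True:
        tries += 1
        if abs(rng.uniform(0.0, 2.0) - math.sqrt(2)) <= tol:
            return tries

# On average the random guesser needs hundreds of tries to match what
# Newton's method does in two steps; the log of that ratio is one way
# to put a number on the information Newton's method encodes.
print(approx, random_guesses_until(tol))
```

The specific tolerance and interval are arbitrary choices; the point is only that the gap between the two search strategies is measurable.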
Further, when a search performs worse than what is predicted by a random walk, we can
say that, with respect to the particular search task, the search encodes negative information – that it actually contains some assumptions about the location of the target that
actively push it away, and prevent it from finding the target as quickly as a random walk would.
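In the same toy log-ratio terms as before (again with invented numbers), a search that does worse than chance simply comes out negative:

```python
import math

# Invented numbers: a random walk finds the target with probability p,
# while a "misled" search, whose built-in assumptions point away from
# the target, succeeds only with probability q < p.
p = 1 / 1024   # random walk
q = 1 / 4096   # misled search
print(math.log2(q / p))  # -2.0: negative information, in bits
```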
That’s the technical meat of the paper. And I’ve got to say, it’s not bad. I was expecting something really awful – but it’s not. As I said earlier, it’s far from being a great paper. But technically, it’s reasonable.
Then there’s the presentation side of it. And from that perspective, it’s awful. Virtually every
statement in the paper is spun in a thoroughly dishonest way. Throughout the paper, they constantly make
statements about how information must be deliberately encoded into the search by the programmer.
It’s clear the direction that they intend to go – they want to say that biological evolution can
only work if information was coded into the process by God. For example, on their view, an organism evolving to use beta-alanine as a catalyst in digestion would have to have been predisposed to do so by information already placed in its DNA. Here’s an example from the first
paragraph of the paper:
Search algorithms, including evolutionary searches, do not
generate free information. Instead, they consume information,
incurring it as a cost. Over 50 years ago, Leon Brillouin, a
pioneer in information theory, made this very point: “The
[computing] machine does not create any new information,
but it performs a very valuable transformation of known
information.” When Brillouin’s insight is applied to search
algorithms that do not employ specific information about the
problem being addressed, one finds that no search performs
consistently better than any other. Accordingly, there is no
“magic-bullet” search algorithm that successfully resolves all problems.
That’s the first one, and the least objectionable. But just half a page later, we find:
The significance of COI [MarkCC: Conservation of Information – not Dembski’s version, but from someone
named English] has been debated since its popularization through the NFLT. On the one hand, COI has a
leveling effect, rendering the average performance of all algorithms equivalent. On the other hand,
certain search techniques perform remarkably well, distinguishing themselves from others. There is a
tension here, but no contradiction. For instance, particle swarm optimization and genetic algorithms
perform well on a wide spectrum of problems. Yet, there is no discrepancy between the
successful experience of practitioners with such versatile search algorithms and the COI imposed inability
of the search algorithms themselves to create novel information. Such information does not
magically materialize but instead results from the action of the programmer who prescribes how knowledge
about the problem gets folded into the search algorithm.
That’s where you can really see where they’re going. “Information does not magically materialize, but
instead results from the action of the programmer”. The paper harps on that idea to an
inappropriate degree. The paper is supposedly about quantifying the information that
makes a search algorithm perform in a particular way – but they just hammer on the idea
that the information was deliberately put there, and that it can’t come from nowhere.
It’s true that information in a search algorithm can’t come from nowhere. But it’s
not a particularly deep point. To go back to Newton’s method: Newton’s method of root
finding certainly codes all kinds of information into the search – because it was created
in a particular domain, and encodes that domain. You can actually model orbital dynamics
as a search for an equilibrium point – it doesn’t require anyone to encode in
the law of gravitation; it’s already a part of the system. Similarly in biological
evolution, you can certainly model the amount of information encoded in the process – which
includes all sorts of information about chemistry, reproductive dynamics, etc.; but since those
things are encoded into the universe, you don’t need to find an intelligent agent
to have coded them into evolution: they’re an intrinsic part of the system in which
evolution occurs. You can think of it as being like a computer program: a programmer
doesn’t need to add code to a program specifying that the computer it will run on has 16 registers; every program for that computer has that wired into
it, because it’s a fact of the “universe” for the program. For anything in our universe, the basic
facts of our universe – basic forces, chemistry – are encoded in its existence. For anything on earth, facts about the earth, the sun, and the moon are encoded into its very existence.
Dembski and Marks try to make a big deal out of the fact that all of this information is quantifiable.
Of course it’s quantifiable. The amount of information encoded into the structure of the universe
is quantifiable too. And it’s extremely interesting to see just how you can compute how much information
is encoded into things. I like that aspect of the paper. But it doesn’t imply anything about
the origin of the information: in this simple initial quantification, information theory cannot distinguish between environmental information which is inevitably encoded, and information
which was added by the deliberate actions of an intelligent agent. Information theory can
quantify information – but it can’t characterize its source.
If I were a reviewer, would I accept the paper? It’s hard to say. I’m not an information theorist; so
I could easily be missing some major flaw. The style of the paper is very different from any other
information theory paper that I’ve ever read – it’s got a very strong rhetorical bent to it which is very
unusual. I also don’t know where they submitted it, so I don’t know what the reviewing standards are – the
reviewing standards of different journals are quite different. If this were submitted to a theoretical
computer science journal like the ones I typically read, where the normal ranking system is (reject/accept
with changes and second review/weak accept with changes/strong accept with changes/strong accept), I
would probably rank it either “accept with changes and second review” or “weak accept with changes”.
So as much as I’d love to trash them, a quick read of the paper seems to show
that it’s a mediocre paper with an interesting idea. The writing sucks: it was
written to push a conclusion that its technical content can’t support, and it pushes that conclusion with all the subtlety of a sledgehammer.