I haven’t taken a look at Uncommon Descent in a while; seeing the same nonsense
get endlessly rehashed, seeing anyone who dares to express disagreement with the
moderators get banned, well, it gets old. But then, last week, DaveScott (which is, incidentally, a pseudonym!) decided to retaliate against my friend and fellow ScienceBlogger Orac by “outing” him: publishing his real name and employer.
Why? Because Orac had dared to criticize the way that a potential, untested
cancer treatment has been hyped recently in numerous locations on the web, including UD.
While reading the message thread that led to DaveScott’s “outing” of Orac, I came
across a claim by Sal Cordova about a new paper that shows how Greg Chaitin’s work in
information theory demonstrates the impossibility of evolution. He even promoted it to
a top-level post on UD. I’m not going to provide a link to Sal’s introduction
of this paper; I refuse to send any more links UD’s way. But the paper itself is easy enough to find.

They start out with yet another rehash of the good old “Life is too complicated for evolution” spiel. But from there, they go on to invoke not just Greg Chaitin, but Kurt Gödel and Alan Turing, as allegedly providing support for intelligent design:
Why – in a sense – are the works of great mathematicians as K.Gödel, A.Turing, G.Chaitin, J.Von Neumann “friends” of the Intelligent Design movement and – in the same time – “enemy” of Darwinism, the biological theory according to which life and species arose without the need of intelligence? As said above, biological complexity, organization and order need information at the highest degree. The works of Gödel, Turing, Chaitin and Von Neumann all deal with mathematical information theory, from some point of view. So information is the link between the field of mathematics and the field of biology. For this reason some truths and results of mathematics can illuminate the field of biology, namely about the origin-of-biological-complexity in general and specifically the origin-of-life problem. Roughly biologists divide in two groups: design skeptics (Darwinists) and intelligent design theorists (ID supporters and creationists). The formers claim that life arose without need of an intelligent agency. The latters claim that life arose thanks to intelligent agency. IDers are developing a theory about that, which is called “Intelligent Design Theory” (IDT).
To say it in few words, Gödel’s works in metamathematics, Turing’s ideas in computability theory, Chaintin’s results in algorithmic information theory (AIT) and Von Neumann’s researches in informatics are friend to ID because all are different expressions of a unique universal truth: “more” doesn’t come from “less”; a lower thing cannot cause a higher thing; causes are more than effects; intelligence stays above and its results below.
This is, of course, nonsense from the start. (Really, which biologists support ID?)
But the really annoying part, to me, is the abuse of math – and not just any math, but
the work of three of the major figures in my personal area of expertise.
Turing, Gödel, and Chaitin are three of the greats in what has become the theory
of computation. (And I’ll just briefly add that, as much as I respect and admire Chaitin, I don’t think he quite ranks with Turing and Gödel.)
The fact is, none of the work of Gödel, Turing, or Chaitin can
legitimately be read as saying anything remotely like “more doesn’t come from
less”. In fact, I would argue quite the opposite: Gödel showed how logics could be
used to represent and reason about themselves; Turing showed how the entire
concept of computation could be reduced to the capability of a remarkably simple
machine – and yet that simple machine could do incredibly complex things. (In fact,
Turing was convinced that the human mind was nothing but a remarkably complicated
computing device – that all of the products of human minds were nothing but the result
of completely deterministic computations.)
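To make that point concrete, here’s a tiny Turing machine simulator in Python – my own sketch, not anything from the paper. The sample machine below does nothing but increment a binary number, but the same handful of lines will run any transition table you feed it, which is exactly the reduction Turing identified: all of computation, captured by one remarkably simple device.

```python
def run_tm(transitions, tape, state="start", max_steps=10_000):
    """Run a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition matches the current (state, symbol) pair. Blank cells
    read as "_". Returns the final tape (a dict: position -> symbol).
    """
    tape = dict(tape)
    pos = 0
    for _ in range(max_steps):
        key = (state, tape.get(pos, "_"))
        if key not in transitions:
            break                      # no matching rule: halt
        state, tape[pos], move = transitions[key]
        pos += move
    return tape

# A machine that increments a binary number stored least-significant
# bit first: it flips 1s to 0s moving rightward until it hits a 0 or
# a blank, then writes a 1 and stops.
INC = {
    ("start", "1"): ("start", "0", +1),
    ("start", "0"): ("done", "1", +1),
    ("start", "_"): ("done", "1", +1),
}

out = run_tm(INC, {0: "1", 1: "1", 2: "0"})   # the number 3, LSB-first
print("".join(out[i] for i in sorted(out)))   # → "001", i.e. 4
```

The simulator is the interesting part: it knows nothing about incrementing – all of the “program” lives in the transition table, which is the whole idea behind the universal machine.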
Let’s move on to their specific descriptions of the work of each of these three, starting with Gödel:
Gödel proved that, in general, a complete mathematical theory cannot be derived entirely from a finite number of axioms. In general mathematics is too rich to be derived from a limited number of propositions (what mathematicians name a “formal system”). In particular just arithmetic is too rich to be reducible in a finite set of axioms. What we can derive from a finite formal system is necessarily incomplete.
Bullshit and nonsense! That’s an astonishingly bad explanation of Gödel’s incompleteness theorem. In fact, Gödel’s theorem means something closer to the
opposite: that formal mathematics itself is limited. The problem is not that arithmetic is “too rich” to be captured by axioms; you can write down
formal systems whose axioms capture an enormous amount of arithmetic. What you cannot do is something quite different. You cannot create a single consistent, effectively axiomatized formal system in which
every true statement of arithmetic is provable and every false statement is refutable. Their version is a misrepresentation of Gödel’s theorem, mis-stated in a way that attempts to make it look as if it supports their position.
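For the record, here is the standard textbook statement of the first incompleteness theorem, so you can compare it with their version:

```latex
\textbf{First Incompleteness Theorem.} If $F$ is a consistent, effectively
axiomatized formal system capable of expressing elementary arithmetic, then
there is a sentence $G_F$ in the language of $F$ such that neither $G_F$ nor
its negation is provable in $F$:
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F .
\]
```

Note what it does and doesn’t say: it’s a statement about the limits of any single formal system, not a claim that “mathematics is too rich” for axioms.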
Moving on to Turing, here’s what they say:
Turing proved that, in general, there are functions not computable by means of
algorithms. In other words, there are problems non solvable simply by means of sets of
instructions. For example the “halting problem” is incomputable. This means that a
mechanical procedure, able to tell us if a certain computer program will halt after a
finite number of steps, cannot exist. In general, information is too rich to be derived
from a limited number of instructions.
This is a thoroughly dreadful pile of crap. Turing did show that some
things were non-computable – that’s the Halting Theorem. But the
step from that to “information is too rich” is a complete non-sequitur. Worse, it’s
meaningless: “information is too rich” is an incomplete statement.
What information? A claim like that cannot stand on its own.
Information can be generated by anything: finite sets of
instructions, noise on a telephone line, emissions of a particular wavelength from a
star – all of them produce information. And
the description of Turing as a whole is, once again, thoroughly misleading. They quite
deliberately leave out Turing’s greatest contributions: the fundamental concept of the
universal computing machine, and the meaning of computation itself!
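And since we’re on the subject: the halting theorem itself fits in a few lines of Python. This is my own sketch of Turing’s diagonalization argument, not anything from the paper – given any claimed halting-decider, you can mechanically construct a program that the decider must get wrong.

```python
def defeat(claimed_halts):
    """Given any claimed decider claimed_halts(fn) -> bool (True meaning
    "fn() halts"), build a function the decider must be wrong about."""
    def g():
        if claimed_halts(g):
            while True:        # decider said "halts" -> loop forever
                pass
        else:
            return "halted"    # decider said "loops" -> halt at once
    return g

# Try it with a concrete "decider" that always answers "loops":
g = defeat(lambda fn: False)
print(g())  # → "halted" -- the decider was wrong about g
```

Whatever decider you plug in, its adversary contradicts it: if the decider says “halts”, the adversary loops; if it says “loops”, the adversary halts. That’s the whole theorem – and notice that it says nothing whatsoever about information being “too rich”.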
On to Chaitin!
Chaitin saw relations between the Gödel’s results and Turing’s ones. Gödel’s incompleteness and Turing’s incomputability are two aspects of the same problem. Chaitin expressed that problem yet another way. In AIT one defines the algorithmic complexity H(x) of a bit string “x” as the minimal computer program able to output it. When H(x) is near equal to “x’ one says the “x” string is “uncompressible” or “irreducibly complex” (IC). In other words it contains “non-minimizable” information. Expressed in AIT terminology, the Gödel’s and Turing’s theorems prove that the major part of information in different fields is in general uncompressible. In particular a Turing machine (a specialized computer) is an uncompressible system. The AIT definition of complexity can be related to the concepts of the ID theory. The information algorithmic content H(x) is related to “complex specified information” (CSI). Moreover the information incompressibility of AIT is related to the “irreducible complexity” concept (IC).
Once again, we get a dreadful misrepresentation, full of errors.
A Turing machine is not an uncompressible system. In fact, the claim
that a Turing machine is uncompressible is, once again, an incomplete statement. A
Turing machine is a general construct: there are an infinite number of Turing
machines that compute the same result. The smallest of those machines might be uncompressible – or there might be a smaller device that could compute the same result. (For example, a Turing machine that computes a decision
function for a simple regular language would likely be larger than an equivalent NFA; in that case, the Turing machine for the language would not be a non-compressible description of the language.)
And it gets worse. They claim that the information content described by Chaitin’s
theory is related to CSI – when in fact, as I’ve argued before, CSI is gibberish in Chaitin’s terms. And they claim that non-compressibility is related to irreducible complexity. That last is true – except that it’s not a good thing for their argument: as I’ve explained before, Chaitin proved that it is impossible, in general, to recognize when a string is uncompressible – which, in terms of irreducible complexity, means that it is impossible to determine
whether or not something is IC!
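A cheap way to see the distinction – my own illustration, using an off-the-shelf compressor rather than true algorithmic complexity: the compressed size of a string is an upper bound on its H(x), up to the fixed overhead of the decompressor. A patterned string compresses to almost nothing; random bytes don’t compress at all. Note the asymmetry, though: a compressor failing to shrink a string is not a proof that the string is uncompressible, which is precisely the recognition problem Chaitin showed to be unsolvable.

```python
import os
import zlib

structured = b"ab" * 5_000        # 10,000 bytes of pure pattern
random_ish = os.urandom(10_000)   # 10,000 bytes from the OS entropy pool

# zlib's output length is an upper bound on H(x), plus decompressor overhead.
print(len(zlib.compress(structured, 9)))  # tiny: a few dozen bytes
print(len(zlib.compress(random_ish, 9)))  # roughly 10,000 bytes, or a bit more
```

zlib is, of course, just one fixed compressor, not the shortest-program measure of AIT – but the one-sided nature of the bound is exactly the same.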
Next, they get around to making their real argument: that somehow,
the kinds of incompleteness results proven by Turing, Gödel, and Chaitin imply that there must be an intelligent agent intervening to make life work:
In the origin-of-life biological problem we have as inputs: matter (atoms of chemical elements), energy (in all its forms), natural laws and randomness. Evolutionists believe that these inputs are sufficient to obtain a living cell without the need of intelligence. Natural laws are a set of rules. ID theorists believe these laws are intelligently designed. Moreover they think the universe is fine tuned for life. Randomness is the simplest rule: a blind choice among atoms. If evolutionists were right, accordingly to the AIT terminology, the algorithmic complexity of cell would be compressible. Life would have an information content reducible.
This is absolutely incorrect, on two levels.

First, according to Chaitin, randomness is by definition uncompressible. In fact, in Chaitin’s framework, it’s remarkably easy to prove that most strings are uncompressible. We would not expect the outcome of a purely random process to have low algorithmic complexity – that is, to be highly compressible. Quite the reverse.

Second, evolution is not a random process. It’s a highly selective process. So in fact, we would expect the results of
evolution to be more compressible than true randomness, and most likely
less compressible than something produced by a design process driven by
an intelligent agent.
The rest of the paper just continually rehashes the same points made in the quoted sections. They repeat their mischaracterizations of the work of Gödel,
Turing, and Chaitin. They repeat their errors concerning “the complexity of information”. They repeat their errors about randomness not being able to produce uncompressible information. And they add a few more long-winded non-sequiturs. But there’s no more actual content to this paper. In fact, it’s a highly compressible mish-mash. Which is, in fact, the only part of their paper that actually supports their argument: because clearly this mess is not the result of intelligent design, and yet, it’s highly compressible.