Category Archives: Debunking Creationism

Restudying Math in light of The First Scientific Proof of God?

A reader sent me a link to [this amusing blog][blog]. It’s by a guy named George Shollenberger, who claims to have devised The First Scientific Proof of God (and yes, he always capitalizes it like that).
George suffers from some rather serious delusions of grandeur. Here’s a quote from his “About Me” bio on his blog:
>I retired in 1994 and applyied my hard and soft research experience to today’s
>world social problems. After retirement, my dual research career led to my
>discovery of the first scientific proof of God. This proof unifies the fields
>of science and theology. As a result of my book, major changes can be expected
>throughout the world.
>…
>I expect these blogs and the related blogs of other people to be detected by
>Jesus Christ and those higher intelligent humans who already live on other
>planets.
So far, he has articles on his blog about how his wonderful proof should cause us to start over again in the fields of science, mathematics, theology, education, medical care, economics, and religion.
Alas, the actual First Scientific Proof of God is [only available in his book][buymybook]. But we can at least look at why he thinks we need to [restudy the field of mathematics][restudy].
>The field of mathematics is divided into pure and applied mathematics. Pure
>mathematicians use mathematics to express their own thoughts and thus express
>the maximum degree of freedom found in the field of mathematics. On the other
>hand, applied mathematicians lose a degree of their freedom because they use
>mathematics to express the thoughts of people in the fields they serve. Most
>mathematicians are applied mathematicians and serve either counters (e.g.,
>accountants, pollsters, etc.) or sciences (e.g., physicists, sociologists,
>etc.).
That’s a pretty insulting characterization of mathematicians, but since George is an engineer by training, it’s not too surprising – that’s a fairly common attitude about mathematicians among engineers.
>The field of physics is served by applied mathematicians who are called
>mathematical physicists. These physicists are the cause of the separation of
>theologians and scientists in the 17th century, after Aristotle’s science was
>being challenged and the scientific method was beginning to be applied to all
>sciences. But, these mathematical physicists did not challenge Aristotle’s
>meaning of infinity. Instead, they accepted Aristotle’s infinity, which is
>indeterminate and expressed by infinite series such as the series of integers (
>1, 2, 3, ….etc.). Thus, to the mathematical physicist, a determinate infinity
>does not exist. This is why many of today’s physicists reject the idea of an
>infinite God who creates the universe. I argue that this is a major error in
>the field of mathematics and explain this error in the first chapter of The
>First Scientific Proof of God.
So, quick aside? What was Aristotle’s infinity? The best article I could find quickly is [here][aristotle-infinity]. The short version? Aristotle believed that infinity doesn’t really *exist*. After all, there’s no number you can point to and say “That’s infinity”. You can never assemble a quantity of apples where you can say “There’s infinity apples in there”. Aristotle’s idea about infinity was that it’s a term that describes a *potential*, but not an *actual* number. He also went on to describe two different kinds of infinity – infinity by division (which describes zero, which he wasn’t sure should really be considered a *number*); and infinity by addition (which corresponds to what we normally think of as infinity).
So. George’s argument comes down to: mathematics, and in particular, mathematical physics, needs to be rebooted, because it uses the idea of infinity as potential – that is, there is no specific *number* that we can call infinity. So since our math says that there isn’t, well, that means we should throw it all away. Because, you see, according to George, there *is* a number infinity. It’s spelled G O D.
Except, of course, George is wrong. George needs to be introduced to John Conway, who devised the surreal numbers, which *do* contain infinity as a number. Oh, well.
Even if you were to accept his proposition, what difference would it make?
Well – there are two ways it could go.
We could go the [surreal][onag] [numbers][surreal] route. In the surreal numbers (or several similar alternatives), infinity *does* exist as a number, and arithmetic with it is perfectly consistent – though not always the way naive intuition expects: in the surreals, dividing ω by two gives a *different*, smaller infinite number. If we did that, it would have no real effect on science: the surreal numbers contain the ordinary reals, and they only behave differently once you get to infinitesimals and infinities.
If we didn’t go the surreal-ish route, then we’re screwed. If infinity is a *real* real number, then the entire number system collapses. What’s 1/0? If infinity is *real*, then 1/0 = infinity. What about 2/0? Is that 2*infinity? If it is, it makes no sense; if it isn’t, it makes no sense.
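To spell out the collapse (this is just the standard textbook argument, assuming the ordinary rules of arithmetic are supposed to keep working once infinity is admitted as a real number):

$$\frac{1}{0} = \infty \quad\text{and}\quad \frac{2}{0} = \infty \;\;\Longrightarrow\;\; 1 = 0\cdot\infty = 2$$

Either you accept that 1 = 2, or you give up some of the usual rules of arithmetic around that new “number” – which is exactly what careful systems that do include infinities (like the surreals) end up doing.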
>I believe that the field of mathematics must restudy their work by giving ample
>consideration to the nature of man’s symbolic languages, the nature of the
>human mind, Plato’s negative, and the nature of dialectical thinking.
Plato’s negative is, pretty much, the negative of intuitionistic logic. Plato claimed that there’s a difference between X, not-X, and the opposite of X. His notion of the opposite of X is the intuitionistic logic notion of not-X; his notion of not-X is the intuitionistic notion of “I don’t have a proof of X”.
In other words, George is hopelessly ignorant of real mathematics; and his reasoning about what needs to be changed about math makes no sense at all.
[aristotle-infinity]: http://plato.stanford.edu/entries/aristotle-mathematics/supplement3.html
[blog]: http://georgeshollenberger.blogspot.com
[restudy]: http://georgeshollenberger.blogspot.com/2006/07/restudying-field-of-mathematics.html
[buymybook]: http://rockstarramblings.blogspot.com/2006/06/doggerel-19-read-my-book.html
[surreal]: http://www.tondering.dk/claus/surreal.html
[onag]: http://www.akpeters.com/product.asp?ProdCode=1276

Mathematicians and Evolution: My Two Cents

There’s been a bunch of discussion here at ScienceBlogs about whether or not mathematicians are qualified to talk about evolution, triggered by [an article by ID-guy Casey Luskin][luskin]. So far, [Razib at Gene Expression][gnxp], Jason at EvolutionBlog ([here][evblog1] and [here][evblog2]), and [John at Stranger Fruit][sf] have all commented on the subject. So I thought it was about time for me to toss in my two cents as well, given that I’m a math geek who’s done rather a lot of writing about evolution here at this blog.
I don’t want to spend a lot of time rehashing what’s already been said by others. So I’ll start off by just saying that I absolutely agree that just being a mathematician gives you absolutely *no* qualifications to talk about evolution, and that an argument about evolution should *not* be considered any more credible because it comes from a PhD in mathematics rather than a plumber. That’s not to say that there is no role for mathematics in the discussion of evolution – just that being a mathematician doesn’t give you any automatic expertise or credibility about the subject. A mathematician who wants to study the mathematics of evolution needs to *study evolution* – and it’s the knowledge of evolution that they gain from studying it that gives them credibility about the topic, not their background in mathematics. Luskin’s argument is nothing but an attempt to cover up for the fact that the ID “scientists petition” has a glaring lack of signatories who actually have any qualifications to really discuss evolution.
What I would like to add to the discussion is something about what I do here on this blog with respect to writing about evolution. As I’ve said plenty of times, I’m a computer scientist. I certainly have no qualifications to talk about evolution: I’ve never done any formal academic study of evolution; I’ve certainly never done any professional work involving evolution; I can barely follow [work done by qualified mathematicians who *do* study evolution][gm-good-ev].
But if you look at my writing on this blog, what I’ve mainly done is critiques of the IDists and creationists who attempt to argue against evolution. And here’s the important thing: the math that they do – the kind of arguments coming from the people that Luskin claims are uniquely well suited to argue about evolution – are so utterly, appallingly horrible that it doesn’t take a background in evolution to be able to tear them to ribbons.
To give an extreme example, remember the [infamous Woodmorappe paper][woodie] about Noah’s ark? You don’t need to be a statistician to know that using the *median* is wrong. It’s such a shallow and obvious error that anyone who knows any math at all should be able to knock it right down. *Every* mathematical argument that I’ve seen from IDists and/or creationists has exactly that kind of problem: errors so fundamental and so obvious that even without having to get into the detailed study of evolution, anyone who takes the time to actually *look at the math* can see why it’s wrong. It’s not always as bad as Woodie, but just look at things like [Dembski’s specified complexity][dembski-sc]: anyone who knows information theory can see that it’s a self-contradicting definition; you don’t need to be an expert in mathematical biology to see the problem – the problem is obvious in the math itself.
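To make the statistical point concrete, here’s a tiny illustration with made-up numbers (nothing here comes from Woodmorappe’s actual tables; it’s just the general reason why the median is the wrong summary when totals are what matter):

```python
# Made-up masses for a handful of "animals", heavily skewed - as real animal
# masses are - by a few very large creatures.
from statistics import mean, median

masses_kg = [0.02, 0.05, 0.1, 0.5, 1, 2, 5, 40, 300, 4000]

print(median(masses_kg))                    # 1.5   -> the "typical" animal is tiny
print(mean(masses_kg))                      # ~434.9 -> totals scale with this, not the median
print(median(masses_kg) * len(masses_kg))   # 15.0  -> "total" estimated from the median
print(sum(masses_kg))                       # 4348.67 -> the actual total
```

Any estimate of total space, food, or waste built on the median misses the real requirement by a couple of orders of magnitude.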
That fact in itself should be enough to utterly discredit Luskin’s argument: the so-called mathematicians that he’s so proud to have on his side aren’t even capable of putting together remotely competent mathematical arguments about evolution.
[luskin]: http://www.evolutionnews.org/2006/07/mathematicians_and_evolution.html
[gnxp]: http://scienceblogs.com/gnxp/2006/07/math_and_creation.php
[evblog1]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t.php
[evblog2]: http://scienceblogs.com/evolutionblog/2006/07/are_mathematicians_qualified_t_1.php
[sf]: http://scienceblogs.com/strangerfruit/2006/07/more_on_mathematicians_1.php
[gm-good-ev]: http://scienceblogs.com/goodmath/2006/07/using_good_math_to_study_evolu.php
[woodie]: http://goodmath.blogspot.com/2006/06/more-aig-lying-with-statistics-john.html
[dembski-sc]: http://scienceblogs.com/goodmath/2006/06/dembskis_profound_lack_of_comp.php

Why I Hate Religious Bayesians

Last night, a reader sent me a link to yet another wretched attempt to argue for the existence of God using Bayesian probability. I really hate that. Over the years, I’ve learned to dread Bayesian arguments, because so many of them are things like this, where someone cobbles together a pile of nonsense, dressing it up with a gloss of mathematics by using Bayesian methods. Of course, it’s always based on nonsense data; but even in the face of a lack of data, you can cobble together a Bayesian argument by pretending to analyze things in order to come up with estimates.

You know, if you want to believe in God, go ahead. Religion is ultimately a matter of personal faith and spirituality. Arguments about the existence of God always ultimately come down to that. Why is there this obsessive need to justify your beliefs? Why must science and mathematics be continually misused in order to prop up your belief?

Anyway… Enough of my whining. Let’s get to the article. It’s by a guy named Robin Collins, and it’s called “God, Design, and Fine-Tuning”.

Let’s start right with the beginning.

>Suppose we went on a mission to Mars, and found a domed structure in which everything was set up just right for life to exist. The temperature, for example, was set around 70° F and the humidity was at 50%; moreover, there was an oxygen recycling system, an energy gathering system, and a whole system for the production of food. Put simply, the domed structure appeared to be a fully functioning biosphere. What conclusion would we draw from finding this structure? Would we draw the conclusion that it just happened to form by chance? Certainly not. Instead, we would unanimously conclude that it was designed by some intelligent being. Why would we draw this conclusion? Because an intelligent designer appears to be the only plausible explanation for the existence of the structure. That is, the only alternative explanation we can think of–that the structure was formed by some natural process–seems extremely unlikely. Of course, it is possible that, for example, through some volcanic eruption various metals and other compounds could have formed, and then separated out in just the right way to produce the “biosphere,” but such a scenario strikes us as extraordinarily unlikely, thus making this alternative explanation unbelievable.

>The universe is analogous to such a “biosphere,” according to recent findings in physics. Almost everything about the basic structure of the universe–for example, the fundamental laws and parameters of physics and the initial distribution of matter and energy–is balanced on a razor’s edge for life to occur. As eminent Princeton physicist Freeman Dyson notes, “There are many . . .lucky accidents in physics. Without such accidents, water could not exist as liquid, chains of carbon atoms could not form complex organic molecules, and hydrogen atoms could not form breakable bridges between molecules” (1979, p.251)–in short, life as we know it would be impossible.

Yes, it’s the good old ID argument about “It looks designed, so it must be”. That’s the basic argument all the way through; they just dress it up later. And as usual, it’s wrapped up in one incredibly important assumption, which they cannot and do not address: that we understand what it would mean to change the fundamental structure of the universe.

What would it mean to change, say, the ratio of the strengths of the electromagnetic force and gravity? What would matter look like if we did? Would stars be able to exist? Would matter be able to form itself into the kinds of complex structures necessary for life?

We don’t know. In fact, we don’t even really have a clue. And not knowing that, we cannot meaningfully make any argument about how likely it is for the universe to support life.

They do pretend to address this:

>Various calculations show that the strength of each of the forces of nature must fall into a very small life-permitting region for intelligent life to exist. As our first example, consider gravity. If we increased the strength of gravity on earth a billionfold, for instance, the force of gravity would be so great that any land-based organism anywhere near the size of human beings would be crushed. (The strength of materials depends on the electromagnetic force via the fine-structure constant, which would not be affected by a change in gravity.) As astrophysicist Martin Rees notes, “In an imaginary strong gravity world, even insects would need thick legs to support them, and no animals could get much larger.” (Rees, 2000, p. 30). Now, the above argument assumes that the size of the planet on which life formed would be an earth-sized planet. Could life forms of comparable intelligence to ourselves develop on a much smaller planet in such a strong-gravity world? The answer is no. A planet with a gravitational pull of a thousand times that of earth – which would make the existence of organisms of our size very improbable – would have a diameter of about 40 feet or 12 meters, once again not large enough to sustain the sort of large-scale ecosystem necessary for organisms like us to evolve. Of course, a billion-fold increase in the strength of gravity is a lot, but compared to the total range of strengths of the forces in nature (which span a range of 10^40 as we saw above), this still amounts to a fine-tuning of one part in 10^31. (Indeed, other calculations show that stars with life-times of more than a billion years, as compared to our sun’s life-time of ten billion years, could not exist if gravity were increased by more than a factor of 3000. This would have significant intelligent life-inhibiting consequences.) (3)

Does this really address the problem? No. How would matter be different if gravity were a billion times stronger, and EM didn’t change? We don’t know. For the sake of this argument, they pretend that mucking about with those ratios wouldn’t alter the nature of matter at all. That’s what they’re going to build their argument on: the universe must support life exactly like us – carbon-based life, on a planetary surface, where matter behaves exactly the way it does in our universe. In other words: if you assume that everything has to be exactly as it is in our universe, then only our universe is suitable.

They babble on about this for quite some time; let’s skip forwards a bit, to where they actually get to the Bayesian stuff. What they want to do is use the likelihood principle to argue for design. (Of course, they need to obfuscate, so they cite it under three different names, and finally use the term “the prime principle of confirmation” – after all, it sounds much more convincing than “the likelihood principle”!)

The likelihood principle is a variant of Bayes’ theorem, applied to experimental systems. The basic idea of it is to take the Bayesian principle of modifying an event probability based on a prior observation, and to apply it backwards to allow you to reason about the probability of two possible priors given a final observation. In other words, take the usual Bayesian approach of asking: “Given that Y has already occurred, what’s the probability of X occurring?”; turn it around, and say “X occurred. For it to have occurred, either Y or Z must have occurred as a prior. Given X, what are the relative probabilities for Y and Z as priors?”
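In symbols (this is just the standard Bayes-factor form, not anything special from Collins’s paper): for evidence $E$ and competing hypotheses $H_1$ and $H_2$,

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}$$

The likelihood principle, in the form Collins leans on, says that $E$ counts as evidence favoring $H_1$ over $H_2$ exactly when $P(E \mid H_1) > P(E \mid H_2)$. Note that the posterior ratio on the left still depends on the prior ratio $P(H_1)/P(H_2)$, which is why even a large likelihood ratio doesn’t by itself tell you that $H_1$ is probable.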

There is some controversy over when the likelihood principle is applicable. But let’s ignore that for now.

>To further develop the core version of the fine-tuning argument, we will summarize the argument by explicitly listing its two premises and its conclusion:
>
>Premise 1. The existence of the fine-tuning is not improbable under theism.
>
>Premise 2. The existence of the fine-tuning is very improbable under the atheistic single-universe hypothesis. (8)
>
>Conclusion: From premises (1) and (2) and the prime principle of confirmation, it follows that the fine-tuning data provides strong evidence to favor of the design hypothesis over the atheistic single-universe hypothesis.
>
>At this point, we should pause to note two features of this argument. First, the argument does not say that the fine-tuning evidence proves that the universe was designed, or even that it is likely that the universe was designed. Indeed, of itself it does not even show that we are epistemically warranted in believing in theism over the atheistic single-universe hypothesis. In order to justify these sorts of claims, we would have to look at the full range of evidence both for and against the design hypothesis, something we are not doing in this paper. Rather, the argument merely concludes that the fine-tuning strongly supports theism over the atheistic single-universe hypothesis.

That’s pretty much their entire argument. That’s as mathematical as it gets. Doesn’t stop them from arguing that they’ve mathematically demonstrated that theism is a better hypothesis than atheism, but that’s really their whole argument.

Here’s how they argue for their premises:

>Support for Premise (1).
>
>Premise (1) is easy to support and fairly uncontroversial. The argument in support of it can be simply stated as follows: since God is an all good being, and it is good for intelligent, conscious beings to exist, it not surprising or improbable that God would create a world that could support intelligent life. Thus, the fine-tuning is not improbable under theism, as premise (1) asserts.

Classic creationist gibberish: pretty much the same stunt that Swinburne pulled. They pretend that there are only two possibilities. Either (a) there’s exactly one God which has exactly the properties that Christianity attributes to it; or (b) there are no gods of any kind.

They’ve got to stick to that – because if they admitted more than two possibilities, they’d have to actually consider why their deity is more likely than any of the other possibilities. They can’t come up with an argument that Christianity is better than atheism if they acknowledge that there are thousands of possibilities as likely as theirs.

>Support for Premise (2).
>
>Upon looking at the data, many people find it very obvious that the fine-tuning is highly improbable under the atheistic single-universe hypothesis. And it is easy to see why when we think of the fine-tuning in terms of the analogies offered earlier. In the dart-board analogy, for example, the initial conditions of the universe and the fundamental constants of physics can be thought of as a dart-board that fills the whole galaxy, and the conditions necessary for life to exist as a small one-foot wide target. Accordingly, from this analogy it seems obvious that it would be highly improbable for the fine-tuning to occur under the atheistic single-universe hypothesis–that is, for the dart to hit the board by chance.

Yeah, that’s pretty much it. The whole argument for why fine-tuning is less probable in a universe without a deity than in a universe with one. Because “many people find it obvious”, and because they’ve got a clever dartboard analogy.

They make a sort of token effort to address the obvious problems with this, but they’re really all nothing but more empty hand-waving. I’ll just quote one of them as an example; you can follow the link to the article to see the others if you feel like giving yourself a headache.

>Another objection people commonly raise against the fine-tuning argument is that as far as we know, other forms of life could exist even if the constants of physics were different. So, it is claimed, the fine-tuning argument ends up presupposing that all forms of intelligent life must be like us. One answer to this objection is that many cases of fine-tuning do not make this presupposition. Consider, for instance, the cosmological constant. If the cosmological constant were much larger than it is, matter would disperse so rapidly that no planets, and indeed no stars could exist. Without stars, however, there would exist no stable energy sources for complex material systems of any sort to evolve. So, all the fine-tuning argument presupposes in this case is that the evolution of life forms of comparable intelligence to ourselves requires some stable energy source. This is certainly a very reasonable assumption.
>
>Of course, if the laws and constants of nature were changed enough, other forms of embodied intelligent life might be able to exist of which we cannot even conceive. But this is irrelevant to the fine-tuning argument since the judgement of improbability of fine-tuning under the atheistic single-universe hypothesis only requires that, given our current laws of nature, the life-permitting range for the values of the constants of physics (such as gravity) is small compared to the surrounding range of non-life-permitting values.

Like I said at the beginning: the argument comes down to a hand-wave that if the universe didn’t turn out exactly like ours, it must be no good. Why does a lack of hydrogen fusion stars like we have in our universe imply that there can be no other stable energy source? Why is it reasonable to constrain the life-permitting properties of the universe to be narrow based on the observed properties of the laws of nature as observed in our universe?

Their argument? Just because.

Peer Reviewed Bad ID Math

In comments to [my recent post about Gilder’s article][gilder], a couple of readers asked me to take a look at a [DI promoted][dipromote] paper by
Albert Voie, called [Biological function and the genetic code are interdependent][voie]. This paper was actually peer reviewed and accepted by a journal called “Chaos, Solitons, and Fractals”. I’m not familiar with the journal, but it is published by Elsevier, a respectable publisher.
Overall, it’s a rather dreadful paper. It’s one of those wretched attempts to take Gödel’s theorem and try to apply it to something other than formal axiomatic systems.
Let’s take a look at the abstract: it’s pretty representative of the style of the paper.
>Life never ceases to astonish scientists as its secrets are more and more
>revealed. In particular the origin of life remains a mystery. One wonders how
>the scientific community could unravel a one-time past-tense event with such
>low probability. This paper shows that there are logical reasons for this
>problem. Life expresses both function and sign systems. This parallels the
>logically necessary symbolic self-referring structure in self-reproducing
>systems. Due to the abstract realm of function and sign systems, life is not a
>subsystem of natural laws. This suggests that our reason is limited in respect
>to solve the problem of the origin of life and that we are left taking life as
>an axiom.
We get a good idea of what we’re in for with that second sentence: there’s no particular reason to throw in an assertion about the probability of life; but he’s signaling his intended audience by throwing in that old canard without any support.
The babble about “function” and “sign” systems is the real focus of the paper. He creates this distinction between a “function” system (which is a mechanism that performs some function), and a “sign” system (which is information describing a system), and then tries to use a Gödel-based argument to claim that life is a self-referencing system that produces the classic problematical statements of incompleteness.
Gödel formulas are subsystems of the mind
-----------------------------------------
So. Let’s dive in and hit the meat of the paper. Section one is titled “Gödel formulas are subsystems of the mind”. The basic argument of the section is that the paradoxical statements that Gödel showed are unavoidable are strictly products of intelligence.
He starts off by providing a summary of the incompleteness theorem. He uses a quote from Wikipedia. The interesting thing is that he *misquotes* wikipedia; my guess is that it’s deliberate.
His quotation:
>In any consistent formalization of mathematics that is sufficiently strong to
>axiomatize the natural numbers — that is, sufficiently strong to define the
>operations that collectively define the natural numbers — one can construct a
>true (!) statement that can be neither proved nor disproved within that system
>itself.
In the [wikipedia article][wiki-incompleteness] that the quote comes from, where he places the “!”, there’s actually a footnote explaining that “true” is used in the disquotational sense, meaning (to quote the wikipedia article on disquotationalism): “that ‘truth’ is a mere word that is conventional to use in certain contexts of discourse but not a word that points to anything in reality”. (As an interesting sidenote, he provides a bibliographic citation noting that the quote comes from wikipedia; but he *doesn’t* identify the article that it came from. I had to go searching for those words.) Two paragraphs later, he includes another quotation of a summary of Gödel, which ends mid-sentence with an ellipsis. I don’t have a copy of the quoted text, but let’s just say that I have my doubts about the honesty of the statement.
The reason that I believe this removal of the footnote is deliberate is because he immediately starts to build on the “truth” of the self-referential statement. For example, the very first statement after the misquote:
>Gödel’s statement says: “I am unprovable in this formal system.” This turns out
>to be a difficult statement for a formal system to deal with since whether the
>statement is true or not the formal system will end up contradicting itself.
>However, we then know something that the formal system doesn’t: that the
>statement is really true.
The catch of course is that the statement is *not* really true. Incompleteness statements are neither true *nor* false. They are paradoxical.
And now we start to get to his real point:
>What might confuse the readers are the words *”there are true mathematical
>statements”*. It sounds like they have some sort of pre-existence in a Platonic
>realm. A more down to earth formulation is that it is always possible to
>**construct** or **design** such statements.
See, he’s trying to use the fact that we can devise the Gödel type circular statements as an “out” to demand design. He wants to argue that *any* self-referential statement is in the family of things that fall under the rubric of incompleteness; and that incompleteness means that no mechanical system can *produce* a self-referential statement. So the only way to create these self-referencing statements is by the intervention of an intelligent mind. And finally, he asserts that a self-replicating *device* is the same as a self-referencing *statement*; and therefore a self-replicating device is impossible except as a product of an intelligent mind.
There are lots of problems with that notion. The two key ones:
1. There are plenty of self-referential statements that *don’t* trigger
incompleteness. For example, in set theory, I *can* talk about “the set of
all sets that contain themselves”. I can prove that there are two
sets that meet that description: one contains itself, the other doesn’t.
There’s no paradox there; there’s no incompleteness issue.
2. Unintelligent mechanical systems can produce self-referential statements
that do fall under incompleteness. It’s actually not difficult: it’s
a *mechanical* process to generate canonical incompleteness statements. (The sketch just below shows the flavor of the trick.)
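To see just how mechanical this kind of self-reference is, here’s the classic quine trick in Python – a program whose output is exactly its own source. It isn’t a Gödel sentence, but it’s the same diagonal construction that Gödel used, and no intelligence is needed to crank out endless variations of it once you know the recipe:

```python
# The two lines below, run as a program, print themselves exactly (comments excluded).
# The trick: store a template of the program in a string, then substitute the
# string's own repr() back into that template.
s = 's = %r\nprint(s %% s)'
print(s % s)
```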
Computer programs and machines are subsystems of the mind
----------------------------------------------------------
So now we’re on to section two. Voie wants to get to the point of being able to
“prove” that life is a kind of a machine that has an incompleteness property.
He starts by saying a formal system is “abstract and non-physical”, and as such “it is really easy to see that they are subsystems of the human mind”, and “belong to another category of phenomena than subsystems of the laws of nature”.
On one level, it’s true; a formal system is an abstract set of rules, with no physical form. It does *not* follow that they are “subsystems of the human mind”. In fact, I’d argue that the statement “X is a subsystem of the human mind” is a totally meaningless statement. Given that we don’t quite understand what the mind is or how it works, what does it mean for something to be a “subsystem” of it?
There’s a clear undercurrent of mind/body dualism here, but he doesn’t bother to argue the point. He simply asserts the distinction as an implicit part of his argument.
From this point, he starts to try to define “function” in an abstract sense. He quotes wikipedia again (he doesn’t have much of a taste for citations in the primary literature!), leading to the statement (his statement, not a wikipedia quotation):
>The non-physical part of a machine fit into the same category of phenomena as
>formal systems. This is also reflected by the fact that an algorithm and an
>analogue computer share the same function.
Quoting wikipedia again, he moves on to: “A machine, for example, cannot be explained in terms of physics and chemistry.” Yeah, that old thing again. I’m sure the folks at Intel will be absolutely *shocked* to discover that they can’t explain a computer in terms of physics and chemistry. This is just degenerating into silliness.
>As the logician can manipulate a formal system to create true statements that
>are not formally derivable from the system, the engineer can manipulate
>inanimate matter to create the structure of the machine, which harnesses the
>laws of physics and chemistry for the purposes the machine is designed to
>serve. The cause to a machine’s functionality is found in the mind of the
>engineer and nowhere else.
Again: dualism. According to Voie, the “purpose” or “function” of the machine is described as a formal system; the machine itself is a physical system; and those are *two distinctly different things*: one exists only in the mind of the creator; one exists in the physical world.
The interdependency of biological function and sign systems
------------------------------------------------------------
And now, section three.
He insists on the existence of a “sign system”. A sign system, as near as I can figure it out (he never defines it clearly) is a language for describing and/or building function systems. He asserts:
>Only an abstract sign based language can store the abstract information
>necessary to build functional biomolecules.
This is just a naked assertion, completely unsupported. Why does a biomolecule *require* an abstract sign-based language? Because he says so. That’s all.
Now, here’s where the train *really* goes off the tracks:
>An important implication of Gödel’s incompleteness theorem is that it is not
>possible to have a finite description with itself as the proper part. In other
>words, it is not possible to read yourself or process yourself as process. We
>will investigate how this parallels the necessary coexistence of biological
>function and biological information.
This is the real key point of this section; and it is total nonsense. Gödel’s theorem says no such thing. In fact, what it does is demonstrate exactly *how* you can represent a formal system with itself as a part. There’s no problem there at all.
What’s a universal Turing machine? It’s a Turing machine that takes a description of a Turing machine as an input. And there *is* a universal Turing machine implementation of a universal Turing machine: a formal system which has itself as a part.
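As a loose illustration (not a UTM construction, just the flavor of “a description processed by the kind of machine it describes”), here’s the everyday version of that layering in Python, where the interpreter plays the role of the general-purpose machine and strings play the role of machine descriptions:

```python
# A "machine description" (a string of Python code) which itself contains and
# runs another description. Nothing about this nesting is paradoxical, and no
# mind needs to intervene for it to work.
inner = "print('inner description, run by the outer description')"
outer = "exec(inner)  # the outer description runs the inner one"

exec(outer)  # Python, acting as the general-purpose machine, runs the outer description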
Life is not a subsystem of the laws of nature
----------------------------------------------
It gets worse.
Now he’s going to try to put things together: he’s claimed that a formal system can’t include itself; he’s argued that biomolecules are the result of a formal sign system; so now he combines the two to say that life is a self-referential thing that requires the kind of self-reference that can only be the product of an intelligent mind:
>Life is fundamentally dependent upon symbolic representation in order to
>realize biological function. A system based on autocatalysis, like the
>hypothesized RNA-world, can’t really express biological function since it is a
>pure dynamical process. Life is autonomous with something we could call
>”closure of operations” or a cluster of functional parts relating to a whole
>(see [15] for a wider discussion of these terms). Functional parts are only
>meaningful under a whole, in other words it is the whole that gives meaning to
>its parts. Further, in order to define a sign (which can be a symbol, an index,
>or an icon) a whole cluster of self-referring concepts seems to be presupposed,
>that is, the definition cannot be given on a priori grounds, without implicitly
>referring to this cluster of conceptual agents [16]. This recursive dependency
>really seals off the system from a deterministic bottom up causation. The top
>down causation constitutes an irreducible structure.
Got it? Life is dependent on symbolic representation. But biochemical processes can’t possibly express biological function, because biological function is dependent on symbolic representations, which are outside of the domain of physical processes. He asserts the symbolic nature of biochemicals; then he asserts that symbolic stuff is a distinct domain separate from the physical; and therefore physical stuff can’t represent it. Poof! An irreducible structure!
And now, the crowning stupidity, at least when it comes to the math:
>In algorithmic information theory there is another concept of irreducible
>structures. If some phenomena X (such as life) follows from laws there should
>be a compression algorithm H(X) with much less information content in bits than
>X [17].
Nonsense, bullshit, pure gibberish. There is absolutely no such statement anywhere in information theory. He tries to build up more argument based on this
statement: but of course, it makes no more sense than the statement it’s built on.
But you know where he’s going: it’s exactly what he’s been building all along. The idea is what I’ve been mocking all along: Life is a self-referential system with two parts: a symbolic one, and a functional one. A functional system cannot represent the symbolic part of the biological systems. A symbolic system can’t perform any function without an intelligence to realize it in a functional system. And the two can’t work together without being assembled by an intelligent mind, because when the two are combined, you have a self-referential
system, which is impossible.
Conclusion
------------
So… To summarize the points of the argument:
1. Dualism: there is a distinction between the physical realm of objects and machines, and the abstract realm of symbols and functions; if something exists in the symbolic realm, it can’t be represented in the physical realm except by the intervention of an intelligent mind.
2. Gödel’s theorem says that self-referential systems are impossible, except by intervention of an intelligent mind. (wrong)
3. Gödel’s theorem says that incompleteness statements are *true*.(wrong)
4. Biological systems are a combination of functional and symbolic parts which form a self-referential system.
5. Therefore, biological systems can only exist as the result of the deliberate actions of an intelligent being.
This stinker actually got *peer-reviewed* and *accepted* by a journal. It just goes to show that peer review can *really* screw up badly at times. My guess is that, since the journal is apparently supposed to be about fractals and the like, the reviewers weren’t particularly familiar with Gödel and information theory. Because anyone with a clue about either would have sent this to the trashbin where it belongs.
[wiki-incompleteness]: http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem
[gilder]: http://scienceblogs.com/goodmath/2006/07/the_bad_math_of_gilders_new_sc.php
[dipromote]: http://www.uncommondescent.com/index.php/archives/722
[voie]: http://home.online.no/~albvoie/index.cfm

The Bad Math of Gilder's New Screed

As several [other][panda] [folks][pz] have mentioned, George Gilder has written a [new anti-evolution article][gilder-article] which was published in the National Review.
[panda]: http://www.pandasthumb.org/archives/2006/07/the_technogeek.html
[pz]: http://scienceblogs.com/pharyngula/2006/07/if_it_werent_for_those_feminis.php
[gilder-article]: http://www.discovery.org/scripts/viewDB/index.php?command=view&id=3631
There’s a lot to hate in this article. It’s a poorly written screed, which manages to mix together all of Gilder’s bogeymen: feminists, liberals, anti-supply-siders, peer reviewers, academics, and whoever else dares to disagree with him about, well, anything.
Plenty of folks are writing about the problems in this article; as usual, I’m going to ignore most of it, and focus specifically on the mathematical parts of it. Given that his argument is mathematical at root, those errors are fatal to the argument of the article as a whole.
We start with a really strange characterization of Shannon information theory:
>After Wealth & Poverty, my work focused on the subject of human creativity as
>epitomized by science and technology and embodied in computers and
>communications. At the forefront of this field is a discipline called
>information theory. Largely invented in 1948 by Claude Shannon of MIT, it
>rigorously explained digital computation and transmission by zero-one, or
>off-on, codes called “bits.” Shannon defined information as unexpected bits, or
>”news,” and calculated its passage over a “channel” by elaborate logarithmic
>rules. That channel could be a wire or another other path across a distance of
>space, or it could be a transfer of information across a span of time, as in
>evolution.
What’s weird about this characterization is that there’s a very strange shift in it. He starts off OK: “the channel could be a wire or another path across a distance of space”. Where he gets strange is when he *drops the channel* as he transitions from talking about transmitting information across space to transmitting information across time. Space versus time is not something that we talk about in Shannon’s information theory. Information is something abstract; it can be transferred over a channel. What “transferred” means is that the information originated at entity A; and after communication, that information has been seen by entity B. Space, time – they don’t make a difference. Gilder doesn’t get that.
>Crucial in information theory was the separation of content from conduit —
>information from the vehicle that transports it. It takes a low-entropy
>(predictable) carrier to bear high-entropy (unpredictable) messages. A blank
>sheet of paper is a better vessel for a new message than one already covered
>with writing. In my book Telecosm (2000), I showed that the most predictable
>available information carriers were the regular waves of the electromagnetic
>spectrum and prophesied that all digital information would ultimately flow over
>it in some way. Whether across time (evolution) or across space
>(communication), information could not be borne by chemical processes alone,
>because these processes merged or blended the medium and the message, leaving
>the data illegible at the other end.
There’s a technical term for this kind of writing. We call it “bullshit”. He’s trying to handwave his way past the facts that disagree with him.
If you want to talk about information carried by a medium, that’s fine. But his arguments about “information can not be borne by chemical processes alone?” Gibberish.
DNA is a chemical that makes a rather nice communication channel. It’s got a common stable substrate on which you can superimpose any message you want – any information, any length. It’s an absolutely *wonderful* example of a medium for carrying information. But he can’t admit that; he can’t even really discuss it in detail, because it would blow his argument out of the water. Thus the handwaving “chemical processes can’t do it”, with absolutely no real argument for *why* a chemical process “merges the medium and the message”.
For another example of how this argument fails: consider a rewritable CD (a CD-RW) in a computer. The medium is a piece of plastic with a phase-change recording layer in it. The message is the pattern of crystalline and amorphous spots in that layer. To “record” information on it, you heat the layer with a laser, and you *modify the medium itself* by changing its state at each point.
Or best of all: take electromagnetic waves, his example of the “very best” communication medium. It’s a waveform, where we superimpose our signal on the wave – the wave isn’t like a piece of paper where we’ve stuck ink to its surface: we force it to carry information *by changing the wave itself*. The basic frequency of the wave, the carrier, is not modified, but the wave amplitudes *are* modified – it’s not just a simple wave anymore, we’ve combined the signal and the medium into something different.
What’s the difference between that and DNA? You can look at DNA as a long chain of sockets. Each socket must be filled with one of 4 different letters. When we “write” information onto DNA, we’re filling those sockets. We’ve changed the DNA by filling the sockets; but just like the case of radio waves, there’s a basic carrier (the underlying chain/carrier wave), and a signal coded onto it (the letters/wave amplitudes).
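Here’s a toy version of that “carrier plus signal” picture in code. The two-bits-per-base scheme is my own arbitrary choice for illustration (it has nothing to do with how cells actually encode anything); the point is just that a fixed four-letter substrate can carry any message at all without the message and the substrate “blending”:

```python
# Encode arbitrary bytes onto a fixed four-letter alphabet, two bits per base,
# and decode them back. The "carrier" (the alphabet and the chain structure)
# never changes; only which letter sits in each position - the signal - does.
BASES = "ACGT"

def encode(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for ch in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

message = b"any message at all"
strand = encode(message)
assert decode(strand) == message
print(strand)
```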
From this, he tries to go further, and start mixing in some computation theory, building on his lack of comprehension of information theory.
>I came to see that the computer offers an insuperable obstacle to Darwinian
>materialism. In a computer, as information theory shows, the content is
>manifestly independent of its material substrate. No possible knowledge of the
>computer’s materials can yield any information whatsoever about the actual
>content of its computations.
This is manifestly not true. In fact, there was a fascinating piece of work a few years ago where people were able to break the cryptographic system used by a smartcard by combining knowledge of its physical structure with measurements of its power consumption. From those two things, they were able to work out exactly what the card was doing, and to recover a supposedly inaccessible secret.
>The failure of purely physical theories to describe or explain information
>reflects Shannon’s concept of entropy and his measure of “news.” Information is
>defined by its independence from physical determination: If it is determined,
>it is predictable and thus by definition not information. Yet Darwinian science
>seemed to be reducing all nature to material causes.
Again, gibberish, on many levels.
Shannon’s theory does *not* define information by its “independence from physical determination”. In fact, the best “information generators” that we know about are purely physical: radioactive decay and various quantum phenomena are the very best sources we’ve discovered so far for generating high-entropy information.
And even the most predictable, deterministic process produces information. It may be *a small amount* of information – deterministic processes are generally low-entropy with respect to information – but they do generate information.
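For reference, the quantity Shannon actually defined (a textbook statement, nothing from Gilder’s article): for a source emitting symbols with probabilities $p_i$, the entropy is

$$H = -\sum_i p_i \log_2 p_i \;\text{ bits per symbol.}$$

A fair coin gives $H = 1$ bit per toss; a heavily biased coin that comes up heads 99% of the time gives $H \approx 0.08$ bits per toss – a small amount, but a perfectly well-defined one. Nothing in the definition cares whether the source is “physically determined” or not.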
And then, he proceeds to shoot himself in the foot. He’s insisted that chemical processes can’t be information carriers. But now he asserts that DNA is an information carrier in his sense:
>Biologists commonly blur the information into the slippery synecdoche of DNA, a
>material molecule, and imply that life is biochemistry rather than information
>processing. But even here, the deoxyribonucleic acid that bears the word is not
>itself the word. Like a sheet of paper or a computer memory chip, DNA bears
>messages but its chemistry is irrelevant to its content. The alphabet’s
>nucleotide “bases” form “words” without help from their bonds with the helical
>sugar-phosphate backbone that frames them. The genetic words are no more
>dictated by the chemistry of their frame than the words in Scrabble are
>determined by the chemistry of their wooden racks or by the force of gravity
>that holds them.
Yup. He says earlier “information could not be borne by chemical processes alone, because these processes merged or blended the medium and the message, leaving the data illegible at the other end.” And here he describes how DNA can carry information using nothing but a chemical process. Ooops.
And he keeps on babbling. Next he moves on to “irreducible complexity”, and even tries to use Chaitin as a support:
>Mathematician Gregory Chaitin, however, has shown that biology is irreducibly
>complex in a more fundamental way: Physical and chemical laws contain hugely
>less information than biological phenomena. Chaitin’s algorithmic information
>theory demonstrates not that particular biological devices are irreducibly
>complex but that all biology as a field is irreducibly complex. It is above
>physics and chemistry on the epistemological ladder and cannot be subsumed
>under chemical and physical rules. It harnesses chemistry and physics to its
>own purposes. As chemist Arthur Robinson, for 15 years a Linus Pauling
>collaborator, puts it: “Using physics and chemistry to model biology is like
>using lego blocks to model the World Trade Center.” The instrument is simply
>too crude.
This is, again, what’s technically known as “talking out your ass”. Chaitin’s theory demonstrates no such thing. Chaitin’s theory doesn’t even come close to discussing anything that could be interpreted as saying anything about biology or chemistry. Chaitin’s theory talks about two things: what computing devices are capable of doing; and what the fundamental limits of mathematical reasoning are.
One of the most amazing things about Chaitin’s theory is that it shows how *any* computing device – even something as simple as a [Turing machine][turing] – can do all of the computations necessary to demonstrate the fundamental limits of any mathematical process. It doesn’t say “chemistry can’t explain biology”; in fact, it *can’t* say “chemistry can’t explain biology”.
[turing]: http://goodmath.blogspot.com/2006/03/playing-with-mathematical-machines.html
In fact, in this entire section, he never actually supports anything he says. It’s just empty babble. Biology is irreducibly complex. Berlinski is a genius who demonstrates IC in mathematics and biology. Chaitin supports the IC nature of biology. Blah, blah, blah. But in all of this, where he’s allegedly talking about how mathematical theories support his claim, he never actually *does any math*, or even talks about *how the theories he’s discussing apply to his subject*.

What timing! Dembski again demonstrates innumeracy

Right after finishing my post about how Dembski has convinced me that he is not a competent mathematician, I find PZ linking to a Panda’s Thumb post about Dembski, which shows how he does not understand the meaning of the mathematical term “normalization”.
Go look at the PT post: Something rotten in Denmark?
Is this guy really the best mathematician the ID folks have available to represent them?

Dembski's Profound Lack of Comprehension of Information Theory

I was recently sent a link to yet another of Dembski’s wretched writings about specified complexity, titled Specification: The Pattern That Signifies Intelligence.
While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I’ve now become convinced that he’s just an idiot who’s able to throw around mathematical jargon without understanding it.
In this paper, as usual, he’s spending rather a lot of time avoiding defining specification. Purportedly, he’s doing a survey of the mathematical techniques that can be used to define specification. Of course, while rambling on and on, he manages to never actually say just what the hell specification is – just goes on and on with various discussions of what it could be.
Most of which are wrong.
“But wait”, I can hear objectors saying. “It’s his theory! How can his own definitions of his own theory be wrong? Sure, his theory can be wrong, but how can his own definition of his theory be wrong?” Allow me to head off that objection before I continue.
Dembski’s theory of specified complexity as a discriminator for identifying intelligent design relies on the idea that there are two distinct quantifiable properties: specification, and complexity. He argues that if you can find systems that possess sufficient quantities of both specification and complexity, then those systems cannot have arisen except by intelligent intervention.
But what if Dembski defines specification and complexity as the same thing? Then his definitions are wrong: because he requires them to be distinct concepts, but he defines them as being the same thing.
Throughout this paper, he pretty much ignores complexity to focus on specification. He’s pretty careful never to say “specification is this”, but rather “specification can be this”. If you actually read what he does say about specification, and you go back and compare it to some of his other writings about complexity, you’ll find a positively amazing resemblance.
But onwards. Here’s the part that really blew my mind.
One of the methods that he purports to use to discuss specification is based on Kolmogorov-Chaitin algorithmic information theory. And in his explanation, he demonstrates a profound lack of comprehension of anything about KC theory.
First – he purports to discuss K-C within the framework of probability theory. K-C theory has nothing to do with probability theory. K-C theory is about the meaning of quantifying information; the central question of K-C theory is: How much information is in a given string? It defines the answer to that question in terms of computation and the size of programs that can generate that string.
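For reference, the standard definition (again, a textbook statement, not anything from Dembski’s paper): relative to a fixed universal machine $U$, the Kolmogorov-Chaitin complexity of a string $x$ is

$$K_U(x) = \min\{\, |p| \;:\; U(p) = x \,\}$$

the length of the shortest program that makes $U$ output $x$. A string is called (algorithmically) random, or incompressible, when no program much shorter than the string itself can produce it. Programs, machines, and string lengths – that’s the whole vocabulary; there’s no probability measure anywhere in the definition.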
Now, the quotes that blew my mind:

>Consider a concrete case. If we flip a fair coin and note the occurrences of heads and tails in
>order, denoting heads by 1 and tails by 0, then a sequence of 100 coin flips looks as follows:
>
>(R) 11000011010110001101111111010001100011011001110111
>00011001000010111101110110011111010010100101011110.
>
>This is in fact a sequence I obtained by flipping a coin 100 times. The problem algorithmic
>information theory seeks to resolve is this: Given probability theory and its usual way of
>calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms
>of their degree of randomness? Probability theory alone is not enough. For instance, instead of
>flipping (R) I might just as well have flipped the following sequence:
>
>(N) 11111111111111111111111111111111111111111111111111
>11111111111111111111111111111111111111111111111111.
>
>Sequences (R) and (N) have been labeled suggestively, R for “random,” N for “nonrandom.”
>Chaitin, Kolmogorov, and Solomonoff wanted to say that (R) was “more random” than (N). But
>given the usual way of computing probabilities, all one could say was that each of these
>sequences had the same small probability of occurring, namely, 1 in 2^100, or approximately 1 in
>10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of
>occurring.
>
>To get around this difficulty Chaitin, Kolmogorov, and Solomonoff supplemented conventional
>probability theory with some ideas from recursion theory, a subfield of mathematical logic that
>provides the theoretical underpinnings for computer science and generally is considered quite far
>removed from probability theory.

It would be difficult to find a more misrepresentative description of K-C theory than this. This has nothing to do with the original motivation of K-C theory; it has nothing to do with the practice of K-C theory; and it has pretty much nothing to do with the actual value of K-C theory. This is, to put it mildly, a pile of nonsense spewed from the keyboard of an idiot who thinks that he knows something that he doesn’t.
But it gets worse.

>Since one can always describe a sequence in terms of itself, (R) has the description
>
>copy '11000011010110001101111111010001100011011001110111
>00011001000010111101110110011111010010100101011110'.
>
>Because (R) was constructed by flipping a coin, it is very likely that this is the shortest
>description of (R). It is a combinatorial fact that the vast majority of sequences of 0s and 1s have
>as their shortest description just the sequence itself. In other words, most sequences are random
>in the sense of being algorithmically incompressible. It follows that the collection of nonrandom
>sequences has small probability among the totality of sequences so that observing a nonrandom
>sequence is reason to look for explanations other than chance.

This is so very wrong that it demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about anything. No one who actually understands K-C theory would ever make a statement like Dembski’s quote above. No one.
But to make matters worse – this statement explicitly invalidates the entire concept of specified complexity. What this statement means – what it explicitly says if you understand the math – is that specification is the opposite of complexity. Anything which possesses the property of specification by definition does not possess the property of complexity.
In information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses “specified complexity” is therefore something which is simultaneously compressible and non-compressible.
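You can see the tension concretely with a few lines of code, using zlib compression as a crude, computable stand-in for Kolmogorov complexity (which is uncomputable); the repetitive string plays the role of a “specified” pattern like Dembski’s sequence (N), and the random bytes play the role of a “complex” sequence like (R):

```python
# Compressed sizes as a rough proxy for algorithmic information content.
import os
import zlib

patterned = b"1" * 1000        # highly patterned, like the all-ones sequence (N)
random_ish = os.urandom(1000)  # high-entropy bytes, like the coin-flip sequence (R)

print(len(zlib.compress(patterned)))   # a handful of bytes: the pattern compresses away
print(len(zlib.compress(random_ish)))  # ~1000 bytes: essentially incompressible
```

The more a string looks “specified” in the compressibility sense, the less “complex” it is in the information-theory sense – you can’t have lots of both at once.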
The only thing that saves Dembski is that he hedges everything that he says. He’s not saying that this is what specification means. He’s saying that this could be what specification means. But he also offers a half-dozen other alternative definitions – with similar problems. Anytime you point out what’s wrong with any of them, he can always say “No, that’s not specification. It’s one of the others.” Even if you go through the whole list of possible definitions, and show why every single one is no good – he can still say “But I didn’t say any of those were the definition”.
But the fact that he would even say this – that he would present this as even a possibility for the definition of specification – shows that Dembski quite simply does not get it. He believes that he gets it – he believes that he gets it well enough to use it in his arguments. But there is absolutely no way that he understands it. He is an ignorant jackass pretending to know things so that he can trick people into accepting his religious beliefs.

Dembski and No Free Lunch with Competitive Agents (updated repost from blogger)

(Continuing in my series of updates of the GM/BM posts about the bad math of the IDists, I’m posting an update of my original critique of Dembski and his No Free Lunch nonsense. This post has more substantial changes than my last repost; based on the large numbers of comments I’ve gotten in the months since then, I’m addressing a bit more of the basics of how Dembski abuses NFL.)

It’s time to take a look at one of the most obnoxious duplicitous promoters of Bad Math, William Dembski. I have a deep revulsion for this character, because he’s actually a decent mathematician, but he’s devoted his skills to creating convincing mathematical arguments based on invalid premises. But he’s careful: he does his meticulous best to hide his assumptions under a flurry of mathematical jargon.

One of Dembski’s favorite arguments is based on the no free lunch theorems. In simple language, the NFL theorems say “Averaged over all fitness landscapes, no search function can perform better than a random walk”.

Let’s take a moment to consider what Dembski says NFL means when applied to evolution.

In Dembski’s framework, evolution is treated as a search algorithm. The search space is a graph. (This is a graph in the discrete-mathematics sense: a set of discrete nodes, with a finite number of edges to other nodes.) The nodes of the graph in this search space are outcomes of the search process at particular points in time; the edges exiting a node correspond to the possible changes that could be made to that node to produce a different outcome. To model the quality of a node’s outcome, we apply a fitness function, which produces a numeric value describing the fitness (quality) of the node.

The evolutionary search starts at some arbitrary node. It proceeds by looking at the edges exiting that node, and computes the fitness of their targets. Whichever edge produces the best result is selected, and the search algorithm progresses to that node, and then repeats the process.
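Here's a minimal sketch of that search loop, assuming an adjacency-list graph and a per-node fitness table; the representation and all of the names are my own illustrative choices, not Dembski's.

```python
def greedy_search(graph, fitness, start, max_steps):
    """Repeatedly move to the highest-fitness neighbor of the current node."""
    current = start
    for _ in range(max_steps):
        neighbors = graph.get(current, [])
        if not neighbors:   # dead end: nowhere left to go
            break
        current = max(neighbors, key=lambda n: fitness[n])
    return current

# Toy search space: four nodes, edges as an adjacency list.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
fitness = {"a": 0, "b": 2, "c": 1, "d": 3}
print(greedy_search(graph, fitness, "a", max_steps=10))  # -> "d"
```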

How do you test how well a search process works? You select a fitness function which describes the desired outcome, and see how well the search process does when measured against that fitness. The quality of your search process is defined by the limit, as maxlength grows, of the following (a rough code sketch follows the list):

  1. For all possible starting points in the graph:
    1. Run your search using your search fitness metric for maxlength steps to reach an end point.
    2. Using the desired outcome fitness, compute the fitness of the end point.
    3. Compute the ratio of that fitness to the maximum possible fitness under the desired outcome. This is the quality of your search for this starting point and length.
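Here is the promised sketch of that quality measure, using the same toy representation as the previous snippet (the search function is repeated so this runs on its own); search_fitness is what drives the search, desired_fitness is the yardstick, and both names are mine.

```python
def greedy_search(graph, fitness, start, max_steps):
    current = start
    for _ in range(max_steps):
        neighbors = graph.get(current, [])
        if not neighbors:
            break
        current = max(neighbors, key=lambda n: fitness[n])
    return current

def search_quality(graph, search_fitness, desired_fitness, max_steps):
    """Average, over every starting point, of how close the search's end
    point comes to the best possible desired-fitness value."""
    best_possible = max(desired_fitness.values())  # assumes positive fitness values
    ratios = []
    for start in graph:                                               # step 1
        end = greedy_search(graph, search_fitness, start, max_steps)  # step 1.1
        ratios.append(desired_fitness[end] / best_possible)           # steps 1.2 and 1.3
    return sum(ratios) / len(ratios)
```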

So – what does NFL really say?

“Averaged over all fitness functions”: take every possible assignment of fitness values to nodes. For each assignment, compute the quality of the resulting search, and take the average of those qualities. That average is the quality of the directed, or evolutionary, search.

“blind search”: blind search means instead of using a fitness function, at each step just pick an edge to traverse randomly.

So – NFL says that if you consider every possible assignment of fitness functions, you get the same result as if you didn’t use a fitness function at all.

At heart, this is a fancy tautology. The key is that “averaged over all fitness functions” bit. If you average over all fitness functions, then every node has the same fitness. So, in other words, if you consider a search in which you can’t tell the difference between different nodes, and a search in which you don’t look at the difference between different nodes, then you’ll get equivalently bad results.
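You can check this by brute force on a toy problem. The sketch below uses the standard NFL setting (performance is the best value seen after querying a fixed number of distinct points) rather than the graph-walk framing above; every detail of the toy, including the "clever" heuristic, is my own invention.

```python
import itertools

POINTS = 5    # size of the toy search space
QUERIES = 3   # how many distinct points each strategy may examine

def adaptive(f):
    """Pick each next point based on the values seen so far ("clever")."""
    seen = {0: f[0]}
    while len(seen) < QUERIES:
        unqueried = [p for p in range(POINTS) if p not in seen]
        # made-up heuristic: probe low indices after a high value, high ones otherwise
        nxt = min(unqueried) if max(seen.values()) >= 1 else max(unqueried)
        seen[nxt] = f[nxt]
    return max(seen.values())

def blind(f):
    """Always query points 0, 1, 2 in order, ignoring the values."""
    return max(f[p] for p in range(QUERIES))

adaptive_total = blind_total = 0
assignments = list(itertools.product(range(3), repeat=POINTS))  # every fitness function
for f in assignments:
    adaptive_total += adaptive(f)
    blind_total += blind(f)

# Both averages come out identical (5/3): averaged over all fitness
# assignments, the "clever" strategy gains nothing over the blind one.
print(adaptive_total / len(assignments), blind_total / len(assignments))
```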

Ok. So, let’s look at how Dembski responds to critiques of his NFL work. I’m going to focus on his paper Fitness Among Competitive Agents.

Now, in this paper, he’s responding to the idea that if you limit yourself to competitive fitness functions (loosely defined: fitness functions where, most of the time when you compare two edges from a node, the target you select is the one that is better according to the desired fitness function), then the result of running the search will, on average, be better than a random traversal.

Dembski’s response to this is to go into a long discussion of pairwise competitive functions. His focus is on the fact that a pairwise fitness function is not necessarily transitive. In his words (from page 2 of the PDF):

From the symmetry properties of this matrix, it is evident that just because one item happens to be pairwise superior to another does not mean that it is globally superior to the other. But that’s precisely the challenge of assigning fitness of competitive agents inasmuch as fitness is a global measure of adaptedness to an environment.

To provide such a global measure of adaptedness and thereby to overcome the intransitivities inherent in pairwise comparisons, fitness in competitive environments needs therefore to factor in average performance of agents as they compete across the board with other agents.

To translate that out of Dembski-speak: in pairwise competition, if A is better than B, and B is better than C, that doesn’t mean A is better than C. So, to measure competitive fitness, you need to average the performance of your competitive agents over all possible competitions.

The example he uses for this is a chess tournament: if you create a fitness function for chess players from the results of a series of tournaments, you can wind up with results where player A consistently beats player B, B consistently beats C, and C consistently beats A.
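Here's a tiny made-up version of that situation: a pairwise win-probability table in which A usually beats B, B usually beats C, and C usually beats A, yet the "across the board" average Dembski asks for gives every player exactly the same score.

```python
# Hypothetical pairwise win probabilities; the numbers are invented.
win_prob = {
    ("A", "B"): 0.9, ("B", "C"): 0.9, ("C", "A"): 0.9,
    ("B", "A"): 0.1, ("C", "B"): 0.1, ("A", "C"): 0.1,
}

players = ["A", "B", "C"]
for p in players:
    # average performance against all other players
    avg = sum(win_prob[(p, q)] for q in players if q != p) / (len(players) - 1)
    print(p, avg)  # every player averages 0.5: pairwise dominance, no global ranking
```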

That’s true. Competitive fitness functions can have that property. But it doesn’t actually matter: because that’s not what’s happening in an evolutionary process. He’s pulling the same old trick that he played in the non-competitive case: he’s averaging out the differences. In a given situation, a competitor does not have to beat every other possible competitor. It does not have to be the best possible competitor in every possible situation. It just has to be good enough.

And to make matters worse for Dembski, in an evolutionary process, you aren’t limited to picking one “best” path. Evolution allows you to explore many paths at once, and the ones that meet the “good enough” criteria will survive. That’s what speciation is. In one situation, A is better, so it “wins”. Starting from the same point, but in a slightly different environment, B is better, so it wins. Both A and B win.

You’re still selecting a better result. The fact that you can’t always select one as best doesn’t matter. And it doesn’t change the fundamental outcome, which Dembski doesn’t really address: in an evolutionary landscape, competitive fitness functions do produce a better result than random walks.

In my taxonomy of statistical errors, this is basically modifying the search space: he’s essentially arguing for properties of the search space that eliminate any advantage that can be gained by the nature of the evolutionary search algorithm. But his arguments for making those modifications have nothing to do with evolution: he’s carefully picking search spaces that have the properties he wants, even though they have fundamentally different properties from evolution.

It’s all hidden behind a lot of low-budget equations which are used to obfuscate things. (In “A Brief History of Time”, Stephen Hawking said that his publisher told him that each equation in the book would cut the readership in half. Dembski appears to have taken that idea to heart, and throws in equations even when they aren’t needed, in order to try to prevent people from actually reading through the details of the paper where this error is hidden.)

The Problem with Irreducible Complexity (revised post from blogger)

As I mentioned yesterday, I’m going to repost a few of my critiques of the bad math of the IDists, so that they’ll be here at ScienceBlogs. Here’s the first: Behe and irreducibly complexity. This isn’t quite the original blogger post; I’ve made a few clarifications and formatting fixes; but the content remains essentially the same. You can find the original post in my blogger information theory index. The original publication date was March 13, 2006.
Today, I thought I’d take on another of the intelligent design sacred cows: irreducible complexity. This is the cornerstone of some of the really bad arguments used by people like Michael Behe.
To quote Behe himself:

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have any thing to act on.

Now, to be clear and honest upfront: Behe does not claim that this is a mathematical argument. But that doesn’t mean that I don’t get to use math to shred it.
There are a ton of problems with the whole IC argument, but I’m going to take a different tack, and say that even if those other flaws weren’t there, it would still be a meaningless argument. Because from a mathematical point of view, there’s a critical, fundamental problem with the entire idea of irreducible complexity: you can’t prove that something is irreducibly complex.
This is a result of some work done by Greg Chaitin in Algorithmic Complexity Theory. A fairly nifty version of this can be found on Greg’s page.
The fundamental result is: given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S.
As usual for algorithmic information theory, the proof is in terms of computer programs, but it works beyond that; you can think of the programs as the instructions to build and/or operate an arbitrary device.
First, suppose that we have a computing system φ, which we’ll treat as a function. So φ(x) = the result of running program x on φ. x is both a program and its input data coded into a single string, so x=(c,d), where c is code, and d is data.
Now, suppose we have a formal axiomatic system, which describes the basic rules that φ operates under. We can call this FAS.
If it’s possible to tell whether you have a minimal program using the axiomatic system, then you can write a program that examines other programs and determines if they’re minimal. Even better: you can write a program that will generate a list of every possible minimal program, sorted by size.


Let’s jump aside for just a second to show how you can generate a list of every possible minimal program. Here’s a sketch of the program:

  1. First, write a program which generates every possible string of one character, then every possible string of two characters, etc., and outputs them in sequence.
  2. Connect the output of that program to another program, which checks each string that it receives as input to see if it’s a syntactically valid program for φ. If it is, it outputs it. If it isn’t, it just discards it.
  3. At this point, we’ve got a program which is generating every possible program for φ. Now, remember that we said that using FAS, we can write a program that tests an input program to determine if it’s minimal. So, we use that program to test our inputs, to see if they’re minimal. If they are, we output them; if they aren’t, we discard them. (A rough code sketch of this pipeline appears below.)
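As noted in step 3, here's a rough Python rendering of that pipeline. It's purely structural: "programs" are just strings over a toy alphabet, the syntax check is a trivial placeholder, and is_provably_minimal stands in for the hypothetical FAS-backed checker whose impossibility is the whole point of the argument below.

```python
import itertools

ALPHABET = "01"  # toy program alphabet; a real one would be richer

def all_strings():
    """Step 1: every string of length 1, then length 2, then 3, ..."""
    for length in itertools.count(1):
        for chars in itertools.product(ALPHABET, repeat=length):
            yield "".join(chars)

def is_valid_program(s: str) -> bool:
    """Step 2: syntactic check for phi. Trivial placeholder: accept everything."""
    return True

def is_provably_minimal(s: str) -> bool:
    """Step 3: the hypothetical FAS-based minimality checker. Returning False
    is only a placeholder so the sketch parses; the argument below is that no
    correct version of this function can exist, so consuming the generator
    with this stub would simply loop forever."""
    return False

def minimal_programs():
    """The full pipeline: every provably minimal program, in order of size."""
    for s in all_strings():
        if is_valid_program(s) and is_provably_minimal(s):
            yield s
```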

Now, let’s take a second, and write out the program in mathematical terms:
Remember that φ is a function modeling our computing system, FAS is the formal axiomatic system. We can describe φ as a function from a combination of program and data to an output: φ(c,d)=result.
In this case, c is the program above; d is FAS. So φ(c,FAS)=a list of minimal programs.


Now, back to the main track.
Using the program that we sketched above, given any particular length, we can run it until it generates a provably minimal program larger than that length.
Take our program, c, and our formal axiomatic system, FAS, and compute their length. Call that l(c,FAS). If we know l(c,FAS), we can run φ(c,FAS) until it generates a string longer than l(c,FAS).
Ok. Now, write a program c’ for φ that runs φ(c,FAS) until it finds a program K whose length is larger than l(c,FAS) + length(c’). c’ then outputs the same thing as φ(K).
This is the tricky part. What does this program do? It runs a program which generates a sequence of provably minimal programs. It scans through those provably minimal programs until it finds one larger than itself plus all of its data. Then it runs that program and emits its output.
So – c’ outputs the same result as a supposedly minimal program K, where K is larger than c’ and its data. But since c’ is a program which emits the same result as K, but is smaller, then K cannot be minimal.
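In the same illustrative style, here is the shape of c′. It leans on the minimal_programs() generator from the previous sketch, plus two stand-ins of my own: run() for "execute a program on φ" and own_length for l(c,FAS) + length(c′).

```python
def c_prime(minimal_programs, run, own_length):
    """Scan the stream of provably minimal programs for one longer than this
    program plus its data, then run it and emit its output. If such a K is
    found, c' reproduces K's output while being shorter than K -- so K was
    never minimal, contradicting the assumption that the checker works."""
    for K in minimal_programs():
        if len(K) > own_length:
            return run(K)
```

With the placeholder checker from the previous sketch, the scan never yields anything, which is fitting: the minimality oracle c′ relies on cannot actually be built.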
No matter what you do – no matter what kind of formal system you’ve developed for showing that something is minimal, you’re screwed. Gödel just came in and threw a wrench into the works. There is no general way to show that a system is minimal – the very idea of doing it is intrinsically contradictory.
Evil, huh?
But the point of it is quite deep. It’s not just a mathematical game. We can’t tell when something complicated is minimal. Even if we knew every relevant fact about all of the chemistry and physics that affects things, even if the world were perfectly deterministic, we can’t tell when something is as simple as it can possibly be.
So irreducible complexity is useless in an argument, because we can’t know when something is irreducibly complex.

Creationists Respond to Debunking Dembski

While perusing my sitemeter stats for the page, I noticed that I’d been linked to in a discussion at creationtalk.com. Expecting amusement, I wandered on over to see who was linking to me.
Someone linked to my index of articles debunking Dembski and Berlinski. The moderator of the creationtalk forum responded to my series of articles on information theory and Dembski with:

No offense to you or him, but his arguments kind of suck. I looked at his response to Behe on IC, and Dembski on Specified Complexity , to Behe’s he didn’t refute it, and to Dembski’s his only arguement was basically summed up to “I don’t know the definition of specified complexity oh mercy”.

For readers who remember, my critique of Behe was that the entire concept of “irreducible complexity” is mathematically meaningless. It’s true that I didn’t refute Behe, in the sense that I didn’t waste any time arguing about whether or not irreducible complexity is indicative of design: there’s no point arguing about the implications of an irreducibly complex system if, in fact, we can never recognize whether a system is irreducibly complex. Sort of like arguing about how many steps it takes to square a circle, after you’ve seen the proof that it can’t be done in a finite number of steps.
But the Dembski line is the one that’s particularly funny. Because, you see, my critique of “specified complexity” was that you can’t mathematically refute specified complexity because Dembski never defines it. In paper after paper, he uses obfuscatory presentations of information theory to define complexity, and then handwaves his way past “specification”. The reason for this is that “specification” is a meaningless term. He can’t define it: because if he did, the vacuity of the entire concept becomes obvious.
A complex system is one which contains a lot of information; in information-theory terms, that means a system which has no brief description. But specification, intuitively, means “can be described concisely”. So you wind up with two possibilities:

  1. “Specification” has a mathematical meaning, which is the opposite of “complexity”, and so “specified complexity” is a contradiction; or
  2. “Specification” is mathematically meaningless, in which case “specified complexity” is a meaningless concept in information theory.

The problem isn’t that “I don’t know the definition of specified complexity”. It’s not even that there is no definition of specified complexity. It’s that there cannot be a definition of specified complexity.
I’ll probably drag out my original Dembski and Berlinski posts tomorrow, polish them up a bit, and repost them here at ScienceBlogs.