Category Archives: Debunking Creationism

Second Law Slop from Granville Sewell

A reader sent me a link to an article by that inimitable genius of the intelligent design community, Granville Sewell. (As much as I hate to admit it, Sewell is a professor of mathematics at Texas A&M. I don’t know what his professional specialty is, but if his work in that area is anything like the dreck he produces in defense of ID, then it’s shocking that he got a faculty position, much less tenure.) Sewell wrote *yet another* one of those horrible “second law of thermodynamics” papers and submitted it *as an opinion piece* to a math journal (“The Mathematical Intelligencer”). It was, needless to say, not received well by people who actually care about quality math, and he was roundly flamed in letters in the following issue. The paper that I’m looking at is [his *defense* of the original paper against those criticisms.](http://www.iscid.org/papers/Sewell_EvolutionThermodynamics_012304.pdf)
As one might expect from one of the ISCID guys, it’s a sloppy rehash of the same old creationist arguments – mainly the usual thermodynamic crap, mixed with a bit of big numbers and a small dose of obfuscatory mathematics.

Continue reading

De-Debunking Evolutionary Algorithms

Just for fun, I’ve been doing a bit of poking around lately in evolutionary algorithms. It’s really fascinating to experiment, and see what pops out – the results can be really surprising.
There is one fascinating example for which, alas, I’ve lost the reference, but here’s the summary. Several groups have been looking at using evolutionary algorithm techniques for hardware design. (A good example of this is [Alexander Nicholson’s work](http://citeseer.ist.psu.edu/nicholson00evolution.html).) A year or so ago, I saw a talk given by a group that was doing some experiments with EAs for hardware design. One of the most interesting outcomes of their work was that for one of the solutions that their system generated, they were *completely* unable to comprehend how it worked. There was just no logical way for it to work. It included two *disconnected* sets of components – one of which wasn’t wired in *at all*. When they tried getting rid of the disconnected set, the circuit stopped working. It turned out that the evolutionary process had discovered and exploited a previously unknown bug/behavior in the FPGA they were using. You can read about this work [here](http://www.cogs.susx.ac.uk/users/adrianth/ices96/paper.html); thanks to commenters for helping me find the link!
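If you’ve never seen one, the core of an evolutionary algorithm is a surprisingly small loop. Here’s a minimal sketch of my own in Python – a toy (1+1) evolutionary strategy over 8-bit genomes, where the made-up `TARGET` constant stands in for “the circuit behaves correctly”; real hardware-evolution systems like the one above score candidates on an actual FPGA instead:

```python
import random

TARGET = 0b10110011  # hypothetical stand-in for "the circuit behaves correctly"

def fitness(genome: int) -> int:
    """Count how many of the 8 bits match the target behavior (max 8)."""
    return 8 - bin(genome ^ TARGET).count("1")

def mutate(genome: int) -> int:
    """Flip one randomly chosen bit - the only variation operator here."""
    return genome ^ (1 << random.randrange(8))

# A (1+1) evolutionary loop: keep the mutant only if it's no worse.
genome = random.randrange(256)
while fitness(genome) < 8:
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child

print(f"evolved genome: {genome:08b}")
```

The interesting (and sometimes baffling) results come from the fact that nothing in that loop cares *how* a solution works – only how it scores.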
Anyway, the point here isn’t to talk in detail about evolutionary algorithms; that’s a fascinating topic for another time. My goal for this evening is to show you yet another example of how creationists like to distort math in order to make bad arguments. The specific target this time around is an article by Eric Anderson called [“Bits, Bytes and Biology: What Evolutionary Algorithms (Don’t) Teach Us About Biology”](http://www.iscid.org/pcid/2005/4/2/anderson_bits_bytes_biology.php). This article looks at some of the work on evolutionary algorithms that came out of [the Avida project](http://dllab.caltech.edu/avida/), and tries to argue that Avida’s demonstrations of evolving “irreducibly complex” features are invalid.

Continue reading

Is Evolution Good Enough? It Beats Us.

One of the bad arguments that I’ve frequently seen from creationists
is the argument that some biological system is *too good* to be a possible result of an evolutionary process. On its face, this seems like it’s not a mathematical argument. But it actually is, and math is key to showing what the argument really is, and what’s wrong with it.

Continue reading

Berlinski responds: A Digested Debate

As a followup to yesterday’s repost of my takedown of Berlinski, today I’ll show you a digested version of the debate that ensued when Berlinski showed up to defend himself. You can see the original post and the subsequent discussion here.
It’s interesting, because it demonstrates the kinds of debating tactics that people like Berlinski use to avoid actually confronting the genuine issue of their dishonesty. The key thing to me about this is that Berlinski is a reasonably competent mathematician – but watch how he sidesteps to avoid discussing any of the actual mathematical issues.
Berlinski first emailed his response to me rather than posting it in a comment. So I posted it, along with my response. As I said above, this is a digest of the discussion; if you want to see the original debate in all its gory detail, you can follow the link above.
——-
The way the whole thing started was when Berlinski emailed me after my original post criticizing his sloppy math. He claimed that it was “impossible for him to post comments to my site,” though he never had a problem after I posted this. His email contained several points, written in Berlinski’s usual arrogant and incredibly verbose manner:
1. He didn’t make up the numbers; he’s quoting established literature: “As I have indicated on any number of occasions, the improbabilities that I cited are simply those that are cited in the literature”
2. His probability calculations were fine: “The combinatorial calculations I
made were both elementary and correct.”
3. Independence is a valid assumption in his calculations: “Given the chemical
structure of RNA, in which nucleotides are bound to a sugar phosphate backbone
but not to one another, independence with respect to template formation is not
only reasonable as an assumption, but inevitable.”
4. He never actually said there could be only one replicator: “There may be many sequences in the pre-biotic environment capable of carrying out various chemical activities.”
Of course, he was considerably more verbose than that.
This initial response is somewhat better at addressing arguments than the later ones. What makes that a really sad statement is that even this initial response doesn’t *really* address anything:
* He didn’t address the specific criticisms of his probability calculations, other than merely asserting that they were correct.
* He doesn’t address questions about the required sizes of replicating molecules, other than asserting that his minimum length is correct, and attributing the number to someone else. (While neglecting to mention that there are *numerous* predictions of minimum length, and the one he cited is the longest.)
* He doesn’t explain why, even though he doesn’t deny that there may have been many potential replicators, his probability calculation is based on the assumption that there is *exactly one*. As I said in my original response to him: in your space of 10^60 alleged possibilities, there may be 1 replicator; there may be 10^40. By not addressing that, you make your probability calculation utterly worthless. (See the sketch after this list for just how much that assumption matters.)
* He doesn’t address his nonsensical requirement for a “template” for a replicator. The idea of the “template” is basically that for a replicator to copy itself, it needs to have a unique molecule called a “template” that it can copy itself onto. It can’t replicate onto anything *but* a template, and it can’t create the template itself. The “template” is a totally independent chemical, but there is only one possible template that a replicator can copy itself onto. He doesn’t address that point *at all* in his response.
* He doesn’t address the validity of his assumption that all nucleotide chains of the same length are equally likely.
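Here’s the sketch promised above: the entire “calculation”, with the number of replicators left as the free parameter it actually is. The candidate counts in the tuple are purely illustrative – nobody knows the real number, which is exactly the problem:

```python
space = 4 ** 100  # all nucleotide chains of length 100: ~1.6e60

# Nobody knows how many of those chains can replicate; try a few guesses.
for replicators in (1, 10**20, 10**40):
    print(f"{replicators:.0e} replicators -> P(random chain replicates) = {replicators / space:.1e}")
```

Depending on the guess, the “impossible” odds swing from 1 in 10^60 to 1 in 10^20 – forty orders of magnitude riding on an assumption he never defends.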
Berlinski responded, in absolutely classic style. One of the really noteworthy things about Berlinski’s writing style is its incredible pomposity, and on that count his response did not disappoint; with respect to content, however, it was quite sad. I have to quote the first several lines verbatim, to give you a sense of what I’m talking about when I say he’s pompous:
>Paris
>6 April, 2006
>
>I have corrected a few trivial spelling errors in your original posting, and I
>have taken the liberty of numbering comments:
>
>I discuss these points seriatim:
You see, we *really* needed to know that he was in Paris. Crucially important to his arguments to make sure that we realize that! And I’m a bad speller, which is obviously a very important issue in the discussion. And look, he can use Latin words for absolutely no good reason!
He spends fifty lines of prose on the issue of whether or not 100 bases is the correct minimum possible length for a replicator. Those fifty lines come down to “I have one source that says that length, so nyah!” No acknowledgment that there are other sources; no reason for *why* this particular source must be correct. Just lots and lots of wordy prose.
He responds to the question about the number of replicators by sidestepping. No math, no science; just evading the question:
>2a) On the contrary. Following Arrhenius, I entertain the possibility that
>sequence specificity may not, after all, be a necessary condition for
>demonstrable ligase activity — or any other biological function, for that
>matter. I observed — correctly, of course — that all out evidence is against
>it. All evidence – meaning laboratory evidence; all evidence – meaning our
>common experience with sequence specificity in linguistics or in any other
>field in which an alphabet of words gives rise to a very large sample space in
>which meaningful sequences are strongly non-generic – the space of all
>proteins, for example.
The template stuff? More sidestepping. He rambles a bit, cites several different sources, and asserts that he’s correct. The basic idea of his response is: the RNA-world hypothesis assumes Watson-Crick base pairing replication, which needs a template. And the reason that it needs to be Watson-Crick is because anything else is too slow and too error prone. But why does speed matter, if there’s no competition? And *of course* the very first replicator would be error prone! Error correction is not something that we would suppose would happen spontaneously and immediately as the first molecule started self-replicating. Error correction is something that would be *selected for* by evolution *after* replication and competition had been established.
Then he sidesteps some more, by playing linguistic games. I referred to the chemicals from which an initial self-replicator developed as “the precursors of a replicator”; he criticizes that phrasing. That’s his entire response.
And finally, we get to independence. It’s worth quoting him again, to show his tactics:
>There remains the issue of independence. Independence is, of course, the de
>facto hypothesis in probability calculations; and in the case of pre-biotic
>chemistry, strongly supported by the chemical facts. You are not apt to
>dismiss, I suppose, the hypothesis that if two coins are flipped the odds in
>favor if seeing two heads is one in four on the grounds that, who knows?, the
>coins might be biased. Who knows? They might be. But the burden of
>demonstrating this falls on you.
One other quote, to give you more of the flavor of a debate with Berlinski:
>5a) There are two issues here: The first is the provenance of my argument; the
>second, my endorsement of its validity. You have carelessly assumed that
>arguments I drew from the literature were my own invention. This is untrue. I
>expect you to correct this misunderstanding as a matter of scholarly probity.
>
>As for the second point, it goes without saying that I endorsed the arguments
>that I cited. Why on earth would I have cited them otherwise?
I really love that quote. Such delightful fake indignation; how *dare* I accuse him of fabricating arguments! Even though he *did* fabricate them. The fake anger allows him to avoid actually *discussing* his arguments.
After that, it descends into seemingly endless repetition. It’s just more of the “nyah nyah I’m right” stuff, without actually addressing the criticism. There’s always a way to sidestep the real issue, by either using excess wordiness to distract people, or faking indignation that anyone would dare to question anything so obvious!
My response to that is short enough that I’ll just quote it, rather than redigesting it:
>As I’ve said before, I think that there are a few kinds of fundamental errors
>that you make repeatedly; and I don’t think your comments really address them
>in a meaningful way. I’m going to keep this as short as I can; I don’t like
>wasting time rehashing the same points over and over again.
>
>With regard to the basic numbers that you use in your probability calculations:
>no probability calculation is any better than the quality of the numbers that
>get put into it. As you admit, no one knows the correct length of a minimum
>replicator. And you admit that no one has any idea how many replicators of
>minimum or close to minimum length there are – you make a non-mathematical
>argument that there can’t be many. But there’s no particular reason to believe
>that the actual number is anywhere close to one. A small number of the possible
>patterns of minimum length? No problem. *One*? No way, sorry. You need to make
>a better argument to support eliminating 10^60 – 1 values. (Pulling out my old
>favorite, recursive function theory: the set of valid turing machine programs
>is a space very similar to the set of valid RNA sequences; there are numerous
>equally valid and correct universal turing machine programs at or close to the
>minimum length. The majority of randomly generated programs – the *vast*
>majority of randomly generated programs – are invalid. But the number of valid
>ones is still quite large.)
>
>Your template argument is, to be blunt, silly. No, independence is not the de
>facto hypothesis, at least not in the sense that you’re claiming. You do not
>get to go into a probability calculation, say “I don’t know the details of how
>this works, and therefore I can assume these events are independent.” You need
>to eliminate dependence. In the case of some kind of “pool” of pre-biotic
>polymers and fragments (which is what I meant by precursors), the chemical
>reactions occurring are not occurring in isolation. There are numerous kinds of
>interactions going on in a chemically active environment. You don’t get to just
>assume that those chemical interactions have no effect. It’s entirely
>reasonable to believe that there is a relationship between the chains that form
>in such an environment; if there’s a chance of dependence, you cannot just
>assume independence. But again – you just cook the numbers and use the
>assumptions that suit the argument you want to make.
———-
The rest of the debate was more repetition. Some selected bits:
>No matter how many times I offer a clear and well-supported answers to certain
>criticisms of my essays, those very same criticisms tend to reappear in this
>discussion, strong and vigorous as an octopus.
>
>1 No one knows the minimum ribozyme length for demonstrable replicator
>activity. The figure of the 100 base pairs required for what Arrhenius calls
>”demonstrable ligase activity,” is known. No conceivable purpose is gained from
>blurring this distinction.
>
>Does it follow, given a sample space containing 10^60 polynucleotides of 100
>NT’s in length, that the odds in favor of finding any specific polynucleotide
>is one in 10^60?
>Of course it does. It follows as simple mathematical fact, just as it follows
>as simple mathematical fact that the odds in favor of pulling any particular
>card from a deck of cards is one in fifty two.
>Is it possible that within a random ensemble of pre-biotic polynucleotides
>there may be more than one replicator?
>Of course it is possible. Whoever suggested the contrary?
This is a great example of Berlinski’s argument style: very arrogant argument by assertion, trying to throw as much text as possible at things in order to confuse them.
The issues we were allegedly “discussing” here were whether or not the space of nucleotide chains of a given length could be assumed to be perfectly uniform, and whether or not it made sense to assert that there was only *one* replicator in that space of 10^60 possible chains.
As you can see, his response to the issue of distribution is basically shouting: “**Of course** it’s uniform, any moron can see that!”.
Except that it *isn’t* uniform. In fact, quite a number of chains of length 100 *are impossible*. It’s a matter of geometry: the different chains take different shapes depending on their constituents, and many of the possible chains are geometrically impossible in three dimensions. How many? No one is sure; protein folding is still a hard problem. Given our current level of knowledge, figuring out the shape of a protein that we *know* exists is still very difficult.
And his response to the criticism of his claim that there is exactly one replicator in that space? To sidestep it by claiming that he never said that. Of course, he calculated his probability using that as an assumption, but he never explicitly *said* it.
His next response opened with a really wonderful example of his style: pure pedantry that avoids actually *discussing* the criticisms of his points.
>>The point is that we’re talking about some kind of pool of active chemicals
>>reacting with one another and forming chains ….
>
>What you are talking about is difficult to say. What molecular biologists are
>talking about is a) a random pool of beta D-nucleotides; and b) a random
>ensemble of polynucleotides. The polynucleotides form a random ensemble because
>chain polymerization is not sequence-specific.
>
>>The set of very long chains that form is probably not uniform ….
>
>Sets are neither uniform nor non-uniform. It is probability distributions that
>are uniform. Given a) and b) above, one has a classical sampling with
>replacement model in the theory of probability, and thus a uniform and discrete
>probability measure.
Ok. Anyone out there who could read this argument, and *not* know what I was talking about when I said “some kind of pool of active chemicals reacting with one another and forming chains”?
How about anyone who thinks that my use of the word “set” in the quote above is the least bit unclear? Anyone who thinks that “the set of very long chains that form is probably not uniform” is the *least* bit ambiguous?
No, I thought not. The point, as usual, is to avoid actually *addressing* difficult arguments. When confronted with something hard to answer, you look for a good distraction, like picking on grammar or word choice, so that you can pretend that the reason you can’t answer an argument is that the argument didn’t make sense. So he focuses on the fact that I didn’t use the word “set” in its strict mathematical sense, and then re-asserts his argument.
Another example. At one point in the argument, disputing his assertion of independent probabilities for his “template” and his “replicator”, I said the following:
>I’m *not* saying that in general, you can’t make assumptions of independence.
>What I’m saying is what *any* decent mathematician would say: to paraphrase my
>first semester probability book: “independence between two events is a valid
>assumption *if and only if* there is no known interaction between the events.”
>That is the *definition* of independence…
Berlinski’s response? Again, pure distractive pedantry:
>If you are disposed to offer advice about mathematics, use the language, and
>employ the discipline, common to mathematics itself. What you have offered is
>an informal remark, and not a definition. The correct definition is as follows:
>Two events A and B are independent if P(AB) = P(A)P(B). As a methodological
>stricture, the remark you have offered is, moreover, absurd inasmuch as some
>interaction between events can never be ruled out a priori, at least in the
>physical sciences.
Does this address my criticism? No.
The Bayesian rules for combining probabilities say “If A and B are independent, then the probability of AB is the probability of A times the probability of B”. You *can* invert that definition, and use it to show that two events are independent, by showing that the probability of their occurring together is
the product of their individual probabilities. What he’s doing up there is a pedantic repetition of a textbook definition in the midst of some arrogant posturing. But since he’s claiming to be talking mathematically, let’s look at what he says *mathematically*. I’m asserting that you need to show that events are independent if you want to treat them as independent in a probability calculation. He responds by saying I’m not being mathematical; and spits out the textbook definition. So let’s put the two together, to see what Berlinksi is arguing mathematically:
>We can assume that two events A and B are independent, and that their joint
>probability can be computed as P(AB) = P(A)×P(B), if and only if
>P(AB) = P(A)×P(B).
Not a very useful definition, eh?
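To make the underlying point concrete, here’s a tiny simulation of mine with two obviously *dependent* events: draw a random suit, let A be “it’s a heart” and B be “it’s red”. Blindly applying the product rule gets the joint probability wrong by a factor of two, and nothing in the formula itself warns you – you have to actually check the dependence:

```python
import random

trials = 100_000
count_a = count_b = count_ab = 0

for _ in range(trials):
    suit = random.choice(["hearts", "diamonds", "clubs", "spades"])
    a = suit == "hearts"                 # P(A) = 1/4
    b = suit in ("hearts", "diamonds")   # P(B) = 1/2
    count_a += a
    count_b += b
    count_ab += a and b                  # A and B together = "it's a heart"

p_a, p_b, p_ab = count_a / trials, count_b / trials, count_ab / trials
print(f"P(A)*P(B) = {p_a * p_b:.3f}, but P(AB) = {p_ab:.3f}")
# ~0.125 vs ~0.250: assuming independence silently halves the real probability.
```

That’s the whole issue in miniature: the textbook equation tells you what independence *is*; it doesn’t tell you that any particular pair of events *has* it.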
And his response to my criticism of that?
>Paris
>David Berlinski
>
>I am quite sure that I have outstayed my welcome. I’m more than happy to let
>you have the last words. Thank you for allowing me to post my own comments.
>
>DB
And that was the end of the debate.
Sad, isn’t it?

Bad Math from David Berlinski

I’m away on vacation this week, so this is a repost of one of the early GM/BM entries from when it was on Blogger. As usual, I’ve revised it slightly. Berlinski actually showed up and responded; a digest of the back-and-forth discussion is scheduled to appear here later this week.
——————————-
In my never-ending quest for bad math to mock, I was taking a look at the Discovery Institute’s website, where I found an essay, On the Origin of Life, by David Berlinski. Bad math? Oh, yeah. Bad, sloppy, crappy math. Some of it is just duplication of things I’ve criticized before, but there are a few different tricks in this mess.
Before I jump in to look at a bit of it, I’d like to point out a general technique that’s used in this article. It’s *very* wordy. It rambles, it wanders off on tangents, it mixes quotes from various people into its argument in superfluous ways. The point of this seems to be to keep you, the reader, somewhat off balance; it’s harder to analyze an argument when the argument is so scattered around, and it’s easier to miss errors when the steps of the argument are separated by large quantities of cutesy writing. Because of this, the section I’m going to quote is fairly long; it’s the shortest I could find that actually contained enough of the argument I want to talk about to be coherent.
>The historical task assigned to this era is a double one: forming chains of
>nucleic acids from nucleotides, and discovering among them those capable of
>reproducing themselves. Without the first, there is no RNA; and without the
>second, there is no life.
>
>In living systems, polymerization or chain-formation proceeds by means of the
>cell’s invaluable enzymes. But in the grim inhospitable pre-biotic, no enzymes
>were available. And so chemists have assigned their task to various inorganic
>catalysts. J.P. Ferris and G. Ertem, for instance, have reported that activated
>nucleotides bond covalently when embedded on the surface of montmorillonite, a
>kind of clay. This example, combining technical complexity with general
>inconclusiveness, may stand for many others.
>
>In any event, polymerization having been concluded–by whatever means–the result
>was (in the words of Gerald Joyce and Leslie Orgel) “a random ensemble of
>polynucleotide sequences”: long molecules emerging from short ones, like fronds
>on the surface of a pond. Among these fronds, nature is said to have discovered
>a self-replicating molecule. But how?
>
>Darwinian evolution is plainly unavailing in this exercise or that era, since
>Darwinian evolution begins with self-replication, and self-replication is
>precisely what needs to be explained. But if Darwinian evolution is unavailing,
>so, too, is chemistry. The fronds comprise “a random ensemble of polynucleotide
>sequences” (emphasis added); but no principle of organic chemistry suggests
>that aimless encounters among nucleic acids must lead to a chain capable of
>self-replication.
>
>If chemistry is unavailing and Darwin indisposed, what is left as a mechanism?
>The evolutionary biologist’s finest friend: sheer dumb luck.
>
>Was nature lucky? It depends on the payoff and the odds. The payoff is clear:
>an ancestral form of RNA capable of replication. Without that payoff, there is
>no life, and obviously, at some point, the payoff paid off. The question is the
>odds.
>
>For the moment, no one knows how precisely to compute those odds, if only
>because within the laboratory, no one has conducted an experiment leading to a
>self-replicating ribozyme. But the minimum length or “sequence” that is needed
>for a contemporary ribozyme to undertake what the distinguished geochemist
>Gustaf Arrhenius calls “demonstrated ligase activity” is known. It is roughly
>100 nucleotides.
>
>Whereupon, just as one might expect, things blow up very quickly. As Arrhenius
>notes, there are 4^100 or roughly 10^60 nucleotide sequences that are 100
>nucleotides in length. This is an unfathomably large number. It exceeds the
>number of atoms contained in the universe, as well as the age of the universe
>in seconds. If the odds in favor of self-replication are 1 in 10^60, no betting
>man would take them, no matter how attractive the payoff, and neither
>presumably would nature.
>
>”Solace from the tyranny of nucleotide combinatorials,” Arrhenius
>remarks in discussing this very point, “is sought in the feeling
>that strict sequence specificity may not be required through all
>the domains of a functional oligmer, thus making a large number of
>library items eligible for participation in the construction of the
>ultimate functional entity.” Allow me to translate: why assume that
>self-replicating sequences are apt to be rare just because they are long? They
>might have been quite common.
>They might well have been. And yet all experience is against it. Why
>should self-replicating RNA molecules have been common 3.6 billion
>years ago when they are impossible to discern under laboratory
>conditions today? No one, for that matter, has ever seen a ribozyme
>capable of any form of catalytic action that is not very specific in its
>sequence and thus unlike even closely related sequences. No one has ever seen a
>ribozyme able to undertake chemical action without a suite of enzymes in
>attendance. No one has ever seen anything like it.
>
>The odds, then, are daunting; and when considered realistically, they are even
>worse than this already alarming account might suggest. The discovery of a
>single molecule with the power to initiate replication would hardly be
>sufficient to establish replication. What template would it replicate against?
>We need, in other words, at least two, causing the odds of their joint
>discovery to increase from 1 in 10^60 to 1 in 10^120. Those two sequences would
>have been needed in roughly the same place. And at the same time. And organized
>in such a way as to favor base pairing. And somehow held in place. And buffered
>against competing reactions. And productive enough so that their duplicates
>would not at once vanish in the soundless sea.
>
>In contemplating the discovery by chance of two RNA sequences a mere 40
>nucleotides in length, Joyce and Orgel concluded that the requisite “library”
>would require 10^48 possible sequences. Given the weight of RNA, they observed
>gloomily, the relevant sample space would exceed the mass of the earth. And
>this is the same Leslie Orgel, it will be remembered, who observed that “it was
>almost certain that there once was an RNA world.”
>
>To the accumulating agenda of assumptions, then, let us add two more: that
>without enzymes, nucleotides were somehow formed into chains, and that by means
>we cannot duplicate in the laboratory, a pre-biotic molecule discovered how to
>reproduce itself.
Ok. Lots of stuff there, huh? Let’s boil it down.
The basic argument is the good old *big numbers* argument. Berlinski wants to come up with some really whoppingly big numbers to make things look bad. So, he makes his first big-numbers appeal by looking at polymer chains that could have self-replicated. He argues (not terribly well) that the minimum length for a self-replicating polymer is 100 nucleotides. From this, he then argues that the odds of creating a self-replicating chain are 1 in 10^60.
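The count itself is trivial to verify – four bases, chains of length 100 – and it’s the only part of the calculation that’s right. A quick sanity check:

```python
space = 4 ** 100  # four bases, chains of length 100
print(f"4^100 = {space:.3e}")  # ~1.607e+60: the source of the "1 in 10^60"
```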
Wow, that’s a big number. He goes to some trouble to stress just what a whopping big number it is. Yes Dave, it’s a big number. In fact, it’s not just a big number, it’s a bloody *huge* number. The shame of it is, it’s *wrong*; and what’s worse, he *knows* it’s wrong. Right after he introduces it, he quotes the geochemist he cites, who points out that those odds are nonsense, because there’s probably more than one replicator in there. In fact, we can be pretty certain that there’s more than one: we know lots of ways of modifying RNA/DNA chains that don’t affect their ability to replicate. How many of those 10^60 cases self-replicate? We don’t know. Berlinski just handwaves. Let’s look again at how he works around that:
>They might well have been. And yet all experience is against it. Why should
>self-replicating RNA molecules have been common 3.6 billion years ago when they
>are impossible to discern under laboratory conditions today? No one, for that
>matter, has ever seen a ribozyme capable of any form of catalytic action that
>is not very specific in its sequence and thus unlike even closely related
>sequences. No one has ever seen a ribozyme able to undertake chemical action
>without a suite of enzymes in attendance. No one has ever seen anything like
>it.
So – first, he takes a jump away from the math, so that he can wave his hands around. Then he tries to strengthen the appeal to big numbers by pointing out that we don’t see simple self-replicators in nature today.
Remember what I said in my post about Professor Culshaw, the HIV-AIDS denialist? You can’t apply a mathematical model designed for one environment in another environment without changing the model to match the change in the environment. The fact that it’s damned unlikely that we’ll see new simple self-replicators showing up *today* is irrelevant to discussing the odds of them showing up billions of years ago. Why? Because the environment is different. In the days when a first self-replicator developed, there was *no competition for resources*. Today, any time you have the set of precursors to a replicator, they’re part of a highly active, competitive biological system.
And then, he goes back to try to re-invoke the big-numbers argument by making it look even worse; and he does it by using an absolutely splendid example of bad combinatorics:
>The odds, then, are daunting; and when considered realistically, they are even
>worse than this already alarming account might suggest. The discovery of a
>single molecule with the power to initiate replication would hardly be
>sufficient to establish replication. What template would it replicate against?
>We need, in other words, at least two, causing the odds of their joint
>discovery to increase from 1 in 1060 to 1 in 10120. Those
>two sequences would have been needed in roughly the same place. And at the same
>time. And organized in such a way as to favor base pairing. And somehow held in
>place. And buffered against competing reactions. And productive enough so that
>their duplicates would not at once vanish in the soundless sea.
The odds of one self-replicating molecule developing out of a soup of pre-biotic chemicals are, according to Berlinski, 1 in 10^60. But then, the replicator can’t replicate unless it has a “template” to replicate against; and the odds of that are also, he claims, 1 in 10^60. Therefore, the probability of having both the replicator and the “template” is the product of the two probabilities, or 1 in 10^120.
The problem? Oh, no biggie. Just a totally invalid assumption of independence. The product rule for combining probabilities only works if the two events are completely independent. They aren’t. If you’ve got a soup of nucleotides and polymers, and you get a self-replicating polymer, it’s in an environment where the “target template” is quite likely to occur. So the events are *not* independent – and you can’t use the product rule.
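Spelled out, the correct general form makes the mistake obvious (this is a sketch of the probability rule, not a claim about the actual chemistry):

```latex
P(\text{replicator} \cap \text{template})
  = P(\text{replicator}) \cdot P(\text{template} \mid \text{replicator})
```

If the same chemistry that produces a replicator also tends to produce complementary strands in the same pool, then P(template | replicator) could be close to 1, and the joint probability stays near his own 1 in 10^60 instead of plummeting to 1 in 10^120. Multiplying the two marginal probabilities is only legitimate when the conditional probability equals the unconditional one – which is exactly the independence he never established.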
Oh, and he repeats the same error he made before: assuming that there’s exactly *one* “template” molecule that can be used for replication.
And even that is just looking at a tiny aspect of the mathematical part: the entire argument about a template is a strawman; no one argues that the earliest self-replicator could only replicate by finding another perfectly matched molecule of exactly the same size that it could reshape into a duplicate of itself.
Finally, he rehashes his invalid-model argument: because we don’t see primitive self-replicators in today’s environment, that must mean that they were unlikely in a pre-biotic environment.
This is what mathematicians call “slop”. A pile of bad reasoning based on fake numbers pulled out of thin air, leading to assertions based on the presence of really big numbers. All of which is what you expect from an argument that deliberately uses wrong numbers, invalid combinatorics, and misapplication of models. It’s hard to imagine what else he could have gotten wrong.

Big Numbers: Bad Anti-Evolution Crap from anncoulter.com

A reader sent me a copy of an article posted to “chat.anncoulter.com”. I can’t see the original article; anncoulter.com is a subscriber-only site, and I’ll be damned before I *register* with that site.
Fortunately, the reader sent me the entire article. It’s another one of those stupid attempts by creationists to assemble some *really big* numbers in order to “prove” that evolution is impossible.
>One More Calculation
>
>The following is a calculation, based entirely on numbers provided by
>Darwinists themselves, of the number of small selective steps evolution would
>have to make to evolve a new species from a previously existing one. The
>argument appears in physicist Lee Spetner’s book “Not By Chance.”
>
>At the end of this post — by “popular demand” — I will post a bibliography of
>suggested reading on evolution and ID.
>
>**********************************************
>
>Problem: Calculate the chances of a new species emerging from an earlier one.
>
>What We Need to Know:
>
>(1) the chance of getting a mutation;
>(2) the fraction of those mutations that provide a selective advantage (because
>many mutations are likely either to be injurious or irrelevant to the
>organism);
>(3) the number of replications in each step of the chain of cumulative selection;
>(4) the number of those steps needed to achieve a new species.
>
>If we get the values for the above parameters, we can calculate the chance of
>evolving a new species through Darwinian means.
Fairly typical so far. Not *good* mind you, but typical. Of course, it’s already going wrong. But since the interesting stuff is a bit later, I won’t waste my time on the intro 🙂
Right after this is where this version of this argument turns particularly sad. The author doesn’t just make the usual big-numbers argument; they recognize that the argument is weak, so they need to go through some rather elaborate setup in order to stack things to produce an even more unreasonably large phony number.
It’s not just a big-numbers argument; it’s a big-numbers *strawman* argument.
>Assumptions:
>
>(1) we will reckon the odds of evolving a new horse species from an earlier
>horse species.
>
>(2) we assume only random copying errors as the source of Darwinian variation.
>Any other source of variation — transposition, e.g., — is non-random and
>therefore NON-DARWINIAN.
This is a reasonable assumption, you see, because we’re not arguing against *evolution*; we’re arguing against the *strawman* “Darwinism”, which arbitrarily excludes real live observed sources of variation because, while it might be something that really happens, and it might be part of real evolution, it’s not part of what we’re going to call “Darwinism”.
Really, there are a lot of different sources of variation/mutation. At a minimum, there are point mutations, deletions (a section getting lost while copying), insertions (something getting inserted into a sequence during copying), transpositions (something getting moved), reversals (something getting flipped so it appears in the reverse order), fusions (things that were separate getting merged – e.g., chromosomes in humans vs. in chimps), and fissions (things that were a single unit getting split).
In fact, this restriction *a priori* makes horse evolution impossible, because the modern species of horses have *different numbers of chromosomes*. Since the only change he allows is point mutation, there is no way that his strawman Darwinism can do the job. Which, of course, is the point: he *wants* to make it impossible.
>(3) the average mutation rate for animals is 1 error every 10^10 replications
>(Darnell, 1986, “Molecular Cell Biology”)
Nice number; shame he doesn’t understand what it *means*. That’s what happens when you don’t bother to actually look at the *units*.
So, let’s double-check the number, and discover the unit. Wikipedia reports the human mutation rate as 1 in 10^8 mutations *per nucleotide* per generation.
He’s going to build his argument on 1 mutation in every 10^10 reproductions *of an animal*, when the rate is *per nucleotide*, *per cell generation*.
So what does that tell us if we’re looking at horses? Well, according to a research proposal to sequence the domestic horse genome, it consists of 3×10^9 nucleotides. So if we go by Wikipedia’s estimate of the mutation rate, we’d expect somewhere around 30 mutations per individual *in the fertilized egg cell*. Even using the numbers from the author of this wretched piece, we’d still expect one out of every three horses to carry at least one unique mutation.
The fact is, pretty damned nearly every living thing on earth – each and every human being, every animal, every plant – contains some unique mutations, some unique variations in its genetic code. Even when you start with a really big number – like one error in every 10^10 copies – it adds up.
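Here’s the arithmetic from the last two paragraphs, spelled out (genome size from the sequencing proposal, both mutation rates as quoted above):

```python
genome_size = 3e9       # nucleotides in the horse genome, per the sequencing proposal
rate_per_nt = 1e-8      # Wikipedia's rate: mutations per nucleotide per generation
rate_misread = 1e-10    # the article's figure, misread as "per animal"

print(f"expected new mutations per foal: {genome_size * rate_per_nt:.0f}")   # ~30
print(f"even at the article's own rate:  {genome_size * rate_misread:.1f}")  # ~0.3
# 0.3 mutations per birth means roughly one horse in three carries a brand-new mutation.
```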
>(4) To be part of a typical evolutionary step, the mutation must: (a) have a
>positive selective value; (b) add a little information to the genome ((b) is a
>new insight from information theory. A new species would be distinguished from
>the old one by reason of new abilities or new characteristics. New
>characteristics come from novel organs or novel proteins that didn’t exist in
>the older organism; novel proteins come from additions to the original genetic
>code. Additions to the genetic code represent new information in the genome).
I’ve ripped apart enough bullshit information-theory arguments, so I won’t spend much time on that, other than to point out that *deletion* is as much of a mutation, with as much potential for advantage, as *addition*.
A mutation also does not need to have an immediate positive selective value. It just needs to *not* have negative value, and it can propagate through a subset of the population. *Eventually*, you’d usually (but not always! drift *is* an observed phenomenon) expect to see some selective value. But that doesn’t mean that *at the moment the mutation occurs*, it must represent an *immediate* advantage for the individual.
>(5) We will also assume that the minimum mutation — a point mutation — is
>sufficient to cause (a) and (b). We don’t know if this is n fact true. We don’t
>know if real mutations that presumably offer positive selective value and small
>information increases can always be of minimum size. But we shall assume so
>because it not only makes the calculation possible, but it also makes the
>calculation consistently Darwinian. Darwinians assume that change occurs over
>time through the accumulation of small mutations. That’s what we shall assume,
>as well.
Note the continued use of the strawman. We’re not talking about evolution here; We’re talking about *Darwinism* as defined by the author. Reality be damned; if it doesn’t fit his Darwinism strawman, then it’s not worth thinking about.
>Q: How many small, selective steps would we need to make a new species?
>
>A: Clearly, the smaller the steps, the more of them we would need. A very
>famous Darwinian, G. Ledyard Stebbins, estimated that to get to a new species
>from an older species would take about 500 steps (1966, “Processes of Organic
>Evolution”).
>
>So we will accept the opinion of G. Ledyard Stebbins: It will take about 500
>steps to get a new species.
Gotta love the up-to-date references, eh? Considering how much the study of genetics has advanced in the last *40 years*, it would be nice to cite a book younger than *me*.
But hey, no biggie. 500 selective steps between speciation events? Sounds reasonable. That’s 500 generations. Sure, we’ve seen speciation in fewer than 500 generations, but it seems like a reasonable guesstimate. (But do notice the continued strawman; he reiterates the “small steps” gibberish.)
>Q: How many births would there be in a typical small step of evolution?
>
>A: About 50 million births / evolutionary step. Here’s why:
>
>George Gaylord Simpson, a well known paleontologist and an authority on horse
>evolution estimated that the whole of horse evolution took about 65 million
>years. He also estimated there were about 1.5 trillion births in the horse
>line. How many of these 1.5 trillion births could we say represented 1 step in
>evolution? Experts claim the modern horse went through 10-15 genera. If we say
>the horse line went through about 5 species / genus, then the horse line went
>through about 60 species (that’s about 1 million years per species). That would
>make about 25 billion births / species. If we take 25 billion and divided it by
>the 500 steps per species transition, we get 50 million births / evolutionary
>step.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
Here we see some really stupid mathematical gibberish. This is really pure doubletalk – an attempt to generate *another* large number to add into the mix. There’s no purpose in it: we’ve *already* worked out the mutation rate and the number of mutations per speciation. This gibberish is an alternate formulation of essentially the same thing: a way of gauging how long it will take to go through a sequence of changes leading to speciation. So we’re adding a redundant (and meaningless) factor in order to inflate the numbers.
>Q: What’s the chance that a mutation in a particular nucleotide will occur and
>take over the population in one evolutionary step?
>
>A: The chance of a mutation in a specific nucleotide in one birth is 10^-10.
>Since there are 50 million births / evolutionary step, the chance of getting at
>least one mutation in the whole step is 50 million x 10^-10, or 1-in-200
>(1/200). For the sake of argument we can assume that there is an equal chance
>that the base will change to any one of the other three (not exactly true in
>the real world, but we can assume to make the calculation easier – you’ll see
>that this assumption won’t influence things so much in the final calculation);
>so the chance of getting specific change in a specific nucleotide is 1/3rd of
>1/200 or 1-in-600 (1/600).
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
>1/600 chance of a point mutation taking over the population in 1 evolutionary step (derived from numbers by Darnell in his standard reference book)
This is pure gibberish. It’s so far away from being a valid model of things that it’s laughable. But worse, again, it’s redundant. Because we’ve already introduced a factor based on the mutation rate; and then we’ve introduced a factor which was an alternative formulation of the mutation rate; and now, we’re introducing a *third* factor which is an even *worse* alternative formulation of the mutation rate.
>Q: What would the “selective value” have to be of each mutation?
>
>A: According to the population-genetics work of Sir Ronald Fisher, the chances
>of survival for a mutant is about 2 x (selective value).
>”Selective Value” is a number that is ASSIGNED by a researcher to a species in
>order to be able to quantify in some way its apparent fitness. Selective Value
>is the fraction by which its average number of surviving offspring exceeds that
>of the population norm. For example, a mutant whose average number of surviving
>offspring is 0.1% higher than the rest of the population would have a Selective
>Value = 0.1% (or 0.001). If the norm in the population were such that 1000
>offspring usually survived from the original non-mutated organism, 1001
>offspring would usually survive from the mutated one. Of course, in real life,
>we have no idea how many offspring will, IN FACT, survive any particular
>organism – which is the reason that Survival Value is not something that you go
>into the jungle and “measure.” It’s a special number that is ASSIGNED to a
>species; not MEASURED in it (like a species’ average height, weight, etc.,
>which are objective attributes that, indeed, can we can measure).
>
>Fisher’s statistical work showed that a mutant with a Selective Value of 1% has
>a 2% chance of survival in a large population. A chance of 2-in-100 is that
>same as a chance of 1-in-50. If the Selective Value were 1/10th of that, or
>0.1%, the chance would be 1/10th of 2%, or about 0.2%, or 1-in-500. If the
>Selective Value were 1/100th of 1%, the chance of survival would be 1/100th of
>2%, or 0.02%, or 1-in-5000.
>
>We need a Selection Value for our calculation because it tells us what the
>chances are that a mutated species will survive. What number should we use? In
>the opinion of George Gaylord Simpson, a frequent value is 0.1%. So we shall
>use that number for our calculation. Remember, that’s a 1-in-500 chance of
>survival.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
>1/600 chance of a point mutation taking over the population in 1 evolutionary
>step (derived from numbers by Darnell in his standard reference book)
>1/500 chance that a mutant will survive (as per G. G. Simpson)
And, once again, *another* meaningless, and partially redundant factor added in.
Why meaningless? Because this isn’t how selection works. He’s using his Darwinist strawman again: everything must have an *immediate*, *measurable* survival advantage. He also implicitly assumes that mutation is *rare*; that is, that a “mutant” has a 1-in-500 chance of seeing its mutated genes propagate and “take over” the population. That’s not at all how things work. *Every* individual is a mutant. In reality, *every* *single* *individual* possesses some number of unique mutations. If an individual reproduces, and a mutation doesn’t *reduce* the likelihood of its offspring’s survival, the mutation will propagate through the generations to some portion of the population. The odds of a mutation propagating to some reasonable portion of the population over a number of generations are not 1 in 500; they’re quite a lot better.
Why partially redundant? Because this, once again, factors in something which is based on the rate of mutation propagating through the population. We’ve already included that twice; this is a *third* variation on it.
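Putting the article’s own numbers together shows just how silly the “wait for the lucky mutant” framing is (births per step from the Simpson-derived figure above, mutations per birth from the per-nucleotide rate worked out earlier):

```python
births_per_step = 50_000_000  # the article's own Simpson-derived figure
mutations_per_birth = 0.3     # 3e9 nucleotides times the article's 1e-10 per-nucleotide rate

print(f"new mutations per evolutionary step: {births_per_step * mutations_per_birth:.1e}")
# ~1.5e7: every "step" introduces millions of fresh variants for selection
# to act on, not one rare mutant whose survival everything hinges on.
```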
>Already, however, the numbers don’t crunch all that well for evolution.
>
>Remember, probabilities multiply. So the probability, for example, that a point
>mutation will BOTH occur AND allow the mutant to survive is the product of the
>probabilities of each, or 1/600 x 1/500 = 1/300,000. Not an impossible number,
>to be sure, but it’s not encouraging either … and it’s going to get a LOT
>worse. Why? Because…
**Bzzt. Bad math alert!**
No, these numbers *do not multiply*. Probabilities multiply *when they are independent*. These are *not* independent factors.
>V.
>
>Q. What are the chances that (a) a point mutation will occur, (b) it will add
>to the survival of the mutant, and (c) the last two steps will occur at EACH of
>the 500 steps required by Stebbins’ statement that the number of evolutionary
>steps between one species and another species is 500?
See, this is where he’s been going all along.
* He created the Darwinian strawman to allow him to create bizarre requirements.
* Then he added a ton of redundant factors.
* Then he combined probabilities as if they were independent when they weren’t.
* and *now* he adds a requirement for simultaneity which has no basis in reality.
>A: The chances are:
>
>The product of 1/600 x 1/500 multiplied by itself 500 times (because it has to
>happen at EACH evolutionary step). Or,
>
>Chances of Evolutionary Step 1: 1/300,000 x
>Chances of Evolutionary Step 2: 1/300,000 x
>Chances of Evolution Step 3: 1/300,000 x …
>. . . Chances of Evolution Step 500: 1/300,000
>
>Or,
>
>1/300,000^500
*Giggle*, *snort*. I seriously wonder if he actually believes this gibberish. But this is just silly, for the reasons mentioned above: it takes the redundant factors that he already pushed into each step, inflates them by adding the simultaneity requirement, and then *exponentiates* them.
>This is approximately equal to:
>
>2.79 x 10^-2,739
>
>A number that is effectively zero.
As I’ve said before: no one who understands math *ever* uses the phrase *effectively zero* in a mathematical argument. There is no such thing as effectively zero.
On a closing note, this entire thing, in addition to being both an elaborate strawman *and* a sloppy big numbers argument is also an example of another kind of mathematical error, which I call a *retrospective error*. A retrospective error is when you take the outcome of a randomized process *after* it’s done, treat it as the *only possible outcome*, and compute the probability of it happening.
A simple example of this is: shuffle a deck of cards. What are the odds of the particular ordering of cards that you got from the shuffle? 1/52! ≈ 1/(8×10^67). If you then ask “What was the probability of a shuffling of cards resulting in *this order*?”, you get that answer: 1 in 8×10^67 – an incredibly unlikely event. But it *wasn’t* an unlikely event; viewed from the proper perspective, *some* ordering had to happen: any result of the shuffling process would have the same probability, but *one* of them had to happen. So the odds of getting a result whose *specific* probability is 1 in 8×10^67 were actually 1 in 1.
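The shuffle numbers are easy to verify:

```python
import math

orderings = math.factorial(52)
print(f"52! = {orderings:.2e}")                         # ~8.07e67 possible orderings
print(f"P(this exact ordering) = {1 / orderings:.2e}")  # absurdly small...
print("P(some ordering) = 1.0")                         # ...yet one of them always happens
```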
The entire argument that our idiot friend made is based on this kind of an error. It assumes a single unique path – a single chain of specific mutations happening in a specific order – and asks about the likelihood of that *single chain* leading to that *specific result*.
But nothing ever said that the primitive ancestors of the modern horse *had* to evolve into the modern horse. If they weren’t to just go extinct, they would have to evolve into *something*; but demanding that the particular observed outcome of the process be the *only possibility* is simply wrong.

Innumerate Fundamentalists and π

The stupidity and innumeracy of Americans, and in particular American fundamentalists, never ceases to astound me.
Recently on Yahoo, some bozo posted [something claiming that the bible was all correct][yahoo], and that genetics would show that bats were actually birds. But that’s not the real prize. The *real* prize of the discussion was in the ensuing thread.
A doubter posted the following question:
>please explain 1 kings 7.23 and how a circle can have a circumference of 30 of
>a unit and a radiius of 10 of a unit and i will become a christian
>
>23 And he made the Sea of cast bronze, ten cubits from one brim to the other;
>it was completely round. Its height was five cubits, and a line of thirty
>cubits measured its circumference. (1 Kings 7:23, NKJV)
And the answer is one of the all-time greats of moronic innumeracy:
>Very easy. You are talking about the value of Pi.
>That is actually 3 not 3.14…….
>The digits after the decimal forms a geometric series and
>it will converge to the value zero. So, 3.14…..=3.00=3.
>Nobody still calculated the precise value of Pi. In future
>they will and apply advenced Mathematics to prove the value of Pi=3.
[yahoo]: http://answers.yahoo.com/question/?qid=20060808164320AAl8z7K&r=w#EsArCTu7WTNaDSL.CVTGFHpKzx2nixwD70ICPWo2wTRcAQawQUIY
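For the record, here’s the actual math the answer mangles. The digits after the decimal point are the terms of a decimal expansion, not a geometric series, and the tail converges to π − 3 ≈ 0.14159, not to zero:

```latex
\pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \frac{1}{10^3} + \frac{5}{10^4} + \frac{9}{10^5} + \cdots
```

A geometric series has a constant ratio between consecutive terms; the digits of π (1, 4, 1, 5, 9, …) have no such ratio. And since π is provably irrational, the expansion never terminates or repeats – no “advenced Mathematics” will ever make it equal 3.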

Causeless Math from Dembski and Friend

Over at his blog, William Dembski, my least-favorite pathetic excuse for a mathematician, [has cited an article][dembski] written by one John Davison about the theory of evolution. According to Dembski, this article explains “why the naked emperor still lingers”; that is, why the theory of evolution is still around even though it’s so obviously wrong. (Update: I originally typed “Paul Davison” instead of “John Davison”; I don’t know why. Any “Paul Davison”s out there, sorry for associating your name with this dreck. Additionally, the article is posted on Dembski’s blog, but it wasn’t posted by Dembski himself; it was posted by one of the site moderators, “Scott”.)
It’s easy to see why Dembski likes this article. I’ve always found Dembski’s writing to be obnoxiously pretentious; this guy writes in that same snotty faux-intellectual style. Here’s a taste from the beginning of Davison’s article.
>Darwinism has persisted because it failed to recognize the nature of first
>causes. It is only natural to assume that results had causes and it is the duty
>of the scientist to find and reveal those causes. At this science has been
>incredibly successful. Many examples are presented in medical science with the
>discovery of the causes, treatments and cures for hundreds of diseases. All of
>Chemistry has been revealed from the consideration of how atomic forces have
>caused molecules to have the structure and properties that they have. This is
>analytical science and it is great science.
>
>I like titles presented as questions because that is what science is really
>supposed to be all about – answering questions. One cannot answer a question
>until it has been posed.
>
>I have used this technique in the past with “Is evolution finished” and most
>recently, also in the current issue of Rivista di Biologia, “Do we have an
>evolutionary theory?”
>
>You will note that I choose my words carefully. I do not question that it has
>persisted because that is self-evident, but rather how has that been possible?
>
>I have the answer and here it is in abbreviated form.
See what I mean? This section already starts to hint at what’s wrong; but what really set me off, and led me to write about it here, on a math blog, is what comes next:
>But does this approach have limits beyond which it cannot DIRECTLY proceed?
>This is another very critical question and I will answer it with a resounding
>yes.
>
>Those limits are met when we attempt to identify the causes of the tools with
>which we proceed. I will use mathematics as an example. Mathematics has
>rightfully been described as “The Queen of the Sciences.” Without math there
>could be no science, at least a science as we know it.
Yeah, he’s going to invoke mathematics as his argument. And of course, it’s going to be *bad*. **Really** bad. Stupid bad.
>So here comes the moment of truth as it were. What is the cause of mathematics?
>More accurately we should ask – what WAS the cause of mathematics because it
>has always been there just waiting to be discovered. That discovery began with
>the Pythagoreans and continues to this day.
>
>Mathematics has no discernable cause does it? Now what does this all have to do
>with evolution? It has everything to do with evolution because both ontogeny
>and phylogeny, like mathematics have no discernable cause.
Yes, the root of his argument is that mathematics has *no cause*. And evolution, like mathematics, also has no discernable cause.
What the hell does this *mean*? Well, to be frank, absolutely bloody nothing. This is what is crudely known as “talking out your ass”.
>And so we come to the answer to the question posed in my title.
>
>Darwinism has persisted because it assumes a detectable, discernable cause, a
>cause which never existed. It even claims to tell us all about this
>non-existent cause. The cause is random changes in genes (mutations) coupled
>with nature acting to select which of these should survive. These two
>processes, genetic variation and selection, have been the sole means by which
>organisms have evolved.
Yeah, y’see, evolution has *no cause*, just like mathematics. But the theory of evolution has hung around not because it actually explains anything; not because it has evidence to support it; not because it matches the facts; it’s because it creates an *illusion* of a cause.
>Now what is the actual tangible evidence to support this model? That is another
>very good question by the way. That is what science is all about, asking
>questions and then trying to answer them. In this case the answers that emerge
>are very clear.
That’s a very good question indeed. Shame he doesn’t bother to answer it.
>Natural selection first of all is very real. Its effect is to prevent change
>rather than to promote it. This was first recognized by Mivart and then
>subsequently and independently by Reginald C. Punnett and then Leo Berg.
Yeah, y’see there were these guys, and like we were talking? and they said that natural selection prevents change, and they were, like, really convincing.
That’s his “tangible evidence” for the argument that evolution as a theory has persisted because it creates an illusion of cause where there is none.
>So you see there are really two reasons that Darwinism has persisted.
>
>The first I have already presented. It assumes a cause which never existed. The
>second reason it has persisted is because it has also assumed that no one ever
>existed who questioned the cause which never existed.
And yet again, random bullshit comes out of nowhere. Evolution has persisted because it denies the existence of people who question it.
>Like mathematics, both ontogeny and phylogeny never had exogenous causes. Both
>are manifestations of the expression of preformed pre-existent blocks of highly
>specific information which has been released over the millennia as part of a
>self-limiting process known as organic evolution, a phenomenon, my opinion, no
>longer in progress.
And again, we come back to that horrible comparison to math. Math, according to Davison, is “causeless”; it consists of a set of facts that exist independently of any cause. Likewise, he claims that evolution is “causeless”; it’s nothing but the expression of a set of genetic information that has been coded into life from the very beginning. Evidence? He’s so smart, he doesn’t need any stinking evidence! Evidence is for stuff that has a cause!
>Everything we are now learning supports this interpretation which I have
>presented in summary form in my recent paper – “A Prescribed Evolutionary
>Hypothesis.”
Everything we’re learning supports this. Of course, he doesn’t mention *any* of it; not one fact, not one scrap of evidence; not anything about all of the genomes we’ve mapped out; not the name of one biologist who’s done work supporting this, not one paper that talks about this evidence. Nothing.
*This* is what Dembski thinks of as a *good* article arguing in favor of ID.
[dembski]: http://www.uncommondescent.com/index.php/archives/1353

Bad, bad, bad math! AiG and Information Theory

While taking a break from some puzzling debugging, I decided to hit one of my favorite comedy sites, Answers in Genesis. I can pretty much always find something sufficiently stupid to amuse me on their site. Today, I came across a gem called [“Information, science and biology”][gitt], by the all too appropriately named “Werner Gitt”. It’s yet another attempt by a creationist twit to find some way to use information theory to prove that life must have been created by god.
It looks like the Gitt hasn’t actually *read* any real information theory, but has rather just read Dembski’s wretched mischaracterizations, and then regurgitated and expanded upon them. Dembski was bad enough; building on an incomplete understanding of Dembski’s misrepresentations and errors is just astonishing.
Anyway, after butchering an introduction to Shannon theory, he moves onward.
>The highest information density known to us is that of the DNA
>(deoxyribonucleic acid) molecules of living cells. This chemical storage medium
>is 2 nm in diameter and has a 3.4 nm helix pitch (see Figure 1). This results
>in a volume of 10.68 x 10^-21 cm^3 per spiral. Each spiral contains ten chemical
>letters (nucleotides), resulting in a volumetric information density of 0.94 x
>10^21 letters/cm^3. In the genetic alphabet, the DNA molecules contain only the
>four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The
>information content of such a letter is 2 bits/nucleotide. Thus, the
>statistical information density is 1.88 x 10^21 bits/cm^3.
This is, of course, utter gibberish. DNA is *not* the “highest information density known”. In fact, the concept of *information density* is not well-defined *at all*. How do you compare the “information density” of a DNA molecule with the information density of an electromagnetic wave emitted by a pulsar? It’s meaningless to compare. But even if we restrict ourselves to physically encoded information, consider the information density of a crystal, like a diamond. A diamond is an incredibly compact crystal of carbon atoms, and there are no perfect diamonds: every crystal contains irregularities and impurities. Consider how dense that crystal’s information is: the position of every flaw, every impurity, the positions of the subset of carbon atoms in the crystal that are carbon-14 rather than carbon-12. Considerably denser than DNA, huh?
After this is where it *really* starts to get silly. Our Gitt claims that Shannon theory is incomplete, because after all, it’s got a strictly *quantitative* measure of information: it doesn’t care about what the message *means*. So he sets out to “fix” that problem. He proposes five levels of information: statistics, syntax, semantics, pragmatics, and apobetics. He claims that Shannon theory (and in fact information theory *as a whole*) only concerns itself with the first; because it doesn’t differentiate between syntactically valid and invalid information.
Let’s take a quick run through the five, before I start mocking them.
1. Statistics. This is what information theory refers to as information content, expressed in terms of an event sequence (as I said, he’s following Dembski); so we’re looking at a series of events, each of which is receiving a character of a message, and the information added by each event is how surprising that event was. That’s why he calls it statistical.
2. Syntax. The structure of the language encoded by the message. At this level, it is assumed that every message is written in a *code*; you can distinguish between “valid” and “invalid” messages by checking whether they are valid strings of characters for the given code.
3. Semantics. What the message *means*.
4. Pragmatics. The *primitive intention* of the transmitter of the message; the specific events/actions that the transmitter wanted to occur as a result of sending the message.
5. Apobetics: The *purpose* of the message.
According to him, level 5 is the most important one.
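Level 1 is, for what it’s worth, the only level on that list with any actual math behind it. Just to make the “surprise” measure concrete, here’s a minimal sketch in Python (my toy example, not anything from Gitt’s article) that computes the Shannon information content of a message. One simplifying assumption to flag: it uses the message’s own character frequencies as the event probabilities, where real Shannon theory uses the distribution of the message *source*.

```python
import math
from collections import Counter

def surprisal(p):
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

message = "hello world"

# Simplifying assumption: treat this one message's character frequencies
# as the event probabilities. (Properly, these come from the source.)
probs = {ch: count / len(message) for ch, count in Counter(message).items()}

# The total information of the message is the sum of each character's surprisal.
total_bits = sum(surprisal(probs[ch]) for ch in message)
print(f"{total_bits:.2f} bits in {message!r}")
```

Everything past level 1 has no such measure attached to it; that’s exactly where Gitt’s “completion” of Shannon theory stops being math.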
Throughout the article, he constantly writes “theorems”. He clearly doesn’t understand what the word “theorem” means, because these things are just statements that he would *like* to be true, but which are unproven, and often unprovable. A few examples?
For example, if we look at the section about “syntax”, we find the following as theorems:
>Theorem 4: A code is an absolutely necessary condition for the representation
>of information.
>
>Theorem 5: The assignment of the symbol set is based on convention and
>constitutes a mental process.
>
>Theorem 6: Once the code has been freely defined by convention, this definition
>must be strictly observed thereafter.
>
>Theorem 7: The code used must be known both to the transmitter and receiver if
>the information is to be understood.
>
>Theorem 8: Only those structures that are based on a code can represent
>information (because of Theorem 4). This is a necessary, but still inadequate,
>condition for the existence of information.
>
>These theorems already allow fundamental statements to be made at the level of
>the code. If, for example, a basic code is found in any system, it can be
>concluded that the system originates from a mental concept.
How do we conclude that a code is a necessary condition for the representation of information? We just assert it. Worse, how do we conclude that *only* things that are based on a code represent information? Again, just an assertion – but an *incredibly* strong one. He is asserting that *nothing* without a
structured encoding is information. And this is also the absolute crux of his argument: information only exists as a part of a code *designed by an intelligent process*.
Despite the fact that he claims to be completing Shannon theory, there is *nothing* to do with math in the rest of this article. It’s all words. Theorems like the ones quoted above, but becoming progressively more outrageous and unjustified.
For example, his theorem 11:
>The apobetic aspect of information is the most important, because it embraces
>the objective of the transmitter. The entire effort involved in the four lower
>levels is necessary only as a means to an end in order to achieve this
>objective.
After this, we get to his conclusion, which is quite a prize.
>On the basis of Shannon’s information theory, which can now be regarded as
>being mathematically complete, we have extended the concept of information as
>far as the fifth level. The most important empirical principles relating to the
>concept of information have been defined in the form of theorems.
See, to him, a theorem is nothing but a “form”: a syntactic structure. And this whole article, to him, is mathematically complete.
>The Bible has long made it clear that the creation of the original groups of
>fully operational living creatures, programmed to transmit their information to
>their descendants, was the deliberate act of the mind and the will of the
>Creator, the great Logos Jesus Christ.
>
>We have already shown that life is overwhelmingly loaded with information; it
>should be clear that a rigorous application of the science of information is
>devastating to materialistic philosophy in the guise of evolution, and strongly
>supportive of Genesis creation.
That’s where he wanted to go all through this train-wreck. DNA is the highest-possible density information source. It’s a message originated by god, and transmitted by each generation to its children.
And as usual for the twits (or Gitts) that write this stuff, they’re pretending to put together logical/scientific/mathematical arguments for god; but they can only do it by specifically including the necessity of god as a premise. In this case, he asserts that DNA is a message, and that a message must have an intelligent agent creating it. Living things cannot be the original creators of the message, since the DNA had to be created before us; therefore, there must be a god.
Same old shit.
[gitt]: http://www.answersingenesis.org/tj/v10/i2/information.asp

Debunking "A Mathematicians View of Evolution"

This weekend, I came across Granville Sewell’s article “[A Mathematician’s View of Evolution][sewell]”. My goodness, but what a wretched piece of dreck! I thought I’d take a moment to point out just how bad it is. This article, as described by the [Discovery Institute][diref], purportedly shows:
>… that Michael Behe’s arguments against neo-Darwinism from irreducible
>complexity are supported by mathematics and the quantitative sciences,
>especially when applied to the problem of the origin of new genetic
>information.
I have, in the past, commented that the *worst* math is no math. This article contains *no math*. It’s supposedly arguing that mathematics supports the idea of irreducible complexity. Only there’s no math – none!
The article claims that there are *two* arguments from mathematics that disprove evolution. Both are cheap rehashes of old creationist canards, so I won’t go into much depth. But it’s particularly appalling to see someone using trash like this with the claim that it’s a valid *mathematical* argument.
The First Argument: You can’t make big changes by adding up small ones.
————————————————————————-
Sewell:
>The cornerstone of Darwinism is the idea that major (complex) improvements can
>be built up through many minor improvements; that the new organs and new
>systems of organs which gave rise to new orders, classes and phyla developed
>gradually, through many very minor improvements.
This is only the first sentence of the argument, but it’s a good summary of what follows. There are, of course, several problems with this; but the biggest one, coming from a mathematician, is that it amounts to asserting that it’s impossible to cross a large finite distance by taking many small finite steps. This is allegedly a mathematician making this argument, and yet that’s exactly what he’s claiming: that it’s impossible for any large change to occur as the cumulative result of a large number of small changes.
It also incorrectly assumes a *directionality* to evolution. This is one of the requirements of Behe’s idea: that evolution can only *add*. So if we see a complex system, the only way it could have been produced by an evolutionary process is by *adding* parts to an earlier system. That’s obviously not true – and it’s not even consistent with the other creationist arguments that he uses. And again, as a mathematician, he *should* be able to see the problem with that quite easily. In mathematical terms, this is the assertion that evolution is monotonically increasing in complexity over time. But neither he nor Behe makes any argument for *why* evolution would be monotonically increasing with respect to complexity.
So there’s the first basic claim, and my summary of what’s wrong with it. How does he support this claim?
Quite badly:
>Behe’s book is primarily a challenge to this cornerstone of Darwinism at the
>microscopic level. Although we may not be familiar with the complex biochemical
>systems discussed in this book, I believe mathematicians are well qualified to
>appreciate the general ideas involved. And although an analogy is only an
>analogy, perhaps the best way to understand Behe’s argument is by comparing the
>development of the genetic code of life with the development of a computer
>program. Suppose an engineer attempts to design a structural analysis computer
>program, writing it in a machine language that is totally unknown to him. He
>simply types out random characters at his keyboard, and periodically runs tests
>on the program to recognize and select out chance improvements when they occur.
>The improvements are permanently incorporated into the program while the other
>changes are discarded. If our engineer continues this process of random changes
>and testing for a long enough time, could he eventually develop a sophisticated
>structural analysis program? (Of course, when intelligent humans decide what
>constitutes an “improvement”, this is really artificial selection, so the
>analogy is far too generous.)
Same old nonsense. This is a *bad* analogy. A *very* bad analogy.
First of all, in evolution, *we start with a self-reproducing system*. We don’t start with completely non-functional noise. Second of all, evolution *does not have a specific goal*. The only “goal” is continued reproduction.
But most importantly for an argument coming from a supposed mathematician: he deliberately discards what is arguably *the* most important property of evolution. In computer science terms (since he’s using a programming argument, it seems reasonable to use a programming-based response): parallelism.
In evolution, you don’t try *one* change, test it to see if it’s good and keep it if it is, then go on and try another change. In evolution, you have millions of individuals *all reproducing at the same time*. You’re trying *millions* of paths at the same time.
In real evolutionary algorithms, we start with some kind of working program. We then copy it *many* times; as many as we can given the computational resources available to us. While copying, we randomly “mutate” each of the copies. Then we run them, and see what does best. The best ones, we keep for the next generation.
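To make that concrete, here’s a bare-bones sketch of that loop in Python. Everything in it is a toy stand-in: the “genome” is just a list of numbers, and the fitness function is a made-up score in place of actually running a program. But the copy/mutate/test/keep structure is the real thing.

```python
import random

POP_SIZE = 100       # many mutant copies per generation, all tested "in parallel"
GENOME_LEN = 20
GENERATIONS = 50

def fitness(genome):
    # Toy stand-in for "run the program and measure how well it does".
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Copy the parent, randomly perturbing some positions.
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

# Start from a *working* individual, not from completely non-functional noise.
parent = [random.random() for _ in range(GENOME_LEN)]

for generation in range(GENERATIONS):
    # Copy the survivor many times, mutating each copy; keep the parent
    # around too, so one generation of bad mutants can't lose ground.
    population = [parent] + [mutate(parent) for _ in range(POP_SIZE)]
    # Test them all, and keep the best for the next generation.
    parent = max(population, key=fitness)

print(f"final fitness: {fitness(parent):.4f}")
```

Note how different this is from Sewell’s lone engineer: each generation tries a whole population of variations at once, and selection operates on all of them simultaneously.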
What kind of impact does parallelism have?
As an experiment, I grabbed a rather nifty piece of software for my mac called [Breve Creatures][breve]. Breve is an evolutionary algorithms toolkit; BC uses it to build moving machines. The way it works is that it produces a set of random assemblies of blocks, interconnected by hinges, based on an internal “genetic code”. For each one, it flexes the hinges. Each generation, it picks the assembly that managed to move the farthest, and mutates it 20 times. Then it tries each of those. And so on. So Breve gives us just 20 paths per generation.
Often, in the first generation, you see virtually no motion. The assemblies are just random noise; one or two just happen to wiggle in a way that makes them fall over, which gives them a tiny bit of distance.
Typically within 20 generations, you get something that moves well; within 50, you get something that looks amazingly close to the way that some living creature moves. Just playing with this a little bit, I’ve watched it evolve things that move like inchworms, like snakes, like tripeds (two legs in front, one pusher leg in back), and quadrupeds (moving like a running dog).
In 20 generations of Breve, we’ve basically picked a path to successful motion from a tree of 20^20 possible paths. Each generation, we’ve pruned off the ones that weren’t likely to lead us to faster motion, and focused on the subtrees that showed potential in the tests.
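To put rough numbers on that pruning (a back-of-the-envelope sketch in Python, using the 20-mutants-per-generation setup described above):

```python
generations = 20
mutants_per_generation = 20

# How many candidates actually get tested...
evaluated = generations * mutants_per_generation     # 400 assemblies
# ...versus how many distinct 20-step paths the full tree contains.
path_space = mutants_per_generation ** generations   # 20^20

print(f"candidates evaluated: {evaluated}")
print(f"paths in the full tree: {path_space:.3e}")   # about 1.049e+26
```

Four hundred tests to navigate a space of roughly 10^26 possible paths: that’s what the parallel, prune-as-you-go structure of the search buys you.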
Breve isn’t a perfect analogy for biological evolution either; but it’s better than Sewell’s. There are two important things to take from this Breve example:
1. Evolution doesn’t have a specific goal. In the case of Breve Creatures, we didn’t say “I want to evolve something that walks like a dog.” The selection criterion was nothing more than “the ones that moved the furthest”. Different runs of BC create very different results; similarly, if you were to take a given species, and put two isolated populations of it in similar conditions, you’d likely see them evolve in *different* ways.
2. Evolution is a process that is massively parallel. If you want to model it as a search, it’s a massively parallel search that prunes the search space as it goes. Each selection step doesn’t just select one “outcome”; it prunes off huge areas of the search space.
So comparing the process to *one* guy randomly typing, trying *each* change to see how it works – it’s a totally ridiculous analogy. It deliberately omits the property of the process that allows it to work.
The Second Argument: Thermodynamics
————————————-
>The other point is very simple, but also seems to be appreciated only by more
>mathematically-oriented people. It is that to attribute the development of life
>on Earth to natural selection is to assign to it–and to it alone, of all known
>natural “forces”–the ability to violate the second law of thermodynamics and
>to cause order to arise from disorder.
Yes, it’s the old argument from thermodynamics.
I want to focus on one aspect of this which I think has been very under-discussed in refutations of the thermodynamic argument. Mostly, we tend to focus on the closed-system aspect: that is, the second law of thermodynamics says that in a *closed system*, entropy never decreases. Since the earth is manifestly *not* a closed system, there’s nothing about seeing a local decrease in entropy that would be a problem from a thermodynamic point of view.
But there’s another very important point. Entropy is *not* chaos. A system that seems ordered does *not* necessarily have lower entropy than a system that seems chaotic. With respect to thermodynamics, the real question about biology is: do the chemical processes of life result in a net increase in entropy? The answer? *I don’t know*. But neither does Sewell, nor the other creationists who make this argument. Certainly, watching the action of life (the quantity of energy we consume, and the quantity of waste we produce), it doesn’t seem *at all* obvious that, overall, life represents a net decrease in entropy. Sewell and the folks like him who make the argument from thermodynamics *never even try* to actually *do the math* and figure out whether the overall effect of any biological system represents a net increase or decrease in entropy.
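Just to show what “doing the math” would even begin to look like: the entropy a system exports by dumping heat Q into surroundings at temperature T is ΔS = Q/T. Here’s a crude back-of-the-envelope sketch; the numbers are my rough assumptions (a typical human metabolic rate of about 100 watts, surroundings at about 300 K), not anything from Sewell.

```python
# Back-of-the-envelope: entropy exported to the surroundings by one organism.
# Assumed numbers: roughly a 100 W metabolic rate, roughly 300 K surroundings.

Q_per_second = 100.0   # heat dissipated, in joules per second (about 100 W)
T_env = 300.0          # temperature of the surroundings, in kelvin

dS_per_second = Q_per_second / T_env   # entropy exported, in J/K per second
dS_per_day = dS_per_second * 86400     # 86400 seconds in a day

print(f"entropy exported: {dS_per_second:.2f} J/K per second")
print(f"                  {dS_per_day:.0f} J/K per day")
```

Any claimed decrease of entropy *inside* the organism has to be weighed against that continuous export to the environment; that comparison is the actual math of the argument, and it’s the part that never gets done.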
For someone purportedly writing a *mathematician’s* critique of evolution, to argue about thermodynamic entropy *without bothering to do the math necessary to make the argument* is a disgrace.
[sewell]: http://www.math.utep.edu/Faculty/sewell/articles/mathint.html
[diref]: http://www.discovery.org/scripts/viewDB/index.php?command=view&id=3640&program=CSC%20-%20Scientific%20Research%20and%20Scholarship%20-%20Science
[breve]: http://www.spiderland.org/breve/