ID Garbage: CSI as Non-Computability

An alert reader pointed me at a recent post over at Uncommon Descent by a guy who calls
himself “niwrad”, which argues (among other things) that life is
non-computable. In fact, it basically tries to use computability
as the basis of Yet Another Sloppy ID Argument (TM).

As you might expect, it’s garbage. But it’s garbage that’s right
up my alley!

It’s not an easy post to summarize, because frankly, it’s
pretty incoherent. As you’ll see when we start looking
at the sections, niwrad contradicts himself freely, without seeming
to even notice it, much less realize that it’s actually a problem
when your argument is self-contradictory!

To make sense out of it, the easiest thing to do is to put it into the
context of the basic ID arguments. Bill Dembski created a concept called
“specified complexity” or “complex specified information”. I’ll get to the
definition of that in a moment; but the point of CSI is that according to
IDists, only an intelligent agent can create CSI. If a mechanical process
appears to create CSI, that’s because the CSI was actually created by an
intelligent agent, and embedded in the mechanical process. What our new friend
niwrad does is create a variant of that: instead of just saying “nothing but
an intelligent agent can create CSI”, he says “CSI is uncomputable, therefore
nothing but an intelligent agent can create it” – that is, he’s just injecting
computability into the argument in a totally arbitrary way.

So, what’s CSI? Long-time readers will have seen my old critique of it. I’ll just reiterate the key points here. CSI is something
that you can never really pin down: it’s a contradiction wrapped up in
obfuscatory mathematics to make it appear meaningful. Nothing
actually has specified complexity, because nothing can have specified
complexity, because specified complexity is fundamentally self-contradictory:
by looking at the basic definitions of the terms using information theory, you
find that specification equals not-complex, and complex equals not-specified.
So to have specified complexity is something like being both invisible and
fluorescent pink at the same time.
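To make that tension concrete, here’s a toy illustration (mine, not Dembski’s; he never gives you a procedure you could actually run), using compressibility as a crude stand-in for “fits a specification”, and incompressibility as a stand-in for information-theoretic complexity:

```python
# A toy sketch of the specification-vs-complexity tension. zlib compression is
# a crude proxy for "how patterned (specified) a string is"; this is my own
# illustration, not any calculation Dembski has actually published.
import random
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

patterned = b"ABAB" * 250                                  # highly "specified"
noise = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible junk

for name, data in [("patterned", patterned), ("random", noise)]:
    print(f"{name}: {len(data)} bytes -> {compressed_size(data)} compressed")

# The patterned string collapses to a handful of bytes: it fits a tight
# specification precisely because it carries almost no information. The random
# string barely compresses at all: it is "complex", but it matches no
# independent pattern. Nothing scores high on both axes at once.
```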

Ok, background out of the way. Let’s look at his article. In the first
section, he presents his version of the standard CSI argument, with the random
insertion of computability. I think the best summary of it is the following:

IDT shows that CSI cannot be generated by chance and necessity
(randomness and laws). An algorithm (which is a generalization of law) can
output only what is computable and CSI is not. The concept of intelligence as
“generator of CSI” can be generalized as “generator of what is incomputable”.
Obviously, needless to say, intelligence eventually can generate also what is
computable (in fact what can do more can do less). Intelligence can work as a
machine but a machine cannot work as intelligence. Between the two there is a
non invertible relation. This is the reason why intelligence designs machines
and the inverse is impossible. To consider intelligence as “generator of what
is incomputable” makes sense because we know that intelligence is able for
instance to develop math. Metamathematics (Gödel theorems) states that math is
in general incomputable. It establishes limits to the mechanistic deducibility
but doesn’t establish limits to the intelligence and creativity of
mathematicians.

Now it’s straightforward to see that the generator of what is incomputable
is incomputable. Let’s hypothesize that it is computable, i.e. can be
generated by a TM. If this TM can generate it and in turn it can generate what
is incomputable then, given that an output of an output is an output, this TM
could compute what is incomputable and this is a contradiction. Since we get a
contradiction the premise is untrue, then intelligence is not computable.

Before I get to the meat of it… Gödel’s theorems don’t say that
“Math in general is uncomputable”. I’m going to pick on this, because
I’ve mentioned Gödel many times on this blog, and I’ve frequently
been guilty of over-simplifying when I talk about what Gödel’s incompleteness
theorem actually says. It’s hard to state simply in a way that actually gets
the true depth and meaning of it across clearly. But as bad as I’ve been,
I’ve never come close to botching Gödel this badly. I don’t know
whether niwrad has ever actually studied Gödel or not; I suspect
not, and that this is just his wretched misstatement of his own misunderstanding
of an over-simplified statement of Gödel by someone like me. But it
does point out the danger of having people like me try to present
simplified explanations of complicated things: there are always bozos
who believe that by hearing a simplified intuitive explanation of something,
that they’ve understood the whole thing, and will then go off and run
with it.

(What Gödel actually said is something closer to “Any sufficiently
powerful formal reasoning system will be either incomplete or inconsistent. If
it’s incomplete, that means that it will be capable of expressing true
statements which are not provable in the system. If it’s inconsistent, it will
be capable of expressing statements which are neither true nor
false.” And that is, itself, a wretched over-simplification, and I’m willing
to bet that a couple of commenters will call me on it. Trying to state
Gödel simply is really difficult, because it’s simultaneously
a simple statement mathematically, while also being incredibly deep
and profound. It just doesn’t render well into English.)

But that’s not close to the worst part of this babble. That’s
the second paragraph quoted above: “The generator of what is incomputable
is incomputable”.

The fundamental example, the first example, the most canonical example of
un-computability that anyone who studies computation knows about is called
the halting problem. The whole point of the halting problem
is that you can easily create a program which generates non-computable
results! The proof of the halting problem shows a completely mechanical
computable mechanism by which any supposed halting oracle can be
defeated – thus showing that the halting problem is uncomputable.
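To see just how mechanical that is, here’s a minimal sketch of the diagonal construction in Python. The `halts` function is the hypothetical oracle under attack (no such function exists, which is the point); think of this as a recipe you could run against any candidate oracle someone handed you:

```python
def make_defeater(halts):
    """Given any claimed halting oracle halts(source, input) -> bool,
    build a program the oracle must get wrong."""
    def defeater(source: str):
        if halts(source, source):  # oracle: "halts when run on its own source"
            while True:            # ...so loop forever instead
                pass
        return "halted"            # oracle: "loops forever"? Then just halt.
    return defeater

# Run defeater on its own source code: whichever answer the oracle gives, the
# program does the opposite. There's no insight or intelligence anywhere in
# the construction; it's a purely mechanical recipe that defeats any candidate
# oracle that is itself a program.
```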

That’s also part of what Gödel did. He showed a mechanical
process by which you can trick any sufficiently powerful formal system into
producing problematical statements. You don’t even need to understand the
system. I can write a program which takes a description of a formal
system as input, and generates the series of steps to produce a Gödel
statement for that system. So the claim that Gödel proved that
you can’t generate uncomputable things by a computable mechanism is
complete nonsense – in fact, it’s the opposite of what Gödel
proved.
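And if the self-reference step sounds like it should require a spark of insight, it doesn’t. It’s the same purely mechanical substitution trick that lets a program print its own source code; Gödel’s diagonal lemma just performs the analogous substitution inside arithmetic. Here’s the classic Python quine, purely as an illustration of that trick (it is not a Gödel-sentence generator):

```python
# A classic quine: plug a description of the template into the template
# itself, and you get a program that talks about (here: prints) itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```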

But you can basically take this section, and reduce it to a simple circle:
Intelligence is the ability to generate CSI. Why can intelligent things
generate CSI? Because the definition of intelligence is the ability to
generate CSI, therefore if something is intelligent, it can generate CSI.
Really, computability is just a red herring: he’s equated CSI with
a particular kind of non-computability, and then written the CSI circular
argument with “CSI” replaced with “CSI-equivalent noncomputability”.

In the next section, he tries to go a bit farther, and not just prove that
intelligence is non-computable, but that the non-computability of intelligence
proves that there must be a God, which is the “infinite information
source”. In this section, he proceeds to disprove his argument
from the previous section.

The argument: Intelligent beings produce information. Information can’t
come from nowhere. Where did it come from? In the last section, he said that
intelligent beings can create CSI. But now he’s actually reneging on that. The
information that intelligent beings produce can’t just come from the
intelligent beings: that would be creating something from nothing, which is
impossible. So there must be a higher being – an infinite information source, which produced all of the information. Intelligent beings
don’t create information. They just regurgitate it.

He doesn’t seem to have the slightest clue that he’s contradicting himself
here. But by the argument in this section, a human being is no different from
a Turing machine: both cannot, by his argument, produce CSI/CSI-equivalent
noncomputable information, unless they’ve obtained it from some other source.
Suddenly, life isn’t “non-computable” anymore – what he describes now is life
as a computable device which takes inputs from the “infinite information
source”. (Which is, of course, just another shallow renaming: just as he
substitutes “CSI-equivalent noncomputability” for “CSI”, he substitutes
“infinite information source” for “intelligent designer”). It’s the
same stupid trick.

After lots of rambling around that basic argument, he moves on to his next
section, in which he proceeds to pretend that he didn’t just obliterate his
own argument. He returns to the argument that, since intelligent beings can
supposedly create information, they must be non-materialistic – because
materialistic things can’t do non-computable stuff.

The whole computability argument comes down to this little bit of
circularity. niwrad asserts that life in general, and intelligence in
particular, must contain CSI. But he can’t prove it. He can’t point at
any specific property and prove that it has CSI. Instead he just relies on
intuition: it’s just obvious that these things have specified
complexity.

Then he asserts that CSI can’t be generated by any non-intelligent
process. Again, he doesn’t prove it; he doesn’t even really argue
it. He just blindly asserts it.

Finally, he takes his two assertions (life has CSI; CSI can’t be produced
by a non-intelligent process) and, based on those, concludes that life is
non-computable. Since a computing device isn’t intelligent, it can’t produce
CSI: CSI is, by definition, non-computable. Therefore, if life (or intelligent
life) contains CSI, then by definition, life is non-computable. QED.

Same old, same old.

Comments on “ID Garbage: CSI as Non-Computability”

  1. James Sweet

    the non-computability of intelligence proves that there must be a God, which is the “infinite information source.”

    There’s very little that irritates me as much as an argument of the form:
    1) In order for the universe to exist the way it does, it requires something with trait X (e.g. uncaused, “infinite information source”, etc.)
    2) It sounds poetic to say that “God has trait X”
    3) Therefore, Jesus. Take that, atheists!
    Nevermind that point (1) almost always requires a whole pile of fallacies, or at least misunderstands the nature of causality and singularities. Even if I grant point (1) — hell, even if I grant point (2) — how the hell do these jokers get to point (3)? Why not, “Therefore, Zeus?” The mind reels.
    The leap to point (3) is so stupid, it’s hard for me to maintain interest in picking apart what is wrong with the various arguments leading to point (1).

  2. Dan Doel

    A couple nits, which might be helpful for people who don’t already have computer science as their area of expertise.
    The proof of the incomputability of the halting problem doesn’t defeat all halting oracles. It only defeats halting oracles that can allegedly be implemented on Turing machines (and thus could allegedly be applied to themselves, or machines built using themselves). You could still have an oracle that can magically solve the halting problem by some uncomputable process. You can then, of course, use a similar argument that those oracles can not decide the halting problem on themselves, but you could have an extra magic oracle that could decide it, and so on.
    Also, I’ve never heard inconsistent defined as ‘being able to express statements that are neither true nor false.’ Rather, the typical description is that the system can prove statements that are false, or that it can prove a statement both true and false (which, in the most common logics, means you can prove every statement).
    When it comes to the halting stuff, it seems commonly misapplied to show (as in the stuff you reference) that our brains (or minds if you prefer) are the magic oracles in question, because, well, I can tell that this program right here doesn’t terminate, but we don’t have a good algorithm implemented to check that, so that must be the halting problem, and I must be doing uncomputable magic. But really, the halting construction is much weirder (if an algorithm could allegedly tell that it terminates, then it wouldn’t terminate, and vice versa; it’s really quite similar/related to the Goedel stuff, like if you could find a proof that it terminated, then it wouldn’t terminate, so you’d be working in an inconsistent theory, but take that analogy with a grain of salt).
    And it seems to me that what our brains have, once trained in thinking about programs, is much better heuristics for guessing/checking/demonstrating the cases where the halting problem is decidable (since it’s merely that there are some programs which can’t be so decided, not that all programs are so undecidable), and nobody’s figured out how to codify those heuristics in something that could run on the hardware we have today (and maybe we’ll never do much better than “train a brain-like neural net appropriately, and it will be able to decide the heuristic, but we won’t know exactly how it works.”).
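    For what it’s worth, here’s a toy sketch (entirely hypothetical, nothing clever) of the kind of partial checker that is perfectly possible: it’s allowed to answer "unknown", and the diagonal argument only rules out a checker that never has to say "unknown".

    ```python
    import ast

    def guess_halts(source: str) -> str:
        """Toy heuristic: answers 'halts', 'loops', or 'unknown'. Only a
        checker that never has to say 'unknown' is ruled out."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.While):
                while_true = isinstance(node.test, ast.Constant) and node.test.value is True
                has_break = any(isinstance(n, ast.Break) for n in ast.walk(node))
                if while_true and not has_break:
                    return "loops"   # the one obviously non-terminating shape
                return "unknown"     # any other while loop: this toy gives up
            if isinstance(node, ast.For):
                return "unknown"
        return "halts"               # no loops found; good enough for a toy

    print(guess_halts("x = 1 + 1"))                 # halts
    print(guess_halts("while True:\n    pass"))     # loops
    print(guess_halts("while x > 0:\n    x -= 1"))  # unknown
    ```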

  3. Joshua Zelinsky

    Related to what Dan says regarding halting oracles, one can conceive of universes where the halting problem can be solved in finite time. Consider for example, a universe with a central preferred point and time runs faster the farther one gets from that point and does so quickly enough that an object moving fast from the center can have an infinite amount of time pass compared to objects near the center. Then by taking a physical representation of a Turing machine that sends a signal out if it halts and throwing it away from the central point one can solve the halting problem. (You need to handle some issues like having a system to make sure you can extend the tape indefinitely but it isn’t hard to come up with physical properties that would let you do this).
    Halting issues are thus a function primarily of the sort of universe we seem to operate in, in that the standard Turing machine seems to be about the limit of what is computationally possible in our universe.
    Regarding summarizing Godel’s theorem: The way I normally summarize Godel’s theorem is more or less as follows: Assume we have a consistent axiomatic system which has a computable set of axioms, has simple rules of inference, and is powerful enough to model the integers. Then it must have an undecidable statement.

  4. Dan Doel

    Well, as far as oracles go, one could note that the infamous James Anderson’s perspex machines (which have actually been covered here) are, I think, halting oracles, since they can do infinite amounts of (Turing machine) work in finite time, because they operate on genuine real numbers. But, as far as I know, he hasn’t built one yet, and probably won’t be able to do so (insert hand waving here). But he nevertheless claims that super-Turing computation is possible in this universe.

  5. Mark C. Chu-Carroll

    @3, Joshua:
    That’s what I meant about it being a simple statement where it’s hard to understand its depth or profundity. That is a reasonably accurate short version of Gödel’s incompleteness theorem. But if you don’t really understand what a “consistent axiomatic system” is, or what an “undecidable statement” is, you completely miss the point. So people like me try to find some way of saying it that gets the basic meaning, and that also manages to capture some sense of the depth of it, and of why it’s one of the most important mathematical theorems ever.
    It’s a tall order, and it’s no wonder that I fail whenever I try to explain it. 100 years ago, pretty much every mathematician in the world believed that if you constructed a formal system in the right way, there would be no such thing as an undecidable statement. What could possibly be a bigger change in how we understand math and logic than the fact that we can’t build a decidable system of mathematics?

  6. Mark C. Chu-Carroll

    @4, Dan:
    Perspex is rubbish, plain and simple.
    If you could compute with true real numbers, you could do all sorts of fascinating things. But the unfortunate fact for Mr. Anderson is that in a discrete, quantum universe like ours, it’s impossible to build any machine that actually works with real numbers.
    I’ve never even bothered to really look at Anderson’s super-turing claims. They’re based on things that his Perspex machine can do – but the Perspex machine is impossible. Like any other impossible thing, if you accept it, you can use it to prove absolutely anything.

  7. Dan Doel

    Yeah, I didn’t mean to imply that I think perspex machines are realizable. And I haven’t read his stuff on them, either, and am just guessing that, as conceptual machines, you could probably use them as halting oracles.
    On a tangent, the Goedel-halting connection gets more precise in the light of type theory, and propositions-as-types that you’ve mentioned recently in connection with Haskell. The typed lambda calculi that languages like Haskell are rooted in are strongly normalizing, meaning that every program you can construct in them terminates (Haskell has stuff added that makes this no longer true, though). Proofs of strong normalization (or generally even weak normalization, where you can prove that there is some terminating evaluation strategy, rather than that all strategies terminate) correspond to proofs of consistency of the logic that corresponds to the type system.
    Now, the logics of these type theories are generally strong enough to encode arithmetic, so via the Curry-Howard correspondence, we can apply the incompleteness theorems to the type theory. The second theorem is the more interesting one in this context, I think, which says that the type theory can only prove itself consistent if it is inconsistent. An inconsistent theory has a proof of false, but there are no terminating proofs of false (by construction), so an inconsistent theory must include non-terminating programs.
    But, we mentioned earlier that a normalization proof is a proof of logical consistency. And a terminating program that computes normal forms is such a normalization proof. And, programs in our type theory are all allegedly terminating. So, we can consider constructing a normalizer/interpreter for our type theory inside our type theory, and there are two possible outcomes:
    1) We can do it, which means our type theory is inconsistent, and has non-terminating programs.
    2) Our type theory is normalizing, so it can’t include its own interpreter as a program.
    So, Goedel’s incompleteness theorems translate, via Curry-Howard, to theorems about termination of functions in typed lambda calculi.

  8. william e emba

    Joshua Zelinsky wrote:

    Related to what Dan says regarding halting oracles, one can conceive of universes where the halting problem can be solved in finite time. …

    Quite possibly, one can conceive of our own universe to be such.
    See Geroch and Hartle, “Computability and Physical Theories”, Foundations of Physics 16 (1986), pp 533-550. They discuss the possibility that quantum gravity is computationally intractable. Their argument, in brief, is that perturbative expansions of integrals over 4-manifolds up to homotopy run into various forms of the word problem (because all relevant groups show up as the fundamental group of some 4-manifold).
    That is, a standard computational approach to summing over 4-manifolds would be doomed by an inability to list the 4-manifolds uniquely. It’s unknown if a superior approach exists, but it seems unlikely.
    This would not, however, mean an end to experimental physics. It might be, for example, that the mass of the electron encodes the halting problem, but the mass of the muon is computable relative to the mass of the electron.
    In other words, we could end up roughly where we are today, with a Standard Model that contains mystery parameters, and then does everything else. We’d have, instead, a model that contains uncomputable parameters, but it still keeps experimentalists busy with their consequences. One such consequence is a genuine working Turing oracle.

  9. _Arthur

    Is CSI supposed to be a measurable quantity, or some ineffable essence, entirely in the eye of the human beholder?
    If it is a quantity, what are its units and how is it measured?
    What is the CSI of a bandicoot, of a penguin, and how do they compare to each other ?
    Which one has the highest CSI, the penguin or the kangaroo ?
    Inquiring minds want to know.

  10. Mark C. Chu-Carroll

    @9, _Arthur:
    CSI is, purportedly, a quantity. I believe that it’s measured in bits, although it’s been a while since I looked.
    Dembski has constantly tweaked the definition of it – so it’s hard to pin down.
    No biological system of any type has ever had its CSI quantified. Dembski has presented what he claims to be the method for computing it, but he’s never actually followed through and done it.
    It’s really just an elaborate shell-game. Right now, it’s an inconsistent definition, defining a meaningless quantity, calculated by an incompletely-described method. That way, any time that anyone claims to show that something has CSI in a way which would falsify Dembski’s argument, he can just say “That’s not CSI”, or “That’s not calculated correctly”, and boom-presto, he can dismiss it.

  11. Brian

    So to have specified complexity is something like being both invisible and fluorescent pink at the same time.

    Blessed be her holy hooves!

  12. Tyler DiPietro

    The most ironic thing about this is that we already have a theory of information where the actual quantity is uncomputable: Kolmogorov complexity. The Kolmogorov complexity of a string is defined as the length of the shortest computer program on a UTM that produces that string. The problem is that there is no partial recursive function defined on the set of binary strings that can compute the shortest effective computer program to produce a given string (in essence, because you’d have to solve the halting problem). So the basic premise of the whole argument is false on the face of it.
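    A minimal sketch of one standard route to that conclusion (a Berry-paradox style argument rather than the halting reduction; the computable K assumed below is hypothetical, which is exactly the point):

    ```python
    from itertools import count, product

    def first_string_with_K_above(n, K):
        """Return the first binary string s (shortest first) with K(s) > n,
        assuming a computable Kolmogorov-complexity function K existed."""
        for length in count(1):
            for bits in product("01", repeat=length):
                s = "".join(bits)
                if K(s) > n:
                    return s

    # This search procedure plus the number n is itself a complete description
    # of the string it returns, and its size grows only like log(n) plus a
    # constant. For large enough n, that's fewer than n bits of description for
    # a string whose complexity supposedly exceeds n, a contradiction: no
    # computable K can exist.
    ```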

  13. Tyler DiPietro

    “Consider for example, a universe with a central preferred point and time runs faster the farther one gets from that point and does so quickly enough that an object moving fast from the center can have an infinite amount of time pass compared to objects near the center. Then by taking a physical representation of a Turing machine that sends a signal out if it halts and throwing it away from the central point one can solve the halting problem.”
    I’m no expert, but it would seem to me that termination conditions would be meaningless if infinite time-passage is allowed. At that point you wouldn’t be solving the halting problem so much as rendering it a non-problem. I should probably look at the mathematics myself, if I even have the background to comprehend it.

  14. Joshua Zelinsky

    I’d be deeply surprised if our universe allows any physical solution of the halting problem simply based on empirical evidence of how it seems to work.
    However, it seems that as a speed matter our universe is likely able to do things faster than a strictly Newtonian universe due to the fun of quantum computation. But even then, that’s just an issue of time speed ups (exponential to polynomial and such). Also, as far as I’m aware we don’t have any example of a problem which is polynomial for a quantum computer and known to be not polynomial for a non-quantum computer. I think, but am not sure, that such a problem would imply that P != NP since qP is a subset of NP. Disclaimer: I know very little about complexity theory and close to zero about the quantum end of it. So this whole paragraph could be wrong.

  15. Tyler DiPietro

    If we had a quantum algorithm that could solve NP-complete problems in polynomial time, then that would imply either that P = NP or that BQP is a superset of NP. The issue with P versus NP is not exactly one of exponential and polynomial asymptotes. Rather, P is deterministic polynomial time and NP is non-deterministic polynomial time. A problem is NP-complete only if every language in NP polynomially transforms to it. Some problems, like factoring, have exponential speedups on quantum computers but don’t have the NP-completeness property.

  16. opkdx

    Slightly offtop.

    But it does point out the danger of having people like me try to present simplified explanations of complicated things: there are always bozos who believe that by hearing a simplified intuitive explanation of something, that they’ve understood the whole thing, and will then go off and run with it.

    Do you know that monads are burritos? 😉

  17. Tyler DiPietro

    Sorry, Josh. I made the same mistake Scott Aaronson made when he first unveiled his banner. Factoring is not known to be NP-complete. However, it still stands as a problem that is exponential on classical computers and polynomial on quantum computers, but doesn’t prove that P = NP.
    Then again, I should probably also say that factoring is not actually known to be exponential on classical computers, since if P = NP then every problem in NP would have a polynomial time solution. That’s why we’re almost certain these days that P != NP.

  18. Blake Stacey

    CSI is, purportedly, a quantity. I believe that it’s measured in bits, although it’s been a while since I looked.

    Sometimes it is, and sometimes it ain’t. Quoting Elsberry and Shallit (2003),

    Despite his insistence that his “program has a rigorous information-theoretic underpinning” [19, p. 371], CSI is used inconsistently in Dembski’s own work. Sometimes CSI is a quantity that one can measure in bits: “the CSI of a flagellum far exceeds 500 bits” [17, p. 178]. Other times, CSI is treated as a threshold phenomenon: something either “exhibits” CSI or doesn’t: “The Law of Conservation of Information says that if X exhibits CSI, then so does Y” [19, p. 163]. Sometimes numbers or bit strings “constitute” CSI [17, p. 159]; other times CSI refers to a pair (T;E) where E is an observed event and T is a pattern to which E conforms [19, p. 141]. Sometimes CSI refers to specified events of probability < 10^-150; other times it can be contained in “the sixteen-digit number on your VISA card” or “even your phone number” [17, p. 159]. Sometimes CSI is treated as if, like Kolmogorov complexity, it is a property independent of the observer — this is the case in a faulty mathematical “proof” that functions cannot generate CSI [19, p. 153]. Other times it is made clear that computing CSI crucially depends on the background knowledge of the observer. Sometimes CSI inheres in a string regardless of its causal history (this seems always to be the case in natural language utterances); other times the causal history is essential to judging whether or not a string has CSI. CSI is indeed a measure with remarkably fluid properties! Like Blondlot’s N-rays, however, the existence of CSI seems clear only to its discoverer.

    In the bibliography, “17” is Dembski’s Intelligent Design: The Bridge Between Science & Theology (1999) and “19” is No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence (2002).

  19. Neil Fitzgerald

    Disclaimers: I’ve only skimmed through this, and I have zero respect for ‘dumbski’ and his intellectually bankrupt ideas, but actually I think that “math is uncomputable” is about as close as one can come to encapsulating Gödel’s theorem in a single slogan.
    To be more precise, consider the following proposition: “The set of true statements about the natural numbers, expressed in first order logic using only the functions + and * and the constant 0 as non-logical symbols, is not recursively enumerable.”
    It’s easy to see that this is equivalent to Gödel’s first incompleteness theorem:
    In one direction, we note that for any formal axiomatic system we can easily write a computer program that will enumerate all of its theorems. In the other, we’re given some computer program that, while it’s running, enumerates some theorems of arithmetic, and we turn it into a formal axiomatic system by taking a ‘proof’ of a statement to be a partial running of the program up until the point where it emits that particular statement.
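    A minimal sketch of that first direction, with hypothetical is_valid_proof and conclusion functions standing in for whatever mechanically checkable proof rules the formal system provides:

    ```python
    from itertools import count, product

    def enumerate_theorems(symbols, is_valid_proof, conclusion):
        """Dovetail over all finite strings of symbols; whenever one checks
        out as a valid proof, emit the statement it proves. `is_valid_proof`
        and `conclusion` are hypothetical stand-ins for the system's
        mechanically checkable proof rules."""
        for length in count(1):
            for candidate in product(symbols, repeat=length):
                if is_valid_proof(candidate):
                    yield conclusion(candidate)

    # Usage shape, given whatever symbol set and proof checker the system
    # actually defines:
    #   for theorem in enumerate_theorems(SYMBOLS, is_valid_proof, conclusion):
    #       print(theorem)
    ```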

  20. Bob O'H

    The consensus of the CBEBs is that niwrad is Giuseppe Sermonti, former editor of Rivista di Biologia, which has published ID/creationist stuff in the past (e.g. John A. Davison’s stuff).

  21. AL

    Intelligence can work as a machine but a machine cannot work as intelligence. Between the two there is a non invertible relation. This is the reason why intelligence designs machines and the inverse is impossible.

    Not surprising in the least that IDers are also mind-body dualists.

  22. Peter

    Who gives a rat’s ass about what some pseudo-intellectual id’er said. I want an analysis of something relevant like the climate pseudo-scientists. Don’t tell me no one here has heard of climategate. My favourite quote is: “It’s not a smoking gun, it’s a mushroom cloud.” That’s relevant. That goes to the heart of the subversion of science and the destruction of its credibility.

  23. MPL

    @2
    I don’t think there’s any reason to assume that trained humans are good at heuristically solving halting questions in general, just on human-written programs, which are carefully created to be easily understood by humans.
    If you presented programmers with some randomly generated, but legal programs, they would probably have little chance of determining any property of the program, unless you get very lucky.

  24. Tom

    Thomas Aquinas made a career out of nearly identical arguments for the existence of God: he, too, developed circular and often contradictory arguments that forced him to assume his conclusions.
    It’s either reassuring or deeply disappointing that the ID movement hasn’t developed any greater intelligence in more than seven hundred years.

    1. Jay

      “…seven hundred years”, that is nothing Tom.

      Darwinists have been pushing their secular belief and philosophies since Epicurus (d. 270 b.c). Come on folks, it has been over 2200 years and you still have not been able to prove spontaneous generation.

      What I found more astonishing is that you acolytes have no clue as to the origins of your secular religion. Darwin was simply a pawn, tasked with modernising an ancient old secular belief system. It didn’t work then, it doesn’t work now.

      And what is worse, you have the nerve to call yourselves “enlightened” haha….

      1. clonus

        You must live in a crazy universe where the veracity of facts is determined by the identity of the speaker and not by evidence.
        I would like to personally welcome you to the real world.

      2. Andrew

        Pretty good spoof – but no one is quite that dumb. If you want to mock the “Intelligent Design” guys, you’ll need to be a little more subtle.

  25. Jay

    On the topic of sloppy arguments, it has been almost 3 years since your OP and nobody has successfully refuted the CSI argument. Not for a lack of trying of course, various Darwinian acolytes have lined up, only to be shot down one by one. It is no wonder so many Darwinists are jumping ship and swimming to the shores of Intelligent Design where it is safer.

