Dembski's Profound Lack of Comprehension of Information Theory

I was recently sent a link to yet another of Dembski’s wretched writings about specified complexity, titled Specification: The Pattern That Signifies Intelligence.
While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I’ve now become convinced that he’s just an idiot who’s able to throw around mathematical jargon without understanding it.
In this paper, as usual, he’s spending rather a lot of time avoiding defining specification. Purportedly, he’s doing a survey of the mathematical techniques that can be used to define specification. Of course, he manages to never actually say just what the hell specification is – he just rambles on and on through various discussions of what it could be.
Most of which are wrong.
“But wait”, I can hear objectors saying. “It’s his theory! How can his own definitions of his own theory be wrong? Sure, his theory can be wrong, but how can his own definition of his theory be wrong?” Allow me to head off that objection before I continue.
Dembski’s theory of specified complexity as a discriminator for identifying intelligent design relies on the idea that there are two distinct quantifiable properties: specification, and complexity. He argues that if you can find systems that possess sufficient quantities of both specification and complexity, those systems cannot have arisen except by intelligent intervention.
But what if Dembski defines specification and complexity as the same thing? Then his definitions are wrong: he requires them to be distinct concepts, but he defines them as being the same thing.
Throughout this paper, he pretty much ignores complexity to focus on specification. He’s pretty careful never to say “specification is this”, but rather “specification can be this”. If you actually read what he does say about specification, and you go back and compare it to some of his other writings about complexity, you’ll find a positively amazing resemblance.
But onwards. Here’s the part that really blew my mind.
One of the methods that he purports to use to discuss specification is based on Kolmogorov-Chaitin algorithmic information theory. And in his explanation, he demonstrates a profound lack of comprehension of anything about KC theory.
First – he purports to discuss K-C theory within the framework of probability theory. K-C theory has nothing to do with probability theory. K-C theory is about quantifying information; the central question of K-C theory is: How much information is in a given string? It answers that question in terms of computation: the information content of a string is measured by the size of the shortest program that can generate it.
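To make that distinction concrete, here’s a rough illustration of my own (nothing from Dembski’s paper): true Kolmogorov complexity is uncomputable, but the output size of an ordinary compressor like zlib puts an upper bound on it, and even that crude bound cleanly separates noise from repetition.

    import os
    import zlib

    random_data = os.urandom(1000)   # analogue of fair-coin noise
    uniform_data = b"\x01" * 1000    # analogue of an unbroken run of heads

    # Incompressible data stays near its raw size; the repetitive string
    # collapses to a tiny description ("a thousand copies of 0x01").
    print(len(zlib.compress(random_data)))   # roughly 1000 bytes
    print(len(zlib.compress(uniform_data)))  # around a dozen bytes
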
Now, the quotes that blew my mind:

Consider a concrete case. If we flip a fair coin and note the occurrences of heads and tails in
order, denoting heads by 1 and tails by 0, then a sequence of 100 coin flips looks as follows:

(R) 11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110.

This is in fact a sequence I obtained by flipping a coin 100 times. The problem algorithmic
information theory seeks to resolve is this: Given probability theory and its usual way of
calculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms
of their degree of randomness? Probability theory alone is not enough. For instance, instead of
flipping (R) I might just as well have flipped the following sequence:

(N) 11111111111111111111111111111111111111111111111111
11111111111111111111111111111111111111111111111111.

Sequences (R) and (N) have been labeled suggestively, R for “random,” N for “nonrandom.”
Chaitin, Kolmogorov, and Solomonoff wanted to say that (R) was “more random” than (N). But
given the usual way of computing probabilities, all one could say was that each of these
sequences had the same small probability of occurring, namely, 1 in 2^100, or approximately 1 in
10^30. Indeed, every sequence of 100 coin tosses has exactly this same small probability of
occurring.
To get around this difficulty Chaitin, Kolmogorov, and Solomonoff supplemented conventional
probability theory with some ideas from recursion theory, a subfield of mathematical logic that
provides the theoretical underpinnings for computer science and generally is considered quite far
removed from probability theory.

It would be difficult to find a more misrepresentative description of K-C theory than this. This has nothing to do with the original motivation of K-C theory; it has nothing to do with the practice of K-C theory; and it has pretty much nothing to do with the actual value of K-C theory. This is, to put it mildly, a pile of nonsense spewed from the keyboard of an idiot who thinks that he knows something that he doesn’t.
But it gets worse.

Since one can always describe a sequence in terms of itself, (R) has the description

copy '11000011010110001101111111010001100011011001110111
00011001000010111101110110011111010010100101011110'.

Because (R) was constructed by flipping a coin, it is very likely that this is the shortest
description of (R). It is a combinatorial fact that the vast majority of sequences of 0s and 1s have
as their shortest description just the sequence itself. In other words, most sequences are random
in the sense of being algorithmically incompressible. It follows that the collection of nonrandom
sequences has small probability among the totality of sequences so that observing a nonrandom
sequence is reason to look for explanations other than chance.

This is so very wrong that it demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about anything. No one who actually understands K-C theory would ever make a statement like Dembski’s quote above. No one.
But to make matters worse – this statement explicitly invalidates the entire concept of specified complexity. What this statement means – what it explicitly says if you understand the math – is that specification is the opposite of complexity. Anything which possesses the property of specification by definition does not possess the property of complexity.
In information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses “specified complexity” is therefore something which is simultaneously compressible and non-compressible.
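To see how directly the two readings collide, here’s a minimal sketch (my own construction, again with zlib output size as a computable stand-in for the uncomputable K-C measure): predicates built this way are mutually exclusive, because a single number can’t be both far below and close to the raw size.

    import zlib

    def info_content(s: bytes) -> int:
        # Compressed size in bits: an upper bound on K-C information content.
        return 8 * len(zlib.compress(s))

    def specified(s: bytes) -> bool:
        # "Specification" read as compressibility: well under the raw size.
        return info_content(s) < 4 * len(s)

    def complex_enough(s: bytes) -> bool:
        # "Complexity" read as incompressibility: close to the raw 8*len(s) bits.
        return info_content(s) > 7 * len(s)

    s = bytes(100)  # 100 zero bytes: highly "specified"
    print(specified(s), complex_enough(s))  # True False; True True is impossible
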
The only thing that saves Dembski is that he hedges everything that he says. He’s not saying that this is what specification means. He’s saying that this could be what specification means. But he also offers a half-dozen other alternative definitions – with similar problems. Anytime you point out what’s wrong with any of them, he can always say “No, that’s not specification. It’s one of the others.” Even if you go through the whole list of possible definitions, and show why every single one is no good – he can still say “But I didn’t say any of those were the definition”.
But the fact that he would even say this – that he would present this as even a possibility for the definition of specification – shows that Dembski quite simply does not get it. He believes that he gets it – he believes that he gets it well enough to use it in his arguments. But there is absolutely no way that he understands it. He is an ignorant jackass pretending to know things so that he can trick people into accepting his religious beliefs.

44 thoughts on “Dembski's Profound Lack of Comprehension of Information Theory”

  1. Michael

    I don’t know anything about K-C theory, so could you elaborate on some of the major errors in the quotations from Dembski?

  2. PaulC

    I’m not one to make excuses for Dembski, but can you elaborate on what you find wrong about the second quoted passage? It may be imprecise, and I also agree that he’s been repeatedly undermined by the fact that random sequences have higher complexity than highly compressible ones by every reasonable definition. But taken by itself, the informal description of Kolmogorov complexity doesn’t seem that bad.
    A bit string taken from a uniform distribution has a low probability of being very compressible, just as he says. That’s because there are 2^n bit strings of length n, but only 2^k descriptions of bit strings compressible to k bits. If k is even a little less than n, the probability that you’ve picked a string that’s compressible to k bits drops exponentially. E.g., the probability that the 100 bit string is compressible to 90 bits is less than 2^-10. If you’re willing to equate coin flips with a true random source, then it would be reasonable for you to bet that the minimal algorithmic description of the sequence generated that way was not much smaller than 100 bits.
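    That arithmetic is easy to check in code (a sketch of my own, just restating the counting argument):

        def fraction_compressible(n: int, k: int) -> float:
            # At most 2^k descriptions of length exactly k bits exist, so at
            # most 2^k of the 2^n equally likely n-bit strings can have one.
            return 2 ** k / 2 ** n

        print(fraction_compressible(100, 90))  # 2**-10, the figure above
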
    (Note that bringing in Kolmogorov complexity is a kind of overkill when mundane binomial p-values would suffice to reject the null hypothesis (random) for a string of 100 1s in a row; it’s typical of Dembski to try to make his claim look more impressive than it is).
    I would also agree with Dembski’s implication that if you find a sequence to be highly compressible, then you can reject the random hypothesis using a level of confidence that would typically be expressed as a p-value. I.e., given a particular n-bit string, and a way of compressing it to k bits (which might not be the best compression; finding the best one is an incomputable problem), you would calculate the p-value: given a uniform random bit string of length n, what is the probability of choosing one compressible to k bits or less.
    You then interpret this value as “the probability that, given that the null hypothesis is true, T will assume a value as or more unfavorable to the null hypothesis as the observed value” http://en.wikipedia.org/wiki/P-value
    If the string is highly compressible, and therefore the p-value is very low, then the null hypothesis does not look like a compelling explanation. At that point it would be worth looking for explanations other than randomness. (Note that this in no way disproves evolution, which is not a uniform random process. For instance, the fact that my DNA looks a lot like a chimp’s DNA is better explained by common descent than by the assumption that both came from random coin flips.)
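    Here’s a sketch of that test (my own code, using the loose bound that at most 2^(k+1) - 1 strings admit descriptions of k bits or fewer):

        def compressibility_p_value(n: int, k: int) -> float:
            # Probability that a uniform random n-bit string admits any
            # description of k bits or fewer.
            return min(1.0, (2 ** (k + 1) - 1) / 2 ** n)

        # A 100-bit string we actually compressed to 20 bits: chance is a
        # hopeless explanation, though nothing here names the alternative.
        print(compressibility_p_value(100, 20))  # about 1.7e-24
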

  3. PaulC

    BTW, a good source on the connection between Kolmogorov complexity and randomness testing is: “The Miraculous Universal Distribution” by Kirchherr, Li, and Vitanyi. Vitanyi has a PostScript copy of the paper online: http://www.cwi.nl/~paulv/papers/mathint97.ps
    The main problem with Dembski is that he consistently misapplies these arguments. If I am betting on coin flips and notice any kind of algorithmic pattern then there are legitimate reasons to conclude that the person flipping coins is not playing fair. However, this method does not allow one to discriminate between the product of evolution and the product of “intelligent design” since neither of these processes in any way resemble random coin flips.

  4. Mark Chu-Carroll

    I’ll move some of my blogger posts on information theory over to SB tonight.
    Basically, K-C information theory is a sub-branch of recursive function theory, which is one of the fundamental theoretical areas underlying computer science.
    The idea of K-C theory is to study what information means in terms of computation. The goal of K-C theory is to find a meaningful way of asking the question “How much information does something contain?”
    It is not based on probability theory. It did not develop as an attempt to solve some problem with describing the difference between probable and improbable outcomes of a process. And there is absolutely no justification for saying that a string with high compressibility (aka low information content) is an indication of intelligent intervention.
    Seriously: what Dembski is arguing is that discovering anything with *low information content* should be taken as an indication that it was created by an intelligent agent.
    I’m pretty sure that he did not *want* to say that anything that doesn’t contain much information is indicative of an intelligent agent. I’m pretty sure that he doesn’t realize that this is exactly the opposite of what he argues in his definition of “complexity”. And I’m pretty sure that he doesn’t realize that by saying this, he’s saying that “X has specified complexity” means “X contains very little information” and “X contains a lot of information” at the same time.

  5. PaulC

    Seriously: what Dembski is arguing is that discovering anything with *low information content* should be taken as an indication that it was created by an intelligent agent.

    I honestly couldn’t tell you what Dembski is trying to argue. Obviously a string that goes 111111111111 is not a sign of an intelligent agent, nor is the diversity of life on earth, for which evolution is a far more parsimonious explanation fitting all the data.
    But finding a large string with low information content can be taken as a reasonable indication that it did not come from a uniform random source. Randomness testing is a legitimate and active field. It does use the notion of compressibility. I grant that Kolmogorov complexity is not “based on probability theory” but it does have legitimate applications to areas that may also involve the application of probability theory.

  6. Mark C. Chu-Carroll

    PaulC:
    That finding a large string with low information content can be taken as an indication that it did not come from a uniform random source is true. But going from there to indicating an intelligent agent – that is a huge, unjustified leap.
    And the fact that he’s arguing that it’s K-C theory that allows one to make that leap from “non-uniform random source” to “intelligent agent” is ridiculous – particularly when his writing doesn’t indicate that he has any idea that there is a leap to be made.

  7. PaulC

    BTW, I added the reference to Kirchherr, Li, and Vitanyi above although it was posted as “Anonymous” for some reason. Once again, I recommend it highly as a legitimate and rigorous source for this kind of argument. To quote from an example in the paper:

    What bothers Alice is that the sequence of coin tosses doesn’t look `random’. She
    expects that a fair coin produces a random sequence of heads and tails. But what is
    `random’? She has intuition about the concept, to be sure — 00101110010010111110
    looks more random than 01010101010101010101 — but, precisely, what is meant by
    `random’?

    To cut to the chase, what is meant by the intuitive notion of “random” is indeed tied up in Kolmogorov complexity. I disagree strongly with Mark Chu-Carroll that there is no relevance here. However, I agree that Dembski is misapplying it. Merely refuting the null hypothesis that an observation comes from a uniform random source is not the same as proving it comes from an intelligent agent.

  8. PaulC

    But going from [not uniform random] to indicating an intelligent agent – that is a huge, unjustified leap

    Yes, I agree completely. But I don’t think that Dembski’s second quoted passage taken in isolation is as completely wrong as you state.

  9. Mark C. Chu-Carroll

    I don’t mean to say that K-C theory has no relevance to the kind of question we’re discussing. But it remains important to draw the distinction between what K-C theory says, and the kind of bizarre conclusion that Dembski is trying to draw by pretending that you can take statements from probability theory and statements from K-C theory and mix them together without distinction.
    K-C theory is not probability theory; the fact that it has insights that are useful in probability, and that probability has insights that are useful in K-C theory doesn’t mean that you can just mush the two together in a careless half-assed way, and say that the probability-theory conclusions are actually K-C conclusions, and vice-versa.
    Dembski is misapplying and misrepresenting both probability theory and K-C information theory – I think because he doesn’t realize that there’s any difference between the two. That’s what I mean about his fundamental cluelessness. I don’t think he understands what he’s doing, what he’s saying, where the ideas that he’s talking about come from, or what they actually mean. The fuzziness of his distinction between KC information theory and probability theory is just part of that cluelessness.

  10. Mark Frank

    I don’t know a thing about information theory but the concept he was trying to get across at this point (whether it be valid information theory or not) struck me as one of the few sound bits of the paper.
    The big problems with the paper that struck me include:
    * A complete misrepresentation of classical hypothesis testing – which absolutely requires a clear definition of the alternative hypothesis (you can’t decide whether to do one-tailed or two-tailed testing or create confidence intervals without this)
    * Sudden, unjustified, leaps from defining simplicity in the “information theory” terms used above to defining it in terms of minimum number of concepts and then to defining it in minimum number words
    * A subtle move where on one page specification is in terms of “the observed outcome and all simpler outcomes” (whatever that means at that stage) to the next page where it is suddenly “the observed outcome and all simpler *less probable* outcomes” – this change is not even commented on.
    * No justification for any of the many definitions of specification in the paper.
    There are many others but that will do for a start.

  11. Mark C. Chu-Carroll

    PaulC:
    There seem to be some network problems around here today; I’ve had some trouble with various timeouts. Since you posted the comment which came up anonymous, I’ve updated it to show you as the author.

  12. Mark C. Chu-Carroll

    Mark Frank:
    I suspect that it’s just a matter of the old “errors in the stuff you know best are easiest to recognize” thing. For me, the errors in IT stick out like a sore thumb, because that’s much closer to my expertise; whereas I’m sure that for a probability theory person, the probability theory errors are the most glaring; etc. It’s all so damned wretchedly bad that the places where you can really see just how bad it is stand out, and the stuff you don’t know as well seems like it *must* be better in comparison.
    For example, I didn’t notice the misrepresentation of hypothesis testing that grabbed your attention. There was so much else wrong that was grabbing my attention that something like that, where I’d probably need to pull out a textbook to refresh myself on it, just slipped under my radar.
    But the most glaring things are really definition errors: the number of different definitions of specification he uses; the way that none of them are really justified; the way that he constantly shifts definitions in small ways without explanation; etc.
    The thing that I was really trying to get across in talking about this was that it’s a really sloppy paper. Things like the shift you point out – from “simpler” to “simpler less probable” – I really don’t think that he actually understands that a wording change like that *needs* justification. I’m convinced that he really doesn’t understand this stuff, and (to put it somewhat crudely) he’s just talking out his ass.

  13. Anonymous

    I think Dembski is capable of doing mathematics in the sense of manipulating formalism. Where he fails is when he tries to put it into a larger context or apply it to the problem he claims to.
    I’m not sure how much is sloppy reasoning and how much is a matter of dishonesty. My working assumption is that he intentionally tries to baffle his reader with BS. He brings in a lot of esoteric machinery (e.g. NFL theorems) to make points that if stated simply are clearly not relevant to his case. It’s the diametric opposite of good popularization: think of Feynman’s QED in which he attempted to explain some very difficult physics with elementary examples and an intuitive treatment of complex numbers. Dembski doesn’t want the reader to understand what he has to say. He wants the reader to come away thinking Dembski is very smart and must be right even if the exposition made absolutely no sense.

  14. Patrick Caldon

    I just read this up to page 30. It’s just Fred Hoyle’s jumbo jet with K-C window dressing.

  15. Mark Frank

    Anonymous – whoever you are. I think you make the point very well.
    It is interesting how Dembski sees so much of the world in digital terms. He starts with bit strings and poker hands – and then he goes on to treat bacterial flagella and even the universe in much the same way. So you end up with sophisticated-looking formulae that might, questionably, mean something applied to a well-defined domain such as poker hands, but are meaningless elsewhere. It’s ironic that someone who is so opposed to materialism should view reality as a large computer.

  16. Roger Rabbitt

    In another thread, Mark C. Chu-Carroll says:

    I seriously challenge you to find anywhere in the history of my blog, here or at blogger, where I’ve been less than honest. Abrasive, insulting, argumentative, even arrogant and obnoxious – those, I wouldn’t argue with. But dishonest? Put up or shut up.

    I’ll be more than happy. First, a comment on your “holier than thou” attitude. Here is you talking about Dembski:

    While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I’ve now become convinced that he’s just an idiot who’s able to throw around mathematical jargon without understanding it.

    When you decide to climb into the mud wrestling ring, you should understand that it isn’t only about trying to throw your opponent into the mud. He gets to do that too. And with tag-team mud wrestling . . . So spare me the “I’m shocked” response. I’ve seen it all before.
    So, let us get down to the details:

    The second line you quote is perfectly accurate. Dembski may not have *intended* it that way; he may not even understand that that’s an implication of his own argument. But it is *not* dishonest to point out that he very clearly defines “complexity” in many of his writings as complexity in the information theoretic sense of “high information content”; and then in the paper where he discusses specification, he defines specification as the IT sense of “low information content”. You might not like the fact that Dembski screwed up that badly, but it’s not dishonest to point out that Dembski’s own writings have defined specified complexity in a contradictory way.

    But whether I like it or not, you’ve merely opined that “Dembski screwed up that badly”. But you provide no evidence. For example, you say: “and then in the paper where he discusses specification, he defines specification as the IT sense of “low information content”.”
    Yet when I search the article for the phrase “low information content”, I don’t find it. So, what page is that on? And how do you reconcile these two claims:

    He’s pretty careful never to say “specification is this”. . .
    . . . then in the paper where he discusses specification, he defines specification as . . .

    Does he tell us what it is, or doesn’t he? You seem to be confused on the point of your very complaint.

    But what if Dembski defines specification and complexity as the same thing? Then his definitions are wrong: he requires them to be distinct concepts, but he defines them as being the same thing.

    And where does he do that? Give me the exact citations. That would seem to be game, set and match for you. Yet for unexplained reasons, you don’t do this.

    Throughout this paper, he pretty much ignores complexity to focus on specification.

    Yes, which is not surprising, since the title is “Specification: The Pattern That Signifies Intelligence”. But he doesn’t really ignore complexity, as it surfaces where relevant. For example:

    But given the usual way of computing probabilities, all one could say was that each of these sequences had the same small probability of occurring, namely, 1 in 2^100, or approximately 1 in 10^30.

    Then we get to this:

    But to make matters worse – this statement explicitly invalidates the entire concept of specified complexity. What this statement means – what it explicitly says if you understand the math – is that specification is the opposite of complexity. Anything which possesses the property of specification by definition does not possess the property of complexity.

    Of course, that’s nonsense. Complexity Dembski defines as:

    But a probability amplifier is also a complexity diminisher. For something to be complex, there must be many live possibilities that could take its place. Increasingly numerous live possibilities correspond to increasing improbability of any one of these possibilities. Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability. NFL page 183.

    So, returning to the coin toss example, specifications are patterns that we can simply describe. All 1’s, all 0’s, alternating 1’s and 0’s, etc. But the complexity has to do with how many times we toss the fair coin, and therefore how many possible outcomes we have for the bit string, irrespective of whether the outcome will be specified or not. The intersection of complexity and specification is what he calls specified complexity.
    Sounds like Dembski is the one being a straight shooter, and you’ve been caught lying.
    But feel free to provide those elusive citations that you claim exist.
    I’m all ears.

  17. Mark C. Chu-Carroll

    Roger:

    As you even quote me saying, I don’t dispute that I’m abrasive and insulting when it comes to jackasses like Dembski. That’s a different thing than lying. And the passage you quote is, I continue to maintain, entirely accurate. Until quite recently, I believed that Dembski was a basically competent mathematician who deliberately used his knowledge to mislead people. But things like the article I was discussing in this post changed my mind: I do not believe that he understands what he’s talking about.
    It’s true that you can search Dembski’s writing for the phrase “low information content”, and you won’t find it. But that’s part of the problem: the argument that he presents in the section of this paper on specification and compressibility quite precisely matches what IT defines as “low information content”. In information theory, a string has low information content if it’s highly compressible: the entire definition of information content in IT is based on compressibility. The less compressible a string is, the more random it is (in the sense of randomness of “not containing repeating patterns”), and the higher its information content. My point in discussing that was that for all of the articles he’s written that include IT-based discussions, he does not realize that the argument he’s putting forward contradicts the point he’d like to make. That is precisely why I’ve revised my opinion of him from “competent lying mathematician” to “incompetent mathematician”.
    For citation of the fact that specification and complexity are distinct notions to Dembski… It’s frankly more than a bit weird to insist that he doesn’t require them to be distinct notions. The crux of the SC argument is that if something possesses both specification and complexity, then it has the markings of design. If they’re not distinct, if they’re the same thing, then why even talk about “specified complexity”? But if you want a quotation, I’ll give you one. Look up his paper “Explaining Specified Complexity”, and read the first three paragraphs, in which he discusses the difference, and examples of things that possess one but not the other.
    The quote from Dembski that you provide supports my position if you understand the math! That’s defining complexity in terms of probability; but if you actually understand information theory, and what it means for a string to have high information content, the passage you quote is defining complexity as high information content.
    In your own closing explanation of SC, you’re making exactly the same mistake as Dembski. A string with low information content is a string that can be simply described. The simple description is a compressed form of the information content of the string. If it can be described simply – really described in a simple form – it has low information content. If it’s complex – it has high information content. If you say it has “specified complexity”, you are saying that you have a string with high information content which has very little information content. That’s what this stuff means. That’s why I say that I no longer believe Dembski is a competent mathematician – because a competent mathematician who uses information theory in his arguments would know that this is a profound and obvious error. A dishonest mathematician wouldn’t make a mistake this obvious.
    And finally: the fact that Dembski is quite careful to put weasel-words into his writings doesn’t excuse his errors. Go through that entire article on specification, and see if he’s ever willing to explicitly commit himself to a mathematical definition: he’s clearly presenting arguments for how to define specification, while keeping the weasel-words in so that he can’t get nailed down. Saying, in essence, “I’m not defining X, but assume that the definition of X is …” doesn’t mean that you get to weasel out when the definition you presented is shown invalid.
    I’m still waiting to see where you say I’m *lying*. Everything you quote is legitimate criticism of Dembski. You can claim my arguments are wrong, in which case I would expect you to present an argument for why they’re wrong. But lying is a much stronger claim: nothing that you mention up there is a lie. What in all of that stuff is a deliberate misrepresentation of fact?

  18. Roger Rabbitt

    I could contest much of what you said, but this will do to show your blindness to what Dembski’s point is:

    In your own closing explanation of SC, you’re making exactly the same mistake as Dembski. A string with low information content is a string that can be simply described. The simple description is a compressed form of the information content of the string. If it can be described simply – really described in a simple form – it has low information content. If it’s complex – it has high information content. If you say it has “specified complexity”, you are saying that you have a string with high information content which has very little information content. That’s what this stuff means. That’s why I say that I no longer believe Dembski is a competent mathematician – because a competent mathematician who uses information theory in his arguments would know that this is a profound and obvious error. A dishonest mathematician wouldn’t make a mistake this obvious.

    You look pretty foolish here. All this talk about the “string”, but nothing about the underlying event. That is where the complexity is determined. Consider this:
    We have two scenarios: one is the hundred flips of a fair coin à la Dembski. The other is the reporting of the result of a programmed bit string printing machine, which prints 99 1’s, followed by either a 1 or a 0. Now, we report that the coin flipping produces 100 1’s. Ditto for the bit printing machine. Which result contains more information? One eliminates 10^30 possibilities, the other only one.
    By viewing the string in isolation, you miss its significance. You are obsessed with one aspect of the math, to the exclusion of all other math considerations involved.
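    For what it’s worth, the gap between those two scenarios falls straight out of Shannon’s event-based measure (a quick sketch; the self-information of an event with probability p is -log2 p):

        import math

        def self_information(p: float) -> float:
            # Shannon self-information, in bits, of an event with probability p.
            return -math.log2(p)

        # The same all-1s string, as a fair-coin outcome versus as the output
        # of a machine guaranteed to print 99 1s followed by one random bit.
        print(self_information(2.0 ** -100))  # 100.0 bits
        print(self_information(0.5))          # 1.0 bit
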

  19. Mark C. Chu-Carroll

    Roger:
    You’re the one who’s making yourself look foolish.
    First – you claim to be showing why I’m lying. And yet, you’re lecturing me about how I’m purportedly not understanding Dembski. What happened to that very specific accusation that I’m a liar?
    Second, you clearly don’t understand information theory, but information theory is what we’re discussing here – in particular, the definitions of information content used by information theory. That is what *Dembski* uses in his definition of complexity, and it’s what *Dembski* uses in his pseudo-definition of specification in the text I cited.
    The “one aspect of the math” that you’re claiming I’m obsessed over is the one aspect of math that Dembski is using in his definition. And it’s the specific field of mathematics that addresses the precise issue that you claim is the significance of Dembski’s pseudo-definition.
    Dembski’s definition of specification *means* low information content. Dembski’s definition of complexity *means* high information content. Dembski’s definition of specified complexity therefore means “low information content with high information content”.

  20. Roger Rabbitt

    Information always presupposes a range of possibilities, and conveying information means ruling out some of those possibilities. It follows that information can be quantified. Indeed, the more possibilities that get ruled out, the more information gets conveyed. Fred Dretske (1981: 4) elaborates: “Information theory identifies the amount of information associated with, or generated by, the occurrence of an event (or the realization of a state of affairs) with the reduction in uncertainty, the elimination of possibilities, represented by that event or state of affairs”. NFL page 125

  21. Mark C. Chu-Carroll

    Roger:
    Quoting definitions that you don’t understand doesn’t change the fact that neither you nor Dembski understand them.
    Go read some Chaitin: http://cs.umaine.edu/~chaitin/. Greg is the co-inventor of K-C information theory (the C is Chaitin), and a really amazing writer. He’s written a series of books that actually make IT comprehensible to a lay audience without losing any of the rigour. His website includes some of his lectures on the subject, which are really excellent. And you can check Dembski’s papers to see that K-C information theory is what Dembski is using.

  22. Roger Rabbitt

    Don’t know if my last posting of last night inadvertently fell into the bit bucket, or was directed there because you wearied of the discussion.
    If this doesn’t make it through, I’ll assume the latter.

  23. Troublesome Frog

    We have two scenarios: one is the hundred flips of a fair coin à la Dembski. The other is the reporting of the result of a programmed bit string printing machine, which prints 99 1’s, followed by either a 1 or a 0. Now, we report that the coin flipping produces 100 1’s. Ditto for the bit printing machine. Which result contains more information? One eliminates 10^30 possibilities, the other only one.

    So, are you saying that the information content of a sequence of nucleotides differs depending on whether it was generated using mutations or magic? Does the version that uses magic eliminate fewer possibilities or more possibilities?

  24. jackd

    Roger Rabbit: All this talk about the “string”, but nothing about the underlying event. That is where the complexity is determined.
    If that’s true, then Specified Complexity is useless in making the inference of design, because complexity would not be a property of the output (the string).

  25. Mark C. Chu-Carroll

    Roger:
    The only time I have ever deleted comments from my blog is when they’re porno-spam. (For some reason, there’ve been several waves of porno-spam in the comments over the last week, but I’ve been very careful about deletions.) I assure you I didn’t delete anything that you posted. There was one comment which I approved last night which was just a one-paragraph quotation from one of Dembski’s books, with no context… Perhaps you made an editing error?

  26. tgibbs

    I thought the most amusing part of Dembski’s screed was the following:

    there is never any need to consider replicational resources M·N that exceed 10^120 (say, by invoking inflationary cosmologies or quantum many-worlds) because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do

    Translated into English, this boils down to “there is no reason to consider the possibility that the universe might be larger than the part of it that we can see from here, because if I don’t know the size of the universe, then it is impossible for me to make the kind of argument that I am trying to make.”
    To him, this corresponds to a “wholesale breakdown in statistical reasoning.” But in fact, hardly any statistical reasoning other than Dembski’s is dependent upon knowing the size of the universe. He rather drastically misrepresents Fisher’s approach in suggesting that Fisher was trying to define a threshold that would absolutely eliminate chance–which is basically suggesting that Fisher was too stupid to realize that events with a probability of 1 in 20 or 1 in 100 do in fact occur rather frequently. Statistical significance has never been understood by anybody as absolutely eliminating chance–rather it defines a frequency of error that is acceptable in many contexts.
    Of course, it doesn’t matter, because in the end, Dembski’s argument boils down to “if the likelihood of evolution of functional structures of this complexity by evolutionary mechanisms is very, very small, then we need to consider the possibility that the structure did not form by evolutionary mechanisms.” I think that most biologists would agree with this, and most likely would not require a probability as low as Dembski’s 10^-120 to convince them to seek other explanations. Unfortunately, neither Dembski nor anybody else has any idea of how to reliably calculate the likelihood of evolution of structures of particular specified complexity (whatever that might mean).

  27. Mark C. Chu-Carroll

    tgibbs:
    I really hate the argument that you cite 🙂 It’s one which is constantly brought up by creationists and other idiots. It’s basically an argument that comes down to “My imagination defines the limits of the universe”. If a probability gets so small that I can’t understand it, then it must be impossible. If a process takes steps that I can’t understand, it can’t happen. The universe can’t be bigger than what I can see.
    The Dembski line is really just a variation on that: if the universe is bigger than what we can see from earth, then all statistical reasoning suddenly becomes meaningless; not because there’s anything wrong with statistics, but because there’s something wrong with the concept of things being more than what a human being can observe. If we can’t see it, it can’t exist; even postulating the existence of anything beyond what we can see is irrational and meaningless. It’s just the most arrogant kind of argument.

  28. BronzeDog

    The Dembski line is really just a variation on that: if the universe is bigger than what we can see from earth, then all statistical reasoning suddenly becomes meaningless; not because there’s anything wrong with statistics, but because there’s something wrong with the concept of things being more than what a human being can observe. If we can’t see it, it can’t exist; even postulating the existence of anything beyond what we can see is irrational and meaningless. It’s just the most arrogant kind of argument.

    There are more things in Heaven and Earth than are dreamt of in your philosophy, Mr. Dembski.

  29. PaulC

    Dembski quote:

    because to do so leads to a wholesale breakdown in statistical reasoning, and that’s something no one in his saner moments is prepared to do

    This kind of statement is the red flag of a non-proof.
    It’s reminiscent of attempts to find a proof of Euclid’s parallel postulate in terms of the other axioms. I’m not sure if this is the case I remember hearing, but a quick search turned up Saccheri, who attempted to assume the parallel postulate was untrue and derive a contradiction:
    http://www.southernct.edu/~grant/nicolai/history.html

    But Saccheri was so convinced that his thread of logic must ultimately snag that he actually knotted it himself. After skillfully proving many valuable results, he ended by forcing a weak and vague conclusion about lines that merge at infinity, which he partially persuaded himself to be a logical contradiction. It apparently convinced almost no one, and even Saccheri himself was sufficiently skeptical to attempt another solution. However, his second effort was no more successful than the first.

    There is also the case of the Poisson spot–a bright spot that appears in the shadow of a sphere. http://www.schillerinstitute.org/fid_97-01/993poisson_jbt.html Poisson derived its existence by following the logic of a paper by Fresnel on the wave theory of light. The intent was to show the paper to be obviously wrong, but in fact, the spot turned out to be real.
    If your argument ends with something like “and then 1 would have to be 0” then you can be satisfied that you have found a successful reductio ad absurdum argument. If it ends “and that would just be unthinkable” then you have a non-proof attesting to nothing but your own unwillingness to think beyond a certain point.

  30. secondclass

    Roger Rabbit: All this talk about the “string”, but nothing about the underlying event. That is where the complexity is determined.

    If that’s true, then Specified Complexity is useless in making the inference of design, because complexity would not be a property of the output (the string).

    That’s a key point. Is Specified Complexity a function of the end product only, or is it also a function of the causal story? Can you answer that question, Roger R? No matter which way you answer it, I’ll provide quotes by Dembski that contradict your answer.
    Dembski’s been working on specified complexity for 15 years now. You would think that by now he would have a consistent answer to this very fundamental question.

  31. garote

    I think PaulC said exactly what needs to be said, that grants us all hearty permission to ignore Dembski’s work. I think it bears repeating:
    “If I am betting on coin flips and notice any kind of algorithmic pattern then there are legitimate reasons to conclude that the person flipping coins is not playing fair. However, this method does not allow one to discriminate between the product of evolution and the product of “intelligent design” since neither of these processes in any way resemble random coin flips.”
    Dembski’s feverishly comparing apples to oranges, to prove that one is an artichoke.

  32. Mark C. Chu-Carroll

    garote:
    I actually disagree with you.
    My whole viewpoint on mathematics is that much of its value comes from the ability to look at something and abstract it down to a simple form based on its fundamental principles. That process of abstraction – of stripping away the details to identify a simpler problem that has the relevant features of the real-world problem – is a really useful thing to do, and you can often discover a lot of interesting things by performing that abstraction.
    In a way, the “coin-flip” thing is an abstraction of a complex process to something simple that contains some relevant feature.
    The problem WRT Dembski isn’t that he’s reducing a problem to an abstraction and analyzing the abstraction. The problem is that you need to make sure that your abstraction captures all of the relevant features of the thing you’re discussing. You need to validate the abstraction as a model of the real world.
    Dembski’s reduction of the process of analyzing certain features of the universe to a metaphor involving recognition of patterns in coin flips is fine with me. The problem is that he creates a model that omits important elements of the real world, and then insists that the conclusions drawn from his abstract model are applicable to the real world, even though his model is a poor representation of the real world.
    In the case of the coin flip metaphor: if you’ve reduced the “problem” of observing patterns to one of flipping coins, and you notice some pattern in the results of a sequence of coin-flips, you could perhaps conclude that someone is cheating.
    But if, in your abstract model of coin-flipping, you’ve eliminated the possibility of non-deliberately biased coins; and you’ve assumed that the table that the coin lands on is perfectly smooth, eliminating any bias-factor that could be produced by irregularities in the surface; then your process of abstraction has deliberately removed relevant features of the real world. So if you observed a seemingly non-random result from a sequence of real coin flips, and you concluded that the only possible source of that apparent non-randomness is deliberate intervention, then you’d be drawing an invalid conclusion. Because there are a number of factors – irregularities in the coin, irregularities in the surface that it lands on, accidental bias in the way the coin is thrown – that could explain the observed results; but you excluded them from consideration.
    To be a bit more concrete: take a Dembski-ish model of the world. Now, point an antenna at a random point in the sky. If you see what looks like a highly regular repeated pattern of on-off pulses of radio waves, can you conclude that there must be an intelligent agent creating that regular pattern? No. There are things like pulsars – which produce very regular pulse patterns. Sure, “I shouldn’t see anything with a regular pattern” is an OK abstraction, but when you see something that seemingly violates the conclusion that you drew from the abstraction, you need to ask “Is it my model that’s wrong?”. You can’t just wave your hands and say “Ooh, look, a pattern; my model says that can’t happen unless there’s something intelligent producing it, so I just proved there’s extraterrestrial intelligent life.”
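    As a toy version of that point – my sketch, with zlib’s output size standing in for a pattern test – a mundane physical bias, with no agent anywhere in sight, already produces output that such a test would flag:

        import random
        import zlib

        random.seed(1)  # reproducible

        def flips(bias: float, n: int = 4000) -> bytes:
            # n coin flips, one byte per flip, heads with probability `bias`.
            return bytes(1 if random.random() < bias else 0 for _ in range(n))

        # The biased coin's output is far more compressible -- a "pattern" --
        # but the cause is a lopsided coin, not an intelligent agent.
        print(len(zlib.compress(flips(0.5))))   # fair coin: larger
        print(len(zlib.compress(flips(0.95))))  # biased coin: much smaller
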

  33. Roger Rabbitt

    Mark C. Chu-Carroll says:

    The only time I have ever deleted comments from my blog is when they’re porno-spam. (For some reason, there’ve been several waves of porno-spam in the comments over the last week, but I’ve been very careful about deletions.) I assure you I didn’t delete anything that you posted. There was one comment which I approved last night which was just a one-paragraph quotation from one of Dembski’s books, with no context… Perhaps you made an editing error?

    Fair enough, I’ll just assume the mysterious bit-bucket scenario. But I don’t understand the “with no context” comment. How does Dembski’s definition of “information” lack context in a thread where what he means by certain terms, including “information”, is the subject of some dispute?

  34. Roger Rabbitt

    Troublesome Frog says:

    So, are you saying that the information content of a sequence of nucleotides differs depending on whether it was generated using mutations or magic? Does the version that uses magic eliminate fewer possibilities or more possibilities?

    You must have me confused with another poster, since I made no mention of “magic”, nor do I understand what you think that word means.
    jackd says:

    If that’s true, then Specified Complexity is useless in making the inference of design, because complexity would not be a property of the output (the string).

    “Output” of what? Isn’t that the issue I raised? According to MCC, and how he is trying to force definitions onto Dembski’s text, the definition is that of an “input” not “output” string. IOW, the source of the string is irrelevant to the “information” it contains. He would like you to believe that is the only definition relevant to information theory. But that isn’t true:
    http://en.wikipedia.org/wiki/Self-information

    Within the context of information theory, self-information is defined as the amount of information that knowledge about (the outcome of) a certain event, adds to someone’s overall knowledge. The amount of self-information is expressed in the unit of information: a bit.
    By definition, the amount of self-information contained in a probabilistic event depends only on the probability p of that event. More specifically: the smaller this probability is, the larger is the self-information associated with receiving information that the event indeed occurred.

    http://www.answers.com/topic/information-theory

    Self-information
    Shannon defined a measure of information content called the self-information or surprisal of a message m:
    I(m) = −log p(m)
    where p(m) = Pr(M = m) is the probability that message m is chosen from all possible choices in the message space M.
    This equation causes messages with lower probabilities to contribute more to the overall value of I(m). In other words, infrequently occurring messages are more valuable. (This is a consequence of the property of logarithms that −log p(m) is very large when p(m) is near 0 for unlikely messages and very small when p(m) is near 1 for almost certain messages).
    For example, if John says “See you later, honey” to his wife every morning before leaving to office, that information holds little “content” or “value”. But, if he shouts “Get lost” at his wife one morning, then that message holds more value or content (because, supposedly, the probability of him choosing that message is very low).
    Entropy
    The entropy of a discrete message space M is a measure of the amount of uncertainty one has about which message will be chosen. It is defined as the average self-information of a message m from that message space:
    H(M) = −Σ p(m) log p(m), summed over all messages m in M
    The logarithm in the formula is usually taken to base 2, and entropy is measured in bits. An important property of entropy is that it is maximized when all the messages in the message space are equiprobable. In this case H(M) = log |M|.

    So, by this view of the amount of information, we need to know something about the event and its probability. For example, with a simple bit string example, a string of 100 “1”‘s doesn’t necessarily imply information or the lack thereof by Dembski’s definition. Lawlike behavior can produce something like that. It is only in the context of the “tossing of a fair coin” that we can begin to construct the probabilities, and hence the information involved. That is different than the “information” as MCC is viewing it.
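    The equiprobability property quoted above is easy to check numerically (a quick sketch):

        import math

        def entropy(probs) -> float:
            # Shannon entropy in bits: the average self-information of the space.
            return -sum(p * math.log2(p) for p in probs if p > 0)

        # Maximized when all messages are equiprobable: H(M) = log2(|M|).
        print(entropy([0.25] * 4))            # 2.0 bits, the maximum for |M| = 4
        print(entropy([0.7, 0.1, 0.1, 0.1]))  # about 1.36 bits
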

  35. Mark C. Chu-Carroll

    Roger:
    Until you address the fact that you have specifically accused me of lying, I’m not interested in continuing the discussion. Either show how I’m lying, or retract the claim; either way, I’ll happily continue the discussion.
    But being called a liar doesn’t sit well with me. So either put up – and show that I’ve lied here; or admit that you were wrong in accusing me of lying. Otherwise, as far as I’m concerned, the discussion is over.

  36. Roger Rabbitt

    Mark C. Chu-Carroll says:

    Until you address the fact that you have specifically accused me of lying, I’m not interested in continuing the discussion. Either show how I’m lying, or retract the claim; either way, I’ll happily continue the discussion.
    But being called a liar doesn’t sit well with me. So either put up – and show that I’ve lied here; or admit that you were wrong in accusing me of lying. Otherwise, as far as I’m concerned, the discussion is over.

    I thought we had dealt with that. Indeed, I only replied to your claims about Dembski’s being dishonest. Yet, you provide no evidence that he is, and brush it off by labeling it “[a]brasive, insulting, argumentative, even arrogant and obnoxious”. Is calling Dembski a liar merely being “[a]brasive, insulting, argumentative, even arrogant and obnoxious”, or is it making a specific claim? If the former, your current case fails by definition. If the latter, then you have failed to make your own case on the merits.
    You are free to obsess about this, or ban me from your blog, as you wish. The point I am making, which seems to have sailed over your head, is that what’s sauce for the goose is sauce for the gander. And an even more important message for you (which seems will elude you for now, but may be exposed to others who haven’t so much personally invested) is that there is a reason that ad homs are a logical fallacy.
    You approach Dembski with the a priori conviction that he is dishonest and an idiot, and lo and behold, that is what you “see” in his words. But PaulC, a guy who disagrees with Dembski’s conclusions, nevertheless didn’t stumble into that trap. He saw where Dembski made defensible statements. A guy like me, who as you claim doesn’t “understand information theory”, has no problem understanding Dembski, and easily finding support for his claims from non-ID people involved in information theory.
    That is what should embarrass you. Not that some pseud on the internet called you dishonest.
    And for those of you familiar with the film “Flock of Dodos”, this is part of what Randy Olson is trying to figure out.

  37. Mark C. Chu-Carroll

    Roger:
    As I said – no discussion until you either back up your claim that I’m a liar, or withdraw it. Playing semantic games to avoid the fact that you specifically called me a liar, and said you were going to demonstrate it, isn’t going to get around it. Back it up, withdraw it, or shut up.

  38. Roger Rabbitt

    MCC says:

    As I said – no discussion until you either . . .

    I’ll let this be my last posting here, whether it gets thru or not. I can’t imagine what you think I would want to discuss with you now. You are the classic case of the fella who can dish it out, but can’t take it. Most discussions that interest me are with folks with whom I can establish some kind of mutual ground rules, frequently implicit, about what goes and what doesn’t. But you wish to always be the “special case”, dishing it out but claiming immunity from return fire. I held up the mirror to you, reflecting your own language back to you, and you freaked out and started whining about retractions.
    I’ve given more evidence for my “charge” against you, than you have against Dembski, but you refuse to consider that. You are obsessed about your own hurt ego. I see no sense of any concern about the charges you have made against Dembski, despite the fact that both PaulC and I have shown you errors in your critique of the Dembski article.
    You can only do something about that which you control. If I saw any effort on your part to crank down the rhetoric which you used prior to my entry into the debate, I would be more than willing to defuse the issue and move on. Indeed, it was my hope that by holding up the mirror to your rhetoric you could see the problem with it. Alas, I misjudged your ability to evaluate your own behavior. To yourself, you will always be the “religious jew[,] [b]ut [] an honest one – which means that I do not play stupid games”, despite all evidence to the contrary.
    I have no problem with discussions that are “[a]brasive, insulting, argumentative, even arrogant and obnoxious”, or those that are more genteel. But I have a low tolerance for baby-sitting folks who think they are immune to the standards they themselves promulgate.

  39. Mark Chu-Carroll

    Roger:
    You don’t seem to get it, at all.
    You’re babbling about math you don’t understand, and then going out of your way to throw personal insults.
    Statements like “you’re focusing on the string instead of the event” are an absolute demonstration that *you don’t understand information theory*. There are two ways of understanding information: the on-line perspective (which is the perspective of observing the event, and characterizing information in terms of “surprise”), and the off-line perspective (which is looking at the string that results, and considering its compressibility).
    You can play rhetorical games all you want, but in the end, it comes down to this: you’re making nonsensical mathematical arguments, and you’ve made very specific accusations of dishonesty against me which you can’t be bothered to justify.
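    To make the two perspectives concrete, here is a minimal Python sketch. It is my own illustration, not anything from Dembski’s paper: it uses a fair-coin model for the on-line “surprise” measure, and zlib compression as a crude stand-in for Kolmogorov complexity (which is uncomputable, so a real compressor only ever gives an upper bound).

    import math
    import random
    import zlib

    def online_information(bits: str, p_heads: float = 0.5) -> float:
        """On-line view: total surprisal, -log2 P(symbol), summed as the
        event unfolds one coin flip at a time."""
        return sum(-math.log2(p_heads if b == "1" else 1.0 - p_heads)
                   for b in bits)

    def offline_information(bits: str) -> int:
        """Off-line view: a crude upper bound on K-C complexity, taken as
        the bit-length of the zlib-compressed string."""
        return 8 * len(zlib.compress(bits.encode()))

    random.seed(0)
    uniform = "1" * 100                                     # "all heads"
    typical = "".join(random.choice("01") for _ in range(100))

    # Under a fair-coin model, every specific 100-flip sequence carries
    # exactly 100 bits of surprisal, so the on-line measure cannot tell
    # these two sequences apart...
    print(online_information(uniform), online_information(typical))  # 100.0 100.0

    # ...but the off-line measure can: the all-heads string compresses
    # far better than a typical random one.
    print(offline_information(uniform) < offline_information(typical))  # True

    The point is exactly the distinction above: surprise is a property of the event under a probability model, while compressibility is a property of the string the event leaves behind.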

    Reply
  40. Jan

    Mark:
    I would like to thank you for a well-written piece on the many mistakes that Dembski makes. When I first saw Dembski use these mathematical concepts, I knew that he had either abused them or misunderstood them. But I did not know exactly which until I read this entry.

    Reply
  41. Mark Perakh

    Whereas I don’t think using words like “idiot” is helpful, in the debate between Mark Chu-Carroll and Roger Rabbitt I tend to agree more with MCC than with RR. What puzzled me in this debate was that both sides ignored earlier publications wherein Dembski’s treatment of specification was discussed in thorough detail (see, for example, the essay by Wesley Elsberry and Jeff Shallit at http://www.talkreason.org/articles/eandsdembski.pdf ; BTW, Shallit taught Kolmogorov complexity in a class where Dembski was a student; it seems Dembski did not absorb Shallit’s lectures well enough).
    Regarding Dembski’s dishonesty (which Roger seems to doubt) see, for example, http://www.talkreason.org/articles/creative.cfm .

    Reply
  42. Torbjörn Larsson

    Mark P:
    As I understand the point of Mark’s blog, it is that he looks for and criticizes bad math where he sees it, without necessarily bothering to study prior art. But I’m sure your references are welcome; I found them interesting.
    Speaking of which, in Elsberry’s and Shallit’s essay I can’t find, at a quick glance, the rather short and elegant analysis that Mark does in his post. I also note that they seem to conflate SC and CSI, which as I understand it have the same basis, though CSI makes a more specific and stronger claim and is used differently by Dembski. Of course, taking care of CSI takes care of SC, which is what they do. But I can’t see that they make that point explicit.
    I also think Mark has come to the conclusion that Dembski’s math shows he is inept at it, lying or not. It indeed seems that in other areas he is more convincing as a liar.

    Reply
  43. dave fitz

    If information does not prove anything about its source, as seen in the coin-flip vs. bit-machine example, then Dembski’s mighty effort of 15 years to prove source based on information…
    …is all a total waste.
    No matter to what absurd and apparently unreachable heights Dembski pushes the improbability of life on Earth, those arguing for chance can simply expand the probabilities infinitely. Maybe there were a million universes which existed lifeless before collapsing back into the cosmic egg, and this is the millionth and first. Other than baseless assertion and pointless handwaving, what could possibly end the argument in Dembski’s favor?
    The most irritating part of this, to me, is that Dembski keeps asserting, without evidence, that he can construct a confidence interval to trap the presence of a First Cause. Such a claim is meaningless in both science and theology, and resembles some kind of near-waking dream or maybe a shroom fantasy. Can anyone explain why we’re still having this conversation?
    Dave

    Reply
  44. pwe

    Hi Mark;
    As you indicate, it’s a problem that Dembski uses the word “complexity” in more than one sense, and if you don’t keep track of which meaning is used where, things get confusing.
    At p. 15 Dembski writes:

    It’s this combination of pattern-simplicity (i.e., easy description of pattern) and event-complexity (i.e., difficulty of reproducing the corresponding event by chance) that makes the pattern exhibited by (ψR) – but not (R) – a specification.

    At p. 16 he writes:

    To continue our story, given that S has noticed that E exhibits the pattern T, S‘s background knowledge now induces a descriptive complexity of T, which measures the simplest way S has of describing T.

    Now, “pattern-simplicity” and “descriptive complexity” are one and the same, and at least analogous to K-C complexity. However, “event-complexity” from p. 15 is the improbability of the event occurring by chance.
    Say you toss a fair coin 100 times, and then you toss it another 100 times. What is the probability of getting the same sequence in both rounds? It’s 1/2^100, a low probability, which therefore corresponds to a high event-complexity.
    This complexity, the event-complexity, is independent of the simplicity/complexity of the description of the event. If the sequence happens to be “all 1s”, there’s a simple description; but in most cases there is not. However, my point is that Dembski uses the word “complexity” in two different and unrelated ways, and the way it’s supposed to be used in the term “specified complexity” is in the probability sense. That’s just the way it is, believe me 🙂
    That is, an event with high specified complexity is an event that is simple to describe (low descriptive complexity) but difficult to (re)produce without cheating – where “difficult” means “unlikely to occur in one try”.
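    To put that distinction in symbols (my notation, writing K(s) for descriptive complexity; this is a gloss on the point above, not Dembski’s own formalism): for any specific sequence s of n fair-coin flips,

    \[
      P(s) = 2^{-n}
      \quad\Longrightarrow\quad
      \text{event-complexity}(s) = -\log_2 P(s) = n
      \quad \text{for every } s,
    \]

    while the descriptive complexity varies from sequence to sequence:

    \[
      K(\underbrace{11\cdots 1}_{n}) = O(\log n),
      \qquad
      K(s) \approx n \ \text{for a typical random } s.
    \]

    Event-complexity depends only on the chance hypothesis and is identical for every specific outcome; descriptive complexity depends only on the string and varies wildly between outcomes. That independence is why the two can come apart.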

    Reply
  45. t

    Why does someone who is a ‘scientist’ have to resort to name-calling and personal attacks when debating a scientific theory? You sound like someone who is very angry and defending their faith! You don’t need a Ph.D. to figure out this is not about ‘science’; it’s personal, and ugly.
    I’ve noticed that darwiniacs try to silence their critics, as in the Kitzmiller case. If that doesn’t work, then they quickly resort to personal attacks. If you were so secure in your ‘theory’, then you would welcome any challenges. But you act more like Islamo-fascists when they think Islam has been insulted.
    You do your side no credit.

    Reply
  46. Mark C. Chu-Carroll

    For all your complaints, I notice that you don’t bother to address *any* of my actual criticisms of Dembski and his sloppy math. The simple fact of the matter is: Dembski *does not* understand information theory, and makes incredibly foolish and inept mistakes. And you *can’t* refute that, which is why you’re focusing on the *tone* of my critique rather than the content.

    Reply
  47. t

    Why should I bother to get into a catfight with someone who may have intellect, but no maturity? Why should I bring myself to your level? You wouldn’t believe anything I said anyway; you are firm in your faith.
    And make no mistake, this is not ‘science’, it’s faith. You perceive it to be under attack, and you react with hatred and anger. I would never raise my voice to my colleagues, or call them names, when discussing something related to my profession. I’ve noticed it’s easy to call people names on a message board, but much harder in person.

    Reply
  48. Mark C. Chu-Carroll

    Yes, no doubt, you’re so superior to a lowly swine like me that I’m not worth talking to. So you’ll lecture me on manners; and you’ll come back to my blog repeatedly to reply; and you’ll tell me that what I’ve written is “not science”.
    But addressing the actual *content* of my post? No. That’s clearly over the line, because I’m not worth it.
    The fact remains: Dembski does not get information theory; and he’s defined his terms so that his fundamental concept of “specified complexity” is meaningless.

    Reply
  49. t

    The fact remains: when you have to resort to name-calling, you’ve already lost the argument! Looks like I got under your skin! You darwiniacs, and yes, Ann is right about people like you, are very sensitive when it comes to your faith, aren’t you? You have to try to silence your opponents, and when that doesn’t work, personally attack them. You’re desperately trying to make a name for yourself. I tried to find where Dembski has responded to you, but I couldn’t in a quick search; perhaps he doesn’t think you’re worth responding to? That’s what really bothers you, isn’t it? He has responded to many others, but not to you!
    I can’t argue the math, I’m not a mathematician. But what we’re really talking about is evolution. You look around at all this complexity and see it all happening by chance. But Hoyle, who was probably a better mathematician than you, said it was like the chance of a tornado going through a junkyard and creating a 747. You look at the universe, and again Hoyle said: “A common-sense interpretation of the facts suggests that a superintellect has monkeyed with the physics.”
    Then the darwiniacs resort to just-so stories… given this, then that, and in desperation resort to calling it ‘evolution’ when a bacterium loses sensitivity to an antibiotic. When it’s still a bacterium, and, according to Spetner, whom I’m sure you disagree with, the change causes the bacterium to lose information.
    You want to make a name for yourself? It’s simple: prove evolution. Which should be easy as pie! Since life arose by chance, using what… INTELLIGENCE, you should be able to produce life! Do that, and you win the Nobel Prize, are set for life, and prove Dembski, Behe, and all those other hicks wrong!!! Just think of the glory! Just mix a few chemicals, zap it with electricity (à la Frankenstein) and voilà, LIFE!! Should be easy, since, according to Darwin, life arose by chance, dumb luck!
    But you can’t. Dawkins was right: biologists study very complex things that look like they were designed… he should have stopped there, because it goes against common sense to think that all this immense complexity, which we barely understand (what’s dark matter again?), just came here by chance.

    Reply
  50. Mark C. Chu-Carroll

    You can babble all you like. But you’ve just *admitted* that you don’t understand the post that you’re criticizing.
    You started off by saying “this is not science”. But when you’re pressed to say *why* it’s not science; *why* my criticism of Dembski is invalid, what do you do? You admit that “I can’t argue the math, I’m not a mathematician.”
    The whole point of this post was that Dembski’s math is *wrong*. Badly, god-awfully wrong. I’m not trying to make a one-page proof of the theory of evolution. I’m criticizing sloppy, bad, *wrong* math by someone who presents himself as a mathematician.
    Part of the beauty of math – and science – is that I don’t *have* to be as smart as Hoyle. I don’t even have to be as smart as Dembski. As long as I *show the math*, and my math is valid, the argument is over.
    Dembski argues that specification = low K-C complexity, and complexity = high K-C complexity; therefore specified complexity is “high and low K-C complexity at the same time”.
    Unless I’m *wrong* that Dembski, in the paper quoted above, describes specification as low K-C complexity, his SC definition is a pile of gibberish. If I’m wrong about Dembski’s definition of specification, then show me how.
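    Spelled out, with illustrative thresholds a and b that are my shorthand, not anything Dembski states: if “specification” means K(s) ≤ a, and “complexity” means K(s) ≥ b, and the two notions are to be distinct at all, then a < b, so “specified complexity” demands

    \[
      K(s) \;\le\; a \;<\; b \;\le\; K(s),
    \]

    which no string s can satisfy.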
    Remember – you’re the one who started this little discussion by asserting that the argument here “isn’t science”. If you can’t actually make any argument for *why* it wasn’t science, then just what does that make you? A four letter word comes to mind…

    Reply
  51. t

    Why should I believe YOUR math over Dembski’s, when you have to resort to calling him names!! And no, it’s not ‘science’ when you resort to name-calling, but I’ve noticed you darwiniacs call whatever you want ‘science’ and ‘evolution’!
    I notice you couldn’t answer any of my points, but hey, that’s OK. Have you created that life form yet? What’s taking you so long?? Having a little problem?? HMMMM?? Well, let’s make it easy for you: go ahead and take an existing life form and ‘evolve’ it into sd98fhvw (something NEW), given that they’ve tried for thousands of generations with fruit flies and bacteria and have had no luck… I’m sure you, with your ‘intellect’, will have no problems!!
    As far as calling me a ‘four letter word’, isn’t that nice. Real ‘tough’ on a message board, but you wouldn’t do it to my face, you punk ass little !!!! HAHAHAHAAHHA

    Reply
  52. Mark C. Chu-Carroll

    Why should you believe my math over Dembski’s? That’s an easy one.
    As I said in my last comment: the beauty of math is that *anyone* can do it. It’s not that it’s *my* math. It’s that it’s *math*. It’s not just me blindly asserting my belief over Dembski’s. It’s me presenting *the mathematical argument that demonstrates that Dembski is wrong*. Anyone who knows math can read a mathematical argument, and see if the math is valid. The reason why you should believe my math over Dembski’s is quite simple: because the argument is valid mathematics. I’ve shown what’s wrong with Dembski’s math.
    Incidentally, the four letter word I was thinking of was “liar”. You popped up and claimed that my article was not science, but you later admitted that you didn’t understand the actual *content* of the article. So you were lying when you claimed to be able to say whether or not there was any scientific content.

    Reply
  53. Bronze Dog

    Probably long-gone troll:

    thanks for proving my points LOSER!!

    WHAT points? You never made one.
    Reminds me of some trolls who seemed to think that keeping my gmail account open on a weekend proves that Sylvia Browne is psychic.

    Reply
  54. Wesley R. Elsberry

    Sorry for not seeing this sooner.
    T. Larsson wrote:

    Speaking of which, in Elsberry’s and Shallit’s essay I can’t find, at a quick glance, the rather short and elegant analysis that Mark does in his post. I also note that they seem to conflate SC and CSI, which as I understand it has the same basings while CSI makes a more specific and stronger claim and is used differently by Dembski. Of course, taking care of CSI takes care of SC which is what they do. But I can’t see that they care to make that point explicit.

    We did make the point that K-C complexity has nothing to do with probability in our discussion of Dembski’s misuse of Davies. There we also noted the relation between CSI and compressibility.

    Dembski also identifies CSI or “specified complexity” with similarly-worded concepts in the literature. But these identifications are little more than equivocation. For example, Dembski quotes Paul Davies’ book, The Fifth Miracle, where Davies uses the term “specified complexity”, and strongly implies that Davies’ use of the term is the same as his own [19, p. 180]. This is simply false. For Davies, the term “complexity” means high Kolmogorov complexity, and has nothing to do with improbability. In contrast, Dembski himself associates CSI with low Kolmogorov complexity:

        It is CSI that within the Chaitin-Kolmogorov-Solomonov theory of algorithmic information identifies the highly compressible, nonrandom strings of digits… [19, p. 144]

    (Note that in algorithmic information theory, “highly compressible” is synonymous with “low Kolmogorov complexity”; see the Appendix.) Therefore Dembski’s and Davies’ use of “specified complexity” are incompatible, and it is nonsensical to equate them.

    Dembski uses SC, CSI, and “complexity-specification” as synonymous phrases. See the preface of “No Free Lunch” for an example of Dembski explicitly doing so for “specified complexity” and “complex specified information”.

    Reply
