{"id":25,"date":"2006-06-16T12:54:38","date_gmt":"2006-06-16T12:54:38","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2006\/06\/16\/dembskis-profound-lack-of-comprehension-of-information-theory\/"},"modified":"2006-06-16T12:54:38","modified_gmt":"2006-06-16T12:54:38","slug":"dembskis-profound-lack-of-comprehension-of-information-theory","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2006\/06\/16\/dembskis-profound-lack-of-comprehension-of-information-theory\/","title":{"rendered":"Dembski&#039;s Profound Lack of Comprehension of Information Theory"},"content":{"rendered":"<p>I was recently sent a link to yet another of Dembski&#8217;s wretched writings about specified complexity, titled <a href=\"http:\/\/www.designinference.com\/documents\/2005.06.Specification.pdf\">Specification: The Pattern That Signifies Intelligence<\/a>.<br \/>\nWhile reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a reasonably competent mathematician who was willing to misuse his knowledge in order to prop up his religious beliefs with pseudo-intellectual rigor. I no longer think that. I&#8217;ve now become convinced that he&#8217;s just an idiot who&#8217;s able to throw around mathematical jargon without understanding it.<br \/>\nIn this paper, as usual, he&#8217;s spending rather a lot of time avoiding defining specification. Purportedly, he&#8217;s doing a survey of the mathematical techniques that can be used to define specification. Of course, while rambling on and on, he manages to never actually say just what the hell specification <em>is<\/em> &#8211; he just goes on and on with various discussions of what it <em>could be<\/em>.<br \/>\nMost of which are wrong.<br \/>\n&#8220;But wait&#8221;, I can hear objectors saying. &#8220;It&#8217;s his theory! How can his own definitions of his own theory be wrong? 
Sure, his theory can be wrong, but how can his own definition of his theory be wrong?&#8221; Allow me to head off that objection before I continue.<br \/>\nDembski&#8217;s theory of specified complexity as a discriminator for identifying intelligent design relies on the idea that there are two <em>distinct<\/em> quantifiable properties: specification and complexity. He argues that if you can find systems that possess sufficient quantities of both specification <em>and<\/em> complexity, then those systems cannot have arisen except by intelligent intervention.<br \/>\nBut what if Dembski defines specification and complexity <em>as the same thing<\/em>? Then his definitions are wrong: he requires them to be distinct concepts, but he defines them as being <em>the same thing<\/em>.<br \/>\nThroughout this paper, he pretty much ignores complexity and focuses on specification. He&#8217;s pretty careful never to say &#8220;specification <b>is<\/b> this&#8221;, but rather &#8220;specification <b>can be<\/b> this&#8221;. If you actually read what he <em>does<\/em> say about specification, and you go back and compare it to some of his other writings about complexity, you&#8217;ll find a positively amazing resemblance.<br \/>\nBut onwards. Here&#8217;s the part that really blew my mind.<br \/>\nOne of the methods that he purports to use to discuss specification is based on Kolmogorov-Chaitin algorithmic information theory. And in his explanation, he demonstrates a profound lack of comprehension of <em>anything<\/em> about K-C theory.<br \/>\nFirst &#8211; he purports to discuss K-C within the framework of probability theory. K-C theory has <em>nothing to do<\/em> with probability theory. K-C theory is about the meaning of quantifying information; the central question of K-C theory is: How much information is in a given string? 
It defines the answer to that question in terms of computation and the size of programs that can generate that string.<br \/>\nNow, the quotes that blew my mind:<\/p>\n<blockquote><p>\nConsider a concrete case. If we flip a fair coin and note the occurrences of heads and tails in<br \/>\norder, denoting heads by 1 and tails by 0, then a sequence of 100 coin flips looks as follows:<\/p>\n<pre>\n(R) 11000011010110001101111111010001100011011001110111\n00011001000010111101110110011111010010100101011110.\n<\/pre>\n<p>This is in fact a sequence I obtained by flipping a coin 100 times. The problem algorithmic<br \/>\ninformation theory seeks to resolve is this: Given probability theory and its usual way of<br \/>\ncalculating probabilities for coin tosses, how is it possible to distinguish these sequences in terms<br \/>\nof their degree of randomness? Probability theory alone is not enough. For instance, instead of<br \/>\nflipping (R) I might just as well have flipped the following sequence:<\/p>\n<pre>\n(N) 11111111111111111111111111111111111111111111111111\n11111111111111111111111111111111111111111111111111.\n<\/pre>\n<p>Sequences (R) and (N) have been labeled suggestively, R for &#8220;random,&#8221; N for &#8220;nonrandom.&#8221;<br \/>\nChaitin, Kolmogorov, and Solomonoff wanted to say that (R) was &#8220;more random&#8221; than (N). But<br \/>\ngiven the usual way of computing probabilities, all one could say was that each of these<br \/>\nsequences had the same small probability of occurring, namely, 1 in 2<sup>100<\/sup>, or approximately 1 in<br \/>\n10<sup>30<\/sup>. 
Indeed, every sequence of 100 coin tosses has exactly this same small probability of<br \/>\noccurring.<br \/>\nTo get around this difficulty Chaitin, Kolmogorov, and Solomonoff supplemented conventional<br \/>\nprobability theory with some ideas from recursion theory, a subfield of mathematical logic that<br \/>\nprovides the theoretical underpinnings for computer science and generally is considered quite far<br \/>\nremoved from probability theory.\n<\/p><\/blockquote>\n<p>It would be difficult to find a more misrepresentative description of K-C theory than this. This has nothing to do with the original motivation of K-C theory; it has nothing to do with the practice of K-C theory; and it has pretty much nothing to do with the actual value of K-C theory. This is, to put it mildly, a pile of nonsense spewed from the keyboard of an idiot who thinks that he knows something that he doesn&#8217;t.<br \/>\nBut it gets worse.<\/p>\n<blockquote><p>\nSince one can always describe a sequence in terms of itself, (R) has the description<\/p>\n<pre>\ncopy '11000011010110001101111111010001100011011001110111\n00011001000010111101110110011111010010100101011110'.\n<\/pre>\n<p>Because (R) was constructed by flipping a coin, it is very likely that this is the shortest<br \/>\ndescription of (R). It is a combinatorial fact that the vast majority of sequences of 0s and 1s have<br \/>\nas their shortest description just the sequence itself. In other words, most sequences are random<br \/>\nin the sense of being algorithmically incompressible. It follows that the collection of nonrandom<br \/>\nsequences has small probability among the totality of sequences so that observing a nonrandom<br \/>\nsequence is reason to look for explanations other than chance.\n<\/p><\/blockquote>\n<p>This is <em>so<\/em> very wrong that it demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about <em>anything<\/em>. 
No one who actually understands K-C theory would <em>ever<\/em> make a statement like Dembski&#8217;s quote above. <em>No one<\/em>.<br \/>\nBut to make matters worse &#8211; this statement explicitly invalidates the entire concept of specified complexity. What this statement means &#8211; what it <em>explicitly says<\/em> if you understand the math &#8211; is that <em>specification<\/em> is the opposite of <em>complexity<\/em>. Anything which possesses the property of specification <em>by definition<\/em> does not possess the property of complexity.<br \/>\nIn information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses &#8220;specified complexity&#8221; is therefore something which is simultaneously compressible and non-compressible.<br \/>\nThe only thing that saves Dembski is that he hedges everything that he says. He&#8217;s not saying that this is what specification means. He&#8217;s saying that this <em>could be<\/em> what specification means. But he also offers a half-dozen other alternative definitions &#8211; with similar problems. Anytime you point out what&#8217;s wrong with any of them, he can always say &#8220;No, that&#8217;s not specification. It&#8217;s one of the others.&#8221; Even if you go through the whole list of possible definitions, and show why every single one is no good &#8211; he can still say &#8220;But I didn&#8217;t say any of those were the definition&#8221;.<br \/>\nBut the fact that he would even say this &#8211; that he would present this as even a possibility for the definition of specification &#8211; shows that Dembski quite simply <em>does not get it<\/em>. He believes that he gets it &#8211; he believes that he gets it well enough to use it in his arguments. But there is absolutely <em>no way<\/em> that he understands it. 
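The compressibility point is easy to demonstrate for yourself. Kolmogorov complexity itself is uncomputable, but the compressed size produced by an ordinary general-purpose compressor (the sketch below uses Python's standard zlib module, purely as an illustration, not as the actual K-C measure) gives a crude upper bound, and even that crude bound cleanly separates Dembski's two example sequences: the all-ones sequence (N) squeezes down to almost nothing, while the coin-flip sequence (R) does not.

```python
# Crude illustration of compressibility vs. randomness. zlib's
# compressed length is only a computable stand-in for Kolmogorov
# complexity (which is uncomputable), but the contrast still shows:
# patterned strings compress well, coin-flip strings much less so.
import zlib

# Dembski's two 100-bit example sequences from the quoted passage.
R = ("11000011010110001101111111010001100011011001110111"
     "00011001000010111101110110011111010010100101011110")
N = "1" * 100

def compressed_len(s: str) -> int:
    """Bytes needed for the zlib-compressed form of s (level 9)."""
    return len(zlib.compress(s.encode("ascii"), 9))

# (N) needs only a handful of bytes; (R) needs noticeably more.
print(compressed_len(N), compressed_len(R))
```

The "specified" sequence is exactly the compressible one, which is the opposite of being complex in the K-C sense.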
He is an ignorant jackass pretending to know things so that he can trick people into accepting his religious beliefs.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I was recently sent a link to yet another of Dembski&#8217;s wretched writings about specified complexity, titled Specification: The Pattern That Signifies Intelligence. While reading this, I came across a statement that actually changes my opinion of Dembski. Before reading this, I thought that Dembski was just a liar. I thought that he was a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[16,30],"tags":[],"class_list":["post-25","post","type-post","status-publish","format-standard","hentry","category-debunking-creationism","category-information-theory"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-p","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/25","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.g
oodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=25"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/25\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=25"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=25"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=25"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}