{"id":772,"date":"2009-05-11T14:17:00","date_gmt":"2009-05-11T14:17:00","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2009\/05\/11\/dembski-responds\/"},"modified":"2009-05-11T14:17:00","modified_gmt":"2009-05-11T14:17:00","slug":"dembski-responds","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2009\/05\/11\/dembski-responds\/","title":{"rendered":"Dembski Responds"},"content":{"rendered":"<p> Over at Uncommon Descent, Dembski has responded to my critique of<br \/>\nhis paper with Marks. In classic Dembski style, he ignores the<br \/>\nsubstance of my critique, and resorts to quote-mining.<\/p>\n<p> In my <a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2009\/05\/dembskis-latest-lifes-conservation-law-and-why-its-stupid\">previous post<\/a>, I included a summary of my past critiques of<br \/>\nwhy search is a lousy model for evolution. It was a brief summary of<br \/>\npast comments, which did nothing but set the stage for my<br \/>\ncritique. But, typically, Dembski pretended that that was the entire<br \/>\nsubstance of my post, and ignored the rest of it. Very typical of<br \/>\nDembski &#8211; just misrepresent your opponents, create a strawman, and<br \/>\nthen pretend that you&#8217;ve addressed everything.<\/p>\n<p><!--more--><\/p>\n<p> Dembski does his best to misrepresent even that small portion.  As<br \/>\nI&#8217;ve said lots of times in the past: search is a crappy model for<br \/>\nevolution. It&#8217;s value is that it provides a handle on which to hang<br \/>\nyour intuition. But its great weakness is that your intuition is<br \/>\nfrequently <em>wrong<\/em>. Our sense of a landscape is something<br \/>\nstatic and unchanging, smooth, with hills and valleys. We intuitively<br \/>\nexpect a landscape to have hills with maxima, and valleys with minima.<br \/>\nThat comes both from our intuition about &#8220;normal&#8221; shapes, and our<br \/>\nexperience with 3 dimensional space. 
But in evolutionary search,<br \/>\nwe&#8217;re looking at a dynamic landscape with thousands of dimensions.<br \/>\nNone of our expectations work there.<\/p>\n<p> Dembski points out that you can always add time as a<br \/>\ndimension. That&#8217;s true &#8211; in fact, I&#8217;ve <a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2008\/12\/fitness-landscapes-evolution-and-smuggling-information\">pointed that out before.<\/a> But<br \/>\nthe point still holds that the static landscape model doesn&#8217;t work<br \/>\nvery well. Why? The simple way of adding time as a dimension<br \/>\ncorresponds to our experience of time: a linear progression forwards &#8211;<br \/>\na single additional dimension, where the motion through that dimension<br \/>\nis linear. But if we want to model evolution, we <em>can&#8217;t<\/em> use<br \/>\nthat model of time &#8211; because the landscape changes <em>in response to<br \/>\nour search<\/em>. If the search chooses step A, then the landscape<br \/>\nresponds in one way; if the search chooses step B, the landscape<br \/>\nresponds in a <em>different<\/em> way. <\/p>\n<p> To make that a bit more concrete, think about a simple example<br \/>\nfor which we&#8217;ve got lots of observations: antibiotic resistance.<br \/>\nYou&#8217;ve got a bacteria that&#8217;s not resistant to any antibiotics. How&#8217;s<br \/>\nit going to evolve over time? No particularly good idea. Now,<br \/>\nyou add penicillin to its environment. What&#8217;s going to happen? You&#8217;re<br \/>\ngoing to select for penicillin resistance. Now, imagine two scenarios:<br \/>\none where you keep giving penicillin, and one where you stop the<br \/>\npenicillin. In the first scenario, the landscape requires that the<br \/>\nbacteria maintain penicillin resistance; the bacteria can&#8217;t reproduce<br \/>\nif it isn&#8217;t resistant. So you&#8217;ve got a landscape that maximizes<br \/>\nthe survival of the resistant bacteria. 
In the second scenario,<br \/>\none possible outcome is that resistance <em>disappears<\/em>: the<br \/>\nresistant bacteria waste energy on their resistance strategies,<br \/>\nand get outcompeted by the non-resistant. The two landscapes are<br \/>\ntotally different.<\/p>\n<p> In that case, you&#8217;ve got two different landscapes because of two<br \/>\ndifferent external interventions. But you can get an equivalent<br \/>\nsituation without the external change of removing the<br \/>\npenicillin. Bacteria have multiple ways of combatting antibiotics.<br \/>\nPenicillin works by interfering with the process by which the bacteria<br \/>\nproduce their cell walls. When they divide in the presence of<br \/>\npenicillin, they basically explode &#8211; they split, and they can&#8217;t<br \/>\nmanufacture the walls to close the two new cells. Some bacteria<br \/>\nrespond to this by producing a chemical that binds to the penicillin<br \/>\nmolecule, so that it&#8217;s neutralized, and can&#8217;t interfere with the<br \/>\nproduction of the new cell wall. Another response is to build the<br \/>\nwalls in a different way, so that penicillin no longer<br \/>\ninterferes. Both of those strategies work; and they each have<br \/>\nadvantages and disadvantages. When exposed to penicillin, bacteria<br \/>\ncan go either direction. If they manufacture their cell walls<br \/>\ndifferently, that produces one fitness landscape; if they excrete<br \/>\na penicillin neutralizer, that produces a different fitness<br \/>\nlandscape.<\/p>\n<p> The simple way of adding dimensions to a landscape model<br \/>\ndoesn&#8217;t capture this. And it&#8217;s really quite difficult to actually<br \/>\ncapture this in a landscape model &#8211; it basically means that<br \/>\n<em>you can never know what the landscape is like<\/em>. You get<br \/>\na massively multidimensional landscape, with absolutely no ability to<br \/>\npredict what will happen as time passes. 
It basically takes the model<br \/>\nwhose advantage is simplicity and intuitiveness, and turns it into<br \/>\nsomething astonishingly complex and non-intuitive.<\/p>\n<p> In short, it&#8217;s a crappy model.<\/p>\n<p> But &#8211; <em>none of that matters for this discussion<\/em>. I don&#8217;t<br \/>\nlike reasoning about evolution using search, because I find it to be<br \/>\nobfuscatory rather than clarifying; but you <em>can<\/em> model<br \/>\nevolution as a search over an incredibly complex and<br \/>\nhard-to-comprehend landscape. That&#8217;s why the critique of the landscape<br \/>\nmodel in my original post was three sentences out of a rather long<br \/>\npost.<\/p>\n<p> When you look at Dembski&#8217;s article, it&#8217;s very messy because he<br \/>\nchose (deliberately) to use a messy model. But the messiness isn&#8217;t the<br \/>\nproblem: the messiness is a distraction, which is intended to obscure<br \/>\nthe actual problem. And the actual problem is that it&#8217;s a circular<br \/>\nargument. It basically starts by assuming that only an intelligent<br \/>\nagent can create information; then it jumps through all sorts of hoops<br \/>\nin order to finally conclude that only an intelligent agent can create<br \/>\ninformation. In that sense, the whole thing is completely content<br \/>\nfree: it&#8217;s pure obfuscatory math &#8211; math used to obscure an argument,<br \/>\nrather than clarify or formalize it. In fact, the fundamental argument<br \/>\nof the entire thing is non-mathematical: &#8220;intelligence&#8221; isn&#8217;t a part<br \/>\nof the math at all.<\/p>\n<p> So what&#8217;s all that impressive-looking math about?<\/p>\n<p> Not a heck of a lot. There&#8217;s a reason that I keep going on about<br \/>\nhow it&#8217;s all just a mass of obfuscation.<\/p>\n<p> First of all, it pretends to present a conservation law. Second,<br \/>\nit pretends to prove that an intelligent source of information is<br \/>\nrequired. 
But in fact, the conservation law isn&#8217;t really a<br \/>\nconservation law, and the idea that an intelligent source of<br \/>\ninformation is required is not only not proved by the argument based<br \/>\non the conservation law, but is in fact <em>inconsistent with<\/em><br \/>\nthe conservation law.<\/p>\n<p> We&#8217;ll start with the conservation law. As I said in the previous<br \/>\npost, Dembski engages in a whole lot of obfuscatory mathematics to<br \/>\ncreate a supposed proof that in order for a search to perform better<br \/>\nthan a random walk, that search must, in essence, cheat: it must<br \/>\nencode information about the landscape that it traverses. In Dembski&#8217;s<br \/>\nterminology, it must contain &#8220;smuggled&#8221; active information. The<br \/>\nsupposed &#8220;law of conservation of information&#8221; basically says that the<br \/>\namount of information encoded into the search can be measured by<br \/>\nthe rate at which that search outperforms a random walk.<\/p>\n<p> So why is that <em>not<\/em> a &#8220;conservation of information&#8221;<br \/>\nlaw?<\/p>\n<p> Because there&#8217;s no conserved quantity. In a real conservation law,<br \/>\nyou have a measured quantity that you start with, and throughout any<br \/>\nseries of actions or events, you can prove that that quantity never<br \/>\nchanges. For example, you can look at a physical system in a<br \/>\nparticular frame of reference and measure the total momentum in the<br \/>\nsystem. Then throughout any series of interactions, you can show<br \/>\nthat the momentum never changes.<\/p>\n<p> In Dembski&#8217;s system, can you measure the total information in the<br \/>\nsystem? No. Can you show that the amount of information in the system<br \/>\nis the same before and after a search? Not in any meaningful way,<br \/>\nno. Can you look at a search function, and ask how much information<br \/>\nit encodes from a particular landscape? Not in any meaningful way,<br \/>\nno. 
<\/p>\n<p> To be a little bit concrete: there is <em>no<\/em> analytic way to<br \/>\nlook at a search function and quantify how much &#8220;active<br \/>\ninformation&#8221; is embedded in it. It can only be determined<br \/>\nretrospectively: run the search in the landscape, determine how well<br \/>\nit performed, and then quantify its performance. Looked at from that<br \/>\nperspective, it&#8217;s a (sloppy) re-statement of Kolmogorov-Chaitin information<br \/>\ntheory: the information contained in a string (or, to use Dembski&#8217;s<br \/>\nscenario, a landscape position) is the length of the shortest program that can<br \/>\ngenerate that string (or a path to that position).<\/p>\n<p> So &#8211; Dembski unknowingly rephrased a bit of K-C theory. That&#8217;s not<br \/>\nso bad, right? To manage to redo a bit of work by two of the best<br \/>\nmathematicians of the 20th century? Well, if that&#8217;s what he meant to<br \/>\ndo, it wouldn&#8217;t be bad. But it&#8217;s very bad for Dembski&#8217;s argument:<br \/>\nK-C theory doesn&#8217;t support Dembski&#8217;s argument. In fact, in K-C theory,<br \/>\nyou <em>can&#8217;t quantify<\/em> information in a precise way. Beyond<br \/>\nsome absolutely trivial examples, you can&#8217;t measure the quantity of<br \/>\ninformation.<\/p>\n<p> Dembski is arguing that information must be conserved, using a<br \/>\nframework in which <em>you can&#8217;t measure it<\/em>. And further,<br \/>\nit&#8217;s a framework in which the <em>intuitive<\/em> notion of information<br \/>\n&#8211; which is what Dembski is really relying on &#8211; has absolutely no<br \/>\nconnection with the information that&#8217;s supposedly hidden in<br \/>\nthe search.<\/p>\n<p> Once again, I&#8217;ll get a bit concrete. 
A while back, I wrote about a<br \/>\n<a href=\"http:\/\/scienceblogs.com\/goodmath\/2008\/11\/evolution_produces_better_ante.php\">NASA<br \/>\nexperiment to design a better antenna.<\/a> The engineers involved used<br \/>\nan evolution-based approach. The result was <em>completely<br \/>\nunexpected<\/em>. The particular shape that turned out to be optimal was<br \/>\nnot something that any of the engineers involved would have come up<br \/>\nwith by themselves. By Dembski&#8217;s argument in this paper, the<br \/>\ndescription of that antenna was encoded into the search system as<br \/>\n&#8220;active information&#8221;. But the people involved in the experiment<br \/>\n<em>didn&#8217;t have<\/em> the information about what the optimal antenna<br \/>\nwould look like. And by looking at their simulation, no one could<br \/>\nextract or identify the information that Dembski insists was smuggled<br \/>\nin. Running the experiment produced the information that this<br \/>\nparticular antenna appears to be optimal. In the Kolmogorov-Chaitin<br \/>\nsense, that means that information about the optimality was contained<br \/>\nin the combination of the search and the search landscape &#8211; which is<br \/>\ntrivially true, because running the program in the landscape produces<br \/>\nthat antenna as a result. But that&#8217;s <em>not<\/em> what Dembski is<br \/>\nclaiming. Dembski is saying that the program implicitly<br \/>\ncontains the solution.<\/p>\n<p> Why does it implicitly contain the solution? Because Dembski says<br \/>\nso. Seriously &#8211; that&#8217;s what his argument reduces to. He<br \/>\n<em>defines<\/em> the active information in a system in terms of how<br \/>\nthat system performs in a search. Then he shows that the amount of<br \/>\ninformation that results from doing the search is equal to the amount<br \/>\nof active information in the search algorithm. 
It&#8217;s a trick of<br \/>\ndefinitions, obscured by a lot of pointlessly complex math. In<br \/>\nessence, it reduces to making a blind assertion: information is<br \/>\nconserved; therefore any system that can in any sense produce<br \/>\ninformation must contain that information. But since by the<br \/>\nalgorithmic definition of information, any system that produces<br \/>\ninformation contains the information it produces, saying that<br \/>\ninformation is conserved is a simple tautology &#8211; exactly the kind of<br \/>\nstatement that Dembski mocks in the beginning of the paper!<\/p>\n<p> Moving on to the second point, Dembski pretends that this whole<br \/>\nargument somehow shows that intelligence must be involved in any<br \/>\nprocess that creates information. As I said in both the previous<br \/>\nparagraph and the previous post, it doesn&#8217;t work &#8211; because he doesn&#8217;t<br \/>\nactually make that argument. He <em>assumes<\/em> it as a premise<br \/>\nbefore he starts, and then he concludes it at the end as if it&#8217;s a new<br \/>\nresult. Look at all of his mathematical arguments &#8211; there&#8217;s nothing in<br \/>\nthere that defines &#8220;intelligence&#8221;. There&#8217;s nothing that concludes<br \/>\nanything about intelligence. There&#8217;s the whole so-called conservation<br \/>\nlaw &#8211; but it <em>doesn&#8217;t allow any exceptions<\/em>. There is nowhere<br \/>\nin that whole line of mathematical argument that says &#8220;Intelligence<br \/>\ncan create information&#8221;; in fact, according to that argument,<br \/>\nintelligent agents <em>cannot<\/em> create information &#8211; nothing<br \/>\ncan.<\/p>\n<p> That&#8217;s the nail in the coffin of Dembski&#8217;s argument. His<br \/>\nwhole conservation law effectively argues that you can&#8217;t create<br \/>\ninformation. 
He <em>claims<\/em> that it says you can&#8217;t create<br \/>\ninformation without intelligence &#8211; but that exception &#8211; that<br \/>\nintelligence can create information &#8211; is <em>totally omitted from<br \/>\nthe math<\/em>.<\/p>\n<p> You can apply Dembski&#8217;s own argument to an intelligent<br \/>\nagent searching for a solution to something. The intelligent<br \/>\nagent, by Dembski&#8217;s own argument, <em>already contains all of<br \/>\nthe information<\/em> that it uses in the search. If<br \/>\nyou actually follow through on what Dembski&#8217;s conservation of<br \/>\ninformation stuff says, what it ultimately means is that<br \/>\n<em>intelligent agents can&#8217;t produce information<\/em>. If an<br \/>\nintelligent agent produces information, it must contain that<br \/>\ninformation as well.  So either intelligent agents can&#8217;t create<br \/>\ninformation, or information isn&#8217;t conserved. Either way,<br \/>\nDembski winds up defeating his own argument.<\/p>\n<p> Dembski&#8217;s entire argument is <em>self-defeating<\/em>. Either<br \/>\ninformation is conserved in the sense that he insists &#8211; and then<br \/>\nintelligent agents can&#8217;t produce information; or intelligent<br \/>\nagents <em>can<\/em> create information &#8211; and then the conservation<br \/>\nof information &#8220;law&#8221; is refuted.<\/p>\n<p> As I said in the original post, and have now argued multiple times<br \/>\nhere, this is all an elaborate, highly obscured circle. Information is<br \/>\nconserved, which means that information can&#8217;t be created; therefore<br \/>\nsomething must have created it. Why?  Because at the beginning of the<br \/>\nwhole argument, he asserted that an intelligent agent can create<br \/>\ninformation; therefore if you can find any information, since it&#8217;s<br \/>\nconserved, something intelligent must have created it. 
He defined<br \/>\ninformation as a conserved quantity created by an intelligent agent,<br \/>\nand then used all of that impressive-looking math to ultimately<br \/>\n&#8220;prove&#8221; that information is a conserved quantity created by an<br \/>\nintelligent agent.  It&#8217;s a perfect example of obfuscatory math: all of<br \/>\nthat math is just a smoke screen to cover up the fact that he&#8217;s<br \/>\nembedded his conclusion in the assumptions at the beginning of the<br \/>\nargument &#8211; and worse, it&#8217;s an argument that contains its own<br \/>\nrefutation, because by the supposed conservation law, an intelligent<br \/>\nagent can&#8217;t create information either &#8211; so no information can ever<br \/>\nbe created.<\/p>\n<p> You can see the foundation of the basic circle of the argument if<br \/>\nyou look at the dreadful text of section one of the paper. He starts<br \/>\nwith Shannon&#8217;s definition of information, and then (as I pointed out<br \/>\nbefore), engages in a bunch of silliness to try to pretend that<br \/>\nvarious philosophers who talked about philosophical ideas of<br \/>\ninformation were actually talking about Shannon information, and then<br \/>\nuses them to build up a purely philosophical argument that only<br \/>\nintelligence can create information. Then he uses that as a basis of<br \/>\nhis further arguments.<\/p>\n<p> The whole paper is an exercise in circularity. There&#8217;s nothing<br \/>\nthere &#8211; which is why this isn&#8217;t a paper in a mathematical journal;<br \/>\ninstead, it&#8217;s just a chapter in one of Bill Dembski&#8217;s vanity<br \/>\npublications.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over at Uncommon Descent, Dembski has responded to my critique of his paper with Marks. In classic Dembski style, he ignores the substance of my critique, and resorts to quote-mining. In my previous post, I included a summary of my past critiques of why search is a lousy model for evolution. 
It was a brief [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[16,31],"tags":[],"class_list":["post-772","post","type-post","status-publish","format-standard","hentry","category-debunking-creationism","category-intelligent-design"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-cs","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/772","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=772"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/772\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=772"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=772"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/t
ags?post=772"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}