{"id":512,"date":"2007-09-17T09:00:00","date_gmt":"2007-09-17T09:00:00","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/09\/17\/a-glance-at-the-work-of-dembski-and-marks\/"},"modified":"2017-01-10T09:57:03","modified_gmt":"2017-01-10T14:57:03","slug":"a-glance-at-the-work-of-dembski-and-marks","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/09\/17\/a-glance-at-the-work-of-dembski-and-marks\/","title":{"rendered":"A Glance at the Work of Dembski and Marks"},"content":{"rendered":"<p>Both in comments, and via email, I&#8217;ve received numerous requests to take a look at<br \/>\nthe work of Dembski and Marks, published through Professor Marks&#8217;s website. The site is<br \/>\ncalled the &#8220;Evolutionary Informatics Laboratory&#8221;. Before getting to the paper, it&#8217;s worth<br \/>\ntaking just a moment to understand its provenance &#8211; there&#8217;s something deeply fishy about<br \/>\nthe &#8220;laboratory&#8221; that published this work. It&#8217;s not a lab &#8211; it&#8217;s a website; it was funded<br \/>\nunder very peculiar circumstances, and hired Dembski as a &#8220;post-doc&#8221;, despite his being a full-time professor at a different university. Marks claims that his work for<br \/>\nthe EIL is all done on his own time, and has nothing to do with his faculty position at the university. It&#8217;s all quite bizarre. For details, see <a href=\"http:\/\/www.pandasthumb.org\/archives\/2007\/09\/follow_the_mone.html\">here<\/a>.<\/p>\n<p>On to the work. Marks and Dembski have submitted three papers. They&#8217;re all<br \/>\nin a very similar vein (as one would expect for three papers written in a short period<br \/>\nof time by collaborators &#8211; there&#8217;s nothing at all peculiar about the similarity). The<br \/>\nbasic idea behind all of them is to look at search in the context of evolutionary<br \/>\nalgorithms, and to analyze it using an information theoretic approach. 
I&#8217;ve<br \/>\npicked out the first one listed on their site: <a href=\"http:\/\/cayman.globat.com\/~trademarksnet.com\/Research\/EILab\/Publications\/CostOfSuccess.html\">Conservation of Information in Search: Measuring the Cost of Success<\/a><\/p>\n<p><!--more--><br \/>\n There are two ways of looking at this work: on a purely technical level, and in terms of its<br \/>\npresentation.<\/p>\n<p>On a technical level, it&#8217;s not bad. Not great by any stretch, but it&#8217;s entirely reasonable. The idea<br \/>\nof it is actually pretty clever. They start with NFL, the &#8220;No Free Lunch&#8221; theorem. NFL says, roughly, that if you don&#8217;t know anything about the search space, you can&#8217;t select a search that will perform better than a random walk.<br \/>\nIf we have a search for a given search space that <em>does<\/em> perform better than a random walk,<br \/>\nin information theoretic terms, we can say that the search <em>encodes<\/em> information<br \/>\nabout the search space. How can we quantify the information encoded in a search algorithm<br \/>\nthat allows it to perform as well as it does?<\/p>\n<p>So, for example, think about a search algorithm like Newton&#8217;s method. It generally homes in extremely<br \/>\nrapidly on the roots of a polynomial equation &#8211; dramatically better than one would expect in a random<br \/>\nwalk. For example, if we look at something like y = x<sup>2<\/sup> &#8211; 2, starting with an approximation of a<br \/>\nzero at x=1, we can get to a very good approximation in just two iterations. What information is encoded<br \/>\nin Newton&#8217;s method? Among other things, it&#8217;s working in a Euclidean space on a continuous, differentiable<br \/>\ncurve. That&#8217;s rather a lot of information. 
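To make the convergence concrete, here is a minimal sketch (mine, not from the paper) of Newton&#8217;s method applied to f(x) = x<sup>2<\/sup> &#8211; 2, starting from the approximation x = 1:

```python
# A minimal sketch (not from the Dembski/Marks paper) of Newton's method
# on f(x) = x^2 - 2, starting from the approximation x = 1.

def newton_step(x, f, df):
    # One Newton iteration: follow the tangent line at x down to its zero.
    return x - f(x) / df(x)

f = lambda x: x * x - 2   # the true root is sqrt(2) ~ 1.41421
df = lambda x: 2 * x      # the derivative, which the method "knows"

x = 1.0
for _ in range(2):
    x = newton_step(x, f, df)

# Two iterations take x from 1 to 1.5 to ~1.41667 -- already within
# about 0.0025 of the true root.
```

The derivative, and the assumption of a smooth curve in a Euclidean space, are exactly the encoded information being described here; a random walk over candidate values gets none of that.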
We can actually quantify that in information theoretic<br \/>\nterms by computing the average time to find a root in a random walk, compared to the average time<br \/>\nto find a root in Newton&#8217;s method.<\/p>\n<p>Further, when a search performs <em>worse<\/em> than what is predicted by a random walk, we can<br \/>\nsay that, with respect to the particular search task, the search encodes <em>negative<\/em> information &#8211; that it actually contains some assumptions about the locations of the target that<br \/>\nactively push it away, and prevent it from finding the target as quickly as a random walk would.<\/p>\n<p>That&#8217;s the technical meat of the paper. And I&#8217;ve got to say, it&#8217;s not bad. I was expecting something really awful &#8211; but it&#8217;s not. As I said earlier, it&#8217;s far from being a great paper. But technically, it&#8217;s reasonable.<\/p>\n<p>Then there&#8217;s the presentation side of it. And from that perspective, it&#8217;s awful. Virtually every<br \/>\nstatement in the paper is spun in a thoroughly dishonest way. Throughout the paper, they constantly make<br \/>\nstatements about how information <em>must be<\/em> deliberately encoded into the search by the programmer.<br \/>\nIt&#8217;s clear the direction that they intend to go &#8211; they want to say that biological evolution can<br \/>\nonly work if information was coded into the process by God. Here&#8217;s an example from the first<br \/>\nparagraph of the paper:<\/p>\n<blockquote><p>\nSearch algorithms, including evolutionary searches, do not<br \/>\ngenerate free information. Instead, they consume information,<br \/>\nincurring it as a cost. 
Over 50 years ago, Leon Brillouin, a<br \/>\npioneer in information theory, made this very point: &#8220;The<br \/>\n[computing] machine does not create any new information,<br \/>\nbut it performs a very valuable transformation of known<br \/>\ninformation.&#8221; When Brillouin&#8217;s insight is applied to search<br \/>\nalgorithms that do not employ specific information about the<br \/>\nproblem being addressed, one finds that no search performs<br \/>\nconsistently better than any other. Accordingly, there is no<br \/>\n&#8220;magic-bullet&#8221; search algorithm that successfully resolves all<br \/>\nproblems.\n<\/p><\/blockquote>\n<p> That&#8217;s the first one, and the least objectionable. But just half a page later, we find:<\/p>\n<blockquote><p>\nThe significance of COI <em>[MarkCC: Conservation of Information &#8211; not Dembski&#8217;s version, but from someone<br \/>\nnamed English]<\/em> has been debated since its popularization through the NFLT [30]. On the one hand, COI has a<br \/>\nleveling effect, rendering the average performance of all algorithms equivalent. On the other hand,<br \/>\ncertain search techniques perform remarkably well, distinguishing themselves from others. There is a<br \/>\ntension here, but no contradiction. For instance, particle swarm optimization [10] and genetic algorithms<br \/>\n[13], [26] perform well on a wide spectrum of problems. Yet, there is no discrepancy between the<br \/>\nsuccessful experience of practitioners with such versatile search algorithms and the COI imposed inability<br \/>\nof the search algorithms themselves to create novel information [5], [9], [11]. Such information does not<br \/>\nmagically materialize but instead results from the action of the programmer who prescribes how knowledge<br \/>\nabout the problem gets folded into the search algorithm.\n<\/p><\/blockquote>\n<p>That&#8217;s where you can really see where they&#8217;re going. 
&#8220;Information does not magically materialize, but<br \/>\ninstead results from the action of the programmer&#8221;. The paper harps on that idea to an<br \/>\ninappropriate degree. The paper is supposedly about quantifying the information that<br \/>\nmakes a search algorithm perform in a particular way &#8211; but they just hammer on the idea<br \/>\nthat the information <em>was deliberately put there<\/em>, and that it can&#8217;t come from<br \/>\nnowhere.<\/p>\n<p>It&#8217;s true that information in a search algorithm can&#8217;t come from nowhere. But it&#8217;s<br \/>\nnot a particularly deep point. To go back to Newton&#8217;s method: Newton&#8217;s method of root<br \/>\nfinding certainly codes all kinds of information into the search &#8211; because it was created<br \/>\nin a particular domain, and encodes that domain. You can actually model orbital dynamics<br \/>\nas a search for an equilibrium point &#8211; it doesn&#8217;t require anyone to encode<br \/>\nthe law of gravitation; it&#8217;s already a part of the system. Similarly in biological<br \/>\nevolution, you can certainly model the amount of information encoded in the process &#8211; which<br \/>\nincludes all sorts of information about chemistry, reproductive dynamics, etc.; but since those<br \/>\nthings are encoded <em>into the universe<\/em>, you don&#8217;t need to find an intelligent agent<br \/>\nto have coded them into evolution: they&#8217;re an intrinsic part of the system in which<br \/>\nevolution occurs. You can think of it as being like a computer program: programmers<br \/>\ndon&#8217;t need to specifically add code to a program to specify the fact that the computer it&#8217;s going to run on has 16 registers; <em>every<\/em> program for the computer has that wired into<br \/>\nit, because it&#8217;s a fact of the &#8220;universe&#8221; for the program. 
For anything in our universe, the basic<br \/>\nfacts of our universe &#8211; of basic forces, of chemistry &#8211; are encoded in its existence. For anything on earth, facts about the earth, the sun, and the moon are encoded into its very existence.<\/p>\n<p>Dembski and Marks try to make a big deal out of the fact that all of this information is quantifiable.<br \/>\n<em>Of course<\/em> it&#8217;s quantifiable. The amount of information encoded into the structure of the universe<br \/>\nis quantifiable too. And it&#8217;s extremely interesting to see just how you can compute how much information<br \/>\nis encoded into things. I like that aspect of the paper. But it doesn&#8217;t imply anything about<br \/>\nthe origin of the information: in this simple initial quantification, information theory cannot distinguish between environmental information which is inevitably encoded, and information<br \/>\nwhich was added by the deliberate actions of an intelligent agent. Information theory can<br \/>\nquantify information &#8211; but it can&#8217;t characterize its source.<\/p>\n<p>If I were a reviewer, would I accept the paper? It&#8217;s hard to say. I&#8217;m not an information theorist, so<br \/>\nI could easily be missing some major flaw. The style of the paper is very different from any other<br \/>\ninformation theory paper that I&#8217;ve ever read &#8211; it&#8217;s got a very strong rhetorical bent to it which is very<br \/>\nunusual. I also don&#8217;t know where they submitted it, so I don&#8217;t know what the reviewing standards are &#8211; the<br \/>\nreviewing standards of different journals are quite different. 
If this were submitted to a theoretical<br \/>\ncomputer science journal like the ones I typically read, where the normal ranking system is (reject\/accept<br \/>\nwith changes and second review\/weak accept with changes\/strong accept with changes\/strong accept), I<br \/>\nwould probably rank it either &#8220;accept with changes and second review&#8221; or &#8220;weak accept with changes&#8221;.<\/p>\n<p>So as much as I&#8217;d love to trash them, a quick read of the paper seems to show<br \/>\nthat it&#8217;s a mediocre paper, with an interesting idea. The writing sucks: it was<br \/>\nwritten to try to make a point that the actual technical content of the paper can&#8217;t support, and it hammers on that point with all the subtlety of a sledgehammer.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Both in comments, and via email, I&#8217;ve received numerous requests to take a look at the work of Dembski and Marks, published through Professor Marks&#8217;s website. The site is called the &#8220;Evolutionary Informatics Laboratory&#8221;. 
Before getting to the paper, it&#8217;s worth taking just a moment to understand its provenance &#8211; there&#8217;s something deeply fishy about [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":true,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[30,31],"tags":[],"class_list":["post-512","post","type-post","status-publish","format-standard","hentry","category-information-theory","category-intelligent-design"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-8g","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/512","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=512"}],"version-history":[{"count":3,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/512\/revisions"}],"predecessor-version":[{"id":3383,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/512\/revisions\/3383"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/m
edia?parent=512"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=512"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=512"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}