The Problem with NFL: Breadth or Depth?
Posted August 12, 2007 at goodmath.org

Despite having written about it before, I still get a lot of questions about William Dembski's "No Free Lunch" (NFL) theorems. One recent message posed the question in a particularly interesting form, so I thought I'd take the opportunity to answer it with a post.

Here's the question I received:

1. Is the NFL theorem itself bad math?
2. If the theorem itself is sound, what's wrong with how it's being applied? Is it a breadth issue or a depth issue?

The theorems themselves are actually good math. They're valid given their premises; they're even deep and interesting theorems in their way. The problem isn't the theorem: it's the way the theorem is abused in applying it to evolution. But what I found interesting is the idea of characterizing the problem as breadth or depth.

Before I get into the details of the answer, let me quickly explain what the NFL theorems say.

Suppose you've got a multivariable function f(x1,...,xn), called a *landscape function*. You want to use a search function to find the maximum value of f.
NFL asks: if you know nothing about f, and all you can do is ask for the value of f at a specific set of parameters, can you write a search function which is guaranteed to find the maximum of f?

NFL says no. If you know nothing about the landscape, no search function that you can write is guaranteed to do any better at finding the optimum than a random walk through the landscape. More precisely, it says that the performance of any particular search function, *averaged over all possible landscapes*, is no better than that of a random search.

Interestingly, you can view NFL as something pretty close to a variant of the halting problem. You can model the halting problem as a landscape where each position in the landscape corresponds to a sequence of instructions, and the landscape value l at that point is a measure of the probability of the program halting after that sequence of instructions. (Of course, you can't really construct that landscape; but you can construct one where you use a heuristic to approximate the halting probability, and only get it exactly right at places where the program halts in an observed execution.) Then solving the halting problem is equivalent to searching that landscape for a point where l is either 0 or 1. In that case, NFL is saying, basically, that there's no search function for that space that does any better than randomly executing the program for some duration and seeing if it halts.

Hopefully, that gives you some idea of why I say there's some actual depth to NFL. It's saying something about a general problem: there is no universal search. If you don't know anything about the properties of the landscape in advance, then you can't pick a good search function for that landscape.
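You can see the averaging claim in a toy experiment. The sketch below is my own illustration, not anything from the NFL papers: it compares a blind random search against a search that *assumes* the landscape is unimodal (a ternary search, which repeatedly discards a third of the interval that can't contain the peak). Both searches evaluate exactly the same number of distinct points. On landscapes with independent random values, the structural assumption buys nothing; on landscapes that actually are unimodal, it wins decisively.

```python
import random

def random_search(f, n, budget, rng):
    """Evaluate `budget` distinct points chosen blindly; return the best value seen."""
    return max(f(x) for x in rng.sample(range(n), budget))

def ternary_search(f, n, budget, rng):
    """A search that assumes a unimodal landscape: repeatedly discard a third
    of the interval that cannot contain the peak.  Any leftover query budget
    is spent on random unseen points, so both searches use equal budgets."""
    seen = {}
    def ev(x):
        if x not in seen:
            seen[x] = f(x)
        return seen[x]
    lo, hi = 0, n - 1
    while hi - lo > 2 and len(seen) < budget:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if ev(m1) < ev(m2):
            lo = m1 + 1   # if unimodal, the peak can't be in [lo, m1]
        else:
            hi = m2 - 1   # if unimodal, the peak can't be in (m2, hi]
    for x in range(lo, hi + 1):  # scan the small surviving interval
        if len(seen) >= budget:
            break
        ev(x)
    while len(seen) < budget:    # use up any remaining query budget
        x = rng.randrange(n)
        if x not in seen:
            seen[x] = f(x)
    return max(seen.values())

def average_best(make_landscape, searcher, n, budget, trials, rng):
    """Average the best value a searcher finds over many sampled landscapes."""
    total = 0.0
    for _ in range(trials):
        f = make_landscape(rng)
        total += searcher(f, n, budget, rng)
    return total / trials

N, BUDGET, TRIALS = 200, 40, 2000
rng = random.Random(0)

# "Know-nothing" landscapes: every point's value is independent and uniform.
def random_landscape(rng):
    vals = [rng.random() for _ in range(N)]
    return vals.__getitem__

# Constrained landscapes: unimodal, with one randomly placed peak (value 0).
def unimodal_landscape(rng):
    peak = rng.randrange(N)
    return lambda x: -abs(x - peak)

r_rand = average_best(random_landscape, random_search, N, BUDGET, TRIALS, rng)
t_rand = average_best(random_landscape, ternary_search, N, BUDGET, TRIALS, rng)
r_uni = average_best(unimodal_landscape, random_search, N, BUDGET, TRIALS, rng)
t_uni = average_best(unimodal_landscape, ternary_search, N, BUDGET, TRIALS, rng)

print(f"random landscapes:   random={r_rand:.4f}  ternary={t_rand:.4f}")
print(f"unimodal landscapes: random={r_uni:.4f}  ternary={t_uni:.4f}")
```

On the random landscapes the two averages come out essentially identical, exactly as NFL predicts for searches making the same number of distinct queries; on the unimodal landscapes the structure-exploiting search finds the exact peak while random sampling typically lands a few steps away.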
You need to understand something about what the landscape looks like in order to design or select an effective search algorithm for it.

Back to the question: is the problem with NFL as an anti-evolution argument breadth or depth?

The answer is *both*.

The breadth problem is simple: the NFL formulation is too broad. NFL says that averaged over *all* fitness landscapes, any given search function is no better than a random walk. Based on this, NFL's proponents claim that if you model evolution as a search function over the fitness landscape of survival, it can't possibly do better than total randomness. That's nonsense: evolution doesn't need to work over *all possible* fitness landscapes. Biology and survival, modeled as a fitness landscape, isn't all possible landscapes. It isn't even a single randomly selected landscape. It's a highly constrained landscape. Evolution doesn't need to be able to search *any* landscape; it just needs to be able to search *one particular kind* of landscape. If you constrain the properties of the landscape, then you absolutely *can* pick effective search functions.

For example: Newton's method of root-finding is a landscape search. It uses the slope at a point to guide it toward the root of a function. It's optimizing for minimum distance from the true root: the landscape value at any point is the distance from a root. As anyone who's taken college calculus knows, Newton's method works extremely well for a large class of continuous functions. Dembski's argument that NFL shows evolution can't possibly produce fit organisms could be applied, virtually without modification, to show that Newton's method can't possibly find roots.
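To make the Newton's-method point concrete, here's a minimal sketch (my own illustration, with hypothetical example functions): on a function with the right properties it homes in on a root almost instantly, while on another perfectly smooth, differentiable polynomial, started from an unlucky point, it cycles forever and never finds the root at all.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: follow the local slope toward a root of f.
    Returns (final_x, converged)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, True
        d = df(x)
        if d == 0:
            return x, False  # flat spot: the slope gives no guidance at all
        x -= fx / d
    return x, False

# A landscape with the right properties: f(x) = x^2 - 2, smooth with a
# simple root at sqrt(2).  Newton converges in a handful of steps.
root, ok = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(ok, root)    # True, approximately 1.4142135623730951

# A classic failure case: on f(x) = x^3 - 2x + 2, starting from x0 = 0,
# the iteration bounces between 0 and 1 forever and never approaches the
# actual real root near x = -1.77, even though f is smooth everywhere.
root2, ok2 = newton(lambda x: x**3 - 2 * x + 2,
                    lambda x: 3 * x**2 - 2, x0=0.0)
print(ok2)         # False: it never converged
```

The same algorithm, with no changes, is superb on one landscape and useless on the other; what changed is whether the landscape has the structure the search was designed to exploit.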
And it would be equally true: applied to arbitrary functions, Newton's method will fail most of the time. Sometimes it will work, if the function happens to have the right properties; but *most* functions don't. Newton's method requires functions to be continuous and differentiable, and most functions aren't even differentiable. And Newton's method fails even on most differentiable functions.

Because NFL quantifies so broadly over landscapes, its conclusions don't make any sense applied to the real phenomenon of life. You can't use facts about the set of all possible landscapes to draw conclusions about an individual landscape.

Now, let's move on to the depth problem. If you look at how NFL models evolution in depth, you find that it's a terrible model. I've discussed the problems with modeling evolution as a fitness landscape before, so I won't go into them in depth here. But the entire idea of using a fitness landscape is a train wreck. Landscape search is based on *static* landscapes; life isn't a static landscape. That, right there, is enough to utterly wreck the entire argument. Evolution isn't landscape search. Fitness landscapes are a useful tool for modeling certain *short-term* aspects of evolution; but as a general model, they do a terrible job. Look at the problem in depth, and you find that the model of biology and evolution used in the NFL arguments is such a poor model of evolution that *no* valid conclusions can be drawn from it about the process as a whole.