{"id":406,"date":"2007-05-02T21:05:36","date_gmt":"2007-05-02T21:05:36","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/05\/02\/using-bad-math-to-create-bad-models-to-produce-bad-results\/"},"modified":"2007-05-02T21:05:36","modified_gmt":"2007-05-02T21:05:36","slug":"using-bad-math-to-create-bad-models-to-produce-bad-results","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/05\/02\/using-bad-math-to-create-bad-models-to-produce-bad-results\/","title":{"rendered":"Using Bad Math to Create Bad Models to Produce Bad Results"},"content":{"rendered":"<p> An astute reader pointed me towards a monstrosity of pompous bogus math. It&#8217;s an oldie, but I hadn&#8217;t seen it before, and it was just referenced by my old buddy Sal Cordova in a thread on one of the DI blogs. It&#8217;s a <a href=\"http:\/\/www.trueorigin.org\/spetner2.asp\">&#8220;debate&#8221; posted online by Lee Spetner, in which he rehashes the typical bogus arguments against evolution<\/a>. I&#8217;m going to ignore most of it; this kind of stuff has been refuted more than enough times. But in the course<br \/>\nof this train wreck, he pretends to be making a mathematical argument about search spaces and optimization processes. It&#8217;s a completely invalid argument &#8211; but it&#8217;s one which is <em>constantly<\/em> rehashed by creationists, and Spetner&#8217;s version of it is a perfect demonstration of exactly what&#8217;s wrong with the argument.<\/p>\n<p><!--more--><\/p>\n<p> Let&#8217;s look at the relevant parts of Spetner&#8217;s argument. Spetner basically repeats himself over and over, so I&#8217;ll just quote the first repetition; you can go look at the full document to see others.<\/p>\n<blockquote>\n<p>The principle message of evolution is that all life descended with modification from a<br \/>\nputative single primitive source. I call this the grand sweep of evolution. 
The mechanism<br \/>\noffered for the process of modification is basically the Darwinian one of a long series of<br \/>\nsteps of random variation, each followed by natural selection. The variation is generally<br \/>\nunderstood today to be random mutations in the DNA.<\/p>\n<p><em>&#8230;<\/em><\/p>\n<p> For the grand process of evolution to work, long sequences of <em>beneficial<\/em><br \/>\nmutations must be possible, each building on the previous one and conferring a selective<br \/>\nadvantage on the organism. The process must be able to lead not only from one species to<br \/>\nanother, but to the entire advance of life from a simple beginning to the full complexity<br \/>\nof life today. There must be a long series of possible mutations, each of which conferring<br \/>\na selective advantage on the organism so that natural selection can make it take over the<br \/>\npopulation. Moreover, there must be not just one, but a great many such series.<\/p>\n<p> The chain must be continuous in that at each stage a change of a single base pair<br \/>\nsomewhere in the genome can lead to a more adaptive organism in some environmental context.<br \/>\nThat is, it should be possible to continue to climb an <em>adaptive<\/em> hill, one base<br \/>\nchange after another, without getting hung up on a local adaptive maximum. No one has ever<br \/>\nshown this to be possible.<\/p>\n<p> Now one might say that if evolution were hung up on a local Maximum, a large genetic<br \/>\nchange like a recombination or a transposition could bring it to another higher peak. Large<br \/>\nadaptive changes are, however, highly improbable. They are orders of magnitude less<br \/>\nprobable than getting an adaptive change with a single nucleotide substitution, which is<br \/>\nitself improbable. No one has shown this to be possible either.<\/p>\n<\/blockquote>\n<p> So &#8211; he&#8217;s trying to do the usual thing of modeling evolution as a search of a fitness landscape. 
It&#8217;s pretty common to model evolution that way &#8211; both real scientists and creationist bozos do it &#8211; but it is worth pointing out that while search is a useful model of evolution, it&#8217;s far from a perfect one. The classic formulation of search over a fitness landscape requires an <em>unchanging<\/em> landscape. But the &#8220;fitness landscape&#8221; that&#8217;s being traversed in an evolutionary process is <em>not<\/em> unchanging: it&#8217;s constantly changing.<\/p>\n<p> He takes advantage of that flaw in the model of the fitness landscape to build a key part of his argument. Throughout the argument, he keeps making claims about getting &#8220;hung up on a local maximum&#8221;. The passage quoted above contains an example; in the full document, he comes back to that point again and again.<\/p>\n<p> But it&#8217;s a completely invalid point. Even if we assume that there <em>are<\/em> local maxima where things can get hung up, the landscape is constantly changing. So a local maximum today is <em>not<\/em> necessarily a maximum tomorrow, and is almost certainly <em>not<\/em> a maximum 100 years from now. So even if we accept his pile of bogus assumptions without argument, we <em>still<\/em> wind up with his argument being completely and thoroughly <em>wrong<\/em>, because it&#8217;s based on the invalid assumption of an unchanging fitness landscape.<\/p>\n<p> We can even show <em>why<\/em> the fixed fitness landscape is wrong. Suppose we had a<br \/>\nfixed fitness landscape with local maxima. Then we&#8217;d find organisms &#8220;climbing&#8221; towards<br \/>\nthose maxima. And when they <em>reach<\/em> a maximum, they stop moving. What this means is<br \/>\nthat we&#8217;d see <em>multiple<\/em> organisms climbing towards the fitness maxima; and over<br \/>\ntime we&#8217;d see things clustering around the maxima. 
With this clustering at the maxima, eventually, there will be competition for resources &#8211; meaning that the maximum <em>isn&#8217;t a maximum anymore<\/em> &#8211; suddenly it&#8217;s a hotbed of competition, with <em>some<\/em> changes producing winners, and some losers. So the fitness landscape <em>can&#8217;t<\/em> be a fixed surface with true local maxima that become traps.<\/p>\n<p> What else does he get wrong? As bad as the &#8220;fixed landscape&#8221; bogosity is, it&#8217;s not the worst of his slimy mathematical sloppiness.<\/p>\n<p> Because, you see, the chances of there being an actual local maximum in an evolutionary fitness landscape are something amazingly close to nil. Sure, there are probably <em>some<\/em>, sometimes. But the thing is, &#8220;fitness&#8221; in the evolutionary sense is really a function of not just one or two variables, but of dozens, or hundreds, or even thousands of variables. We&#8217;re not talking about a two-dimensional contour map where there are hills and valleys. An evolutionary fitness landscape is a surface with dozens of dimensions. To be a true local maximum &#8211; that is, a point in the landscape with <em>no<\/em> smooth upward paths out &#8211; requires the surface to be at a maximum <em>in all dimensions at the same point<\/em>: it means that if you slice a plane through the surface to get a two-dimensional view, no matter how you orient the plane in those dozens of dimensions, it will always produce a hill shape with the maximum at the same place. Why would all of the dimensions coincide on a maximum like that? If we&#8217;re playing probability games &#8211; and this argument of his is ultimately probabilistic &#8211; then the deadlocking local maxima are <em>incredibly<\/em> improbable &#8211; less likely than the things that he&#8217;s ruling out as being <em>too<\/em> unlikely.<\/p>\n<p> And even that isn&#8217;t his worst mistake. Suppose you&#8217;re looking at evolution as a search<br \/>\nover a landscape. 
So you&#8217;ve got an organism at some point in the landscape. To consider<br \/>\nwhere it can go in its traversal of the landscape, you need to consider how it can &#8220;move&#8221;: where can it go in one step from its present location?<\/p>\n<p> Spetner <em>requires<\/em> that the <em>only<\/em> permissible &#8220;motions&#8221; are <em>single point changes<\/em> which produce <em>immediate<\/em> effects. He explicitly disallows<br \/>\nconsideration of multiple concurrent changes; he disallows consideration of changes that don&#8217;t present an <em>immediate<\/em> benefit; he disallows consideration of any change other than single-base changes &#8211; no duplications, no rearrangements, no changes of any kind except single-base point mutations. In other words, he deliberately creates a search model that <em>does not match observed reality<\/em>, and then uses it to conclude that his <em>search model cannot match observed reality<\/em>. (Where&#8217;s Dr. Egnor when you need him? This is exactly the kind of tautology that Dr. Egnor claims to like to knock down!)<\/p>\n<p> And even <em>that<\/em> isn&#8217;t his worst mistake. If you look at how Spetner formulates the search, he basically treats an evolutionary search as if it&#8217;s a <em>single organism<\/em> traversing the fitness landscape. He demands that changes work in a strict stepwise fashion. <em>One<\/em> change happens; that <em>one change<\/em> produces a selective advantage; the selective advantage causes that change to propagate and become fixed in the population; and then the next change can occur. That is <em>not<\/em> an accurate model of reality: reality is a population of many individuals, with many changes happening at the same time &#8211; some neutral, some beneficial, some harmful &#8211; and those changes accumulate and propagate through the population, with some individuals surviving, and some not. 
In other words, it&#8217;s an even more blatant example of the Egnor error: create a model that does not match observed reality, <em>claim<\/em> that it&#8217;s an accurate model of the theory you want to criticize, and then declare triumph when the predictions of your model cannot match reality. <\/p>\n<p> Pure bogosity. Pure slop. Pure bad math.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An astute reader pointed me towards a monstrosity of pompous bogus math. It&#8217;s an oldie, but I hadn&#8217;t seen it before, and it was just referenced by my old buddy Sal Cordova in a thread on one of the DI blogs. It&#8217;s a &#8220;debate&#8221; posted online by Lee Spetner, in which he rehashes the typical [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[31],"tags":[],"class_list":["post-406","post","type-post","status-publish","format-standard","hentry","category-intelligent-design"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-6y","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/406","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\
/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=406"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/406\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=406"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=406"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=406"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}