{"id":179,"date":"2006-10-09T12:17:27","date_gmt":"2006-10-09T12:17:27","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2006\/10\/09\/second-law-slop-from-granville-sewell\/"},"modified":"2006-10-09T12:17:27","modified_gmt":"2006-10-09T12:17:27","slug":"second-law-slop-from-granville-sewell","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2006\/10\/09\/second-law-slop-from-granville-sewell\/","title":{"rendered":"Second Law Slop from Granville Sewell"},"content":{"rendered":"<p>A reader sent me a link to an article by that inimatable genius of the intelligent design community, Granville Sewell. (As much as I hate to admit it, Sewell is a professor of mathematics at Texas A&amp;M. I don&#8217;t know what his professional specialty is, but if his work in that area is anything like the dreck he produces in defense of ID, then it&#8217;s shocking that he got a faculty position, much less tenure.) Sewell wrote *yet another* one of those horrible &#8220;second law of thermodynamics&#8221; papers and submitted it *as an opinion piece* to a math journal (&#8220;The Mathematical Intelligencer&#8221;). It was, needless to say, not received well by people who actually care about quality math, and he was roundly flamed in letters in the following issue. The paper that I&#8217;m looking at is [his *defense* to criticisms in the original paper.](http:\/\/www.iscid.org\/papers\/Sewell_EvolutionThermodynamics_012304.pdf)<br \/>\nAs one might expect from one of the ICSID guys, it&#8217;s a sloppy rehash of the same-old creationist arguments &#8211; it&#8217;s mainly the same old creationist thermodynamic crap, mixed with a bit of big numbers, and a little dose of obfuscatory mathematics.<\/p>\n<p><!--more--><br \/>\nBefore I get to the paper itself, one important thing about the *original* paper in the MI is worth pointing out. As I said above, it was published as an *opinion* piece, not as a research paper. 
Opinion pieces are *not* generally reviewed as rigorously as research papers, and the MI is not exactly what anyone (including its editors!) would call a *rigorous* journal. From the information for authors about what they publish:<br \/>\n&gt;We welcome controversy; this is an international forum for issues on which<br \/>\n&gt;mathematicians disagree. But whatever their subject, all articles should be<br \/>\n&gt;written in a relaxed, engaging style, and should be accessible to the entire<br \/>\n&gt;community, irrespective of specialty. Articles are peer-reviewed.<br \/>\n&gt;Authors need not feel confined to non-fiction: we will consider humor, poetry,<br \/>\n&gt;fiction, and art forms not yet invented.<br \/>\nDespite this, I have seen Sewell&#8217;s paper cited in several lists of &#8220;peer reviewed ID literature&#8221; as an example of how serious scientists are publishing ID research. An opinion piece in the MI is *not* what any serious scientist or mathematician would call a &#8220;peer reviewed journal paper&#8221;.<br \/>\nWith that brief aside out of the way, let&#8217;s dive in to see what Sewell has to say. He gets off to a great start; if he had just stopped after the first sentence-and-a-half he actually would have written a valid, meaningful statement.<br \/>\n&gt;Why is it that the most vocal opponents of Darwinism often are not biologists or<br \/>\n&gt;geologists, but physicists, computer scientists, engineers or mathematicians,<br \/>\n&gt;like myself? The obvious answer would be that we don&#8217;t understand the issues as<br \/>\n&gt;well, but I have another explanation. You really don&#8217;t need to know any biology<br \/>\n&gt;or paleontology to understand the real problem with Darwinism: it is simply that<br \/>\n&gt;it is extremely improbable. 
I realize that the development of life took millions<br \/>\n&gt;of years, that there is an &#8220;evolutionary chain&#8221; of similarities connecting all<br \/>\n&gt;species, and that many things about the process give the appearance of natural<br \/>\n&gt;causes, but none of this diminishes the main problem.<br \/>\nGosh GS, why waste time with a second explanation when the first one does the job so perfectly? The rest of this paper is just a vivid demonstration of how he *doesn&#8217;t* understand the issues.<br \/>\nIt&#8217;s just *yet another* sloppy &#8220;second law of thermodynamics&#8221; argument; what&#8217;s particularly sad is that this is his *defense* after his *original* argument got slapped down. So his original argument was even worse. He gives us a glimpse of how sloppy the original paper was:<br \/>\n&gt;In a Fall 2000 opinion piece in the Mathematical Intelligencer [1] I made the<br \/>\n&gt;assertion that the underlying principle behind the second law of thermodynamics<br \/>\n&gt;is that natural forces do not do extremely improbable things. An unfortunate<br \/>\n&gt;choice of words: I should have said, the underlying principle is that natural<br \/>\n&gt;forces do not do macroscopically describable things which are extremely<br \/>\n&gt;improbable from the microscopic point of view.<br \/>\nStill wrong, and he should know it. One of the things that people who are serious about math know is that *informal* statements of mathematical phenomena are inevitably inaccurate (or at least incomplete), and that you *never* reason from the informal argument. You reason *from the math*. 
You can take the *equations*, the *math* of the second law, and play with it however much you want, but you&#8217;ll *never* manage to come up with any mathematical statement that &#8220;natural forces do not do macroscopically describable things which are extremely improbable from the microscopic point of view&#8221;.<br \/>\nIn fact, you don&#8217;t even need to go all the way to the equations to see how nonsensical this statement is. Here&#8217;s a very precise informal statement of the second law called the Kelvin-Planck statement, taken from [Wikipedia][second-law]: &#8220;There is no process that, operating in a cycle, produces no other effect than the subtraction of a positive amount of heat from a reservoir and the production of an equal amount of work.&#8221;<br \/>\nTry to derive Sewell&#8217;s statement from *that*. You can&#8217;t, because his statement bears no relation to the second law. In fact, one of the most common observations when you study thermodynamics mathematically is that entropy can only be viewed as a measure of disorder at a *microscopic* level, but that *macroscopic* order can frequently be produced as a result of an *increase* in *microscopic* disorder &#8211; essentially *exactly* the opposite of what Sewell said.<br \/>\nSewell then moves on to another classic creationist canard:<br \/>\n&gt;But the Earth is an open system, and it is often argued that any increase in<br \/>\n&gt;order is allowed in an open system, as long as the increase is &#8220;compensated&#8221;<br \/>\n&gt;somehow by a comparable or greater decrease outside the system. S. Angrist and<br \/>\n&gt;L. Helper [3], for example, write, &#8220;In a certain sense the development of<br \/>\n&gt;civilization may appear contradictory to the second law&#8230; Even though society<br \/>\n&gt;can effect local reductions in entropy, the general and universal trend of<br \/>\n&gt;entropy increase easily swamps the anomalous but important efforts of civilized<br \/>\n&gt;man. 
Each localized, man-made or machine-made entropy decrease is accompanied<br \/>\n&gt;by a greater increase in entropy of the surroundings, thereby maintaining the<br \/>\n&gt;required increase in total entropy.&#8221;<br \/>\n&gt;<br \/>\n&gt;According to this logic, then, the second law does not prevent scrap metal from<br \/>\n&gt;reorganizing itself into a computer in one room, as long as two computers in the<br \/>\n&gt;next room are rusting into scrap metal-and the door is open. The spectacular<br \/>\n&gt;increase in order seen here on Earth does not violate the second law because<br \/>\n&gt;order is decreasing throughout the rest of this vast universe, so the total<br \/>\n&gt;order in the universe is surely still decreasing.<br \/>\nOnce again, we can see the results of arguing about a *mathematical* concept in terms of *non-mathematical* informal reasoning. When a biologist says, for example, that life doesn&#8217;t violate the second law because increases in order are compensated for by *larger* increases in entropy, we aren&#8217;t talking about *decoupled* phenomena. We&#8217;re talking about *a single process*. No one would argue that the second law says that a computer can spontaneously be formed if two other computers in a different room decay. But we *can* produce a computer by burning a couple of barrels of oil, and using the resulting heat to drive the machinery that produces the computer. In this *coupled chain of events*, we produce a small amount of &#8220;order&#8221; (the computer) as a result of producing a large amount of entropy (the burning of the oil and the heat from operating the machinery, among other sources).<br \/>\nSimilarly, my children can grow (producing &#8220;order&#8221; in Sewell&#8217;s formulation),<br \/>\nbut the increase in order caused by their bodies growing new cells and structures is *more* than offset by the quantity of waste and heat that&#8217;s produced by their bodies as they grow. 
On balance, the overall entropy is increasing.<br \/>\nSewell&#8217;s game is to decouple things to make the argument look silly &#8211; which he can only get away with because he&#8217;s *not doing math*. If you wanted to do the actual math of a thermodynamic analysis, you&#8217;d have to show that the entropy increase is causally connected to the entropy decrease.<br \/>\nOf course, he sees this objection coming, and attempts to head it off:<br \/>\n&gt;So I wrote a reply, &#8220;Can ANYTHING Happen in an Open System?&#8221; [4] to my critics<br \/>\n&gt;which was published in the Fall 2001 issue of The Mathematical Intelligencer.<br \/>\n&gt;In that reply, I first showed (see Appendix) that the second law does not simply<br \/>\n&gt;require that any increase in thermal order in an open system be compensated<br \/>\n&gt;for by a decrease outside the system, it requires that the increase in thermal<br \/>\n&gt;order be no greater than the thermal order entering the open system. The thermal<br \/>\n&gt;order in an open system can decrease in two different ways: it may be converted<br \/>\n&gt;to disorder (first term on the right in equation (4) of the Appendix) or it may<br \/>\n&gt;be exported through the boundary (second term). It can increase in only one<br \/>\n&gt;way-by importation through the boundary. An identical analysis shows the same to<br \/>\n&gt;be true of carbon (or any other diffusing substance): the increase in carbon<br \/>\n&gt;order in an open system cannot be greater than the carbon order entering the<br \/>\n&gt;system. 
In  these simple examples, I again assumed nothing but heat conduction<br \/>\n&gt;or diffusion was going on, but for more general situations I offered the<br \/>\n&gt;tautology that &#8220;if an increase in order is extremely improbable when a system is<br \/>\n&gt;closed, it is still extremely improbable when the system is open, unless<br \/>\n&gt;something is entering which makes the increase not extremely improbable.&#8221; The<br \/>\n&gt;fact that order is disappearing in the next room does not make it any easier for<br \/>\n&gt;computers to appear in our room-unless this order is disappearing into our room,<br \/>\n&gt;and then only if it is a type of order that makes the appearance of computers<br \/>\n&gt;not extremely improbable, for example, computers. Importing thermal order will<br \/>\n&gt;make the temperature distribution less random, and importing carbon order will<br \/>\n&gt;make the carbon distribution less random, but neither makes the formation of<br \/>\n&gt;computers more probable.<br \/>\nWhat&#8217;s clever about this response is that it takes the *exact* error that he&#8217;s making in his argument, and turns it around to try to apply it to the *criticisms* of his argument. He&#8217;s started by making an argument that local or macroscopic decreases in entropy can&#8217;t be compensated for by other distant or microscopic increases in entropy. Now to reply to the criticism of that, he&#8217;s specifically *using* the fact that the local and distant entropy changes *from his own argument* aren&#8217;t connected.<br \/>\nAnd then he tries to weasel by introducing a *new* mistake, hiding behind some sloppy math. He throws in a random line about how &#8220;other kinds of order and entropy&#8221; can be introduced, and that they&#8217;ll follow the second law as well (with the statement &#8220;assuming only diffusion is operative&#8221; hidden in parens). 
That clause in parens is *very* sleazy; he&#8217;s *trying* to create a generalization of entropy that allows him to say that we *have* to explain what he calls &#8220;carbon order&#8221; specifically in terms of &#8220;carbon entropy&#8221; &#8211; meaning we must specifically create a certain amount of &#8220;disordered&#8221; carbon for every bit of &#8220;ordered&#8221; carbon. Of course, that&#8217;s not true. It would only be true in a system *which is closed* except for carbon diffusion, and in which *only diffusion is operating*.<br \/>\nThen he tries to pull *back* to the math to argue that the equations are only talking about *specific kinds* of entropy &#8211; one instantiation of the equation for each kind. And that is *terrible* math. He&#8217;s taken a global quantity &#8211; entropy &#8211; and divided it into subtypes *without* showing how the subtypes relate to the original; and then asserted that they are *completely* partitioned &#8211; that no expenditure creating a quantity of a particular subtype of entropy can possibly create a smaller reduction in some *other* subtype of entropy.<br \/>\nIf he wants to make that kind of argument, he *could* try to, but he&#8217;d need to show how he could *derive* it from the general statement of the second law. He doesn&#8217;t do that: in fact, he *can&#8217;t* do that, because we can observe phenomena where one of his supposed &#8220;subtypes&#8221; of entropy *does* decrease &#8211; that is, we can witness the exchange of one subtype for another. (For example, you can&#8217;t explain the natural production of diamonds strictly in terms of carbon-diffusion entropy. 
The C-D order of a diamond is strictly larger than the C-D order of a carbon-rich mineral deposit.)<br \/>\nFinally, he gets to his last argument &#8211; and it&#8217;s a classic stupid big-numbers argument:<br \/>\n&gt;According to the traditional argument, the second law does not prevent atoms<br \/>\n&gt;from reorganizing themselves into spaceships and computers here because<br \/>\n&gt;the Earth is an open system. According to a new argument, however, advanced<br \/>\n&gt;by recent critics of my article, this is not prohibited even in a closed<br \/>\n&gt;system. Several of these have argued that everything Nature does can<br \/>\n&gt;be considered extremely improbable-the exact arrangement of atoms at<br \/>\n&gt;any time at any place is extremely unlikely to be repeated, argued one<br \/>\n&gt;e-mail. Tom Davis, in his published reply [5], made an analogy with coin<br \/>\n&gt;flipping and argued that any particular sequence of heads and tails<br \/>\n&gt;is extremely improbable, so something extremely improbable happens<br \/>\n&gt;every time we flip a long series of coins. If a coin were flipped 1000<br \/>\n&gt;times, he would apparently be no more surprised by a string of all heads<br \/>\n&gt;than by any other sequence, because any string is as improbable as<br \/>\n&gt;another. Davis concedes that it is extremely unlikely that humans and<br \/>\n&gt;computers would arise again if history were repeated, &#8220;but something<br \/>\n&gt;would&#8221;.<br \/>\nThe big numbers argument is just an argument that tries to string together numbers in a way that makes it seem ridiculously unlikely that something could happen, because you&#8217;ve got a sufficiently large probability against it that it&#8217;s &#8220;effectively impossible&#8221;.<br \/>\nSewell&#8217;s version of this argument is a very simple one. He doesn&#8217;t even really produce a particular big number. 
He just basically appeals to the *intuition* that it&#8217;s ridiculous to imagine a computer being spontaneously created in isolation, and connects that to human beings through the intuition that we are more complicated than computers.<br \/>\nThe flaw in this is very typical of the usual big-numbers kind of argument. It&#8217;s looking at the *a-posteriori* odds of human life, computed as if human life emerged spontaneously. But what evolution actually argues is that life *evolved* over millions of years. What bearing does that have on the probability calculation? The spontaneous probability ignores history. Think of evolution as a tree. The spontaneous probability looks at *every possible* leaf node of the tree, with &#8220;human life&#8221; as one of them, and asks &#8220;What&#8217;s the probability of following *this specific* path to the human leaf node?&#8221; But evolution *doesn&#8217;t* consider every path. It takes that tree, and *prunes* at every step. The actual evolutionary &#8220;search tree&#8221; is incredibly small in comparison to the complete possible search space &#8211; the overwhelming bulk of the search space is pruned out by evolution. In a real mathematical model of evolution as search, the fundamental operation that makes it work is pruning.<br \/>\n[second-law]: http:\/\/en.wikipedia.org\/wiki\/Laws_of_thermodynamics<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A reader sent me a link to an article by that inimitable genius of the intelligent design community, Granville Sewell. (As much as I hate to admit it, Sewell is a professor of mathematics at Texas A&amp;M. 
I don&#8217;t know what his professional specialty is, but if his work in that area is anything like [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[16],"tags":[],"class_list":["post-179","post","type-post","status-publish","format-standard","hentry","category-debunking-creationism"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-2T","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/179","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=179"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/179\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=179"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=179"},{"taxonomy":"post_tag","embeddable":true,"href"
:"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=179"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}