{"id":219,"date":"2006-11-21T13:45:53","date_gmt":"2006-11-21T13:45:53","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2006\/11\/21\/complexity-from-simplicity-or-why-casey-luskin-needs-a-math-class\/"},"modified":"2006-11-21T13:45:53","modified_gmt":"2006-11-21T13:45:53","slug":"complexity-from-simplicity-or-why-casey-luskin-needs-a-math-class","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2006\/11\/21\/complexity-from-simplicity-or-why-casey-luskin-needs-a-math-class\/","title":{"rendered":"Complexity from Simplicity; or, Why Casey Luskin Needs a Math Class"},"content":{"rendered":"<p>One of my fellow ScienceBloggers, [Karmen at Chaotic Utopia](http:\/\/scienceblogs.com\/chaoticutopia\/2006\/11\/puzzling_at_a_simpleminded_cre.php) pointed out a spectacularly stupid statement in [Casey Luskin&#8217;s critique of Carl Zimmer][lutkin] (*another* fellow SBer) at the Discovery Institutes &#8220;Center for Science and Culture&#8221;.  Now normally, I might not pile on to tear-down of Casey (not because he doesn&#8217;t deserve it, but because often my SciBlings do such a good job that I have nothing to add); but this time, he&#8217;s crossed much too far into *my* territory, and I can&#8217;t let that pass without at least a brief comment.<br \/>\n[lutkin]: http:\/\/www.evolutionnews.org\/2006\/11\/evolution_nat_geo_1.html<br \/>\nSo, here&#8217;s the dumb statement:<br \/>\n&gt;The article called evolution a &#8220;simple&#8221; process. In our experience, does a &#8220;simple&#8221; process generate<br \/>\n&gt;the type of vast complexity found throughout biology?<br \/>\nYes, one of the leading IDist writers on the net believes that in reality, simple processes don&#8217;t generate complex results.<br \/>\nKarmen pointed out fractals as a beautiful example of the generation of complexity from simplicity. 
I&#8217;d like to point out something that, while lacking the artistic beauty of a well-chosen fractal, is an *even simpler* and possibly more profound example.<\/p>\n<p><!--more--><br \/>\nAs long-time readers of GM\/BM know, I&#8217;m fascinated by [cellular automata (CAs).][alpaca] CAs are an incredibly simple idea which can generate the most spectacularly complex behaviors out of pure triviality.<br \/>\n[alpaca]: http:\/\/scienceblogs.com\/goodmath\/2006\/10\/a_metalanguage_for_pathologica_1.php<br \/>\nFor the simplest example of this, line up a bunch of tiny machines in a row. Each machine has an LED on top. The LED can be either on or off. Once every second, all of the CAs simultaneously look at their neighbors to the left and to the right, and decide whether to turn their LED on or off based on whether their neighbors&#8217; lights are on or off. Here&#8217;s a table describing one<br \/>\npossible set of rules for the decision about whether to turn the LED on or off.<\/p>\n<table border=\"1\">\n<tr>\n<th>Current State<\/th>\n<th>Left Neighbor<\/th>\n<th>Right Neighbor<\/th>\n<th>New State<\/th>\n<\/tr>\n<tr>\n<td>On<\/td>\n<td>On<\/td>\n<td>On<\/td>\n<td>Off<\/td>\n<\/tr>\n<tr>\n<td>On<\/td>\n<td>On<\/td>\n<td>Off<\/td>\n<td>On<\/td>\n<\/tr>\n<tr>\n<td>On<\/td>\n<td>Off<\/td>\n<td>On<\/td>\n<td>On<\/td>\n<\/tr>\n<tr>\n<td>On<\/td>\n<td>Off<\/td>\n<td>Off<\/td>\n<td>On<\/td>\n<\/tr>\n<tr>\n<td>Off<\/td>\n<td>On<\/td>\n<td>On<\/td>\n<td>On<\/td>\n<\/tr>\n<tr>\n<td>Off<\/td>\n<td>On<\/td>\n<td>Off<\/td>\n<td>Off<\/td>\n<\/tr>\n<tr>\n<td>Off<\/td>\n<td>Off<\/td>\n<td>On<\/td>\n<td>On<\/td>\n<\/tr>\n<tr>\n<td>Off<\/td>\n<td>Off<\/td>\n<td>Off<\/td>\n<td>Off<\/td>\n<\/tr>\n<\/table>\n<p>So, for example, if a cell is on, its left neighbor is on, and its right neighbor is off, then for the next second, the cell will keep its light on. 
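Part of the point is just how little machinery this takes. Here's a minimal Python sketch of the rule table above (my own illustration, not part of the original post; I've assumed the row wraps around at the edges, since the post doesn't say what the end cells see):

```python
# A cell is 1 (LED on) or 0 (LED off). The decision table from the post,
# keyed by (left neighbor, current state, right neighbor) -> new state.
RULE_110 = {
    (1, 1, 1): 0,  # on, both neighbors on -> turn off
    (1, 1, 0): 1,
    (0, 1, 1): 1,
    (0, 1, 0): 1,
    (1, 0, 1): 1,
    (1, 0, 0): 0,
    (0, 0, 1): 1,
    (0, 0, 0): 0,
}

def step(row):
    # Every cell updates simultaneously, looking only at the previous row.
    # Edges wrap around (an assumption -- the post leaves this unspecified).
    n = len(row)
    return [RULE_110[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

def run(width=64, seconds=32):
    # Start from a single lit cell and print one line per second; stacking
    # the printed lines gives a two-dimensional picture of the history.
    row = [0] * width
    row[-1] = 1
    for _ in range(seconds):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)

if __name__ == '__main__':
    run()
```

Each printed line is one tick of the clock; the whole update rule is just an eight-entry lookup table.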
If a cell has its light on, and both its left and right neighbors have *their* lights on, then it will turn its light *off* for the next second.<br \/>\nThis is known as the *rule 110* cellular automaton, according to Stephen Wolfram&#8217;s taxonomy of<br \/>\none-dimensional CAs in [A New Kind of Science][wolfram]. Rule 110 is interesting for several reasons.<br \/>\nFirst, it&#8217;s so trivial. Even writing it out as a table like the one above makes it look more<br \/>\ncomplicated than it is &#8211; it&#8217;s an incredibly simple thing. And yet, it creates an amazing amount<br \/>\nof complexity. If you run rule 110 and stack the successive rows of lights chronologically &#8211; so that the initial row of lights is on top, the state it went to after one second is below it, and so on &#8211; you create a two-dimensional image, like this one:<br \/>\n<img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" alt=\"CA_rule110s.png\" src=\"https:\/\/i0.wp.com\/scientopia.org\/img-archive\/goodmath\/img_113.png?resize=500%2C250\" width=\"500\" height=\"250\" \/><br \/>\nThat image is *bizarre*! It forms a pattern of triangles which, on the one hand, exhibits a lot of structure: large triangles are surrounded by smaller triangles, which are surrounded by smaller ones still, and the larger triangles often appear in almost linear patterns. On the other hand, the exact places where large triangles show up are chaotic, and impossible to predict without *running* the automaton. Incredible complexity is being spawned by utter triviality.<br \/>\nHow complex can rule 110 CAs get? [110 is Turing complete.][110] *Any* computational process, *anything* which can be computed by any mechanical device, can be computed using nothing but a single row of rule 110 CAs.<br \/>\n[110]: http:\/\/www.complex-systems.com\/Archive\/hierarchy\/abstract.cgi?vol=15&amp;iss=1&amp;art=01<br \/>\nAnother example? 
The most famous CA of them all is [Conway&#8217;s &#8220;game of life&#8221;][life]. I&#8217;ve written about<br \/>\nlife before, because it&#8217;s such an interesting system. Life is also Turing complete &#8211; and someone has actually implemented a two-symbol [Turing machine as a grid of Life cells][life-turing]. What&#8217;s particularly interesting about that, from the point of view of Casey&#8217;s claim that a simple process like evolution can&#8217;t generate complexity, is that the Life Turing machine was built from a set of components *that were not designed by a human being*. The components were discovered by running a<br \/>\ngenetic algorithm on relatively small life grids and looking for stable patterns. All of the<br \/>\nparts of the Life Turing machine &#8211; the clocks, the gears for moving the tapes, the tape cells, the tape read\/write head &#8211; all of the complex machinery of the Turing machine &#8211; were generated<br \/>\nrandomly from an extremely simple set of rules.<br \/>\n[life]: http:\/\/www.ericweisstein.com\/encyclopedias\/life\/<br \/>\n[life-turing]: http:\/\/www.cs.ualberta.ca\/~bulitko\/F02\/papers\/tm_words.pdf<\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of my fellow ScienceBloggers, [Karmen at Chaotic Utopia](http:\/\/scienceblogs.com\/chaoticutopia\/2006\/11\/puzzling_at_a_simpleminded_cre.php) pointed out a spectacularly stupid statement in [Casey Luskin&#8217;s critique of Carl Zimmer][lutkin] (*another* fellow SBer) at the Discovery Institute&#8217;s &#8220;Center for Science and Culture&#8221;. 
Now normally, I might not pile on to a tear-down of Casey (not because he doesn&#8217;t deserve it, but because often my [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[24,54],"tags":[],"class_list":["post-219","post","type-post","status-publish","format-standard","hentry","category-goodmath","category-programming"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-3x","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/219","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=219"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/219\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=219"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=219"},{"taxonomy":"pos
t_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=219"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}