{"id":631,"date":"2008-04-22T14:12:08","date_gmt":"2008-04-22T14:12:08","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2008\/04\/22\/more-bad-bayesians-no-ets\/"},"modified":"2008-04-22T14:12:08","modified_gmt":"2008-04-22T14:12:08","slug":"more-bad-bayesians-no-ets","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2008\/04\/22\/more-bad-bayesians-no-ets\/","title":{"rendered":"More Bad Bayesians: No ETs!"},"content":{"rendered":"<p> Remember when I <a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2008\/04\/schools-of-thought-in-probability-theory\">talked about the problems with Bayesian probability?<\/a> As you&#8217;ll probably recall, one of the things that drives me crazy about Bayesianism is that you get a constant stream of crackpots abusing it. Since the basic assumption of Bayesian probability is that you can <em>always<\/em> use it, you&#8217;ll constantly get people abusing it.<\/p>\n<p> Christopher Mims, who was one of the people running ScienceBlogs when I first signed on, sent me <a href=\"http:\/\/dsc.discovery.com\/news\/2008\/04\/21\/extra-terrestrial-life.html\">a classic example<\/a>. A professor has published a paper in a journal called <a href=\"http:\/\/astrobio.net\/news\/modules.php?op=modload&amp;name=News&amp;file=article&amp;sid=2682&amp;mode=thread&amp;order=0&amp;thold=0\">&#8220;Astrobiology&#8221;<\/a>, arguing that there&#8217;s an exceedingly low probability of intelligent life elsewhere in the Universe.<\/p>\n<p><!--more--><\/p>\n<p> It&#8217;s pretty much the same kind of claptrap that you find in the Bayesian<br \/>\n<a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2006\/06\/fundie-probability-even-worse-math-than-swinburne\">proofs of God<\/a>. 
That is, pick the conclusion that you really want to get,<br \/>\nand make up numbers for the priors needed to get the result you want.<\/p>\n<p> So, for example, Watson fairly arbitrarily assumes that there are four key events in the development of intelligent life:<\/p>\n<ol>\n<li> Single-celled life.<\/li>\n<li> Multi-cellular life.<\/li>\n<li> Specialized cell types.<\/li>\n<li> Language. <\/li>\n<\/ol>\n<p>He takes these four, and assigns each of them a prior probability of 10%. Why 10%? Who knows; it seems reasonable. Why four steps? Well, because he wanted an end result of 0.01 percent, and that takes four events.<\/p>\n<p> Then he plays with it a bit more, to argue that the time between when intelligent life emerges and when it dies out (due to the death of its solar system) is likely to be quite brief, and so the odds of two intelligent life forms coexisting at the same time in locations where they could make any kind of contact are vanishingly small.<\/p>\n<p> Like I said: this is rubbish. The &#8220;4 independent steps&#8221; are a crock; you could just as easily make it ten steps:<\/p>\n<ol>\n<li> Replicators<\/li>\n<li> Cells<\/li>\n<li> Mobility<\/li>\n<li> Organelles<\/li>\n<li> Multicellular<\/li>\n<li> Specialized cells<\/li>\n<li> Sensory apparatus<\/li>\n<li> Sexual reproduction<\/li>\n<li> Tool use<\/li>\n<li> Language<\/li>\n<\/ol>\n<p> Why are four steps more reasonable than ten? Just because that&#8217;s what produced the result that he wanted.<\/p>\n<p> Even if you accept the idea of the best model being these four key milestones, the uniform 10% prior is a crock.<\/p>\n<p> You could easily argue that the probability of<br \/>\nprimitive replicators is higher than 10%, or lower than 10%. You could easily argue that once there was a replicator, the odds of it developing into cells were higher than 10%. 
But you can make good arguments for it being higher, or for it being lower &#8211; because the fact is, <em>we don&#8217;t know<\/em>.<\/p>\n<p> You could easily argue that given single-celled life, the probability of some kind of cooperation leading to multi-cellular life was quite high. You could also argue that it was quite low. Assigning it a prior of 1 in 10 is what is technically known as &#8220;talking out your ass&#8221; &#8211; meaning making it up as you go along to produce the result you want. Because <em>we don&#8217;t know<\/em>.<\/p>\n<p> Once you have multi-cellular life, my own guess is that specialization<br \/>\nwould be inevitable. But that&#8217;s a <em>guess<\/em>: the fact of the matter is,<br \/>\nwe don&#8217;t know. There&#8217;s lots of work going on in biology to figure things like<br \/>\nthat out, but the fact is, at the moment, <em>we don&#8217;t know<\/em>, and so <em>any<\/em> figure that someone spits out for a probability is just something that they made up, because it sounded right.<\/p>\n<p> Once you have creatures with specialized cells and internal organs,<br \/>\nwhat are the odds that they&#8217;ll develop language? Who the hell knows? Probably pretty damned unlikely, given the number of species on earth that have distinct organs, but no significant spoken language. It certainly seems reasonable to guess rather less than 1 in 10. But again &#8211; that&#8217;s a guess.<\/p>\n<p> I could keep going. But the point is, the whole thing is just made up.<br \/>\nAnd it <em>should<\/em> be quite obvious to just about anyone who looks at it<br \/>\nthat it&#8217;s just made up. It&#8217;s an abuse of math to make a rhetorical argument. The truth of the matter is, Professor Watson thinks that intelligent life is very rare, and so he threw together a bunch of bullshit to make that argument<br \/>\nlook more serious than &#8220;I think it&#8217;s unlikely&#8221;. 
See, it&#8217;s not just that he thinks so &#8211; it&#8217;s that <em>he&#8217;s done a mathematical analysis to determine that<br \/>\nit&#8217;s unlikely<\/em>. The math doesn&#8217;t say anything more than &#8220;He thinks it&#8217;s unlikely&#8221;. But he&#8217;s been able to turn that into a journal publication and a whole lot of press &#8211; by wrapping it in the trappings of math.<\/p>\n<p> It&#8217;s not really the fault of Bayesian probability. Idiots will be idiots, and people who want to misuse math or science will find a way. But Bayesian<br \/>\nprobability does, by making a claim that it&#8217;s <em>always<\/em> applicable, lend itself to this kind of thing more than any other kind of math I know. And that has the unfortunate effect that when I hear about a Bayesian analysis of just about <em>anything<\/em>, my bullshit detector goes on high alert.<\/p>\n<p> When you see something like this, there are some simple tricks for recognizing it as bullshit, which I&#8217;ve tried to illustrate above. The main thing is, look at the priors. There are two things that you&#8217;ll see in trashy arguments: a set of uniform priors for very different events; and a very random-seeming set of events.<\/p>\n<p> In this article, we&#8217;ve got both in spades. We&#8217;ve got four priors, which are<br \/>\ncompletely arbitrary &#8211; there&#8217;s no particular reason to believe that these four are independent, or that they&#8217;re the important factors. And we&#8217;ve got uniform priors for wildly divergent phenomena: &#8220;Cellular life&#8221;, &#8220;multicellular life&#8221;, and &#8220;language&#8221; all given identical priors without justification beyond &#8220;Well, it seems right&#8221;.<\/p>\n<p> That&#8217;s Bayesian garbage, and it&#8217;s very unfortunate that there&#8217;s so much of it<br \/>\nout there. Because there&#8217;s a lot of really good math built on Bayesian probability. 
But my instinct is always to mistrust anything Bayesian, because the good math doesn&#8217;t go out of its way to advertise itself as Bayesian &#8211; the authors just show you how they<br \/>\ndid their computation, how they picked their priors, etc. Whereas almost anything which is explicitly labeled as a &#8220;Bayesian proof&#8221; is crap.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Remember when I talked about the problems with Bayesian probability? As you&#8217;ll probably recall, one of the things that drives me crazy about Bayesianism is that you get a constant stream of crackpots abusing it. Since the basic assumption of Bayesian probability is that you can always use it, you&#8217;ll constantly get people abusing it. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[6],"tags":[],"class_list":["post-631","post","type-post","status-publish","format-standard","hentry","category-bad-probability"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-ab","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/631","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.
org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=631"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/631\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=631"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=631"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=631"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}