{"id":773,"date":"2009-05-15T10:41:47","date_gmt":"2009-05-15T10:41:47","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2009\/05\/15\/you-cant-write-that-number-in-fact-you-cant-write-most-numbers\/"},"modified":"2009-05-15T10:41:47","modified_gmt":"2009-05-15T10:41:47","slug":"you-cant-write-that-number-in-fact-you-cant-write-most-numbers","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2009\/05\/15\/you-cant-write-that-number-in-fact-you-cant-write-most-numbers\/","title":{"rendered":"You can&#039;t write that number; in fact, you can&#039;t write most numbers."},"content":{"rendered":"<p> In my Dembski rant, I used a metaphor involving the undescribable numbers. An interesting confusion came up in the comments about just what that meant. Instead of answering it with a comment, I decided that it justified a post of its own. It&#8217;s a fascinating topic which is incredibly counter-intuitive. To me, it&#8217;s one of the great examples of how utterly wrong our<br \/>\nintuitions can be.<\/p>\n<p> Numbers are, obviously, very important. And so, over the ages, we&#8217;ve invented lots of notations that allow us to write those numbers down: the familiar arabic notation, roman numerals, fractions, decimals, continued fractions, algebraic series, etc. I could easily spend months on this blog just writing about different notations that we use to write numbers, and the benefits and weaknesses of each notation.<\/p>\n<p> But the fact is, the <b>vast, overwhelming majority of numbers cannot be written<br \/>\ndown <em> in any form<\/em><\/b>.<\/p>\n<p> That statement seems bizarre at best. But it does actually make sense. But for it to<br \/>\nmake sense, we have to start at the very beginning: What does it mean for a number to be <em>describable<\/em>?<\/p>\n<p><!--more--><\/p>\n<p> A <em>describable<\/em> number is a number for which there is some finite representation. 
An indescribable number is a number for which there is <em>no</em> finite notation. To be clear, things like repeating decimals are <em>not</em> indescribable: a repeating decimal has a finite notation. (It can be written as a rational number, or in decimal notation with an extra symbol marking the repeating part.) Irrational numbers like &pi;, which can be computed by an algorithm, are also <em>not</em> indescribable. By indescribable, I mean numbers that <em>really</em> have no finite representation.</p>
<p>As a computer science guy, I naturally come at this from a computational perspective. One way of defining a describable number is to say that there is <em>some</em> finite computer program which will generate a representation of the number. In other words, a number is describable if you can describe how to generate its representation in a finite description. It <em>doesn't matter</em> what notation the program generates, as long as the end result uniquely identifies that one specific number. You could use programs that generate decimal expansions; you could use programs that generate either fractions or decimal expansions, but in the latter case, you'd need the program to identify which notation it was generating.</p>
<p>So: if you can write a finite program that will generate a representation of the number, the number is describable. It doesn't matter whether that program ever finishes; if it takes an infinite amount of time to compute the number, that's fine, so long as the <em>program</em> is finite.</p>
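<p>An aside not in the original post: here's a minimal sketch of what such a finite program looks like. This Python generator does ordinary long division; the program itself is a few lines long, even though its output, the decimal expansion of a rational number like 1/7, goes on forever.</p>

```python
from itertools import islice

def decimal_digits(numerator, denominator):
    """Yield the digits after the decimal point of numerator/denominator.

    The program is finite, but for a fraction like 1/7 it would run
    forever, emitting the infinite repeating expansion 142857142857...
    """
    remainder = numerator % denominator
    while True:
        remainder *= 10
        yield remainder // denominator   # next decimal digit
        remainder %= denominator         # carry the remainder forward

# Take the first 12 digits of 1/7 = 0.142857142857...
digits = list(islice(decimal_digits(1, 7), 12))
print(digits)  # [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

<p>The finite pair (1, 7) plus this finite program is a complete, finite description of the infinite string 0.142857142857&#8230;</p>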
<p>So &pi; is describable: its notation in decimal form is infinite, but the program that generates that representation is finite.</p>
<p>An indescribable number is, therefore, a number for which there is no notation, and no algorithm that can uniquely identify it in a finite amount of space. In principle, any real number can be represented as the sum of an infinite series of rational numbers; the indescribable numbers are those for which not only is that series infinite, but, given the first K terms of the series, no algorithm can tell you the value of the (K+1)th term.</p>
<p>So, take an arbitrary computing device &phi;, where &phi;(x) denotes the result of running &phi; on program x. The total number of describable numbers can be no larger than the number of programs x that can be run on &phi;. The set of programs for any effective computing device is countably infinite, so there are at most countably many describable numbers. But there are uncountably many real numbers, so the set of numbers that cannot be generated by any finite program is uncountably large.</p>
<p>Most numbers <em>cannot</em> be described in a finite amount of space. We can't compute with them, we can't describe them, we can't identify them. We know that they're there; we can <em>prove</em> that they're there. All sorts of things that we count on as properties of the real numbers wouldn't work if the indescribable numbers weren't there. But they're totally inaccessible.</p>
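<p>An editorial aside, not in the original post: the counting argument above can be written out in standard cardinality notation. A sketch, treating programs as finite strings over some finite alphabet &Sigma;:</p>

```latex
% Programs are finite strings over a finite alphabet \Sigma, so there are
% only countably many of them; each one describes at most one real number.
\[
  \left|\{\text{describable reals}\}\right|
  \;\le\; \left|\Sigma^{*}\right|
  \;=\; \aleph_{0}
  \;<\; 2^{\aleph_{0}}
  \;=\; \left|\mathbb{R}\right|
\]
% Hence the indescribable reals, being the complement of a countable set
% inside an uncountable one, are themselves uncountable.
```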
<p><em>Categories: information theory, numbers</em></p>