{"id":523,"date":"2007-10-02T11:03:34","date_gmt":"2007-10-02T11:03:34","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/10\/02\/the-excel-65535100000-bug\/"},"modified":"2007-10-02T11:03:34","modified_gmt":"2007-10-02T11:03:34","slug":"the-excel-65535100000-bug","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/10\/02\/the-excel-65535100000-bug\/","title":{"rendered":"The Excel 65,535=100,000 Bug"},"content":{"rendered":"<p> I&#8217;ve been getting a lot of requests from people to talk about the <a href=\"http:\/\/blogs.msdn.com\/excel\/archive\/2007\/09\/25\/calculation-issue-update.aspx\">recent Excel bug<\/a>. For those of<br \/>\nyou who haven&#8217;t heard about this, in Excel 2007, floating point calculations that should result in<br \/>\na number very, <em>very<\/em> close to either 65,535 or 65,536 are displaying their result as 100,000.<br \/>\nIt&#8217;s only in the display though &#8211; the underlying number is actually represented correctly, so if you subtract 2 from 65,536, you&#8217;ll get the correct answer of 65,534 &#8211; not 99,998.<\/p>\n<p> I can&#8217;t give you the specifics &#8211; because without seeing the Excel code, I can&#8217;t tell exactly what they got wrong. But I&#8217;ve got some pretty good suspicions, so I&#8217;ll do my best to explain the background that leads to the problem, and what I think is probably going on. (Another excellent explanation if this<br \/>\nis in <a href=\"http:\/\/blog.wolfram.com\/2007\/09\/arithmetic_is_hardto_get_right.html\">the Wolfram blog post<\/a> that I mentioned in my <a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/09\/fast-arithmetic-and-fractals#more\">fast arithmetic fractals post<\/a> this weekend.)<\/p>\n<p><!--more--><\/p>\n<p> When you&#8217;re working with numbers on in a program like excel, you&#8217;re using something called floating point numbers. Floating point is annoying, seriously annoying. 
It&#8217;s an <em>approximation<\/em> of real numbers using a finite precision in a way that allows arithmetic operations to be done reasonably quickly. (It&#8217;s still a whole lot slower than working with integers, but it&#8217;s usually pretty good.)<\/p>\n<p>\tWithout going into the gory details, what floating point means is that the number is represented by a fixed-precision number in a strange form of scientific notation. So, for example, in 4-digit base-10 floating point, the number 600 is actually represented as 0.6000&times;10<sup>3<\/sup>; 31.4159 is represented as 0.3142&times;10<sup>2<\/sup>. Notice what happened in that second case &#8211; we lost two digits, because the 4-digit float representation didn&#8217;t have enough precision to represent all of the digits.<\/p>\n<p> To make matters worse, floating point doesn&#8217;t use base 10. It&#8217;s binary: numbers are really represented by a base-2 fraction and a base-2 exponent. So, for example, 1\/4 is really represented as<br \/>\n0.1&times;2<sup>-1<\/sup>.<\/p>\n<p> There are a couple of unfortunate side-effects of this:<\/p>\n<ol>\n<li> The one which most people have seen in some form<br \/>\nis that almost every floating point computation accumulates errors: because of the finite precision,<br \/>\nevery step performs a round-off, which can add a bit of error. The more steps you do, the larger the accumulated effect of roundoff errors can become. The typical example of this is that on many calculators, if you compute 2 &#8211; (sqrt(2)^2), you&#8217;ll get something odd like 0.00000000012.<\/li>\n<li> The order in which you perform a computation can have a huge impact on the precision of the result.<br \/>\nFor example, in 4-digit base-10 floating point with 1-digit exponents, take (0.1E-9)&times;(0.3E0)&times;(0.4E8). If you do the first pair of numbers first, the intermediate product 0.3E-10 underflows to zero (the exponent -10 doesn&#8217;t fit in one digit), so it evaluates to<br \/>\n0E0&times;0.4E8=0. 
If you multiply the second pair first, it evaluates to 0.3E0&times;0.4E8=0.12E8, and then 0.1E-9&times;0.12E8=0.12E-2 &#8211; the correct answer.<\/li>\n<li> In order to display a value to a user, you need to convert it from base-2 scientific notation<br \/>\nto a standard base-10 representation. There are a bunch of problems here: doing the conversion is<br \/>\npotentially quite slow, involving multiple divisions (with the attendant round-off errors); and<br \/>\nyou want to present the number to the user correctly: while the base-10 conversion might result in<br \/>\n1.999999999999999999999999999999999999, you probably want to output that as 2. <\/li>\n<\/ol>\n<p> The last is the problem in Excel. Converting to print format is very slow &#8211; so people devote enormous<br \/>\namounts of effort to finding algorithms that make it faster. Every corner that can reasonably be cut<br \/>\nin order to save a bit of time is cut. At the same time, you want to get the number out correctly &#8211; so along with doing your clever optimization, you&#8217;re always keeping your eyes open for one of the cases<br \/>\nwhere the round-off errors should be corrected &#8211; like rounding 65,534.999999 to 65,535. <\/p>\n<p> The problem with that is that floating point representation<br \/>\nis very complicated, and there are an incredible number of different numbers that can be represented.<br \/>\nIt&#8217;s very easy to write code which <em>seems<\/em> correct, and which works perfectly on nearly every<br \/>\nfloating point value, but which contains an error that manifests on only a dozen bit patterns.<br \/>\nYou can test that code on billions of different values, and still miss the crucial couple that reveal<br \/>\nthe problem.<\/p>\n<p> It looks like the Excel bug is one of those cases. My suspicion is that there&#8217;s an unfortunate interaction between the code that tries to prevent generating numbers like 1.99999999999999999 instead of 2, and the code that does the optimized conversion. 
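To make the kind of value involved concrete, here's a sketch in Python (obviously not Excel's code): a product like 77.1&times;850, which is exactly 65,535 in real arithmetic, lands just below it in binary floating point, and it's the display layer's job to round that near-miss away.

```python
# 77.1 * 850 is exactly 65535 in real arithmetic, but 77.1 has no
# exact binary representation, so the computed product falls just short:
x = 77.1 * 850
print(repr(x))        # 65534.99999999999
# The stored value is fine -- arithmetic on it keeps working:
print(x - 2)          # 65532.99999999999, not anywhere near 99,998
# The display layer is expected to round the near-miss back up:
print(round(x, 4))    # 65535.0
```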
What it looks like is that the rounding-error correction is probably overdoing its job; it&#8217;s doing the roundoff and presenting its result back<br \/>\nto the output code as if it were already in its final base-10 format.<\/p>\n<p> In general, I like ragging on Microsoft as much as (if not more than) your average Mac user. But I can&#8217;t say that I really blame them too much for this one. Floating point arithmetic and conversion is a nightmare &#8211; it&#8217;s enormously complex, and the demand for speed is extraordinary. Slowing things down a tiny bit &#8211; taking an extra microsecond per conversion &#8211; can have a huge impact on the performance of the system. That kind of code is under constant pressure to squeeze out every last drop of performance. And errors like this are <em>so<\/em> easy to miss, while catching them<br \/>\nin testing is almost impossible. You can only reliably catch this kind of problem by doing a detailed analysis of the logic of the code, and if you miss even one of hundreds of different corner cases, you&#8217;re hosed. It&#8217;s just so hard to get right that the only surprise is that they&#8217;ve made so few mistakes like this.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;ve been getting a lot of requests from people to talk about the recent Excel bug. For those of you who haven&#8217;t heard about this, in Excel 2007, floating point calculations that should result in a number very, very close to either 65,535 or 65,536 are displaying their result as 100,000. 
It&#8217;s only in the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[7],"tags":[],"class_list":["post-523","post","type-post","status-publish","format-standard","hentry","category-bad-software"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-8r","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/523","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=523"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/523\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=523"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=523"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=523"}],"curies":[{"name":"
wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}