{"id":521,"date":"2007-09-29T15:18:00","date_gmt":"2007-09-29T15:18:00","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/09\/29\/fast-arithmetic-and-fractals\/"},"modified":"2007-09-29T15:18:00","modified_gmt":"2007-09-29T15:18:00","slug":"fast-arithmetic-and-fractals","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/09\/29\/fast-arithmetic-and-fractals\/","title":{"rendered":"Fast Arithmetic and Fractals"},"content":{"rendered":"<p> As pointed out by a commenter, there are some really surprising places where fractal patterns can<br \/>\nappear. For example, there was <a href=\"http:\/\/blog.wolfram.com\/2007\/09\/arithmetic_is_hardto_get_right.html\">a recent post<\/a> on the <a href=\"http:\/\/blog.wolfram.com\">Wolfram mathematica blog<\/a> by the engineer who writes<br \/>\nthe unlimited precision integer arithmetic code.<\/p>\n<p><!--more--><\/p>\n<p> Unlimited precision integers are numbers represented in a variable-length form, so that you can<br \/>\nrepresent very large numbers by a sequence of smaller numbers. For an overly simplified example, suppose you wanted to represent the number &#8220;340123461297834610029346129834761298734612483&#8221;. One way of doing that<br \/>\nwould be as a list of four-digit numbers: [0003,4012,3461,2978,3461,0029,3461,2983,4761,2987,3461,2483].<\/p>\n<p> One of the challenges with very large numbers like that is doing arithmetic on them in a reasonable amount of time. You <em>can<\/em> do what we&#8217;d do on paper &#8211; basic long multiplication, one digit at a time. But that&#8217;s really very slow for really big numbers.<\/p>\n<p> Naturally, people have searched for better algorithms. One great example of that is called Karatsuba multiplication. The Karatsuba algorithm is based on the idea that you can replace some of the sub-multiplies by shifts, where shifts are less expensive than multiplications.<\/p>\n<p> So, let&#8217;s look at a simple number, like 5743. 
We can re-write that as 57&times;10<sup>2<\/sup>+43. In fact, we can write <em>any<\/em> four-digit base-ten number x in a similar form, as x<sub>1<\/sub>&times;10<sup>2<\/sup>+x<sub>2<\/sub>.<\/p>\n<p> So, suppose we want to multiply two 4-digit numbers in this expanded pair form: (x<sub>1<\/sub>&times;10<sup>2<\/sup>+x<sub>2<\/sub>)&times;(y<sub>1<\/sub>&times;10<sup>2<\/sup>+y<sub>2<\/sub>). We can generalize that slightly &#8211; replace the 10<sup>2<\/sup> with a symbol B &#8211; we&#8217;re basically doing base-100 multiplication in the example, so B=100.<\/p>\n<p> The normal way of multiplying is basically to multiply the four pairs, (x<sub>1<\/sub>&times;y<sub>1<\/sub>,x<sub>2<\/sub>&times;y<sub>1<\/sub>,x<sub>1<\/sub>&times;y<sub>2<\/sub>,x<sub>2<\/sub>&times;y<sub>2<\/sub>), then shift the first product left four places and the two middle products left two places each, and then add them all up. That&#8217;s four multiplications. If we repeat this splitting process for each of the two-digit numbers, then we end up doing 16 one-digit multiplications to multiply the two 4-digit numbers.<\/p>\n<p> Karatsuba multiplication is based on doing a bit of algebra on the paired form. Do an algebraic expansion of the symbolic version of the paired form:<\/p>\n<p>(x<sub>1<\/sub>&times;B+x<sub>2<\/sub>)&times;(y<sub>1<\/sub>&times;B+y<sub>2<\/sub>) = x<sub>1<\/sub>y<sub>1<\/sub>B<sup>2<\/sup> + (x<sub>2<\/sub>y<sub>1<\/sub>+x<sub>1<\/sub>y<sub>2<\/sub>)B + x<sub>2<\/sub>y<sub>2<\/sub>.<\/p>\n<p> We can rewrite that in terms of three new variables:<\/p>\n<ul>\n<li> L = x<sub>1<\/sub>y<sub>1<\/sub><\/li>\n<li>M = x<sub>2<\/sub>y<sub>2<\/sub><\/li>\n<li>N = (x<sub>1<\/sub>y<sub>2<\/sub> + x<sub>2<\/sub>y<sub>1<\/sub>)<\/li>\n<\/ul>\n<p> So X&times;Y = LB<sup>2<\/sup> + NB + M.<\/p>\n<p> Now, here&#8217;s the clever part. We&#8217;re going to find an alternate way of producing N, using<br \/>\none multiply instead of two. 
We&#8217;ll start by taking (x<sub>1<\/sub>+x<sub>2<\/sub>)&times;(y<sub>1<\/sub>+y<sub>2<\/sub>):<\/p>\n<ol>\n<li> (x<sub>1<\/sub>+x<sub>2<\/sub>)(y<sub>1<\/sub>+y<sub>2<\/sub>) = <em>(expand)<\/em> <\/li>\n<li> x<sub>1<\/sub>y<sub>1<\/sub> + x<sub>1<\/sub>y<sub>2<\/sub> + x<sub>2<\/sub>y<sub>1<\/sub> + x<sub>2<\/sub>y<sub>2<\/sub> = <em>(replace terms with equivalents)<\/em><\/li>\n<li> L + N + M<\/li>\n<li> So: N = (x<sub>1<\/sub>+x<sub>2<\/sub>)(y<sub>1<\/sub>+y<sub>2<\/sub>) &#8211; L &#8211; M<\/li>\n<\/ol>\n<p> So once we compute L and M, we can compute N using a single multiplication, two additions, and two subtractions. Then when we recombine all of these, we&#8217;ll end up doing an extra shift. But since shifts,<br \/>\nadditions, and subtractions are cheaper than multiplication, the cost of the multiplies dominates &#8211; and we can do this multiplication using 3 sub-multiplies instead of 4.<\/p>\n<p> If we&#8217;re multiplying a really big number, say, one with 32 digits, we can do this recursively &#8211; three 16-digit sub-multiplies; 3 8-digit sub-sub-multiplies for each of those, for a total of 9 sub-sub-multiplies; and so on, until we get to 3<sup>5<\/sup>=243 single-digit multiplications. Compare that to the<br \/>\nconventional multiplication, which needs to multiply each digit in the first number with each digit in the other number, for a total of 32&times;32=1024 multiplications.<\/p>\n<p> So, we&#8217;ve got a nice fast algorithm for multiplying large numbers. Now, what does this have<br \/>\nto do with fractals? Let&#8217;s look at a bitmap (via Wolfram) that shows the basic pattern of<br \/>\nwhat digits get multiplied by what digits:<\/p>\n<p><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" alt=\"karatsuba-1.gif\" src=\"https:\/\/i0.wp.com\/scientopia.org\/img-archive\/goodmath\/img_278.gif?resize=180%2C180\" width=\"180\" height=\"180\" \/><\/p>\n<p> Voila! The rows represent bits from one number; the columns represent bits from the other. 
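The recursive scheme described above can be sketched directly in Python. This is a minimal, didactic sketch under my own naming (the function name and base-10 splitting are illustrative assumptions); it is not the Wolfram implementation, which works on binary machine words and falls back to ordinary multiplication below a tuned size threshold:

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y using Karatsuba's
    three-multiply recursion (illustrative, not production bignum code)."""
    # Base case: one operand is a single digit, so just multiply directly.
    if x < 10 or y < 10:
        return x * y
    # Split each number around half the digit-length of the larger one,
    # i.e. write x = x1*B + x2 and y = y1*B + y2 with B = 10**n.
    n = max(len(str(x)), len(str(y))) // 2
    B = 10 ** n
    x1, x2 = divmod(x, B)
    y1, y2 = divmod(y, B)
    L = karatsuba(x1, y1)                    # high halves
    M = karatsuba(x2, y2)                    # low halves
    # The middle term costs one multiply instead of two:
    N = karatsuba(x1 + x2, y1 + y2) - L - M
    return L * B * B + N * B + M             # shifts and adds recombine

print(karatsuba(5743, 1234))  # prints 7086862, i.e. 5743 * 1234
```

The three recursive calls on roughly half-size numbers are exactly the 3-way recursion counted above: about 3<sup>k<\/sup> single-digit multiplies for a 2<sup>k<\/sup>-digit number, instead of 4<sup>k<\/sup>.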
The multiplication pattern of Karatsuba arithmetic is a fractal. In fact, it&#8217;s very close to being the<br \/>\n<a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/08\/geometric-lsystems\">classic T-square fractal<\/a>. More precisely, it is the inverse of the T-square fractal after a finite number of iterations, with the bottom right corner snipped out. Fractals are ubiquitous in describing mathematical patterns. There are a huge number of recursive algorithms (which are, by definition, self-similar), which correspond to a variety of simple fractals.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As pointed out by a commenter, there are some really surprising places where fractal patterns can appear. For example, there was a recent post on the Wolfram mathematica blog by the engineer who writes the unlimited precision integer arithmetic code.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[86],"tags":[],"class_list":["post-521","post","type-post","status-publish","format-standard","hentry","category-fractals"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-8p","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/521","targetHints":{"allow":["GET"]}}],"collection":[{"href":
"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=521"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/521\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=521"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=521"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=521"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}