Something Nifty: A Taste of Simple Continued Fractions

One of the annoying things about how we write numbers is the fact that we generally write things one of two ways: as fractions, or as decimals.
You might want to ask, “Why is that annoying?” (And in fact, that’s what I want you to ask, or else there’s no point in my writing the rest of this!)
It’s annoying because fractions and decimals can both only describe *rational* numbers – that is, numbers that are a perfect ratio of two integers. And *most* numbers aren’t rational.
But it’s even more annoying than that: if you use decimals, then there are lots of rational numbers that you can’t represent exactly (e.g., 1/3); and if you use fractions, then it’s hard to express the idea that the fraction isn’t exact. (How do you write π as a fraction? 22/7 is a standard fractional approximation, but how do you say that π is *almost*, but not quite, 22/7?)
So what do we do?
One of the answers is something called *continued fractions*. A continued fraction is a very neat thing. Here’s the idea: take a number whose fractional form you don’t know. Pick the nearest simple fraction 1/n that’s just a *little bit too large*. If you were looking at, say, 0.4, you’d take 1/2 – it’s a bit bigger. Now you’ve got an approximation, but it’s too large. That means that the denominator is *too small*. So you add a correction to the denominator to make it a little bit bigger. And you just keep doing that: you approximate the correction to the denominator by adding a fraction to the denominator that’s just a little too big, and then you add a correction to *that* correction.
Let’s look at an example: 2.3456
1. It’s close to 2. So we start with 2 + (0.3456)
2. Now, we start approximating the fraction. The way we do that is we take the *reciprocal* of 0.3456 and take the integer part of it: 1/0.3456 rounded down is 2. So we make it 2 + 1/2; and we know that the denominator is off by .3088.
3. We take the reciprocal again, and get 3, and it’s off by .736
4. We take the reciprocal again, and get 1, and it’s off by 0.264
5. Next we get 3, but it’s off by 208/1000
6. Then 4, off by 0.168
7. Then 5, off by .16
8. Then 6, off by .25
9. Then 4, off by 0; so now we have an exact result.
So as a continued fraction, 2.3456 looks like:
[Image: continued.jpg – the continued fraction for 2.3456 written out in nested form]
As a shorthand, continued fractions are normally written using a list notation inside of square brackets: the integer part, followed by a semicolon, followed by a comma-separated list of the denominators of each of the fractions. So our continued fraction for 2.3456 would be written [2; 2, 3, 1, 3, 4, 5, 6, 4].
There’s a very cool visual way of understanding that algorithm. I’m not going to show it for 2.3456, because it’s a bit too much… But let’s look at something simpler: let’s try to write 9/16ths as a continued fraction. Basically, we make a grid consisting of 16 squares across by 9 squares up and down. We draw the *largest* square we can on that grid. The number of squares of that size that we can draw gives the first denominator of the continued fraction. Now there’s a rectangle left over: we draw the largest squares we can there. And so on:

[Image: square-continued.jpg – the 16×9 grid decomposed into successively smaller squares]

So the continued fraction for 9/16ths is [0; 1, 1, 3, 2].
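If you want to play with that algorithm yourself, here’s a minimal sketch of the arithmetic version in Python (just an illustration, with a made-up function name, not anything official). Using the exact Fraction type keeps rounding errors from creeping in:

    from fractions import Fraction

    def continued_fraction(x):
        # Expand a positive Fraction into its simple continued fraction
        # terms using the floor-and-reciprocate procedure described above.
        terms = []
        while True:
            whole = x.numerator // x.denominator   # integer part
            terms.append(whole)
            remainder = x - whole                  # leftover fractional part
            if remainder == 0:                     # exact, so we're done
                return terms
            x = 1 / remainder                      # flip it and repeat

    print(continued_fraction(Fraction(9, 16)))     # [0, 1, 1, 3, 2]

Because everything stays an exact integer ratio, the loop always terminates for rational input.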
Using continued fractions, we can represent *any* rational number exactly, using a finite number of terms.
One incredibly nifty thing about this way of writing numbers is: what’s the reciprocal of 2.3456, aka [2; 2, 3, 1, 3, 4, 5, 6, 4]? It’s [0; 2, 2, 3, 1, 3, 4, 5, 6, 4]. We just add a zero to the front as the integer part, and push everything else one place to the right. If there had been a zero in front, we would have removed it and pulled everything else one place to the left.
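To see that reciprocal trick in action, here’s a companion sketch (again, purely illustrative) that folds a list of terms back into an exact fraction, working from the innermost term outward:

    from fractions import Fraction

    def evaluate_cf(terms):
        # Fold [a0; a1, ..., an] back into a single exact Fraction,
        # starting from the innermost denominator and working outward.
        value = Fraction(terms[-1])
        for a in reversed(terms[:-1]):
            value = a + 1 / value
        return value

    print(evaluate_cf([0, 1, 1, 3, 2]))   # 9/16
    print(evaluate_cf([1, 1, 3, 2]))      # 16/9: drop the leading 0 and shift left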
Irrational numbers are represented as *infinite* continued fractions. So there’s an infinite series of correction fractions. You can understand it as a series of ever-improving approximations of the value of the number. And you can define it using a recurrence relation (that is, a recursive formula) for how to get to the next term.
For example, π = [3; 7, 15, 1, 292, 1, …]. If we work out just those first six terms in decimal form, we get 3.14159265392. That’s correct to the first *11* places in decimal. Not bad, eh?
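(For the curious: folding just those six terms back into a fraction, with a sketch like the one above, gives 104348/33215, assuming I haven’t slipped a digit anywhere.)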
A very cool property of continued fractions is: square roots written as continued fractions *always repeat*. Even cooler? What’s the square root of two as a continued fraction? [1; 2, 2, 2, 2, …].
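One way to see where that comes from: sqrt(2) = 1 + (sqrt(2) - 1), and sqrt(2) - 1 = 1/(sqrt(2) + 1) = 1/(2 + (sqrt(2) - 1)). So the quantity sqrt(2) - 1 shows up again inside its own correction term, and unrolling that substitution forever gives [1; 2, 2, 2, …].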

39 thoughts on “Something Nifty: A Taste of Simple Continued Fractions”

  1. Xanthir

    Continued fractions are so cool. ^_^ Once I get home I’ll dig out a nice popmath book I have that covers continued fractions, and post up a couple of the better ones. Phi has an absolutely *awesome* continued fraction.
    And while I’m at it, I’ll cover some continued radicals as well, where you have nested roots. Something like sqrt(2+sqrt(2+sqrt(2+… I believe phi has an awesome continued radical as well.
    Homework! What is the value of the continued radical sqrt(0+sqrt(0+sqrt(0+…? Bonus points if you can show your work.

  2. Canuckistani

    Weellll,
    y = sqrt(x + sqrt(x + sqrt(x + …
    Gives a quadratic expression in y:
    y^2 = x + y
    so y = -1/2 plus/minus sqrt(1 – 4x)/2
    As x -> 0, this approaches either 0 or -1.
    Do I win bonus points? Or did I make some stupid mistake?

  3. Blake Stacey

    Woot for the continued fractions!
    Now for this week’s Ask a ScienceBlogger, we’ll all be expecting your opinions on Good Will Hunting, A Beautiful Mind and Pi. <wink>

  4. Xanthir

    Hmm. It *should* equal 0 or 1, by the following argument:
    L = sqrt(0+sqrt(0+sqrt(0…
    L^2 = 0+sqrt(0+sqrt(0…
    L^2 = sqrt(0+sqrt(0…
    L^2 = L
    The equation L^2 – L = 0 factors into L(L-1)=0 which has solutions of L={0,1}.

  5. Davis

    Canuck, you made a sign error — it should read
    y = 1/2 plus/minus sqrt(1 – 4x)/2.
    (Which then gives you the same two answers as Xanthir’s method.)

  6. Davis

    Oops, there was another sign error in there I didn’t catch. The correct expression is
    y = 1/2 plus/minus sqrt(1 + 4x)/2

  7. GreedyAlgorithm

    If you view the continued radical as the limit g(z) of an iterated function f(x)=(0+x)^(1/2) starting at z, then g(z) = lim(z^(2^-n), n=infinity), which for large n has answers all around the unit circle and has no limit unless z=0, in which case g(z)=0. If f(x)=sqrt(0+x), where sqrt is the answer in the open right half-plane, then g(z) = lim(r^(2^-n) * e^(i*T/2^n), n=infinity) = 1 unless r=0, in which case g(z)=0.
    If you assume the continued radical has a value z, then sqrt(0+z)=z, so re^(iT)=|r^(1/2)|e^(i(T/2)), making r=0 or T=0, making z=0 or z=1.

  8. Canuckistani

    I always mix up a sign somewhere. I think it may be a congenital defect.
    Xanthir’s method is essentially the same as my method. Whenever you have a continued operation like that, you can get started by applying the inverse operation and seeing what you get.

  9. Flaky

    The set of integers is readily defined by zero, successor and negation. The set of rationals is of course the set of ratios of integers. Do infinite continued fractions, or some equivalent representation, cover all of the irrationals? Or is it possible to devise an operation on reals that could be used to introduce real numbers that cannot be represented as infinite continued fractions?

  10. Canuckistani

    Flaky,
    MarkCC wrote, “Irrational numbers are represented as infinite continued fractions.” Since he didn’t qualify it, I’m guessing that this means all irrational numbers. I’d personally have a hard time believing that infinite decimal expansions can represent numbers that infinite continued fractions can’t.
    I think that one really interesting distinction is between computable numbers and the rest of the reals. A computable number is a number for which there exists an algorithm to produce its digits. Since computable numbers are countable, most real numbers aren’t even computable. Fortunately, you can do calculus with only the computable numbers.
    Some numbers are definable but not computable. Chaitin’s constant is one such construction. Roughly speaking, it is the probability that a randomly generated computer program will halt. Since it is a probability, it is certainly between zero and one, but to determine its digits, the halting problem would have to be solvable.

  11. PaulC

    Thanks for the write-up. I knew continued fractions were good for something (Bill Gosper, better known to me for his early Conway’s Game of Life work, is a big advocate) but this is the most intuitive description I’ve read.
    Am I making a mistake in thinking that the golden ratio (sqrt(5)+1)/2 (the limit of the ratio of consecutive Fibonacci numbers) has a continued fraction of [1;1,1,1,1,…]?
    It appears to follow visually from the squares construction, and algebraically, you can derive the equation x=1+1/x from the continued fraction expansion.

  12. Cat lover

    Cooooooool!!!
    I’ve met continued fractions before (mostly in connection with Pell’s Equation) but this is the first time I’ve seen that visualization. Neato! 🙂
    @PaulC: That is, indeed, the continued fraction expansion of phi–yet another reason why it’s such an interesting number.
    @Flaky: As Canuckistani said, all irrationals can be written as continued fractions. What I would like to know is, how can we characterize the numbers with *repeating* continued fraction expansions? Obviously roots of quadratics with integer coefficients have repeating expansions, but what (if any) other numbers do? Anybody know?

  13. assman

    @Cat lover “The numbers with periodic continued fraction expansion are precisely the solutions of quadratic equations with integer coefficients” according to the wikipedia article on continued fractions
    BTW, another cool continued fraction is e which is
    e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10, 1, 1, …]

  14. malpollyon

    Whilst repeating continued fractions for quadratic irrationals is cool, I always thought the coolest thing about simple continued fractions was the continued fractions for powers of e. I won’t spoil the surprise here in case Mark has it planned for a follow-up.
    Also the big advantage continued fractions have as a representation for numbers is that like standard fractions they have a finite representation for every rational, and like decimals (or other bases) you can perform arithmetic on irrationals.

  15. rho

    RE: the continued radical sqrt(0+sqrt(0+sqrt(0+…
    This clearly is equal to 0, and not to 1.
    It’s only one value. It’s an unfamiliar notation for this value, but it’s still only one value, and not the solution to an equation. Having two possible answers is like saying 2+2 is either 4 or 5. One is right, the other isn’t.
    Let’s consider a function F(n) (should be a subscript n, but I don’t know if I can do that in here), where the n indicates how many square roots are taken. So:
    F(1) = sqrt(0)
    F(2) = sqrt(0+sqrt(0))
    F(3) = sqrt(0+sqrt(0+sqrt(0)))
    And so on. We can then define this iteratively:
    F(n) = sqrt(0 + F(n-1))
    However, F(1) is clearly equal to 0. (sqrt(0)=0)
    Using our iterative method, F(2)=sqrt(0+0) = 0
    You can then carry on doing this as many steps as you like, up to the limit n–>infinity, and you’re still not going to get to anything other than 0.
    As for why Xanthir’s method doesn’t work, consider:
    L = 0
    L^2 = 0
    L^2 = L
    L(L-1) = 0
    L = {0,1}
    The incorrect line here is the final one, which has a division by 0. To make this step more rigorous, you’d go:
    L(L-1)/L = 0/L
    L-1 = 0
    L = 1
    But L is 0, so this doesn’t work.

  16. Flaky

    Canuckistani: It’s probably easy to prove that infinite decimal expansions and infinite continued fractions cover the same set of reals. That wasn’t my point.
    The distinction between computable and non-computable numbers is also interesting. But since you can’t compute the non-computable, you can’t really ‘place’ them on the real line. At best you can know a range into which they fall.
    The question is: are there real numbers that can’t be represented as an infinite continued fraction, an infinite decimal expansion, or some other equivalent way, but for which there exists a terminating algorithm that can compare them to any rational number, telling whether it’s larger or smaller?

  17. Uffe

    Flaky, Canuckistani,
    You touch on an interesting subject. Maybe we can ask MarkCC to take this on as a fun topic here?
    Chaitin had a whole lecture on this somewhere, where he convincingly argued, using a long row of arguments from physics and math, that real numbers have no place and are an artefact of logic. One of his arguments is the one you touched above:
    The set of Natural numbers, as well as the set of all Turing machine programs, are countable sets (aleph-0), whereas the set of real numbers is at least aleph-1. Thus, for real numbers you run badly out of Turing programs to compute them. If a number cannot be computed, then it is questionable to talk about its existence as more than some mathematical approximation of reality.
    Some of the real numbers can be readily computed by Turing machines, although the programs might not finish in finite time. Examples: e, π, sqrt(2). But what do we actually know about the “ugly ones” that cannot be computed? Does it make sense to claim they exist?
    At the end of Chaitin’s lecture a smart guy in the audience asked him what about Ω – Chaitin’s own famous constant – it would go down the drain with the rest of the real numbers? Chaitin’s answer on that was a bit evasive, noting that Ω is a special case, a real number yes, but solidly constructed out of natural numbers and “nearly” computable.

  18. Torbjörn Larsson

    Coolness thirded or fourthed.
    I haven’t looked at continued fractions, so it was nice to see a max procedure for each step and a graphical picture that goes with it. It seems phi is as natural here as e is in diff eq’s and integral transforms.
    Uffe:
    Thank you for a nice example next time someone tries to go platonic on me. CS results seem to go a long way to show why (at least some) numbers are parts of models and approximations. I detest platonic philosophy (and its cryptodualism) as much as platonic love. They just don’t do it for me.

  19. PaulC

    Uffe: I’m a little lost on what it means for a number to “exist” in this context. Something that follows as “an artifact of logic” exists in the sense that its existence is implied by whatever axioms you started with. You can certainly write some axioms and conclude that there exist uncountably many incomputable real numbers as a consequence.
    On the other hand, even the fundamental constants such as pi might not exist in the sense of ever being physically realizable. Approximations of pi are all over the place, but the value itself is purely an abstraction. Conceivably, there could be some process that implies an exact value of pi, though I’m not sure how, but there must be reals that could be generated by an infinite running Turing machine but that are not realized in our universe.
    The reason I find this distinction important is that I often have wondered what it means for an abstraction to be realized. For instance, I can imagine a long-running dovetailed Turing machine program that would enumerate strings until it found one which passed a battery of tests that we would associate with intelligence and then started to run that program. I consider it reasonable that that program when running would exhibit consciousness. However, I don’t think that my thought experiment has consciousness even if I were to write it out with perfect mathematical formalism. Somehow, it appears that the phenomenon of consciousness should only result from mathematical systems that are physically realized, and not from ones that “exist” but are only abstractions.
    Maybe Chaitin is really just expressing a preference for constructive proofs. I think that reals are at least a convenient fiction. There are many instances in which it is useful to assume a continuum of values in an interval without worrying much about how to compute them.

  20. PaulC

    BTW, I also think that mathematicians (especially platonists) tend to get it backwards: it would be more correct to say that the universe, with all its chunkiness, spikiness, and quantum-level weirdness is exact and all those beautiful, symmetric, smoothed out formalisms are the approximation.
    But by this rule, I wouldn’t make a distinction other than “exists mathematically” or “is physically realized.” Clearly, there is a subset of the former “is computable up to any approximation” but I don’t know that this subset is important enough to give it special preference.

  21. Thomas Winwood

    That graphical method looks like a great way to demonstrate it – can you show how it translates into the other numerical method you showed for getting a continued fraction?

  22. Xanthir

    Rho – Nope, you’re wrong. The value is most definitely 0 and 1. It depends on which way you approach the problem.
    Let’s begin with the more general expression sqrt(x+sqrt(x+sqrt(x+…
    If we approach it your way, where x=0, then the value of that expression is 0. If instead we take the limit of the expression as x approaches 0, the value is 1. In other words, the equation y=sqrt(x+sqrt(x+sqrt(x+… is discontinuous at that point. Its value is 0, but its limit is 1.
    Let’s look at the more general equation, which is not necessarily equal to 0. Using that, we can obtain the equation:
    L^2 – L – x = 0
    This doesn’t easily factor, so let’s run it through the quadratic formula:
    (1 +/- sqrt(1+4x))/2
    Simplified, we get two possible values for L:
    .5 + sqrt(1+4x)/2
    and
    .5 – sqrt(1+4x)/2
    If x=0, then the first is equal to 1, and the second is equal to 0.
    Using infinite recursion to figure the value of a definite number isn’t a good idea, by the way. By that argument, .999… is less than one. Finite approximations have a way of not always representing the truth of the infinite-precision value.
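    A quick numerical illustration of that discontinuity (a throwaway sketch in Python, not anyone’s posted code): truncate the radical at a finite depth and watch what happens near x = 0.

        import math

        def nested_sqrt(x, depth=60):
            # Evaluate sqrt(x + sqrt(x + ...)) truncated after `depth` square roots.
            value = 0.0
            for _ in range(depth):
                value = math.sqrt(x + value)
            return value

        for x in (0.0, 1e-12, 1e-6, 0.01):
            print(x, nested_sqrt(x))
        # x = 0 stays exactly 0; any positive x, however tiny, lands near 1.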

  23. PaulC

    Thomas Winwood: here’s how I worked out the graphical interpretation.
    Assume you want to write the continued fraction for some ratio x/y less than 1. Represent this as the ratio of area covered by an x×y rectangle in a y×y square.
    Since x/y = 1/(y/x), we can write this as 1/(k+r), where k=floor(y/x) and r=(y/x-floor(y/x)). Now we just need to represent r as a continued fraction, and we will have one for x/y.
    By the construction, we have decomposed the x×y rectangle into k x×x squares and another z×x rectangle for some value of z. Observe that z=y-kx in the construction. Moreover, k+r=y/x (see above), so kx+rx=y and it follows that rx=y-kx, so rx=z, and r=z/x.
    So if we want to find a continued fraction, we can proceed by decomposing a z×x rectangle, which happens to be the one we are left with after removing the first square(s).
    I’m not sure if there is a simpler demonstration, but that’s the one I came up with.

  24. PaulC

    Another thing I was curious about yesterday was the connection to the Fibonacci sequence. You can actually represent the evaluation of continued fractions as a linear recurrence, as follows.
    Suppose you want to evaluate a finite continued fraction and always express it as a ratio of two integers. You would proceed from the inside out, and recursively apply the steps as follows.
    The fraction to evaluate is a+1/Q, where Q is another continued fraction. Recursively evaluate Q as a ratio m/n, where m and n are integers. Then we just need to evaluate a+1/(m/n) = a+n/m, which can be expressed as the integer ratio (am+n)/m. So the new denominator is the old numerator, and the new numerator is am+n.
    So, given a continued fraction series ordered in reverse, [a_k; a_(k-1), …, a_1, a_0], the continued fraction is the ratio x_(k+1)/x_k in a series given by:
    x_(i+1) = a_i*x_i + x_(i-1)
    where x_0=1 and x_(-1)=0.
    If a_i is always 1, then this recurrence is just the Fibonacci series.
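    In code, that recurrence is only a few lines. A quick Python sketch (the names are made up; the terms are passed innermost-first, i.e. in the reversed order described above):

        def evaluate_reversed(reversed_terms):
            # x_{-1} = 0, x_0 = 1; each step computes x_{i+1} = a_i*x_i + x_{i-1}.
            # Returns (numerator, denominator) = (x_{k+1}, x_k).
            x_prev, x = 0, 1
            for a in reversed_terms:
                x_prev, x = x, a * x + x_prev
            return x, x_prev

        print(evaluate_reversed([2, 3, 1, 1, 0]))   # [0; 1, 1, 3, 2] reversed -> (9, 16)
        print(evaluate_reversed([1, 1, 1, 1, 1]))   # all ones -> (8, 5), consecutive Fibonaccis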

  25. Torbjörn Larsson

    PaulC:
    My picture from Uffe’s comment was that first, as you so eloquently say, “the universe, with all its chunkiness, spikiness, and quantum-level weirdness is exact and all those beautiful, symmetric, smoothed out formalisms are the approximation” and then that approximation is further approximated by computable numbers which we get out of the formalisms and also from our experiments which we compare with.
    In this sense our results are models of models, and further and, I think, more firmly removed from platonic ideals.

  26. Canuckistani

    Xanthir,
    Rho sort of has a point. What does it really mean to square an infinite expression like sqrt(x + sqrt(x + … ? The proper way to go about it is the way GreedyAlgorithm and rho did, by defining a sequence of functions indexed to the natural numbers. But rho must have missed GreedyAlgorithm’s demonstration that the iterated function can approach 1 in some situations.
    The interesting thing is that our formal manipulations of infinite expressions actually reflect the more rigorous mathematics. Euler was famous for using such formal manipulations to come up with interesting formulae which he and others then proved rigorously.

  27. Xanthir

    Rho sort of has a point. What does it really mean to square an infinite expression like sqrt(x + sqrt(x + … ?

    Eh, interpret the expression as sqrt(x+y). It just so happens that y=sqrt(x+sqrt(x+sqrt(x+… Y is a definite number, after all, nothing exotic. Square this, and you get x+y. X is 0, so you’re left with just y, which is equal to the original expression.
    As far as I can tell, this is no different from taking .999…, multiplying it by 10, and then subtracting 9. You’re left with the same number you started with. Values that are defined with an infinite series aren’t anything special, they’re just one way to write that value.

  28. Canuckistani

    Xanthir,
    I would argue that the appropriate procedure in the case of 0.999… is to define x_n as 0.999…9 (with n 9’s) and then show that the limit as n goes to infinity is 1.
    When it comes to the infinite, I always want to know exactly which limit it is we’re dealing with.

  29. truth machine

    3.14159265392. That’s correct to the first 11 places in decimal. Not bad, eh?
    It’s quite bad considering that only the first 10 digits are accurate.

  30. Uffe

    PaulC, MarkCC and others:
    Digging deep in my files I finally found the Chaitin lecture arguing against real numbers and continuum in general. It is still there and on-line! So if you have 1h12 to spare for something amazing and thought provoking, here it is:
    Lisbon Lecture
    Great stuff and fun too…

  31. truth machine

    If we approach it your way, where x=0, then the value of that expression is 0. If instead we take the limit of the expression as x approaches 0, the value is 1.
    This is silly. x is 0; that’s a given. The limit of the expression for non-zero x approaching zero is irrelevant.

  32. truth machine

    I’m a little lost on what it means for a number to “exist” in this context
    The only meaningful sense of “exist” in mathematics is that the set given by a description isn’t empty. This is quite distinct from the empirical notion of the set of instances matching a description being non-empty, and the confusion between these gives rise to the sort of muddle seen in Uffe’s comment and at the base of so-called mathematical Platonism.

  33. Canuckistani

    truth machine,
    What’s interesting is that the formal manipulations of the expression pull out both values (i.e., x = 0 and x -> 0), even though we have no right to expect that just fiddling around will do that.

  34. Torbjörn Larsson

    “My picture from Uffe’s comment”
    Looking at this comment today I wonder if I wasn’t affected by trying to make a rationalisation of a mistake, because I can’t any longer see what the immediate picture I got from Uffe’s comment was. But I like the final picture since I haven’t taken the requirement of computability into consideration before.

  35. Andrew Love

    I think there’s an error in the first example. I get
    [2; 2, 1, 8, 2, 1, 1, 4] for the continued fraction form of 2.3456, and using your result of [2; 2, 3, 1, 3, 4, 5, 6, 4], I get approximately 2.4414. Or I may be confused…

  36. Xanthir

    Andrew Love – I get the same answer as you using an online calculator found here. It expands the 4 at the end into (3,1), but that’s equivalent. I also get 2.4414 from Mark’s continued fraction.
    Working it out by hand, I also confirm that [2; 2, 1, 8, 2, 1, 1, 4] does indeed evaluate to 2.3456.

  37. Xanthir

    Running a quick algorithm that simply does the subtract-and-invert thing, I get basically the same answer as before. Rounding errors appear to plague it, though – after the 1,1,3,1 I still have a tiny bit left over. But it confirms that Mark went crazy. ^_^

  38. Mark C. Chu-Carroll

    Guys:
    Yeah, I blew it; messed up the calculation. You’re right.
    Xanthir: when you’re calculating continued fractions, you have to be very careful to keep track of everything as integer ratios, not decimals on a calculator or computer. Rounding errors will *totally* blow you out of the water otherwise.

  39. Xanthir

    Yeah, that’s what I assumed. I just threw together a really fast program to do it for me – the accuracy was close enough until right at the end. I first wondered if there was an upper bound you could set on the number to expect, but I quickly realized you couldn’t. The continued fraction for 1000/1001, for example, is [0; 1, 1000].
    On a side note, I discovered that the continued fractions calculator I was using must have certain rounding errors as well. When calculating 1000/1001, it gives me [0; 1, 1000, 9077495378]. A huge number like that means that it still had an extremely tiny remainder in the algorithm.

