My Favorite Strange Number: Ω (classic repost)

I’m away on vacation this week, taking my kids to Disney World. Since I’m not likely to
have time to write while I’m away, I’m taking the opportunity to re-run an old classic series
of posts on numbers, which were first posted in the summer of 2006. These posts are mildly
revised.

Ω is my own personal favorite transcendental number. Ω isn’t really a specific number, but rather a family of related numbers with bizarre properties. It’s the one real transcendental number that I know of that comes from the theory of computation, that is important, and that expresses meaningful fundamental mathematical properties. It’s also deeply non-computable, meaning that not only is it non-computable, but even computing meta-information about it is non-computable. And yet, it’s almost computable. It’s just all around awfully cool.

So. What is Ω?

It’s sometimes called the halting probability. The idea of it is that it encodes the probability that a given infinitely long random bit string contains a prefix that represents a halting program.

It’s a strange notion, and I’m going to take a few paragraphs to try to elaborate on what that means, before I go into detail about how the number is generated, and what sorts of bizarre properties it has.

Remember that in the theory of computation, one of the most fundamental results is the non-computability of the halting problem. The halting problem is the question “Given a program P and input I, if I run P on I, will it ever stop?” You cannot write a program that reads an arbitrary P and I and gives you the answer to the halting problem. It’s impossible. And what’s more, the statement that the halting problem is not computable is actually equivalent to the fundamental statement of Gödel’s incompleteness theorem.
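
To see why, here's a minimal sketch of the classic diagonalization argument, written in Python. The function halts is purely hypothetical; the whole point of the argument is that no correct implementation of it can exist.

```python
def halts(program, program_input):
    """Hypothetical oracle: returns True iff program(program_input) halts.

    No correct implementation can exist; this stub is only here so the
    sketch is self-contained.
    """
    raise NotImplementedError("no halting decider can exist")


def paradox(program):
    # Do the opposite of whatever halts() predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    return                # predicted to loop forever, so halt immediately


# Does paradox(paradox) halt?  If halts(paradox, paradox) returned True,
# then paradox(paradox) would loop forever; if it returned False, it would
# halt.  Either way halts() gives the wrong answer, so no such program exists.
```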

Ω is something related to the halting problem, but stranger. The fundamental question of Ω is: if you hand me a string of 0s and 1s, and I’m allowed to look at it one bit at a time, what’s the probability that eventually the part that I’ve seen will be a program that eventually stops?
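
In code terms, one trial of that thought experiment looks roughly like the sketch below. The function is_halting_program is a placeholder standing in for the check "is this bit string a program that eventually halts?", which is exactly the part no computer can actually perform.

```python
import random

def one_trial(max_len, is_halting_program):
    """Draw random bits one at a time and report whether some prefix of
    them formed a halting program.

    `is_halting_program` is a placeholder; no computable version exists.
    """
    bits = ""
    for _ in range(max_len):
        bits += random.choice("01")
        if is_halting_program(bits):
            return True
    return False

# Omega is the limiting fraction of such trials that succeed, as the
# number of trials and max_len both grow without bound.
```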

When you look at this definition, your reaction should be “Huh? Who cares?”

The catch is that this number – this probability – is easy to define; but it’s not computable; it’s completely uncompressible; and it’s normal.

Let’s take a moment and look at those properties:

  1. Non-computable: no program can compute Ω. In fact, beyond a certain value N (which is non-computable!), you cannot compute the value of any bit of Ω.
  2. Uncompressible: there is no way to represent Ω in a non-infinite way; in fact, there is no way to represent any substring of Ω in fewer bits than there are in that substring.
  3. Normal: the digits of Ω are completely random and unpatterned; the value of any digit in Ω is equally likely to be a zero or a one; any selected pair of digits is equally likely to be any of the four possibilities 00, 01, 10, 11; and so on.

So, now we know a little bit about why Ω is so strange, but we still haven’t really defined it precisely. What is Ω? How does it get these bizarre properties?

Well, as I said at the beginning, Ω isn’t a single number; it’s a family of numbers. The value of an Ω is based on two things: an effective (that is, Turing equivalent) computing device; and a prefix-free encoding of programs for that computing device as strings of bits.

(The prefix-free bit is important, and it’s also probably not familiar to most people, so let’s take a moment to define it. A prefix-free encoding is an encoding where for any given string which is valid in the encoding, no prefix of that string is a valid string in the encoding. If you’re familiar with data compression, Huffman codes are a common example of a prefix-free encoding.)
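
For concreteness, here's a tiny sketch in Python that checks whether a set of codewords is prefix-free, with a couple of made-up example codes:

```python
def is_prefix_free(codes):
    """Return True if no codeword in `codes` is a proper prefix of another."""
    for a in codes:
        for b in codes:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_free(["0", "10", "110", "111"]))  # True: a Huffman-style code
print(is_prefix_free(["0", "01", "11"]))          # False: "0" is a prefix of "01"
```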

So let’s assume we have a computing device, which we’ll call φ. We’ll write the result of running φ on a program encoded as the binary string p as φ(p). And let’s assume that we’ve set up φ so that it only accepts programs in a prefix-free encoding, which we’ll call ε; the set of programs coded in ε, we’ll call Ε; and we’ll write φ(p)↓ to mean that φ(p) halts. Then we can define Ω as:

Ω_{φ,ε} = Σ_{p ∈ Ε, φ(p)↓} 2^{-len(p)}

So: for each program p in our prefix-free encoding, if it halts, we add 2^{-len(p)} to Ω.
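
To make the definition concrete, here's a toy calculation under a deliberately made-up assumption: suppose that in some prefix-free encoding exactly the programs "0" and "10" halt, and everything else runs forever.

```python
halting_programs = ["0", "10"]   # purely hypothetical halting programs
omega = sum(2 ** -len(p) for p in halting_programs)
print(omega)                     # 2^-1 + 2^-2 = 0.75
```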

Ω is an incredibly odd number. As I said before, it’s random, uncompressible, and has a fully normal distribution of digits. But where it gets fascinatingly strange is when you start considering its computability properties.

Ω is definable. We can (and have) provided a specific, precise definition of it. We’ve even described a procedure by which you can conceptually generate it. But despite that, it’s deeply uncomputable. There are procedures where we can compute a finite prefix of it. But there’s a limit: there’s a point beyond which we cannot compute any digits of it. And there is no way to compute that point. So, for example, there’s a very interesting paper where the authors
computed the value of Ω for a Minsky machine; they were able to compute 84 bits of it; but the last 20 are unreliable, because it’s uncertain whether or not they’re actually beyond the limit, and so they may be wrong. But we can’t tell!
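
I don't have the authors' actual code, but the general flavor of such a computation is a dovetailed lower-bound approximation; here's a minimal sketch, assuming a placeholder run_for_steps(p, t) that runs program p for t steps and reports whether it halted within that budget.

```python
from itertools import count

def omega_lower_bounds(programs, run_for_steps):
    """Yield successively better lower bounds on Omega by dovetailing.

    `programs` is a list of prefix-free program encodings (shortest first),
    and `run_for_steps(p, t)` reports whether p halts within t steps; both
    are placeholders for whatever concrete machine is chosen.
    """
    halted = set()
    lower = 0.0
    for t in count(1):                 # ever larger step budgets...
        for p in programs[:t]:         # ...over ever larger sets of programs
            if p not in halted and run_for_steps(p, t):
                halted.add(p)
                lower += 2.0 ** -len(p)
        yield lower                    # creeps up toward Omega from below
```

The bounds converge to Ω from below, but, as described above, there is no way to tell how many of the leading bits have already stopped changing; that is exactly why the trailing bits of such a computation are unreliable.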

What does Ω mean? It’s actually something quite meaningful. It’s a number that encodes some of the very deepest information about what it’s possible to compute. It gives a way to measure the probability of computability. In a very real sense, it represents the overall nature and structure of computability in terms of a discrete probability.

Ω is actually even the basis of a halting oracle – that is, if you knew the
value of Ω, then you could easily write a program which solves the halting problem!
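
Here's a rough sketch of how that oracle would work, assuming you were handed the first n bits of Ω as an exact rational, and reusing the placeholder program enumeration and run_for_steps from the sketch above.

```python
from fractions import Fraction

def halts_via_omega(p, omega_n, n, programs, run_for_steps):
    """Decide whether program p halts, given Omega's first n bits.

    `omega_n` is the exact rational 0.b1...bn built from those bits, and
    len(p) must be at most n.  `programs` and `run_for_steps` are the same
    placeholders as in the previous sketch.
    """
    assert len(p) <= n
    halted = set()
    lower = Fraction(0)
    t = 0
    while lower < omega_n:             # dovetail until the running lower
        t += 1                         # bound reaches Omega's known prefix
        for q in programs[:t]:
            if q not in halted and run_for_steps(q, t):
                halted.add(q)
                lower += Fraction(1, 2 ** len(q))
    # If p hadn't halted yet but halted later, it would add at least 2^-n
    # more to Omega, pushing it past 0.b1...bn + 2^-n, which is impossible.
    return p in halted
```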

Ω is also an amazingly dense container of information – as an infinitely long, thoroughly uncompressible number, it contains an immeasurable quantity of information. And we can’t even figure out what most of it is!

18 thoughts on “My Favorite Strange Number: Ω (classic repost)”

  1. John Fouhy

    I read a book by Chaitin — _Meta Math!_. Most of it has now left me, but one thing sticks in my mind: Apparently, if you choose a real number at random, then with probability 1, it will be:
    1. Transcendental.
    2. Random.
    3. Uncomputable.
    In other words, we can’t write it down, and neither can we describe it precisely. Even if I knew exactly what this number is, there is no way I could impart that information to you. So, in what sense do real numbers really exist anyway?

  2. Jair

    I’ve also read that book by Chaitin; I really enjoyed it. It really makes you philosophically suspicious about the set of real numbers. Also it’s mostly non-technical from what I remember.

  3. The Young Linguist

    As an undergraduate math major, my question is this.
    If I define an Ω, can I determine the computing machine that it is for?
    I’m just curious, because wouldn’t that make this ironically close to ‘the question’ and ‘the answer’ in HHGTTG? You can only know an Ω or a machine, not both?
    (I realize it would be amazingly hard to define an Ω, given Uncompressible and Normal)

  4. Jonathan Vos Post

    John Fouhy:
    “in what sense do real numbers really exist anyway?”
    You are on the edge of entering a vast and deep forest of metaphysical disagreement.
    You are making assumptions about what “with probability 1” means, which are intuitive to you, but not what you think.
    You are making assumptions about what it means for mathematical objects to exist, which wise people have debated for centuries.
    Here be dragons.

  5. Mark C. Chu-Carroll

    Re John #5:
    Chaitin makes a very strong case that the whole idea of the real numbers is philosophically ill-founded.
    It’s a damned tricky question. On the one hand, we’ve got
    real observable things that at least approximate irrational, transcendental numbers – things like π and e. In that sense, they’ve got to exist. A perfect circle doesn’t exist in the real world – but it seems strange to say that circles don’t exist. And yet, if circles exist, then π must exist; and if we accept the reality of π, then we’re stuck with the whole mess of irrationals.
    But on the other hand, the moment we accept the idea of the irrational numbers, suddenly the whole concept of numbers starts to become downright nonsensical – as you point out, the overwhelming majority of numbers can’t even be described.
    (And contrary to JvP, I don’t think that you’re misunderstanding what “with probability 1” means; I think the fact that you’re questioning whether the reals really exist in a meaningful sense shows that you get it; the
    undescribable numbers so outweigh the describables that
    the describables are a vanishingly small fraction of the set of reals.)

  6. Jonathan Vos Post

    “the describables are a vanishingly small fraction of the set of reals”
    Yes, but they are a particularly *meaningful* fraction, and the amount of information in each is well modeled by the Minimum Description Length paradigm. MDL is more easily calculated than the Kolmogorov approach.
    Someone who works with computers for a living may, in a sense, be a Constructivist (in my classification of the Ontology of Mathematical objects), and ONLY care about describable numbers. Because those lead to the describable numbers on one’s paychecks.

  7. Valhar2000

    Are irrational numbers really that problematic? After all, even if you can’t describe them, you can work with them. No matter how hard they are to describe, e^iπ=-1, and sqrt(2)^2=2.

  8. Jonathan Vos Post

    Valhar2000: The problem is with real numbers whose description intrinsically is infinitely long. For instance, a real number whose decimal expansion or continued fraction expansion has an infinite (countable) number of digits, and no way to predict what the (n+1)-th digit is from knowing the first n digits, and no way to “compress” the number to any shorter or more efficient representation.
    That’s the essence of the problem: that “almost all” real numbers fall into that class.
    I don’t consider that these problems mean that such numbers do not “exist” — but that they can’t be finitely constructed makes some people question their ontological and epistemological status — i.e. do they exist, and what can we know about them?
    This goes beyond the algebraic / irrational / transcendental distinction to the difficulties that Chaitin discusses.

  9. Mark C. Chu-Carroll

    Re JvP #12:
    It’s true that the computable reals are particularly important in a philosophical sense. But still – they are so dwarfed by the undescribables that the probability of a randomly selected real number being undescribable is 1.
    That probability statement says nothing about the philosophical or practical importance of the describable or computable numbers.
    One thing I might post about at some point is the computable numbers. I saw a presentation a few years ago (I *think* it was one of Chaitin’s, but I’m not sure) that presented a construction of the set of computable numbers. It was an interesting idea – it defined computable numbers as numbers where you could write a program that would emit the digits of that number in sequence. So, for example, π is computable – because we can write a program that emits the digits of π. But undescribable numbers aren’t – because there’s no finite program to emit their digits. It was a very neat idea – and the argument that surrounded the definition of the computables was that we should, perhaps, think of math in terms of the computable numbers, rather than the reals, because the reals are philosophically ill-founded because of the undescribables.
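
    For instance, a digit-emitting program can be only a few lines long; here's a minimal sketch (using √2 rather than π just to keep it short) that never halts, but produces the number to any precision you ask for:

    ```python
    from math import isqrt   # exact integer square root

    def sqrt2_digits():
        """Emit the decimal digits of sqrt(2) after the decimal point, forever."""
        scale = 1
        while True:
            scale *= 10
            # isqrt(2 * scale**2) is floor(sqrt(2) * scale), so its last
            # digit is the next decimal digit of sqrt(2).
            yield isqrt(2 * scale * scale) % 10

    gen = sqrt2_digits()
    print([next(gen) for _ in range(10)])   # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]
    ```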

  10. Jonathan Vos Post

    Computable numbers were defined independently by Turing, Post and Church. See The Undecidable, ed. Martin Davis, for further original papers.
    Wikipedia reminds us of the following.
    Informally, Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936, i.e. as “sequences of digits interpreted as decimal fractions” between 0 and 1:
    “A computable number [is] one for which there is a Turing machine which, given n on its initial tape, terminates with the nth digit of that number [encoded on its tape].” (Minsky 1967:159)
    The key notions in the definition are (1) that some n is specified at the start, (2) when the machine’s internal counter reaches this n the computation terminates after printing the nth decimal digit on its tape — otherwise it would continue computing forever.
    An alternate form of (2) — the machine successively prints all n of the digits on its tape, halting after printing the nth — emphasizes Minsky’s observation: (3) That by use of a Turing machine, a finite definition — in the form of the machine’s TABLE — is being used to define what is a potentially-infinite string of decimal digits.
    Marvin Minsky 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc. Englewood Cliffs, NJ. No ISBN. Library of Congress Card Catalog No. 67-12342. See especially chapter §9 “The Computable Real Numbers”
    There are presentations floating around of key results from:
    Oliver Aberth 1968, Analysis in the Computable Number Field, Journal of the Association for Computing Machinery (JACM), vol 15, iss 2, pp 276-299. This paper describes the development of the calculus over the computable number field.
    Klaus Weihrauch 2000, Computable analysis, Texts in theoretical computer science, Springer, ISBN 3540668179. (online version) §1.3.2 introduces the definition by nested sequences of intervals converging to the singleton real. Other representations are discussed in §4.1.
    Related to my classification of mathematical ontologies, one naturally asks: “is it possible to disregard or throw away the full set of reals and use computable numbers for all of mathematics?” This part of the ideocosm is appealing to Constructivists, especially by what Bishop and Richman call the Russian school of constructive mathematics. Caveat: Even though the calculus can be developed over the computable numbers, the set of computable numbers is not closed under basic operations such as taking the supremum of a bounded sequence, so this set can NOT be used as a replacement for the full set of real numbers in classical mathematics.

  11. Bob

    “in what sense do real numbers really exist anyway?”
    Or, are real numbers “real”? Some arguments here begin to sound like the question of Berkeley: “If a tree falls in the forest…” Valhar2000 reminds us that “e^iπ=-1, and sqrt(2)^2=2” are “real” in the sense that they have a meaning beyond irrationality. But before the observations about the sqrt(2) and pi as “real” numbers, they surely did exist (in a “real” sense), no? Until a specific number enters our collective conscious, what does it symbolize?
    What is the nature of “Schrodinger’s Cat”? Is the cat a manifestation of a calculation that comes to an end? An observation of a cat (dead or alive) only makes sense when there is a sentient being to do the observing. Are we here collectively observing the Omega’s out there by considering their existence? We (collectively) “use” pi, for example, whether or not we can ever write it down.
    When a program does “halt”, the probability of halting is immediately known: it is 1. I suspect that anyone who bothers to comment here tacitly acknowledges the importance of Omega.
    We all presume that our lives and our consciousness will someday come to a halt. Are we not programs?

  12. Dante

    You cannot write a program that reads an arbitrary P and I and gives you the answer to the halting problem. It’s impossible.

    How much time do you have to wait for the answer? I’d write a debugger which runs P with input I and returns true when P exits, or false when P enters a previous state (and thus would loop forever).

  13. Bruno

    Dante: finite time. So your debugger itself must be proven not to ever, under any circumstances, get into an infinite loop itself. Also, returning to a previous state is only one kind of infinite looping. There are also recursion infinite loops, forks, or legit programs which never go back to a previous state and still loop forever — like any program that endlessly emits the digits of an irrational number (say, e). Your debugger would have to be able to interpret what the program is doing, and tell whether or not it eventually halts. If you did come up with one such debugger, one nice sanity check would be to make it test itself and see if it can tell whether or not, for any given input, it eventually halts.

  14. Ian Calvert

    Dante,
    “I’d write a debugger which runs P with input I and returns true when P exits, or false when P enters a previous state (and thus would loop forever). ”
    Ok, how about I modify your debugger, now called D, to loop infinitely if the input program halts and halt if the program would loop infinitely. [I’m pretty sure this is right, it’s 5am though]
    What if I call D(D)? Does it halt or not?
    It’s essentially *this statement is false* in a computer program 🙂

  15. Dante

    I realize I’m probably wrong and people have been looking a lot at this problem, I just don’t see it.
    @Bruno
    So I have to make a proof, beside the program? I don’t think I can, other than “It’s trivial to see that D answers the halting problem, if computing resources are not a problem (memory, time)” 😉
    About P being the program for e/weirdly recursive, since there are only so many machine instructions, D is possible. Unless Mark also means “on any (future) machine architecture”.
    @Ian
    My debugger would work since it always halts, for any program: just returns true or (after a couple of days) false.

  16. Kyle Lahnakoski

    Jonathan Vos Post said:
    “…the set of computable numbers is not closed under basic operations such as taking the supremum of a bounded sequence…”
    Of course computable numbers are not closed under incomputable operations!

  17. Jonathan Vos Post

    Kyle: which is why I said so, in pointing out the flaw in what Bishop and Richman call the Russian school of constructive mathematics. The point seems to be that we NEED the full set of the Real Numbers to do Calculus. We can’t throw the baby out with the ontological bathwater.
    Unless you can find some position in between what is obvious to you and what is desired by those hard-core wishful-thinking constructivists. If so, you might have something worth publishing.
    How much CAN we do with only computable numbers? What’s the least that we need to add to the set to do some of the other things basic to Mathematics?

  18. NeilF

    These are deep waters. (Yeah, I know this is very very late, and no-one except maybe Mark CC will read it.)
    It’s true, as you’ve stated, that almost all reals are undefinable. There is no pattern or rule that picks them out, no algorithm that can generate their digits.
    Even worse, the standard axioms of set theory tell us almost nothing about how *many* of these undefinable reals there are.
    Using forcing one can (in a certain sense) adjoin extra real numbers to the universe. One can even add vastly more reals than there were to begin with. (This can be done without collapsing any cardinals – see next para.)
    Going the other way, one can use forcing to “collapse” a cardinal and make the continuum hypothesis come out true: Whatever number 2^(aleph_0) is, one can graft onto the universe a new set which is a bijection between that number and aleph_1 (and this can be done without inadvertently adding any extra reals.)
    Intuitively the idea of “all subsets of the natural numbers” seems so straightforward and unambiguous that it must determine a single abstract entity just as definitively as the Peano postulates determine the natural numbers. However, the independence phenomena of set theory make this sound naive.
    Even so, I would disagree with any normative suggestion that comes out of this (e.g. “we ought to think of math in terms of computable numbers only”), for the following reasons:
    Surprisingly, in “normal mathematics” the indeterminate properties of the reals (of which cardinality is the most notable, but not the only one) rarely come to the fore, and the reals do behave like a single Platonic entity. Furthermore, in order for the basic theorems of real analysis to come out right, we *do* need the reals to be complete, which presupposes uncountability, which requires there to be all of those mysterious uncomputable numbers floating around.
    Also, the suggestion that we somehow ‘disregard’ uncomputable numbers inevitably entails that set theorists should simply put their pens down and go and do something ‘worthwhile’ instead. However, if set theorists can prove interesting results about them (and they can) then of course they’re doing something just as ‘worthwhile’ as researchers in any other esoteric branch of pure maths.

