Ω: my favorite strange number

Ω is my own personal favorite transcendental number. Ω isn’t really a specific number, but rather a family of related numbers with bizarre properties. It’s the one real transcendental number that I know of that comes from the theory of computation, that is important, and that expresses meaningful fundamental mathematical properties. It’s also deeply non-computable, meaning that not only is it non-computable, but even computing meta-information about it is non-computable. And yet, it’s *almost* computable. It’s just all around awfully cool.
So. What is Ω?
It’s sometimes called the *halting probability*. The idea of it is that it encodes the *probability* that a given infinitely long random bit string contains a prefix that represents a halting program.
It’s a strange notion, and I’m going to take a few paragraphs to try to elaborate on what that means, before I go into detail about how the number is generated, and what sorts of bizarre properties it has.
Remember that in the theory of computation, one of the most fundamental results is the non-computability of *the halting problem*. The halting problem is the question “Given a program P and input I, if I run P on I, will it ever stop?” You cannot write a program that reads P and I and gives you the answer to the halting problem. It’s impossible. And what’s more, the statement that the halting problem is not computable is actually *equivalent* to the fundamental statement of Gödel’s incompleteness theorem.
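Just to make that statement concrete, here’s the classic contradiction argument as a little Python sketch. The `halts` oracle is purely hypothetical (that’s the whole point: no such function can actually be written), and every name here is mine:

    def halts(program, data):
        """Hypothetical oracle: returns True iff running `program` on `data` halts."""
        raise NotImplementedError("this is exactly the function that cannot exist")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:      # oracle said "halts", so loop forever
                pass
        return               # oracle said "loops forever", so halt immediately

    # Ask: does paradox(paradox) halt? If halts(paradox, paradox) returns True,
    # then paradox loops forever; if it returns False, paradox halts. Either
    # way the oracle gives a wrong answer, so no such oracle can exist.
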
Ω is something related to the halting problem, but stranger. The fundamental question of Ω is: if you hand me a string of 0s and 1s, and I’m allowed to look at it one bit at a time, what’s the probability that *eventually* the part that I’ve seen will be a program that eventually stops?
When you look at this definition, your reaction *should* be “Huh? Who cares?”
The catch is that this number – this probability – is easy to define; but it’s not computable; it’s completely *uncompressible*; and it’s *normal*.
Let’s take a moment and look at those properties:
1. Non-computable: no program can compute Ω. In fact, beyond a certain bit position N (which is itself non-computable!), you cannot compute the value of *any bit* of Ω.
2. Uncompressible: there is no way to represent Ω in a non-infinite way; in fact, there is no way to represent *any substring* of Ω in fewer bits than there are in that substring.
3. Normal: the digits of Ω are completely random and unpatterned; the value of any digit in Ω is equally likely to be a zero or a one; any selected *pair* of digits is equally likely to be any of the 4 possibilities 00, 01, 10, 11; and so on.
So, now we know a little bit about why Ω is so strange, but we still haven’t really defined it precisely. What is Ω? How does it get these bizarre properties?
Well, as I said at the beginning, Ω isn’t a single number; it’s a family of numbers. The value of *an* Ω is based on two things: an effective (that is, Turing equivalent) computing device; and a prefix-free encoding of programs for that computing device as strings of bits.
(The prefix-free bit is important, and it’s also probably not familiar to most people, so let’s take a moment to define it. A prefix-free encoding is an encoding where for any given string which is valid in the encoding, *no prefix* of that string is a valid string in the encoding. If you’re familiar with data compression, Huffman codes are a common example of a prefix-free encoding.)
So let’s assume we have a computing device, which we’ll call φ. We’ll write the result of running φ on a program encoded as the binary string p as φ(p). And let’s assume that we’ve set up φ so that it only accepts programs in a prefix-free encoding, which we’ll call ε; the set of programs coded in ε, we’ll call Ε; and we’ll write φ(p)↓ to mean “φ(p) halts”. Then we can define Ω as:

Ω_{φ,ε} = Σ_{p ∈ Ε : φ(p)↓} 2^(-len(p))

So: for each program p in our prefix-free encoding, if it halts, we *add* 2^(-len(p)) to Ω.
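To make that a little more concrete, here’s a minimal Python sketch of approximating an Ω *from below*. This is not Chaitin’s actual construction: the `run` function is a made-up stand-in for a real prefix-free universal machine, and a real computation would have to dovetail execution rather than use a fixed step bound.

    from itertools import product

    def run(program, max_steps):
        # Made-up stand-in for a prefix-free universal machine: pretend that
        # programs ending in '0' halt, so the sketch can actually execute.
        return program.endswith('0')

    def omega_lower_bound(max_len=12, max_steps=1000):
        total = 0.0
        halting = []
        for length in range(1, max_len + 1):
            for bits in product('01', repeat=length):
                p = ''.join(bits)
                # Keep the enumerated set prefix-free: skip anything extending
                # a program we've already counted as halting.
                if any(p.startswith(q) for q in halting):
                    continue
                if run(p, max_steps):
                    halting.append(p)
                    total += 2.0 ** (-length)   # each halting program contributes 2^-len(p)
        return total

    print(omega_lower_bound())   # a lower bound creeping up toward this toy machine's Ω
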
Ω is an incredibly odd number. As I said before, it’s random, uncompressible, and has a fully normal distribution of digits.
But where it gets fascinatingly bizarre is when you start considering its computability properties.
Ω is *definable*. We can (and have) provided a specific, precise definition of it. We’ve even described a *procedure* by which you can conceptually generate it. But despite that, it’s *deeply* uncomputable. There are procedures where we can compute a finite prefix of it. But there’s a limit: there’s a point beyond which we cannot compute *any* digits of it. *And* there is no way to compute that point. So, for example, there’s a very interesting paper where the authors [computed the value of Ω for a Minsky machine][omega]; they were able to compute 84 bits of it; but the last 20 are *unreliable*, because it’s uncertain whether or not they’re actually beyond the limit, and so they may be wrong. But we can’t tell!
What does Ω mean? It’s actually something quite meaningful. It’s a number that encodes some of the very deepest information about *what* it’s possible to compute. It gives a way to measure the probability of computability. In a very real sense, it represents the overall nature and structure of computability in terms of a discrete probability. It’s an amazingly dense container of information – as an infinitely long, thoroughly non-compressible number, it contains an unmeasurable quantity of information. And we can’t get near most of it!
[omega]: http://www.expmath.org/expmath/volumes/11/11.3/Calude361_370.pdf

Q&A: What is information?

I received an email from someone with some questions about information theory; they relate to some sufficiently common questions/misunderstandings of information theory that I thought it was worth turning the answer into a post.
There are two parts here: my correspondent started with a question; and then after I answered it, he asked a followup.
The original question:
------------------------
>Recently in a discussion group, a member posted a series of symbols, numbers,
>and letters:
>
>`+
>The question was what is its meaning and whether this has information or not.
>I am saying it does have information and an unknown meaning (even though I am
>sure they were just characters randomly typed by the sender), because of the
>fact that in order to recognize a symbol, that is information. Others are
>claiming there is no information because there is no meaning. Specifically that
>while the letters themselves have meaning, together the message or “statement”
>does not, and therefore does not contain information. Or that the information
>content is zero.
>
>Perhaps there are different ways to define information?
>
>I think I am correct that it does contain information, but just has no meaning.
This question illustrates one of the most common errors made when talking about information. Information is *not* language: it does *not* have *any* intrinsic meaning. You can describe information in a lot of different ways; in this case, the one that seems most intuitive is: information is something that reduces a possibility space. When you sit down to create a random string of characters, there is a huge possibility space of the strings that could be generated by that. A specific string narrows the space to one possibility. Anything that reduces possibilities is something that *generates* information.
For a very different example, suppose we have a lump of radium. Radium is a radioactive metal which breaks down and emits alpha particles (among other things). Suppose we take our lump of radium, and put it into an alpha particle detector, and record the time intervals between the emission of alpha particles.
The radium is *generating information*. Before we started watching it, there were a huge range of possibilities for exactly when the decays would occur. Each emission – each decay event – narrows the possibility space of other emissions. So the radium is generating information.
That information doesn’t have any particular *meaning*, other than being the essentially random time-stamps at which we observed alpha particles. But it’s information.
A string of random characters may not have any *meaning*; but that doesn’t mean it doesn’t contain information. It *does* contain information; in fact, it *must* contain information: it is a *distinct* string, a *unique* string – one possibility out of many for the outcome of the random process of generation; and as such, it contains information.
The Followup
------------
>The explanation I have gotten from the person I have been debating, as to what
>he says is information is:
>
>I = -log_2 P(E)
>
>where:
>I: information in bits
>P: probability
>E: event
>
>So for the example:
>`+
>He says:
>”I find that there is no meaning, and therefore I infer no information. I
>calculate that the probability of that string occurring was ONE (given no
>independent specification), and therefore the amount of information was ZERO. I
>therefore conclude it has no meaning.”
>
>For me, even though that string was randomly typed, I was able to look at the
>characters, and find their placement on a QWERTY keyboard, compare the
>characters to the digits on the hand, and found that over all, the left hand
>index finger keys were used almost twice as much. I could infer that a person
>who was left handed tried to type the “random” sequence. So to me, even though I
>don’t know the math, and can’t measure the amount of information, the fact I
>was able to make that inference of a left handed typer tells me that there is
>information, and not “noise”.
The person quoted by my correspondent is an idiot; and clearly one who’s been reading Dembski or one of his pals. In my experience, they’re the only ones who continually stress that log-based equation for information.
But even if we ignore the fact that he’s a Dembski-oid, he’s *still* an idiot. You’ll notice that nowhere in the equation does *meaning* enter into the definition of information content. What *does* matter is the *probability* of the “event”; in this case, the probability of the random string of characters being the result of the process that generated it.
I don’t know how he generated that string. For the sake of working things through, let’s suppose that it was generated by pounding keys on a minimal keyboard; and let’s assume that the odds of hitting any key on that keyboard are equal (probably an invalid assumption, but it will only change the quantity of information, not its presence, which is the point of this example). A basic minimal keyboard has about sixty keys (I’m going by counting the keys on my “Happy Hacking Keyboard”). Of those, seven are shifts of various sorts (2 shift, 2 control, one alt, one command, one mode), and one is “return”. So we’re left with 52 keys. To make things simple, let’s ignore the possibility of shifts affecting the result (this will result in us getting a *lower* information content, but that’s fine). So, it’s 80 characters; each *specific* character generated is an event with probability 1/52. So each *character* of the string has, by the Shannon-based definition quoted above, about 5.7 bits of information. The string as a whole has about 456 bits of information.
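Here’s that arithmetic as a couple of lines of Python, just to show there’s nothing mysterious going on (the 52-key and 80-character figures are the assumptions made above, not measured facts):

    import math

    keys = 52        # assumed number of equally likely characters
    chars = 80       # assumed length of the random string

    bits_per_char = -math.log2(1 / keys)   # the -log2(P(E)) definition quoted above
    print(round(bits_per_char, 2))         # ~5.7 bits per character
    print(round(bits_per_char * chars))    # ~456 bits for the whole string
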
The fact that the process that generated it is random doesn’t make the odds of a particular string be one. For a trivial example, I’m going to close my eyes and pound on my keyboard with both hands twice, and then select the first 20 characters from each:
First: “`;kldsl;ksd.z.mdΩ.l.x`”
Second: “`lkficeewrflk;erwm,.r`”
Quite different results overall? Seems like I’m a bit heavy on the right hand on the k/l area. But note how different the outcomes are. *That* is information. Information without *semantics*; without any intrinsic meaning. But that *doesn’t matter*. Information *is not language*; the desire of [certain creationist morons][gitt] to demand that information have the properties of language is nonsense, and has *nothing* to do with the mathematical meaning of information.
The particular importance of this fact is that it’s a common creationist canard that information *must* have meaning; that therefore information can’t be created by a natural process, because natural processes are random, and randomness has no meaning; that DNA contains information; and therefore DNA can’t be the result of a natural process.
The sense in which DNA contains information is *exactly* the same as that of the random strings – both the ones in the original question, and the ones that I created above. DNA’s information is *no different*.
*Meaning* is something that you get from language; information *does not* have to be *language*. Information *does not* have to have *any* meaning at all. That’s why we have distinct concepts of *messages*, *languages*, *semantics*, and *information*: because they’re all different.
[gitt]: http://scienceblogs.com/goodmath/2006/07/bad_bad_bad_math_aig_and_infor.php

Irrational and Transcendental Numbers

If you look at the history of math, there’ve been a lot of disappointments for mathematicians. They always start off with an idea of math as a clean, beautiful, elegant thing. And they seem to often wind up disappointed.
Which leads us into today’s strange numbers: irrational and transcendental numbers. Both of them were huge disappointments to the mathematicians who discovered them.
So what are they? We’ll start with the irrationals. They’re numbers that aren’t integers, and that aren’t a ratio of any two integers. So you can’t write them as a normal fraction. If you write them as a continued fraction, then they go on forever. If you write them in decimal form, they go on forever without repeating. π, √2, *e* – they’re all irrational. (Incidentally, the reason that they’re called “irrational” isn’t because they don’t make sense, or because their decimal representation is crazy, but because they can’t be written *as ratios*. My high school algebra teacher who first talked about irrational numbers said that they were irrational because numbers that never repeated in decimal form weren’t sensible.)
The transcendentals are even worse. Transcendental numbers are irrational; but not only can transcendental numbers not be written as a ratio of integers; not only do their decimal forms go on forever without repeating; transcendental numbers are numbers that *can’t* be described by algebraic operations: there’s no finite sequence of multiplications, divisions, additions, subtractions, exponents, and roots that will give you the value of a transcendental number. √2 is not transcendental; *e* is.
History
-----------
The first disappointment involving the irrational numbers happened in Greece, around 500 BCE. A rather brilliant man by the name of Hippasus, who was part of the school of Pythagoras, was studying roots. He worked out a geometric proof of the fact that √2 could not be written as a ratio of integers. He showed it to his teacher, Pythagoras. Pythagoras was convinced that numbers were clean and perfect, and could not accept the idea of irrational numbers. After analyzing Hippasus’s proof, and being unable to find any error in it, he became so enraged that he *drowned* poor Hippasus.
A few hundred years later, Eudoxus worked out the basic theory of irrationals; and it was published as a part of Euclid’s mathematical texts.
From that point, the study of irrationals pretty much disappeared for nearly 2000 years. It wasn’t until the 17th century that people started really looking at them again. And once again, it led to disappointment; but at least no one got killed this time.
Mathematicians had come up with yet another idea of what the perfection of math meant – this time using algebra. They decided that it made sense that algebra could describe all numbers; so you could write an equation to define any number using a polynomial with rational coefficients; that is, an equation using addition, subtraction, multiplication, division, exponents, and roots.
Leibniz was studying algebra and numbers, and he’s the one who made the unfortunate discovery: lots of irrational numbers are algebraic; but lots of them *aren’t*. He discovered it indirectly, by way of the sin function. You see, sin(x) *can’t* be computed from x using algebra. There’s no algebraic function that can compute it. Leibniz called sin a transcendental function, since it went beyond algebra.
Building on the work of Leibniz, Liouville worked out that you could easily construct numbers that couldn’t be computed using algebra. For example, the constant named after Liouville consists of a string of 0s and 1s where the digit in the 10^(-i) place is 1 if and only if there is some integer n such that n! = i.
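Here’s a tiny Python sketch of that construction, just generating the first few decimal digits (the function name is mine):

    from math import factorial

    def liouville_digits(n_digits=30):
        # Digit i after the decimal point is 1 exactly when i = n! for some n
        # (1, 2, 6, 24, ...), and 0 otherwise.
        factorials = set()
        n = 1
        while factorial(n) <= n_digits:
            factorials.add(factorial(n))
            n += 1
        return "0." + "".join("1" if i in factorials else "0" for i in range(1, n_digits + 1))

    print(liouville_digits())   # 0.110001000000000000000001000000
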
Not too long after, it was discovered that *e* was transcendental. The main reason for this being interesting is that up until that point, every transcendental number known was *constructed*; *e* is a natural, unavoidable constant. Once *e* was shown transcendental, others followed; in one neat side-note, π was shown to be transcendental *using e*. One of the properties discovered after the transcendence of *e* was that *e* raised to any non-zero algebraic power is transcendental. Since e^(iπ) is *not* transcendental – it’s -1 – the exponent iπ can’t be algebraic, and so π must be transcendental.
The final disappointment in this area came soon after; Cantor, studying the irrationals, came up with the infamous “Cantor’s diagonalization” argument, which shows that there are *more* transcendental numbers than there are algebraic ones. *Most* numbers are not only irrational; they’re transcendental.
What does it mean, and why does it matter?
----------------------------------------
Irrational and transcendental numbers are everywhere. Most numbers aren’t rational. Most numbers aren’t even algebraic. That’s a very strange notion: *we can’t write most numbers down*.
Even stranger, even though we know, per Cantor, that most numbers are transcendental, it’s *incredibly* difficult to prove that any particular number is transcendental. Most of them are; but we can’t even figure out *which ones*.
What does that mean? That our math-fu isn’t nearly as strong as we like to believe. Most numbers are beyond us.
Some interesting numbers that we know are either irrational or transcendental:
1. *e*: transcendental.
2. π: transcendental.
3. √2: irrational, but algebraic.
4. √x is irrational for every integer x that is not a perfect square.
5. log_2(3) is irrational.
6. Ω, Chaitin’s constant, is transcendental.
What’s interesting is that we really don’t know very much about how transcendentals interact; and given the difficulty of proving that something is transcendental, even for the most well-known transcendentals, we don’t know much about what happens when you put them together. π+*e*, π×*e*, π^π, *e*^*e*, and π^*e* are all numbers that we *don’t know* whether they’re transcendental. In fact, for π+*e* (and π-*e*) we don’t even know if it’s *irrational*.
That’s the thing about these numbers. We have *such* a weak grasp of them that even things that seem like they should be easy and fundamental, *we don’t know how to do*.

Friday Pathological Programming: Programming Through Grammars in Thue

Today’s treat: [Thue][thue], pronounced two-ay, after a mathematician named Axel Thue.
Thue is a language based on [semi-Thue][semi-thue] grammars. Semi-Thue grammars are equivalent to Chomsky-0 grammars, except that they’re even more confusing: a semi-Thue grammar doesn’t distinguish between terminal and non-terminal symbols. (Since Chomsky-0 grammars are equivalent to a Turing machine, we know that Thue is Turing complete.) The basic idea of it is that a program is a set of rewrite rules that tell you how to convert one string into another. The language itself is incredibly simple; programs written in it are positively mind-warping.
There are two parts to a Thue program. First is a set of grammar rules, which are written in what looks like pretty standard BNF: “*<lhs> ::= <rhs>*”. After that, there is a section which is the initial input string for the program. The two parts are separated by the “::=” symbol by itself on a line.
As I said, the rules are pretty much basic BNF: whatever is on the left-hand side can be replaced with whatever is on the right-hand side of the rule, whenever a matching substring is found. There are two exceptions, for input and output. Any rule whose right hand side is “:::” reads a line of input from the user, and replaces its left-hand-side with whatever it read from the input. And any rule that has “~” after the “::=” prints whatever follows the “~” to standard out.
So, a hello world program:
x ::=~ Hello world
::=
x
Pretty simple: the input string is “x”; there’s a replacement for “x” which replaces “x” with nothing, and prints out “Hello world”.
How about something much more interesting. Adding one in binary. This assumes that the binary number starts off surrounded by underscore characters to mark the beginning and end of a line.
1_::=1++
0_::=1
01++::=10
11++::=1++0
_0::=_
_1++::=10
//::=:::
::=
_//_
Let’s tease that apart a bit.
1. Start at the right hand side:
1. “1_::=1++” – If you see a “1” on the right-hand end of the number,
replace it with “1++”, removing the “_” on the end of the line. The rest
of the computation is going to use the “++” to mark the point in
string where the increment is going to alter the string next.
2. “0_::=1” – if you see a “0” on the right-hand-end, then replace it with 1. This doesn’t insert a “++”, so none of the other rules are going to do anything: it’s going to stop. That’s what you want: if the string ended in a zero, then adding one to it just changes that 0 to a 1.
2. Now, we get to something interesting: doing the addition:
1. If the string by the “++” is “01”, then adding one will change it to “10”, and that’s the end of the addition. So we just change it to “10”, and get rid of the “++”.
2. If the string by the “++” is “11”, then we change the last digit to “0”, and move the “++” over to the left – that’s basically like adding 1 to 9 in addition: write down a zero, carry the one to the left.
3. Finally, some termination cleanup. If we get to the left edge, and there’s a zero, after the “_”, we can get rid of it. If we get to the left edge, and there’s a “1++”, then we replace it with 10 – basically adding a new digit to the far left; and get rid of the “++”, because we’re done.
3. Finally, inputting the number: “//::=:::” says “If you see “//”, then read some input from the user, and replace the “//” with whatever you read.”
4. Now we have the marker for the end of the rules section, and the string to start the program is “_//_”. So that triggers the input rule, which inputs a binary number, and puts it between the “_”s.
Let’s trace this on a couple of smallish numbers.
* Input: “111”
1. “_111_” → “_111++”
2. “_111++” → “_11++0”
3. “_11++0” → “_1++00”
4. “_1++00” → “1000”
* Input: “1011”
1. “_1011_” → “_1011++”
2. “_1011++” → “_101++0”
3. “_101++0” → “_1100”.
Not so hard, eh?
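If you want to play with this without tracking down a Thue interpreter, here’s a rough Python sketch of the rewriting loop. It’s deliberately simplified: real Thue picks the rule and match position at random and also supports the “:::” input rule, while this deterministic toy just applies the first rule that matches. It’s only meant to be enough to run the increment program above:

    def run_thue(source):
        # Split the program into the rule section and the initial string; the
        # separator is a line containing only "::=".
        rules_text, initial = source.split("\n::=\n", 1)
        rules = []
        for line in rules_text.splitlines():
            if "::=" in line:
                lhs, rhs = line.split("::=", 1)
                rules.append((lhs, rhs))
        state = initial.strip()
        changed = True
        while changed:
            changed = False
            for lhs, rhs in rules:
                if lhs in state:
                    if rhs.startswith("~"):                  # output rule
                        print(rhs[1:])
                        state = state.replace(lhs, "", 1)
                    else:
                        state = state.replace(lhs, rhs, 1)   # ordinary rewrite
                    changed = True
                    break
        return state

    increment = "1_::=1++\n0_::=1\n01++::=10\n11++::=1++0\n_0::=_\n_1++::=10\n::=\n_111_"
    print(run_thue(increment))   # prints 1000, i.e. 111 + 1 in binary
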
Decrement is the same sort of thing; here’s decrement with an argument hardcoded instead of reading from input:
0_::=0--
1_::=0
10--::=01
00--::=0--1
_1--::=@
_0--::=1
_0::=
::=
_1000000000000_
Now, something more complicated. From [here][vogel], a very nicely documented program that computes fibonacci numbers. It’s an interesting mess – it converts things into a unary form, does the computation in unary, then converts back. This program uses a fairly neat trick to get comments. Thue doesn’t have any comment syntax. But they use “#” on the left hand side as a comment marker, and then make sure that it never appears anywhere else – so none of the comment rules can ever be invoked.
—-
#::= fibnth.t – Print the nth fibonacci number, n input in decimal
#::= (C) 2003 Laurent Vogel
#::= GPL version 2 or later (http://www.gnu.org/copyleft/gpl.html)
#::= Thue info at: http://www.catseye.mb.ca/esoteric/thue/
#::= This is modified thue where only foo::=~ outputs a newline.
#::= ‘n’ converts from decimal to unary-coded decimal (UCD).
n9::=*********,n
n8::=********,n
n7::=*******,n
n6::=******,n
n5::=*****,n
n4::=****,n
n3::=***,n
n2::=**,n
n1::=*,n
n0::=,n
n-::=-
#::= ‘.’ prints an UCD number and eats it.
.*********,::=.~9
.********,::=.~8
.*******,::=.~7
.******,::=.~6
.*****,::=.~5
.****,::=.~4
.***,::=.~3
.**,::=.~2
.*,::=.~1
.,::=.~0
~9::=~9
~8::=~8
~7::=~7
~6::=~6
~5::=~5
~4::=~4
~3::=~3
~2::=~2
~1::=~1
~0::=~0
#::= messages moving over ‘,’, ‘*’, ‘|’
*<::=<*
,<::=<,
|<::=<|
0>*::=*0>
0>,::=,0>
0>|::=|0>
1>*::=*1>
1>,::=,1>
1>|::=|1>
2>*::=*2>
2>,::=,2>
2>|::=|2>
#::= Decrement n. if n is >= 0, send message ‘2>’;
#::= when n becomes negative, we delete the garbage using ‘z’ and
#::= print the left number using ‘.’.
*,-::=,2>
,,-::=,-*********,
|,-::=z
z*::=z
z,::=z
z|::=.
#::= move the left number to the right, reversing it (0 becomes 9, …)
#::= reversal is performed to help detect the need for carry. As the
#::= order of rule triggering is undetermined in Thue, a rule matching
#::= nine consecutive * would not work.
*c<::=c
,c<::=c
|c
0>d*::=d
1>d::=d*********,
#::= when the copy is done, ‘d’ is at the right place to begin the sum.
2>d::=f<|
#::= add left to right. 'f' moves left char by char when prompted by a '<'.
#::= it tells 'g' to its right what the next char is.
*f
,f
#::= when done for this sum, decrement nth.
|f’ ‘g’ drops an ‘i’ that increments the current digit.
0>g::=ig
*i::=<
,i::=i,*********
|i::=’ tells ‘g’ to move left to the next decimal position,
#::= adding digit 0 if needed (i.e. nine ‘*’ in reversed UCD)
*1>g::=1>g*
,1>g::=g::=|’ tells ‘g’ that the sum is done. We then prepare for copy (moving
#::= a copy cursor ‘c’ to the correct place at the beginning of the left
#::= number) and reverse (using ‘j’) the right number back.
*2>g::=2>g*
,2>g::=2>g,
|2>g::=c|*********j
#::= ‘j’ reverses the right number back, then leaves a copy receiver ‘d’
#::= and behind it a sum receiver ‘g’.
*j*::=j
j,*::=,********j
j,,::=,*********j,
j,|::=’ sent by ‘-‘
#::= will start when hitting ‘g’ the first swap + sum cycle
#::= for numbers 0 (left) and 1 (reversed, right).
::=
|n?-|,|********g,|
——
Ok. Now, here’s where we go *really* pathological. Here, in all it’s wonderful glory, is a [Brainfuck][brainfuck] interpreter written in Thue, written by Mr. Frederic van der Plancke. (You can get the original version, with documentation and example programs, along with a Thue interpreter written in Python from [here][vdp-thue])
——
#::=# BRAINFUNCT interpreter in THUE
#::=# by Frederic van der Plancke; released to the Public Domain.
o0::=0o
i0::=0i
o1::=1o
i1::=1i
o]::=]o
i]::=]i
o[::=[o
i[::=[i
oz::=zo
oZ::=Zo
iz::=zi
iZ::=Zi
0z::=z0
1z::=z1
]z::=z]
[z::=z[
_z::=z_
0Z::=Z0
1Z::=Z1
]Z::=Z]
[Z::=Z[
_Z::=Z_
}}]::=}]}
&}]::=]&}
&]::=]^
}[::=[{
&[::=[0::=0}
{1::=1{
?Z[::=[&
?z[::=[^
?z]::=]
?Z]::=]^
[{{::={[{
[{::=[
[::=[^
]{::={]
]::={]
0::=
1::=1
0{::={0
1{::={1
^[::=?[iii
^]::=?]iii
}|::=}]|WARN]WARN
&|::=&]|WARN]WARN
WARN]WARN::=~ERROR: missing ‘]’ in brainfunct program; inserted at end
^000::=000^ooo
^001::=001^ooi
^010::=010^oio
^011::=011^oii
^100::=100^ioo
^101::=101^ioi
ooo|::=|ooo
ooi|::=|ooi
oio|::=iio|oio
oii|::=|oii
ioo|::=|ioo
ioi|::=|ioi
iii|::=|iii
iio|!::=|
|z::=z|
|Z::=Z|
o_::=_o
i_::=_i
0!::=!0
1!::=!1
_!::=!_
/!::=!/
oooX!::=Xooo
oooY::=$Y
0$::=!1
1$::=$0
X$::=X!1
ooiX!::=Xooi
ooiY::=@1Y
1@1::=@00
0@1::=@11
1@0::=@01
0@0::=@00
X@1::=X!WARNDEC0WARN
WARNDEC0WARN::=~WARNING: attempt at decrementing zero (result is still zero)
X@00::=X@0
X@01::=X!1
X@0Y::=X!0Y
oioX!::=LLYLR
0LL::=LL0
1LL::=LL1
_LL::=!X!
|LL::=|!0X!
LR0::=0LR
LR1::=1LR
LRY::=_
oiiX!::=_oii
oiiY::=X!%
%0::=0%
%1::=1%
%_0::=Y0
%_1::=Y1
%_/::=Y0_/
iooX!::=X(in)
(in)0::=(in)
(in)1::=(in)
(in)Y::=Yi
i/0::=Zi/
i/1::=zi/
i/_::=!/
XY!::=X!0Y
0Y!::=0!Y
1Y!::=1!Y
YZ::=0Y
Yz::=1Y
ioiX!::=X(out)
(out)0::=0(out)O0
(out)1::=1(out)O1
(out)Y::=OO!Y
O0::=~0
O1::=~1
OO::=~_
iiiX!0::=ZX!0
iiiX!1::=zX!1
+::=000
-::=001
<::=010
>::=011
,::=100
.::=101
:::=|X!0Y0_/
::=
^*
[vdp-thue]: http://fvdp.homestead.com/files/eso_index.html#BFThue
[thue]: http://esoteric.voxelperfect.net/wiki/Thue
[semi-thue]: http://en.wikipedia.org/wiki/Semi-Thue_grammar
[brainfuck]: http://scienceblogs.com/goodmath/2006/07/gmbm_friday_pathological_progr.php
[vogel]: http://lvogel.free.fr/thue.htm

Causeless Math from Dembski and Friend

Over at his blog, William Dembski, my least-favorite pathetic excuse for a mathematician, [has cited an article][dembski] written by one John Davison about the theory of evolution. According to Dembski, this article explains “why the naked emperor still lingers”; that is, why the theory of evolution is still around even though it’s so obviously wrong. (Update: I originally typed “Paul Davison” instead of “John Davison”, I don’t know why. Any “Paul Davison”s out there, sorry for associating your name with this dreck. Additionally, the article is posted on Dembski’s blog, but it wasn’t posted by Dembski himself; it was posted by one of the site moderators “Scott”.)
It’s easy to see why Dembski likes this article. I’ve always found Dembski’s writing to be obnoxiously pretentious; this guy writes in that same snotty faux-intellectual style. Here’s a taste from the beginning of Davison’s article.
>Darwinism has persisted because it failed to recognize the nature of first
>causes. It is only natural to assume that results had causes and it is the duty
>of the scientist to find and reveal those causes. At this science has been
>incredibly successful. Many examples are presented in medical science with the
>discovery of the causes, treatments and cures for hundreds of diseases. All of
>Chemistry has been revealed from the consideration of how atomic forces have
>caused molecules to have the structure and properties that they have. This is
>analytical science and it is great science.
>
>I like titles presented as questions because that is what science is really
>supposed to be all about – answering questions. One cannot answer a question
>until it has been posed.
>
>I have used this technique in the past with “Is evolution finished” and most
>recently, also in the current issue of Rivista di Biologia, “Do we have an
>evolutionary theory?”
>
>You will note that I choose my words carefully. I do not question that it has
>persisted because that is self-evident, but rather how has that been possible?
>
>I have the answer and here it is in abbreviated form.
See what I mean? This section also already starts to hint at what’s wrong; but what really set me off, and led me to write about it here, on a math blog, is what comes next:
>Darwinism has persisted because it failed to recognize the nature of first
>causes. It is only natural to assume that results had causes and it the duty of
>the scientist to find and reveal those causes. At this science has been
>incredibly successful. Many examples are presented in medical science with the
>discovery of the causes, treatments and cures for hundreds of diseases. All of
>Chemistry has been revealed from the consideration of how atomic forces have
>caused molecules to have the structure and properties that they have. This is
>analytical science and it is great science.
>
>But does this approach have limits beyond which it cannot DIRECTLY proceed?
>This is another very critical question and I will answer it with a resounding
>yes.
>
>Those limits are met when we attempt to identify the causes of the tools with
>which we proceed. I will use mathematics as an example. Mathematics has
>rightfully been described as “The Queen of the Sciences.” Without math there
>could be no science, at least a science as we know it.
Yeah, he’s going to invoke mathematics as his argument. And of course, it’s going to be *bad*. **Really** bad. Stupid bad.
>So here comes the moment of truth as it were. What is the cause of mathematics?
>More accurately we should ask – what WAS the cause of mathematics because it
>has always been there just waiting to be discovered. That discovery began with
>the Pythagoreans and continues to this day.
>
>Mathematics has no discernable cause does it? Now what does this all have to do
>with evolution? It has everything to do with evolution because both ontogeny
>and phylogeny, like mathematics have no discernable cause.
Yes, the root of his argument is that mathematics has *no cause*. And evolution, like mathematics, also has no discernable cause.
What the hell does this *mean*? Well, to be frank, absolutely bloody nothing. This is what is crudely known as “talking out your ass”.
>And so we come to the answer to the question posed in my title.
>
>Darwinism has persisted because it assumes a detectable, discernable cause, a
>cause which never existed. It even claims to tell us all about this
>non-existent cause. The cause is random changes in genes (mutations) coupled
>with nature acting to select which of these should survive. These two
>processes, genetic variation and selection, have been the sole means by which
>organisms have evolved.
Yeah, y’see, evolution has *no cause*, just like mathematics. But the theory of evolution has hung around not because it actually explains anything; not because it has evidence to support it; not because it matches the facts; it’s because it creates an *illusion* of a cause.
>Now what is the actual tangible evidence to support this model? That is another
>very good question by the way. That is what science is all about, asking
>questions and then trying to answer them. In this case the answers that emerge
>are very clear.
That’s a very good question indeed. Shame he doesn’t bother to answer it.
>Natural selection first of all is very real. Its effect is to prevent change
>rather than to promote it. This was first recognized by Mivart and then
>subsequently and independently by Reginald C. Punnett and then Leo Berg.
Yeah, y’see there were these two guys, and like we were talking? and they said that natural selection prevents change, and they were, like, really convincing.
That’s his “tangible evidence” for the argument that evolution as a theory has persisted because it creates an illusion of cause where there is none.
>So you see there are really two reasons that Darwinism has persisted.
>
>The first I have already presented. It assumes a cause which never existed. The
>second reason it has persisted is because it has also assumed that no one ever
>existed who questioned the cause which never existed.
And yet again, random bullshit comes out of nowhere. Evolution has persisted because it denies the existence of people who question it.
>Like mathematics, both ontogeny and phylogeny never had exogenous causes. Both
>are manifestations of the expression of preformed pre-existent blocks of highly
>specific information which has been released over the millennia as part of a
>self-limiting process known as organic evolution, a phenomenon, my opinion, no
>longer in progress.
And again, we come back to that horrible comparison to math. Math, according to Davison is “causeless”; it consists of a set of facts that exist independently of any cause. Likewise, he claims that evolution is “causeless”; it’s nothing but the expression of a set of genetic information that has been coded into life from the very beginning. Evidence? He’s so smart, he doesn’t need any stinking evidence! Evidence is for stuff that has a cause!
>Everything we are now learning supports this interpretation which I have
>presented in summary form in my recent paper – “A Prescribed Evolutionary
>Hypothesis.”
Everything we’re learning supports this. Of course, he doesn’t mention *any* of it; not one fact, not one scrap of evidence; not anything about all of the genomes we’ve mapped out; not the name of one biologist who’s done work supporting this, not one paper that talks about this evidence. Nothing.
*This* is what Dembski thinks of as a *good* article arguing in favor of ID.
[dembski]: http://www.uncommondescent.com/index.php/archives/1353

Something Nifty: A Taste of Simple Continued Fractions

One of the annoying things about how we write numbers is the fact that we generally write things one of two ways: as fractions, or as decimals.
You might want to ask, “Why is that annoying?” (And in fact, that’s what I want you to ask, or else there’s no point in my writing the rest of this!)
It’s annoying because fractions and decimals can both only describe
*rational* numbers – that is, numbers that are a perfect ratio of two integers. And *most* numbers aren’t rational.
But it’s even more annoying than that: if you use decimals, then there are lots of rational numbers that you can’t represent exactly (e.g., 1/3); and if you use fractions, then it’s hard to express the idea that the fraction isn’t exact. (How do you write π as a fraction? 22/7 is a standard fractional approximation, but how do you say that π is *close to*, but not exactly, 22/7?)
So what do we do?
One of the answers is something called *continued fractions*. A continued fraction is a very neat thing. Here’s the idea: take a number whose fractional form you don’t know. Pick the nearest simple fraction 1/n that’s just a *little bit too large*. If you were looking at, say, 0.4, you’d take 1/2 – it’s a bit bigger. Now you’ve got an approximation, but it’s too large. So that means that the denominator is *too small*. So you add a correction to the denominator to make it a little bit bigger. And you just keep doing that – you approximate the correction to the denominator by adding a fraction to the denominator that’s just a little too big, and then you add a correction to *that* correction.
Let’s look at an example: 2.3456
1. It’s close to 2. So we start with 2 + (0.3456)
2. Now, we start approximating the fraction. The way we do that is we take the *reciprocal* of 0.3456 and take the integer part of it: 1/0.3456 ≈ 2.894, and rounding down gives 2. So the next term is 2; and we know that the remainder is about 0.894.
3. We take the reciprocal of 0.894 and get about 1.119: the next term is 1, and the remainder is about 0.119.
4. We take the reciprocal again and get about 8.391: the next term is 8, and the remainder is about 0.391.
5. Next we get 2, with a remainder of 5/9.
6. Then 1, with a remainder of 0.8.
7. Then 1, with a remainder of 0.25.
8. Then 4, with a remainder of 0; so now we have an exact result.
So as a continued fraction, 2.3456 looks like:
continued.jpg
As a shorthand, continued fractions are normally written using a list notation inside of square brackets: the integer part, followed by a semicolon, followed by a comma-separated list of the denominators of each of the fractions. So our continued fraction for 2.3456 would be written [2; 2, 1, 8, 2, 1, 1, 4].
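Here’s a minimal Python sketch of the procedure described above, working with exact fractions to avoid floating-point noise (the function name is mine):

    from fractions import Fraction

    def continued_fraction(x, max_terms=20):
        # Repeatedly split off the integer part, then take the reciprocal of
        # the remainder, exactly as in the worked example above.
        terms = []
        x = Fraction(x)
        for _ in range(max_terms):
            n = x.numerator // x.denominator
            terms.append(n)
            x -= n
            if x == 0:
                break
            x = 1 / x
        return terms

    print(continued_fraction(Fraction(23456, 10000)))   # [2, 2, 1, 8, 2, 1, 1, 4]
    print(continued_fraction(Fraction(9, 16)))          # [0, 1, 1, 3, 2]
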
There’s a very cool visual way of understanding that algorithm. I’m not going to show it for 2.3456, because it’s a bit too much… But let’s look at something simpler: let’s try to write 9/16ths as a continued fraction. Basically, we make a grid consisting of 16 squares across by 9 squares up and down. We draw the *largest* square we can on that grid. The number of squares of that size that we can draw is the first digit of the continued fraction. Now there’s a rectangle left over: we draw the largest squares we can, there. And so on:

square-continued.jpg

So the continued fraction for 9/16ths is [0; 1, 1, 3, 2].
Using continued fractions, we can represent *any* rational number in a finite-length continued fraction.
One incredibly nifty thing about this way of writing numbers is: what’s the reciprocal of 2.3456, aka [2; 2, 1, 8, 2, 1, 1, 4]? It’s [0; 2, 2, 1, 8, 2, 1, 1, 4]. We just add a zero to the front as the integer part, and push everything else one place to the right. If there had been a zero in front, then we would have removed the zero and pulled everything else one place to the left.
Irrational numbers are represented as *infinite* continued fractions. So there’s an infinite series of correction fractions. You can understand it as a series of ever-improving approximations of the value of the number. And you can define it using a recurrence relation (that is, a recursive formula) for how to get to the next digit.
For example, π = [3; 7, 15, 1, 292, 1, …]. If we work that out, the first six places of the continued fraction for pi work out in decimal form to 3.14159265392. That’s correct to the first *11* places in decimal. Not bad, eh?
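Going the other way is just as mechanical. Here’s a tiny sketch that folds a continued fraction back into an ordinary fraction, using the π expansion above (again, the function name is mine):

    from fractions import Fraction

    def evaluate_cf(terms):
        # Fold [a0; a1, a2, ...] back into a single fraction, working from the right.
        x = Fraction(terms[-1])
        for a in reversed(terms[:-1]):
            x = a + 1 / x
        return x

    approx_pi = evaluate_cf([3, 7, 15, 1, 292, 1])
    print(approx_pi, float(approx_pi))   # 104348/33215, about 3.14159265392
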
A very cool property of continued fractions is: the square root of any integer that isn’t a perfect square, written as a continued fraction, *always repeats* eventually. Even cooler? What’s the square root of two as a continued fraction? [1; 2, 2, 2, 2, …].

e – the Unnatural Natural Number

Looks like I’ve accidentally created a series of articles here about fundamental numbers. I didn’t intend to; originally, I was just trying to write the zero article I’d promised back during the donorschoose drive.
Anyway. Today’s number is *e*, aka Euler’s number, aka the natural log base. *e* is a very odd number, but very fundamental. It shows up constantly, in all sorts of strange places where you wouldn’t expect it.
What is e?
------------
*e* is a transcendental irrational number. It’s roughly 2.718281828459045. It’s also the base of the natural logarithm. That means that by definition, if ln(x)=y, then *e*^y=x. Given my highly warped sense of humor, and my love of bad puns (especially bad *geek* puns), I like to call *e* the *unnatural natural number*. (It’s natural in the sense that it’s the base of the natural logarithm; but it’s not a natural number according to the usual definition of natural numbers. Hey, I warned you that it was a bad geek pun.)
But that’s not a sufficient answer. We call it the *natural* logarithm. Why is that bizarre irrational number just a bit smaller than 2 3/4 *natural*?
Take the curve y=1/x. The area under the curve from 1 to n is the natural log of n. *e* is the point on the x axis where the area under the curve from 1 is equal to one:
ln.jpg
It’s also what you get if you add up the reciprocals of the factorials of every natural number: (1/0! + 1/1! + 1/2! + 1/3! + 1/4! + …)
It’s also what you get if you take the limit: lim_{n → ∞} (1 + 1/n)^n.
It’s also what you get if you work out this very strange looking series:

2 + 1/(1 + 1/(2 + 2/(3 + 3/(4 + …))))

It’s also the base of a very strange equation: the derivative of *e*^x is… *e*^x.
And of course, as I mentioned yesterday, it’s the number that makes the most amazing equation in mathematics work: *e*^(iπ) = -1.
Why does it come up so often? It’s really deeply fundamental. It’s tied to the fundamental structure of numbers. It really is a deeply *natural* number; it’s tied into the shape of a circle, to the basic 1/x curve. There are dozens of different ways of defining it, because it’s so deeply embedded in the structure of *everything*.
Wikipedia even points out that if you put $1 into a bank account paying 100% interest compounded continually, at the end of the year, you’ll have exactly *e* dollars. (That’s not too surprising; it’s just another way of stating the limit definition of *e*, but it’s got a nice intuitiveness to it.)
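Here’s a quick numerical sanity check of a couple of the definitions above (the variable names are mine, and the limit is only approximated with a large but finite n):

    import math

    # e as the sum of reciprocals of factorials
    factorial_sum = sum(1 / math.factorial(n) for n in range(20))

    # e as the limit of (1 + 1/n)^n, approximated with n = 1,000,000
    compound_limit = (1 + 1 / 1_000_000) ** 1_000_000

    print(factorial_sum)    # 2.718281828459045
    print(compound_limit)   # 2.7182804693... (slowly approaching e)
    print(math.e)           # 2.718281828459045
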
History

Lighter Topics – what do you want to know?

The category theory series is finally winding down; I’ve got one topic I’d like to write about, and then I’ll have had my fill of category theory for a while. I don’t want to dive right in to another really deep topic like topology, so I’m looking for some subjects that people are interested in that can be covered in one or two posts. I could come up with some by myself (and probably will), but there are a lot of things like the zero article which so many people seemed to enjoy which I could write about, but probably wouldn’t think of on my own.
So, what would you like to see one or two posts on?

i : the Imaginary Number

After the amazing response to my post about zero, I thought I’d do one about something that’s fascinated me for a long time: the number *i*, the square root of -1. Where’d this strange thing come from? Is it real (not in the sense of real numbers, but in the sense of representing something *real* and meaningful)?
History
---------
The number *i* has its earliest roots in some of the work of early Arabic mathematicians; the same people who really first understood the number 0. But they weren’t quite as good with *i* as they were with 0: they didn’t really get it. They had some concept of roots of a cubic equation, where sometimes the tricks for finding the roots of the equation *just didn’t work*. They knew there was something going on, some way that the equation needed to have roots, but just what that really meant, they didn’t get.
Things stayed that way for quite a while. Various others, like the Greeks, encountered them in various ways when things didn’t work, but no one *really* grasped the idea that algebra required numbers that were more than just points on a one-dimensional number-line.
The next step was in Italy, over 1000 years later. During the 16th century, people were searching for solutions to the cubic equations – the same thing that the Arabic scholars were looking at. But getting some of the solutions – even solutions to equations with real roots – required playing with the square root of -1 along the way. It was first really described by Rafael Bombelli in the context of the solutions to the cubic; but Bombelli didn’t really think that they were *real*, *meaningful* numbers: it was viewed as a useful artifact of the process of solving the equations, but it wasn’t accepted.
It got its name as the *imaginary number* as a result of a diatribe by Rene Descartes, who believed it was a phony artifact of sloppy algebra. He did not accept that it had any meaning at all: thus it was an “imaginary” number.
They finally came into wide acceptance as a result of the work of Euler in the 18th century. Euler was probably the first to really, fully comprehend the complex number system created by the existence of *i*. And working with that, he discovered one of the most fascinating and bizarre mathematical discoveries ever, known as *Euler’s equation*. I have no idea how many years it’s been since I was first exposed to this, and I *still* have a hard time wrapping my head around *why* it’s true.

e^(iθ) = cos θ + i sin θ

And what *that* really means is:

e^(iπ) = -1

That’s just astonishing. The fact that there is *such* a close relationship between i, π, and e is just shocking to me.
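If you want to see it numerically, Python’s complex-math library will happily confirm both forms (up to floating-point rounding):

    import cmath, math

    print(cmath.exp(1j * math.pi))               # (-1+1.2246e-16j): -1 plus rounding error
    print(cmath.exp(1j * 1.0))                   # e^(iθ) for θ = 1
    print(cmath.cos(1.0) + 1j * cmath.sin(1.0))  # cos θ + i sin θ: the same value
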
What *i* does
-------------
Once the reality of *i* as a number was accepted, mathematics was changed irrevocably. Instead of the numbers described by algebraic equations being points on a line, suddenly they become points *on a plane*. Numbers are really *two dimensional*; and just like the integer “1” is the unit distance on the axis of the “real” numbers, “i” is the unit distance on the axis of the “imaginary” numbers. As a result, numbers *in general* become what we call *complex*: they have two components, defining their position relative to those two axes. We generally write them as “a + bi” where “a” is the real component, and “b” is the imaginary component.

complex-axis.jpg

The addition of *i* and the resulting addition of complex numbers is a wonderful thing mathematically. It means that *every* polynomial equation has roots; in particular, a polynomial equation in “x” with maximum exponent “n” will always have exactly “n” complex roots.
But that’s just an effect of what’s really going on. The real numbers are *not* algebraically closed: there are simple polynomial equations over the reals, like x² + 1 = 0, that have no real solutions. With the addition of *i*, the algebra becomes closed: every operation, every expression in algebra becomes meaningful; nothing escapes the system of the complex numbers.
Of course, it’s not all wonderful joy and happiness once we go from real to complex. Complex numbers aren’t ordered. There is no < comparison for complex numbers. The ability to do meaningful inequalities evaporates when complex numbers enter the system in a real way.
What *i* means
--------------
But what do complex numbers *mean* in the real world? Do they really represent real phenomena? Or are they just a mathematical abstraction?
They’re very real. There’s one standard example that everyone uses: and the reason that we all use it is because it’s such a perfect example. Take the electrical outlet that’s powering your computer. It’s providing alternating current. What does that mean?
Well, the *voltage* – which (to oversimplify) can be viewed as the amount of force pushing the current – is complex. In fact, if you’ve got a voltage of 110 volts AC at 60 Hz (the standard in the US), what that means is that the voltage is a number of magnitude “110”. If you were to plot the “real” voltage on a graph with time on the X axis and voltage on the Y, you’d see a sine wave:

sinewave.jpg

But that’s not really accurate. If you grabbed the wire when the voltage is supposedly zero on that graph, *you’d still get a shock*! Take the moment marked “t1” on the graph above. The voltage at time t1 on the complex plane is a point at “110” on the real axis. At time t2, the voltage on the “real” axis is zero – but on the imaginary axis it’s 110. In fact, the *magnitude* of the voltage is *constant*: it’s always 110 volts. But the vector representing that voltage *is rotating* through the complex plane.

voltage.jpg
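Here’s a tiny sketch of that rotating-vector picture (using the 110 V / 60 Hz figures from above, and treating 110 as the peak magnitude for simplicity):

    import cmath, math

    def phasor(t, magnitude=110.0, hz=60.0):
        # The voltage as a point in the complex plane, rotating 60 times per second.
        return magnitude * cmath.exp(1j * 2 * math.pi * hz * t)

    for t in (0.0, 1 / 240, 1 / 120):   # t=0, a quarter cycle later, a half cycle later
        v = phasor(t)
        print(f"t={t:.6f}s  real={v.real:8.2f}  imag={v.imag:8.2f}  |v|={abs(v):.2f}")
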

You also see it in the Fourier transform: when we analyze sound using a computer, one of the tricks we use is decomposing a complex waveform (like a human voice speaking) into a collection of basic sine waves, where the sine waves added up equal the wave at a given point in time. The process by which we do that decomposition is intimately tied to complex numbers: the Fourier transform, and all of the analyses and transformations built on it, are dependent on the reality of complex numbers (and in particular on the magnificent Euler’s equation up above).
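Here’s a bare-bones sketch of that decomposition: a direct discrete Fourier transform, with nothing beyond the standard library, applied to a pure sine wave:

    import cmath, math

    def dft(samples):
        # Direct discrete Fourier transform: each output bin is a complex number
        # whose magnitude and phase describe one sinusoidal component of the input.
        n = len(samples)
        return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
                for k in range(n)]

    # A pure sine wave with 3 cycles over 16 samples: all of the energy shows up
    # in bins 3 and 13 (the positive- and negative-frequency halves).
    wave = [math.sin(2 * math.pi * 3 * t / 16) for t in range(16)]
    print([round(abs(x), 3) for x in dft(wave)])
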

Bad, bad, bad math! AiG and Information Theory

While taking a break from some puzzling debugging, I decided to hit one of my favorite comedy sites, Answers in Genesis. I can pretty much always find something sufficiently stupid to amuse me on their site. Today, I came across a gem called [“Information, science and biology”][gitt], by the all too appropriately named “Werner Gitt”. It’s yet another attempt by a creationist twit to find some way to use information theory to prove that life must have been created by god.
It looks like the Gitt hasn’t actually *read* any real information theory, but has rather just read Dembski’s wretched mischaracterizations, and then regurgitated and expanded upon them. Dembski was bad enough; building on an incomplete understanding of Dembski’s misrepresentations and errors is just astonishing.
Anyway, after butchering an introduction to Shannon theory, he moves onward.
>The highest information density known to us is that of the DNA
>(deoxyribonucleic acid) molecules of living cells. This chemical storage medium
>is 2 nm in diameter and has a 3.4 nm helix pitch (see Figure 1). This results
>in a volume of 10.68 x 10^-21 cm^3 per spiral. Each spiral contains ten chemical
>letters (nucleotides), resulting in a volumetric information density of 0.94 x
>10^21 letters/cm^3. In the genetic alphabet, the DNA molecules contain only the
>four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The
>information content of such a letter is 2 bits/nucleotide. Thus, the
>statistical information density is 1.88 x 10^21 bits/cm^3.
This is, of course, utter gibberish. DNA is *not* the “highest information density known”. In fact, the concept of *information density* is not well-defined *at all*. How do you compare the “information density” of a DNA molecule with the information density of an electromagnetic wave emitted by a pulsar? It’s meaningless to compare. But even if we only count physically encoded information, consider the information density of a crystal, like a diamond. A diamond is an incredibly compact crystal of carbon atoms. There are no perfect diamonds: all crystals contain irregularities and impurities. Consider how dense the information of that crystal is: the position of every flaw, every impurity, the positions of the subset of carbon atoms in the crystal that are carbon-14 as opposed to carbon-12. Considerably denser than DNA, huh?
After this is where it *really* starts to get silly. Our Gitt claims that Shannon theory is incomplete, because after all, it’s got a strictly *quantitative* measure of information: it doesn’t care about what the message *means*. So he sets out to “fix” that problem. He proposes five levels of information: statistics, syntax, semantics, pragmatics, and apobetics. He claims that Shannon theory (and in fact information theory *as a whole*) only concerns itself with the first; because it doesn’t differentiate between syntactically valid and invalid information.
Let’s take a quick run through the five, before I start mocking them.
1. Statistics. This is what information theory refers to as information content, expressed in terms of an event sequence (as I said, he’s following Dembski); so we’re looking at a series of events, each of which is receiving a character of a message, and the information added by each event is how surprising that event was. That’s why he calls it statistical.
2. Syntax. The structure of the language encoded by the message. At this level, it is assumed that every message is written in a *code*; you can distinguish between “valid” and “invalid” messages by checking whether they are valid strings of characters for the given code.
3. Semantics. What the message *means*.
4. Pragmatics. The *primitive intention* of the transmitter of the message; the specific events/actions that the transmitter wanted to occur as a result of sending the message.
5. Apobetics: The *purpose* of the message.
According to him, level 5 is the most important one.
Throughout the article, he constantly writes “theorems”. He clearly doesn’t understand what the word “theorem” means, because these things are just statements that he would *like* to be true, but which are unproven, and often unprovable. A few examples?
For example, if we look at the section about “syntax”, we find the following as theorems:
>Theorem 4: A code is an absolutely necessary condition for the representation
>of information.
>
>Theorem 5: The assignment of the symbol set is based on convention and
>constitutes a mental process.
>
>Theorem 6: Once the code has been freely defined by convention, this definition
>must be strictly observed thereafter.
>
>Theorem 7: The code used must be known both to the transmitter and receiver if
>the information is to be understood.
>
>Theorem 8: Only those structures that are based on a code can represent
>information (because of Theorem 4). This is a necessary, but still inadequate,
>condition for the existence of information.
>
>These theorems already allow fundamental statements to be made at the level of
>the code. If, for example, a basic code is found in any system, it can be
>concluded that the system originates from a mental concept.
How do we conclude that a code is a necessary condition for the representation of information? We just assert it. Worse, how do we conclude that *only* things that are based on a code represent information? Again, just an assertion – but an *incredibly* strong one. He is asserting that *nothing* without a
structured encoding is information. And this is also the absolute crux of his argument: information only exists as a part of a code *designed by an intelligent process*.
Despite the fact that he claims to be completing Shannon theory, there is *nothing* to do with math in the rest of this article. It’s all words. Theorems like the ones quoted above, but becoming progressively more outrageous and unjustified.
For example, his theorem 11:
>The apobetic aspect of information is the most important, because it embraces
>the objective of the transmitter. The entire effort involved in the four lower
>levels is necessary only as a means to an end in order to achieve this
>objective.
After this, we get to his conclusion, which is quite a prize.
>On the basis of Shannon’s information theory, which can now be regarded as
>being mathematically complete, we have extended the concept of information as
>far as the fifth level. The most important empirical principles relating to the
>concept of information have been defined in the form of theorems.
See, to him, a theorem is nothing but a “form”: a syntactic structure. And this whole article, to him, is mathematically complete.
>The Bible has long made it clear that the creation of the original groups of
>fully operational living creatures, programmed to transmit their information to
>their descendants, was the deliberate act of the mind and the will of the
>Creator, the great Logos Jesus Christ.
>
>We have already shown that life is overwhelmingly loaded with information; it
>should be clear that a rigorous application of the science of information is
>devastating to materialistic philosophy in the guise of evolution, and strongly
>supportive of Genesis creation.
That’s where he wanted to go all through this train-wreck. DNA is the highest-possible density information source. It’s a message originated by god, and transmitted by each generation to its children.
And as usual for the twits (or Gitts) that write this stuff, they’re pretending to put together logical/scientific/mathematical arguments for god; but they can only do it by specifically including the necessity of god as a premise. In this case, he asserts that DNA is a message; and a message must have an intelligent agent creating it. Living things can’t be the original creators of the message, since the DNA had to be created before us; therefore there must be a god.
Same old shit.
[gitt]: http://www.answersingenesis.org/tj/v10/i2/information.asp