# A Glance at Hyperreal Numbers

Since we talked about the surreals, I thought it would be interesting to take a *very* brief look at an alternative system that also provides a way of looking at infinities and infinitesimals: the *hyperreal* numbers.
The hyperreal numbers are not a construction like the surreals; instead, they’re defined by axiom. The basic idea is that for the ordinary real numbers, there is a set of basic statements that we can make: statements of first-order logic. And there is a basic structure to the set: it’s an *ordered field*.
The hyperreals add the “number” ω, the size of the set of natural numbers, so that you can construct numbers using ω, like ω+1, 1/ω, etc.; but they are constrained by axiom so that the set of hyperreals is *still* an ordered field, and so that every first-order statement that is true over the reals is also true over the hyperreals.
For notation, we write the real field ℜ, and the hyperreal field ℜ*.
The Structure of Reals and Hyperreals: What is an ordered field?
——————————————————————-
If you’ve been a long-time reader of GM/BM, you’ll remember the discussion of group theory. If not, you might want to take a look at it; there’s a link in the sidebar.
A field is a commutative ring, where the multiplicative identity and the additive identity are not equal; where all numbers have an additive inverse, and all numbers except 0 have a multiplicative inverse.
Of course, for most people, that statement is completely worthless.
In abstract algebra, we study things about the basic structure of the sets where algebra works. The most basic structure is a *group*. A group is basically a set of values with a single operation, “×”, called the *group operator*. The “×” operation is *closed* over the set, meaning that for any values x and y in the set, x × y produces a value that is also in the set. The group operator must also be associative: for any values x, y, and z, x×(y×z) = (x×y)×z. The set contains an *identity element* for the group, generally written “1”, which has the property that for every value x in the group, x×1 = 1×x = x. And finally, for any value x in the set, there must be a value x⁻¹ such that x×x⁻¹ = 1. We often write a group as (G,×), where G is the set of values and × is the group operator.
So, for example, the integers with the “+” operation form a group, (Z,+). The real numbers *almost* form a group with multiplication, except that “0” has no inverse. If you take the real numbers without 0, then you get a group.
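For a small finite set, the group axioms can be checked by brute force. Here’s a quick sketch in Python (the `is_group` helper is mine, not a standard library function); the integers mod 5 mirror the examples above, since 0 has no multiplicative inverse there either:

```python
from itertools import product

def is_group(S, op):
    """Brute-force check of the group axioms for a finite set S under op."""
    S = list(S)
    closed = all(op(x, y) in S for x, y in product(S, S))
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x, y, z in product(S, S, S))
    ids = [e for e in S if all(op(e, x) == x == op(x, e) for x in S)]
    has_inverses = bool(ids) and all(
        any(op(x, y) == ids[0] for y in S) for x in S)
    return closed and assoc and bool(ids) and has_inverses

# Mod 5: a group under +; the *nonzero* values are a group under ×;
# but with 0 included, × is not a group, because 0 has no inverse.
print(is_group(range(5), lambda x, y: (x + y) % 5))      # True
print(is_group(range(1, 5), lambda x, y: (x * y) % 5))   # True
print(is_group(range(5), lambda x, y: (x * y) % 5))      # False
```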
If the group operator is also commutative (x×y = y×x for all x and y), then it’s called an *abelian group*. The integers with “+” form an abelian group.
A *ring* (R,+,×) is a set with two operations. (R,+) must be an abelian group; “×” must be closed and associative, have an identity, and distribute over “+”. If × is also commutative, then the ring is called a *commutative* ring.
A *field* (F,+,×) is a commutative ring where the identity value for “+” is written 0, and the identity for “×” is written 1; all values have additive inverses; all values except 0 have multiplicative inverses; and 0 ≠ 1. A *subfield* (S,+,×) of a field (F,+,×) is a field with the same operations as F, and where its set of values is a subset of the values of F.
*Finally*, an *ordered* field is a field with a total order “≤”: for any two values x and y, either “x ≤ y” or “y ≤ x”, and if x ≤ y ∧ y ≤ x then x=y. The total order must also respect the two operations: if a ≤ b, then a + x ≤ b + x; and if 0 ≤ a and 0 ≤ b then 0 ≤ a×b.
The real numbers are the canonical example of an ordered field.
*(The definitions above were corrected to remove several errors pointed out in the comments by readers “Dave Glasser” and “billb”. As usual, thanks for the corrections!)*
One of the things we need to ensure for the hyperreal numbers to work is that they form an ordered field; and that the real numbers are an ordered subfield of the hyperreals.
The Transfer Principle
————————
To do the axiomatic definition of the hyperreal numbers, we need something called *the transfer principle*. I’m not going to go into the full details of the transfer principle, because it’s not a simple thing to fully define it, and prove that it works. But the intuitive idea of it isn’t hard.
What the transfer principle says is: *For any **first order** statement L that’s true for the ordered field of real numbers, L is also true for the ordered field of hyperreal numbers*.
So for example: ∀ x ∈ ℜ, ∃ y ∈ ℜ : x ≤ y. Therefore, by transfer, for any hyperreal number x ∈ ℜ*, ∃ y ∈ ℜ* : x ≤ y.
Defining the Hyperreal Numbers
——————————–
To define the hyperreal numbers so that they do form an ordered field with the right properties, we need to do two things:
1. Define at least one hyperreal number that is not a real number.
2. Show that the *transfer principle* applies.
So we define ω as a hyperreal number such that ∀ r ∈ ℜ, r < ω.
What we *should* do next is prove that the transfer principle applies. But that’s well beyond the scope of this post.
What we end up with is very similar to what we have with the surreal numbers. We have infinitely large numbers. And because of the transfer principle, since there’s a successor for any real number, that means that there’s a successor for ω, so there is an ω+1. Since multiplication works (by transfer), there is a number 2×ω. Since the hyperreals are a field, ω has a multiplicative inverse, the infinitesimal 1/ω, and an additive inverse, -ω.
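You can get a feel for this kind of arithmetic with a toy model: represent a number as a finite polynomial in ω, and order by the leading term. To be clear, this is *not* the real construction of the hyperreals (in particular, it does not satisfy the transfer principle), and all the names in it are mine; it just illustrates the ordering behavior of ω and 1/ω:

```python
# Toy model: a number is a map {exponent: coefficient}, read as a
# polynomial in ω; e.g. {1: 2, 0: 3} is 2ω+3, and {-1: 1} is 1/ω.
def sub(a, b):
    """Difference of two polynomials in ω."""
    d = dict(a)
    for e, c in b.items():
        d[e] = d.get(e, 0) - c
    return {e: c for e, c in d.items() if c != 0}

def less(a, b):
    """a < b iff the leading (highest-power-of-ω) term of b-a is positive."""
    d = sub(b, a)
    return bool(d) and d[max(d)] > 0

OMEGA = {1: 1}
def real(r):
    return {0: r} if r else {}

print(less(real(10**9), OMEGA))      # True: ω exceeds every real
print(less(OMEGA, {1: 1, 0: 1}))     # True: ω < ω+1
print(less(real(0), {-1: 1}))        # True: 1/ω is positive...
print(less({-1: 1}, real(10**-9)))   # True: ...but below every positive real
```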
There is, of course, a catch. Not quite everything can transfer from ℜ to ℜ*. We are constrained to *first order* statements. What that means is that we are limited to simple direct statements; we can’t make statements that are quantified over other statements.
So for example, we can say that for any real number N, the series 1, 1+1, 1+1+1, 1+1+1+1, … will eventually reach a point where every element after that point is larger than N.
But that’s not a first order statement. The *series* 1, 1+1, 1+1+1, … is a *second order* statement: it isn’t talking about a simple single statement. It’s talking about a *series* of statements. So the transfer principle fails.
That does end up being a fairly serious limit. There are a lot of things that you can’t say using first-order statements. But in exchange for that limitation, you get the ability to talk about infinitely large and infinitely small values, which can make some problems *much* easier to understand.

# Arithmetic with Surreal Numbers

Last Thursday, I introduced the construction of John Conway’s beautiful surreal numbers. Today I’m going to show you how to do arithmetic using surreals. It’s really quite amazing how it all works! If you haven’t read the original post introducing surreals, you’ll definitely want to [go back and read that][surreals] before looking at this post!

# Random Quotes

I figured it was time I did the latest random thing to be wandering its way around Scienceblogs. [Janet has introduced the “random quotes” meme][janet], in which we’re supposed to go wandering through the [quotes here][quotes], and pick the first five that reflect “who you are or what you believe”.
1. “Human beings are perhaps never more frightening than when they are convinced beyond doubt that they are right.”, Laurens Van der Post, The Lost World of the Kalahari (1958). Could any quote possibly be more true?
2. “He that would make his own liberty secure, must guard even his enemy from oppression; for if he violates this duty, he establishes a precedent that will reach to himself.”, Thomas Paine (1737 – 1809). Given recent events in this country, this is particularly apropos.
3. “We are here to change the world with small acts of thoughtfulness done daily rather than with one great breakthrough.”, Rabbi Harold Kushner. I’ve actually taken a class with Rabbi Kushner, and it was wonderful; this quote to me sums up a big part of how I try to live my life.
4. “Science is facts; just as houses are made of stones, so is science made of facts; but a pile of stones is not a house and a collection of facts is not necessarily science.”, Henri Poincaré (1854 – 1912). I couldn’t possibly pick quotes for this blog without quoting a mathematician. Considering what I do on this blog, this one is quite appropriate for me. So many of the creationist screeds that I criticize are based on collections of actual facts put together in stupid ways that turn them into garbage instead of science.
5. “The art of dining well is no slight art, the pleasure not a slight pleasure.”, Michel de Montaigne (1533 – 1592). Yes, I’m a foodie. Maybe someday I’ll post a recipe or two. I used to have a website with the recipes from my Y2K New Years Eve party, but it got lost when I switched ISPs and forgot to copy it.
[janet]: http://scienceblogs.com/ethicsandscience/2006/08/random_quotations_meme.php
[quotes]: http://www.quotationspage.com/random.php3

# A Crank Responds: Georgie-Boy and his "Scientific Proof of God"

Remember my post several weeks ago about [“The First Scientific Proof of God”?][georgie] The author, Georgie-boy Shollenberger popped up [in the comments yesterday][georgie-comments], and posted [a response][georgie-responds] on his blog.
This is how he describes this blog:
>This website is an example of how some math teachers are thinking and teaching
>your children. In general, this website is a Good Math, Bad Math web. On this
>web, debunking creationism is listed under the bad math category. So, your
>children are most likely taught by atheists. Is this what parents want?
If this blog is indeed an example of how math teachers are thinking and teaching, then I’m very flattered. I’m not a teacher, but I would very much *like* to be one; if my writing works well for teaching math, then that means I’m doing a good job.
But the second part: does “debunking creationism” imply atheism? For a guy who purports to have made the greatest discovery in the history of the world; and who claims to show why both math and logic need to be reexamined from their very roots, this is a pathetic claim.
First: Creationism is *not* the only possible theistic belief system.
Second: Creationism is *bad* theism. It consists of throwing away math, logic, science, and reason all in an effort to support a bizarre and unreasonable interpretation of one poorly translated religious text.
Third: I’m not an atheist.
He follows that intro with an interesting non sequitur:
>Mathematicians review my suggestion to restudy mathematics. First, they do not
>believe that humans might be living on other planets. You might agree with them
>but my scientific proof requires other planets to maintain human life
>eternally. Apparently, the reviewers believe that the evening stars are merely
>lights as the ancients thought. How mindless. When seeking the effects of a
>proven God, planet earth is not the first planet that has humans and will not
>be the last planet that has humans.
Fascinating thought process there, huh? I criticize him for sloppy mathematical arguments, and therefore “I do not believe that humans might be living on other planets”, and I “believe that the evening stars are merely lights”.
As a matter of fact, I *don’t* believe that there are humans living on other planets. But how can one conclude from my criticism of his math that I think “evening stars are merely lights”? (Or that I believe that humans don’t live on other planets, for that matter? Just because I *do* believe that humans don’t live on other planets doesn’t mean you can conclude that from my criticism of his sloppy math!)
>… But, the author gripes because my book must
>be purchased to determine what I say. Yet, mathematicians make and sell books
>regularly.
Yes, mathematicians make and sell books. But I’ve yet to see a major mathematical discovery that you could *only* see if you were willing to pay
the author.
For example, the other day, I wrote about Grigory Perelman’s proof of the Poincaré conjecture. It’s available online for anyone who wants it:
* The entropy formula for the Ricci flow and its geometric applications
* Ricci flow with surgery on three-manifolds
* Finite extinction time for the solutions to the Ricci flow on certain three-manifolds
Or Conway’s surreal numbers? Yes, he wrote an [excellent book][onag] on them. He also made information on them widely available to all sorts of people. He showed them to Don Knuth, who wrote [the first book][knuth-book] on them. There’ve been articles on them all over, from Martin Gardner in Scientific American to random people on personal websites. He didn’t demand that everyone give him money to see his work.
How about Einstein? He published relativity in a physics journal called “[Annalen der Physik][annalen]”. At the time, there was nothing like the internet, and scientists pretty much always published in journals (as they continue to do today). Annalen does *not* pay the authors of papers; it’s available in *every* major university library; and you are permitted to make personal copies of articles from it for academic use.
Mathematicians and scientists publish articles and books – but we don’t expect (or even particularly want) to be able to restrict access to our ideas *only* to people willing to give us money to see them.
Georgie-boy doesn’t do anything like that. If you want to see his wonderful, world-changing proof, you have to pay him for it.
Finally, he gets around to addressing my criticism of his *math*.
>The author focuses on the concept of infinite, but does not seem to understand
>the great mathematician, Georg Cantor, who discovered the transfinite numbers.
>Instead, the author (1) plays with Aristotle’s potential infinity, which Cantor
>calls the mathematical or bad infinity, (2) plays with ‘infinity by division,’
>which is a verb that defined the atom for the ancients atomists, (3) plays with
>’infinity by addition,’ which applies to Cantor’s transfinite numbers, and (4)
>plays with surreal numbers in which infinity becomes a real number. I would
>throw John Conway’s surreal numbers into the circle file. Then, the author
>charges me with saying that God is a number infinity. At no time have I ever
>gave God a number because. God is not a number. God’s oneness opposes the
>universes’ manyness and thus precedes all finite and infinite numbers that will
>ever be found in the universe.
Why did I talk about Aristotle’s potential infinity? Because Georgie-boy *specifically claims that mathematicians use Aristotle’s infinity*. Infinity by addition and infinity by division are the two forms of infinity discussed by Aristotle. The horror! The horror! I actually *criticized* Georgie-boy by *addressing his arguments*! Oh, will my unfairness and unreasonability never cease?!
Oh, and why would he throw Conway’s surreals in the trash? Who knows? It’s particularly interesting the way that he juxtaposes Cantor and the transfinite numbers in defense of his ideas, while tossing Conway and the surreals into the trash. Because, you see, the surreals are based *on the same concept of ordinals* as the transfinite numbers. *(Note: the previous paragraph originally had a typo; where it currently says “transfinite numbers”, I originally repeated “surreal numbers”. Thanks to commenter “Noodle” for the catch.)*
>My suggestion to restudy mathematics is a serious matter because I discovered
>the first scientific proof of God. I conclude that this discovery has vast
>potentials in mathematics and all sciences. With this proof, big changes can be
>expected.
Yes, his theory has *vast potential*. It’s going to change the world! It’s going to revolutionize all of math and science! And all you need to do to learn about it is: **buy his book**! Because he won’t tell you about it otherwise.
>… For instance, Cantor’s transfinite numbers must be developed by our
>mathematicians so we can understand the universe’s atoms, the cosmology of God,
>and the cells of all the living bodies. Unfortunately, the atheistic
>mathematicianc believe that we live only in world of numbers. The theory of God
>will not go away during the life of any person. Today’s mathematicians have a
>choice to work with 85% of the people in the USA who believe in God. On the
>other hand, they can live privately ‘in their box of finites.’ I hope to
>convince ‘the majority’ in the USA that the field of mathematics is falling
>apart and must thus be reformed but also expanded considerably.
Yeah, we need to start studying transfinite numbers, because *nobody* has been studying anything like that. (Except, of course, for thousands of number theorists.)
And we need to stop being atheists (even when we aren’t), because the existence of god means, ummm, well, ummm…. Not very much in terms of math?
And mathematics is falling apart! Never mind that we’ve managed trivial little things like proving the Poincaré conjecture and Fermat’s last theorem, or characterizing the fundamental limits of mathematics; silly accomplishments like those mean *nothing*. Mathematics is falling apart! Who can save us?!
Why, nothing can save us except Georgie-boy!
As long as we send him some cash.
[georgie]: http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php
[georgie-comments]:http://scienceblogs.com/goodmath/2006/07/restudying_math_in_light_of_th.php#comment-194071
[georgie-responds]: http://georgeshollenberger.blogspot.com/2006/08/what-mathematicians-are-teaching-your.html
[onag]: http://rcm.amazon.com/e/cm?t=goodmathbadma-20&o=1&p=8&l=as1&asins=1568811276&fc1=000000&IS2=1&lt1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[knuth-book]: http://rcm.amazon.com/e/cm?t=goodmathbadma-20&o=1&p=8&l=as1&asins=0201038129&fc1=000000&IS2=1&lt1=_blank&lc1=0000ff&bc1=000000&bg1=ffffff&f=ifr
[annalen]: http://www.wiley-vch.de/publish/en/journals/alphabeticIndex/2257/

# Beautiful Insanity: Pathological Programming in SNUSP

Today’s programming language insanity is a real winner. It’s a language called SNUSP. You can find the language specification [here][snuspspec], a [compiler][snuspcomp], and [an interpreter embedded in a web page][snuspinterp]. It’s sort of like a cross between [Befunge][befunge] and [Brainfuck][brainfuck], except that it also allows subroutines. (And in a variant, *threads*!) The real beauty of SNUSP is its beauty: that is, programs in SNUSP are actually really quite pretty, and watching them run can be positively entrancing.
SNUSP keeps its data on a tape, like Brainfuck. The basic instructions are very Brainfuck like:
1. “>” move the data tape pointer one cell to the right.
2. “<” move the data tape pointer one cell to the left.
3. “+” add one to the value in the current data tape cell.
4. “-” subtract one from the value of the current data tape cell.
5. “,” read a byte from stdin to the current tape cell.
6. “.” write a byte from the current tape cell to stdout.
7. “!” skip the next instruction.
8. “?” skip the next instruction if the current tape cell contains zero.
Then there’s the two-dimensional control flow. There aren’t many instructions here.
1. “/” bounce the current control flow direction as if the “/” were a mirror: if the program is flowing up, switch to right; if it’s flowing down, switch to left; if it’s flowing left, switch to down; and if its flowing right, switch to up.
2. “\” bounce the other way; also just like a mirror.
3. “=” noop for drawing a path flowing left/right.
4. “|” noop for drawing a path flowing up/down.
So far, we’ve pretty much got a straightforward mix between Brainfuck and Befunge. Here’s where it becomes particularly twisted. It also has two more
instructions for handling subroutines. There’s a call stack which records pairs of location and direction, and the two instructions work with that:
1. “@” means push the current program location and the current direction onto the stack.
2. “#” means pop the top of the stack, set the location and direction, and *skip* one cell. If there is nothing on the stack, then exit (end program).
Finally, the program execution starts out wherever there is a “$”, moving to the right.
So, for example, here’s a program that reads a number and then prints it out twice:
```
          /==!/===.===#
          |   |
$====,===@/==@/====#
```
So, it starts at the “$”, flowing right. Then it gets to the “,”, and reads a value
into the current tape cell. It hits the first “@”, records the location and direction on the stack. Then it hits the “/” mirror, and goes up until it hits the “/” mirror, and turns right. It gets to the “!” and skips over the “/” mirror, then the “.” prints, and the “#” pops the stack. So it returns to the
first “@”, skips over the “/” mirror, and gets to the second “@”, which pushes the stack, etc.
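Tracing these programs by hand gets tedious, but the rules above are simple enough that a toy interpreter fits in a page. This is a minimal sketch in Python, not a spec-complete implementation: the tape size, treating end-of-input as zero, and halting when the flow runs off the grid are my assumptions, and the spacing of the echo-twice program is my reconstruction of the layout above.

```python
def run_snusp(src, inp=""):
    grid = src.split("\n")
    r, c = 0, 0                       # default start: top-left corner
    for i, row in enumerate(grid):
        if "$" in row:                # "$" marks the start, flowing right
            r, c = i, row.index("$")
            break
    dr, dc = 0, 1
    tape, ptr = [0] * 30000, 0
    stack, out, inp = [], [], list(inp)

    def at(r, c):
        return grid[r][c] if 0 <= r < len(grid) and 0 <= c < len(grid[r]) else None

    while True:
        ch = at(r, c)
        if ch is None:                # flowed off the grid: halt
            break
        if ch == ">":   ptr += 1
        elif ch == "<": ptr -= 1
        elif ch == "+": tape[ptr] += 1
        elif ch == "-": tape[ptr] -= 1
        elif ch == ",": tape[ptr] = ord(inp.pop(0)) if inp else 0
        elif ch == ".": out.append(chr(tape[ptr] % 256))
        elif ch == "/": dr, dc = -dc, -dr              # one mirror
        elif ch == "\\": dr, dc = dc, dr               # the other mirror
        elif ch == "!": r, c = r + dr, c + dc          # unconditional skip
        elif ch == "?" and tape[ptr] == 0:             # skip if cell is zero
            r, c = r + dr, c + dc
        elif ch == "@": stack.append((r, c, dr, dc))   # push location+direction
        elif ch == "#":
            if not stack:                              # pop on empty stack: exit
                break
            r, c, dr, dc = stack.pop()
            r, c = r + dr, c + dc                      # returning skips one cell
        r, c = r + dr, c + dc
    return "".join(out)

echo_twice = (
    "          /==!/===.===#\n"
    "          |   |\n"
    "$====,===@/==@/====#"
)
print(run_snusp(echo_twice, "A"))   # AA
```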
Here’s a simple subroutine for adding 48 to a cell:
```
=++++++++++++++++++++++++++++++++++++++++++++++++#
```
Or a slight variant:
```
  /=+++++++++++++++++++++++++\
  |  #+++++++++++++++++++++++/
  |
$=/
```
Or (copying from the language manual), how about this one? This one starts to give you an idea of what I like about this bugger; the programs just *look* cool. Writing a program in SNUSP can be as much art as it is programming.
```
#//
$===!++++
/++++/
/=++++
!///
```
One last +48 subroutine,
```
    123 4
  /=@@@+@+++++#
  |
$=/
```
This last one is very clever, so I’ll walk through it. The “1234” on the top are comments; any character that isn’t an instruction is ignored. They’re there to label things for me to explain.
The program goes to @1. It pushes the loc/dir on the stack. Then it gets to @2, and pushes again (so now the stack is “@1 right, @2 right”). Then @3, push (stack = “@1 right, @2 right, @3 right”). Then add one to the cell. Push again at @4 (stack = “@1 right, @2 right, @3 right, @4 right”). Then 5 “+”s, so add 5 to the cell. So we’ve added 6. Then we hit “#”, so pop, return to @4, and skip one cell; so 4 “+”s get executed. So we’ve added 10. Then pop again (stack = “@1 right, @2 right”), return to @3, skip one instruction. So we’re back to @4; push (stack = “@1 right, @2 right, @4 right”). Add five (we’re up to +15), and pop the stack, which brings us back to @4 again, and skip one cell, so now add another 4 (+19). Pop (stack = “@1 right”), and we’re at @2. Skip one instruction, so we jump over @3. Then add one (+20), and repeat what happened before when we first got to @4, adding another 9 (+29). Pop again (so the stack is empty), skip one instruction, so we’re at @3. Skip, push, repeat from @4 (+38). Pop back to @2, skip @3, add one (+39), push @4, and repeat the same thing from @4 again (+48).
Here’s a real beauty: Multiplication, with documentation. If you look at it carefully, it’s actually reasonably clear how it works! Given this instruction set, that’s *truly* an amazing feat.
```
read two characters ,>,== * /=================== ATOI ----------
convert to integers /=/@</@=/ * // /===== ITOA ++++++++++ /----------/
multiply @ =!=========/ // /++++++++++/ ----------
convert back !/@!============/ ++++++++++ /----------/
and print the result / .# * /++++++++++/ --------#
/====================/ * ++++++++#
|
| /- #/?=<<<>>> />>+<+>>>+/ // /======== BSPL2 !======?/#
| /->+< /===|=========== FMOV4 =/ // /<+>-
| #?===/! FMOV1 =|===|============== /====/ /====== FSPL2 !======?/#
| /==|===|==============|==|=======/
| * * *|* | * | * * * * * * *|* | * * * /+@/>@/>>=== /====>>@<@=?/@==<#<<<== !<<<>>>-?/ * // /-
* \ /@========|======!/?>=!/?>!/=======?<<<#
| -/>>+<<+-?+>?/
\!=</</
```
Finally, Ackermann’s function, which is defined by:

```
A(x, y) = y + 1                  if x = 0
A(x, y) = A(x-1, 1)              if x > 0 and y = 0
A(x, y) = A(x-1, A(x, y-1))      if x > 0 and y > 0
```
The value of Ackermann’s function grows like nothing else. A(4, 2) is about 2×10^19728.
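For comparison, here’s a direct transcription of Ackermann’s definition into an ordinary language, Python. It’s only usable for tiny arguments; the recursion explodes almost immediately:

```python
def ackermann(x, y):
    """Ackermann's function, transcribed directly from the definition."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

print(ackermann(2, 3))   # 9
print(ackermann(3, 3))   # 61
# ackermann(4, 2) would recurse astronomically deep -- don't try it.
```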
Here’s Ackermann’s in SNUSP:
```
/==!/==atoi=@@@-@-----#
|   |
|   | /=========!==!==== ** recursion **
$,@/>,@/==ack=!? j+1
j i -@/# | | A(i,0) -> A(i-1,1)
@>@->@/@ A(i-1,A(i,j-1))
# # | | |
/-<>!=/ =====|==@>>>@< 0) ? ? | | |
>>+<>+>+<<>>+<<<-/
```
[snuspspec]: http://www.deepwood.net/~drlion/snusp/snusp-1.0-spec-wd1.pdf
[snuspcomp]: http://www.baumanfamily.com/john/esoteric.html
[snuspinterp]: http://www.quirkster.com/snusp/snusp-js.html
[brainfuck]: http://scienceblogs.com/goodmath/2006/07/gmbm_friday_pathological_progr.php
[befunge]: http://scienceblogs.com/goodmath/2006/07/friday_pathological_programmin.php

# Introducing the Surreal Numbers

Surreal numbers are a beautiful, simple, set-based construction that allows you to create and represent all real numbers, so that they behave properly; and in addition, it allows you to create infinitely large and infinitely small values, and have them behave and interact in a consistent way with the real numbers in their surreal representation.

The surreals were invented by John Horton Conway (yes, that John Conway, the same guy who invented the “Life” cellular automaton, and did all that amazing work in game theory). The name for surreal numbers was created by Don Knuth (yes, that Don Knuth!).

Surreal Numbers are something that has come up in discussions in the comments on a number of posts around here. Before they were first mentioned in a comment back when this blog was on Blogger, I had never heard of them. After someone said a little about them, I thought they sounded so cool that I rushed out and got the two main books that have been written about them: Conway’s “On Numbers and Games”, and Knuth’s “Surreal Numbers”.

### The Basic Rules

The basic idea of surreal numbers is that you can represent any real number $N$ using two sets: a set $L$ which contains numbers less than $N$, and a set $R$ which contains numbers greater than $N$. To make it easy to write, surreals are generally written N = { L | R }, where L is the definition of the set of values less than N, and R is the definition of the set of values greater than N.

There are two fundamental rules for surreal numbers:

1. The Construction Rule: If L and R are two sets of surreal numbers, and ∀ l ∈ L, r ∈ R : l < r, then { L | R } is a valid surreal number.
2. *The Comparison Rule*: If x = {XL | XR} and y = {YL | YR} then x ≤ y if/f:
¬ ∃ xi ∈ XL : y ≤ xi (no member of x’s left set is ≥ y), and ¬ ∃ yi ∈ YR : yi ≤ x (no member of y’s right set is ≤ x).

In addition, we need two basic axioms: finite induction, and equality. Finite induction is basically the induction rule from the Peano axioms, so I won’t repeat it here. The equality rule for surreals is also very simple: x = y if/f x ≤ y ∧ y ≤ x.

### Constructing Integers

With the rules set, we can start creating numbers. The first one we create is 0: we say that by definition, 0 is the number with the empty set as its L and R values: 0 = { ∅ | ∅ }, also written { | }.

Next we want to define the basic unit values, 1 and -1. So how do we define 1? Well, the only number we have so far is 0. So we define 1 as the value with 0 in its left set, and an empty right set: 1 = {0|}. -1 is pretty much the same deal, except that it uses 0 as its right set: -1 = {|0}.

We follow the same basic pattern for all of the integers: the positive integer i is a surreal number with all of the integers from 0 to i-1 in its left set, and an empty right set: i = { 0, …, i-2, i-1 | }. Similarly for the negatives: a negative number -i = { | -i+1, -i+2, …, 0 }.

Now, if we look at the possibilities for surreals so far – we’re always using a null set for either the left, the right, or both sets. What happens if, for a positive number i, we leave out, say, i-2 in its left set?

What happens is nothing. It’s effectively exactly the same number. Just think of the definition above, and look at an example: 4 = {0, 1, 2, 3 |} and {2, 3 |}. Is {0, 1, 2, 3|} ≤ {2, 3|}? Yes. Is {2, 3|} ≤ {0, 1, 2, 3|}? Yes. So they’re the same. So each integer is actually an equivalence class: each positive integer i is the equivalence class of the surreal numbers { {l |} : l ⊆ { 0, …, i-1 } ∧ i-1 ∈ l }.
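The comparison and equality rules are simple enough to run directly. Here’s a sketch in Python, with a representation of my own choosing: a surreal is a pair (left, right) of tuples of previously-built surreals, and `leq` is the comparison rule transcribed verbatim:

```python
def leq(x, y):
    """x <= y iff no member of x's left set is >= y,
    and no member of y's right set is <= x."""
    xl, _xr = x
    _yl, yr = y
    return (not any(leq(y, xi) for xi in xl) and
            not any(leq(yi, x) for yi in yr))

def eq(x, y):
    """The equality rule: x = y iff x <= y and y <= x."""
    return leq(x, y) and leq(y, x)

ZERO = ((), ())            # { | }
ONE  = ((ZERO,), ())       # { 0 | }
TWO  = ((ONE,), ())        # { 1 | }
TWO2 = ((ZERO, ONE), ())   # { 0, 1 | }

print(leq(ZERO, ONE), leq(ONE, ZERO))  # True False
print(eq(TWO, TWO2))                   # True: same equivalence class
```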

At this point, we’ve got the ability to represent integers in the surreal form. But we haven’t even gotten to fractions, much less to infinities and infinitesimals.

### Constructing Fractions

The way we construct fractions is by dividing the range between two other numbers in half. So, for example, we can create the fraction 1/2, by saying it’s the simplest number between 0 and 1: {0 | 1}. We can write 3/4 by saying it’s halfway between 1/2 and 1: {1/2 | 1}.

This is, obviously, sort of limited. Given a finite left and right set for a number, we can only represent fractions whose denominators are powers of 2. So 1/3 is out of the question. We can create a value that is as close to 1/3 as we want, but we can’t get it exactly with finite left and right sets. (Fortunately, we aren’t restricted to finite left and right sets. So we’ll let ourselves be lazy and write 1/3, with the understanding that it’s really something like a continued fraction in surreal numbers.)

### Happy Birthday to 2

One thing that’s important in understanding the surreals is the idea of a birthday.

We started creating surreals with the number 0 = {|}. We say that 0 has the integer 0 as its birthday.

Then we created 1 and -1. So it took one step to get from the set of surreals we knew about to the new set consisting of {|0}, {|}, and {0|}: that is, -1, 0, and 1. So -1 and 1 have a birthday of 1.

Once we had -1, 0, and 1, we could create -2, -1/2, 1/2, and 2 in one step. So they had a birthday of 2.

What we’re doing here is creating an idea of ordinals: the ordinal of a surreal number is its birthday – the number of steps it took until we could create that number.

Given any surreal number {L|R}, the value that it represents is the value between the largest value in L and the smallest value in R with the earliest birthday (or, equivalently, the lowest ordinal).

So if we had, for example, {1, 2, 3 |} (a surreal for 4) and {23|} (a surreal for 24), then the surreal number { {1, 2, 3|} | {23|} } would be 5, because it’s the surreal between 4 and 24 with the earliest birthday.
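For finite numeric bounds, this earliest-birthday rule can be computed directly: if any integer fits strictly between the bounds, the one closest to zero is the earliest-born; otherwise we bisect dyadically. A sketch in Python (the function name and the None-for-empty-set convention are mine):

```python
import math

def simplest_between(lo, hi):
    """Earliest-born number strictly between lo and hi (None = empty set)."""
    if lo is None and hi is None:
        return 0
    if lo is None:                     # only an upper bound
        return min(0, math.ceil(hi) - 1)
    if hi is None:                     # only a lower bound
        return max(0, math.floor(lo) + 1)
    if math.floor(lo) + 1 < hi:        # some integer fits: take the one
        if lo < 0 < hi:                # nearest zero
            return 0
        return math.floor(lo) + 1 if lo >= 0 else math.ceil(hi) - 1
    a, b = math.floor(lo), math.floor(lo) + 1
    x = (a + b) / 2                    # no integer fits: bisect dyadics
    while not (lo < x < hi):
        if x <= lo:
            a = x
        else:
            b = x
        x = (a + b) / 2
    return x

print(simplest_between(4, 24))     # 5  -- the example above
print(simplest_between(0, 1))      # 0.5
print(simplest_between(0.5, 1))    # 0.75
print(simplest_between(3, None))   # 4  -- {1, 2, 3 | } is 4
```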

### What about infinity?

I promised that you could talk about infinity in interesting ways.

Let’s call the size of the set of natural numbers ω. What can we create with birthday ω + 1? Things with birthday ω+1 are things that require infinite L and R sets. So, for example, in step ω+1, we can create a precise version of 1/3.

How?

* Let’s give a name to the set of all surreal numbers with birthday N. We’ll call it SN.
* The left set of 1/3 is: { x/2^y ∈ Sω : 3x < 2^y }
* The right set of 1/3 is: { x/2^y ∈ Sω : 3x > 2^y }
* So 1/3 = { { x/2^y ∈ Sω : 3x < 2^y } | { x/2^y ∈ Sω : 3x > 2^y } }, and its birthday is ω+1.
* Using pretty much the same trick, we can create a number that represents the size of the set of natural numbers, which we’ll call infinity. ∞ = { { x ∈ Sω } | }, with birthday ω+1. And then, obviously, ∞+1 = { ∞ | }, with birthday ω+2. And ∞/2 = { 0 | ∞ }, also with birthday ω+2.

Infinitesimals are more of the same. To write an infinitely small number, smaller than any real fraction, what we do is say: { 0 | { 1/2, 1/4, 1/8, 1/16, … } }, with birthday ω+1.

What about irrationals? No problem! They’re also in Sω+1. We use a powers-of-2 version of continued fractions.

π = {3, 25/8, 201/64, … | …, 101/32, 51/16, 13/4, 7/2, 4}, birthday=ω+1.
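The left and right sets used for 1/3 and π are just the dyadic rationals on either side of the target. Here’s a small Python sketch that generates the nearest dyadic neighbors at each denominator (the helper name is mine):

```python
import math
from fractions import Fraction

def dyadic_bounds(r, n):
    """Nearest dyadics with denominator 2**n on either side of r."""
    q = 2 ** n
    x = math.floor(r * q)
    return Fraction(x, q), Fraction(x + 1, q)

# Sample elements of the left/right sets of 1/3 at increasing precision:
for n in range(1, 6):
    lo, hi = dyadic_bounds(1 / 3, n)
    print(n, lo, "< 1/3 <", hi)
```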

### Coming Soon

Either tomorrow or next Monday (depending on how much time I have), I’ll show you how to do arithmetic with surreal numbers.

# Roman Numerals and Arithmetic

I’ve always been perplexed by roman numerals.
First of all, they’re just *weird*. Why would anyone come up with something so strange as a way of writing numbers?
And second, given that they’re so damned weird, hard to read, hard to work with, why do we still use them for so many things today?

# The Poincaré Conjecture

The Poincaré conjecture has been in the news lately, with an article in the Science Times today. So I’ve been getting lots of mail from people asking me to explain what the Poincaré conjecture is, and why it’s suddenly such a big deal.
I’m definitely not the best person to ask; the reason for the recent attention to the Poincaré conjecture is deep topology, which is not one of my stronger fields. But I’ll give it my best shot. (It’s actually rather bad timing. I’m planning on starting to write about topology later this week; and since the Poincaré conjecture is specifically about topology, it really wouldn’t have hurt to have introduced some topology first. But that’s how the cookie crumbles, eh?)
So what is it?
—————–
In 1904, the great mathematician Henri Poincaré was studying topology, and came up with an interesting question.
We know that if a closed two-dimensional surface in three-dimensional space (a 2-manifold) has no holes in it, then it’s possible to transform it by bending, twisting, and stretching – but *without tearing* – into a sphere.
Poincarė wondered about higher dimensions. What about a three dimensional closed surface in a four-dimensional space (a 3-manifold)? Or a closed 4-manifold?
The conjecture, expressed *very* loosely and imprecisely, was that in any number of dimensions *n*, any figure without holes could be reduced to an *n*-dimensional sphere.
It’s trivial to show that that’s true for 2-dimensional surfaces in a three dimensional space; that is, that all closed 2-dimensional surfaces without holes can be transformed without tearing into our familiar sphere (which topologists call a 2-sphere, because it’s got a two dimensional surface).
For surfaces with more than two dimensions, it becomes downright mind-bogglingly difficult. And in fact, it turns out to be *hardest* to prove this for the 3-sphere. Nearly every famous mathematician of the 20th century took a stab at it, and all of them failed. (For example, J. H. C. Whitehead – not the Whitehead of Russell and Whitehead’s “Principia” – published an incorrect proof in 1934.)
Why is it so hard?
——————
Visualizing the shapes of closed 2-manifolds is easy. They form familiar figures in three dimensional space. We can imagine grabbing them, twisting them, stretching them. We can easily visualize almost anything that you can do with a closed two-dimensional surface. So reasoning about them is very natural to us.
But what about a “surface” that is itself three dimensional, forming a figure in four-dimensional space? What does it look like? What does *stretching* it mean? What does a hole in a 4-dimensional shape look like? How can I tell if a particular complicated figure is actually just something tied in knots to make it look complicated, or if it actually has holes in it? What are the possible shapes of things in 4, 5, 6 dimensions?
That’s basically the problem. The math of it is generally expressed rather differently, but what it comes down to is that we don’t have a good intuitive sense of what transformations and what shapes really work in more than three dimensions.
What’s the big deal lately?
——————————-
The conjecture was proved for all surfaces with seven or more dimensions in 1960. Five and six dimensions followed only two years later, proven in 1962. It took another 20 years to find a proof for 4 dimensions, which was finally done in 1982. Since 1982, the only open question was the 3-manifold. Was the Poincaré conjecture true for all dimensions?
There’s a million dollar reward for answering that question with a correct proof; and each of the other proofs of the conjecture for higher dimensions won the mathematical equivalent of the Nobel Prize. So the rewards for figuring out the answer and proving it are enormous.
In 2003, a rather strange reclusive Russian mathematician named Grigory Perelman published a proof of a *stronger* version of the Poincaré conjecture under the remarkably unassuming title “The Entropy Formula for the Ricci Flow and Its Geometric Applications”.
It’s taken 3 years for people to work through the proof and all of its details in order to verify its correctness. In full detail, it’s over 1000 pages of meticulous mathematical proof, so verifying its correctness is not exactly trivial. But now, three years later, to the best of my knowledge, pretty much everyone is pretty well convinced of its correctness.
So what’s the basic idea of the proof? This is *so* far beyond my capabilities that it’s almost laughable for me to even attempt to explain it, but I’ll give it my best shot.
The Ricci flow is a mathematical transformation which effectively causes a *shrinking* action on a closed metric 3-surface. As it shrinks, it “pinches off” irregularities or kinks in the surface. The basic idea behind the proof is to show that the Ricci flow applied to a metric 3-surface will shrink it to a 3-sphere. The open question was about the kinks: will the Ricci flow eliminate all of them? Or are there structures that will *continually* generate kinks, so that the figure never reduces to a 3-sphere?
What Perelman did was show that all of the possible types of kinks in the Ricci flow of a closed metric 3-surface would eventually disappear into either a 3-sphere, or a 3-surface with a hole.
So now that we’re convinced of the proof, and people are ready to start handing out the prizes, where’s Professor Perelman?
*No one knows*.
He’s a recluse. After the brief burst of fame when he first published his proof, he disappeared into the deep woods in the hinterlands of Russia. The speculation is that he has a cabin back there somewhere, but no one knows. No one knows where to find him, or how to get in touch with him.

# Messing with big numbers: using probability badly

After yesterday’s post about the sloppy probability from Ann Coulter’s chat site, I thought it would be good to bring back one of the earliest posts on Good Math/Bad Math, back when it was on blogger. As usual with reposts, I’ve revised it somewhat, but the basic meat of it is still the same.
——————–
There are a lot of really bad arguments out there written by anti-evolutionists based on incompetent use of probability. A typical example is [this one][crapcrap]. This article is a great example of the mistakes that commonly get made with probability based arguments, because it makes so many of them. (In fact, it makes every single category of error that I list below!)
Tearing down probabilistic arguments takes a bit more time than tearing down the information theory arguments. 99% of the time, the IT arguments are built around the same fundamental mistake: they’ve built their argument on an invalid definition of information. But since they explicitly link it to mathematical information theory, all you really need to do is show why their definition is wrong, and then the whole thing falls apart.
The probabilistic arguments are different. There isn’t one mistake that runs through all the arguments. There are many possible mistakes, and each argument typically stacks up multiple errors.
For the sake of clarity, I’ve put together a taxonomy of the basic probabilistic errors that you typically see in creationist screeds.
Big Numbers
————-
This is the easiest one. This consists of using our difficulty in really comprehending how huge numbers work to say that beyond a certain probability, things become impossible. You can always identify these arguments by the phrase “the probability is effectively zero.”
You typically see people claiming things like “Anything with a probability of less than 1 in 10^60 is effectively impossible”. It’s often conflated with some other numbers, to try to push the idea of “too improbable to ever happen”. For example, they’ll often throw in something like “the number of particles in the entire universe is estimated to be 3×10^78, and the probability of blah happening is 1 in 10^100, so blah can’t happen”.
It’s easy to disprove. Take two distinguishable decks of cards. Shuffle them together. Look at the ordering of the cards – it’s a list of 104 elements. What’s the probability of *that particular ordering* of those 104 elements?
The likelihood of the resulting deck of shuffled cards having the particular ordering that you just produced is roughly 1 in 10^166. There are more possible unique shuffles of two decks of cards than there are particles in the entire universe.
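That 1-in-10^166 figure is easy to check directly; here's a quick sketch:

```python
import math

# Two distinguishable 52-card decks shuffled together give 104 cards,
# so there are 104! equally likely orderings.
orderings = math.factorial(104)

# 104! has 167 digits -- a bit over 10^166, vastly more orderings than
# the ~3x10^78 particles these arguments like to cite.
print(len(str(orderings)))
```

And yet every time you shuffle, one of those orderings happens.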
If you look at it intuitively, it *seems* like something whose probability is
100 orders of magnitude worse than the odds of picking out a specific particle in the entire observable universe *should* be impossible. Our intuition says that any probability with a number that big in its denominator just can’t happen. Our intuition is wrong – because we’re quite bad at really grasping the meanings of big numbers.
Perspective Errors
———————
A perspective error is a relative of big numbers error. It’s part of an argument to try to say that the probability of something happening is just too small to be possible. The perspective error is taking the outcome of a random process – like the shuffling of cards that I mentioned above – and looking at the outcome *after* the fact, and calculating the likelihood of it happening.
Random processes typically have a huge number of possible outcomes. Anytime you run a random process, you have to wind up with *some* outcome. There may be a mind-boggling number of possibilities; the probability of getting any specific one of them may be infinitesimally small; but you *will* end up with one of them. The probability of getting an outcome is 100%. The probability of your being able to predict which outcome is terribly small.
The error here is taking the outcome of a random process which has already happened, and treating it as if you were predicting it in advance.
The way that this comes up in creationist screeds is that they do probabilistic analyses of evolution built on the assumption that *the observed result is the only possible result*. You can view something like evolution as a search of a huge space; at any point in that space, there are *many* possible paths. In the history of life on earth, there are enough paths to utterly dwarf numbers like the card-shuffling above.
By selecting the observed outcome *after the fact*, and then doing an *a priori* analysis of the probability of getting *that specific outcome*, you create a false impression that something impossible happened. Returning to the card shuffling example, shuffling a deck of cards is *not* a magical activity. Getting a result from shuffling a deck of cards is *not* improbable. But if you take the result of the shuffle *after the fact*, and try to compute the a priori probability of getting that result, you can make it look like something inexplicable happened.
Bad Combinations
——————–
Combining the probabilities of events can be very tricky, and easy to mess up. It’s often not what you would expect. You can make things seem a lot less likely than they really are by making an easy-to-miss mistake.
The classic example of this is one that almost every first-semester probability instructor tries in their class. In a class of 20 people, what’s the probability of two people having the same birthday? Most of the time, you’ll have someone say that the probability of any two people having the same birthday is 1/365^2; so the probability of that happening in a group of 20 is the number of possible pairs over 365^2, or 400/365^2, or about 1/3 of 1 percent.
That’s the wrong way to derive it. There’s more than one error there, but I’ve seen three introductory probability classes where that was the first guess. The correct answer is actually about 41%.
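The right way is to work with the complement – the probability that all the birthdays are *distinct* – and a few lines of code make the point:

```python
# P(at least one shared birthday among n people), ignoring leap years.
def birthday_collision(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(birthday_collision(20))  # ~0.41 for 20 people -- nowhere near 0.3%
```

The probability crosses 50% at just 23 people, which is why this one trips up so many intuitions.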
Fake Numbers
————–
To figure out the probability of some complex event or sequence of events, you need to know some correct numbers for the basic events that you’re using as building blocks. If you get those numbers wrong, then no matter how meticulous the rest of the probability calculation is, the result is garbage.
For example, suppose I’m analyzing the odds in a game of craps. (Craps is a casino dice game using six sided dice.) If I say that in rolling a fair die, the odds of rolling a 6 is 1/6th the odds of rolling a one, then any probabilistic prediction that I make is going to be wrong. It doesn’t matter that from that point on, I do all of the analysis exactly right. I’ll get the wrong results, because I started with the wrong numbers.
This one is incredibly common in evolution arguments: the initial probability numbers are just pulled out of thin air, with no justification.
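To see how bad base numbers poison everything downstream, here's a little illustrative sketch using the craps example. The "biased" weights are hypothetical – they just encode the mistaken assumption that a 6 is one sixth as likely as a 1:

```python
from fractions import Fraction
from itertools import product

def p_seven(weights):
    """P(two dice sum to 7), given relative weights for faces 1..6."""
    total = sum(weights) ** 2
    hits = sum(weights[a] * weights[b]
               for a, b in product(range(6), repeat=2)
               if (a + 1) + (b + 1) == 7)
    return Fraction(hits, total)

fair = [1, 1, 1, 1, 1, 1]
# The mistaken model: a 6 is 1/6th as likely as a 1.
biased = [6, 1, 1, 1, 1, 1]

print(p_seven(fair))    # 1/6 -- the correct odds of rolling a 7
print(p_seven(biased))  # 16/121 -- wrong, no matter how careful the rest is
```

Every downstream calculation built on the biased model inherits that error, exactly as the prose says.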
Misshapen Search Space
————————-
When you model a random process, one way of doing it is by modeling it as a random walk over a search space. Just like the fake numbers error, if your model of the search space has a different shape than the thing you’re modeling, then you’re not going to get correct results. This is an astoundingly common error in anti-evolution arguments; in fact, this is the basis of Dembski’s NFL arguments.
Let’s look at an example to see why it’s wrong. We’ve got a search space which is a table. We’ve got a marble that we’re going to roll across the table. We want to know the probability of it winding up in a specific position.
That’s obviously dependent on the surface of the table. If the surface of the table is concave, then the marble is going to wind up in nearly the same spot every time we try it: the lowest point of the concavity. If the surface is bumpy, it’s probably going to wind up a concavity between bumps. It’s *not* going to wind up balanced on the tip of one of the bumps.
If we want to model the probability of the marble stopping in a particular position, we need to take the shape of the surface of the table into account. If the table is actually a smooth concave surface, but we build our probabilistic model on the assumption that the table is a flat surface covered with a large number of uniformly distributed bumps, then our probabilistic model *can’t* generate valid results. The model of the search space does not reflect the properties of the actual search space.
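The marble-and-table idea is easy to simulate in one dimension. This is a toy sketch with made-up height functions, just to illustrate the point: on a concave table every starting point rolls to the same spot, while on a bumpy table the resting place depends on where you start:

```python
def roll_to_rest(height, start):
    """Roll downhill one step at a time until stuck at a local minimum."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(height)]
        best = min(neighbors, key=lambda p: height[p])
        if height[best] >= height[pos]:
            return pos
        pos = best

bowl = [(x - 50) ** 2 for x in range(101)]     # smooth concave table
bumpy = [abs(x % 10 - 5) for x in range(101)]  # lots of little dips

print({roll_to_rest(bowl, s) for s in range(0, 101, 7)})   # one resting spot
print({roll_to_rest(bumpy, s) for s in range(0, 101, 7)})  # many resting spots
```

A probabilistic model that assumes the wrong table shape gives you the wrong distribution of outcomes, which is the whole problem.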
Anti-evolution arguments that talk about search are almost always built on invalid models of the search space. Dembski’s NFL is based on a sum of the success rates of searches over *all possible* search spaces.
False Independence
———————
If you want to make something appear less likely than it really is, or you’re just not being careful, a common statistical mistake is to treat events as independent when they’re not. If two events have probabilities p1 and p2 and are independent, then the probability of both occurring is p1×p2. But if they’re *not* independent, then you’re going to get the wrong answer.
For example, take all of the spades from a deck of cards. Shuffle them, and then lay them out. What are the odds that you laid them out in numeric order? It’s 1/13! = 1/6,227,020,800. That’s a pretty ugly number. But if you wanted to make it look even worse, you could “forget” the fact that the sequential draws are dependent, in which case the odds would be 1/13^13 – or about 1/(3×10^14) – about 50,000 times worse.
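A quick check of those numbers:

```python
import math

dependent = math.factorial(13)  # ordered draws without replacement: 13!
naive = 13 ** 13                # pretending each draw is independent

print(dependent)                # 6,227,020,800
print(naive)                    # ~3x10^14
print(naive // dependent)       # the "forgotten dependence" inflates by ~50,000x
```

The dependence matters because each card drawn removes one possibility from the next draw: 13 choices, then 12, then 11, and so on – not 13 every time.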
[crapcrap]: http://www.parentcompany.com/creation_essays/essay44.htm

# Big Numbers: Bad Anti-Evolution Crap from anncoulter.com

A reader sent me a copy of an article posted to “chat.anncoulter.com”. I can’t see the original article; anncoulter.com is a subscriber-only site, and I’ll be damned before I *register* with that site.
Fortunately, the reader sent me the entire article. It’s another one of those stupid attempts by creationists to assemble some *really big* numbers in order to “prove” that evolution is impossible.
>One More Calculation
>
>The following is a calculation, based entirely on numbers provided by
>Darwinists themselves, of the number of small selective steps evolution would
>have to make to evolve a new species from a previously existing one. The
>argument appears in physicist Lee Spetner’s book “Not By Chance.”
>
>At the end of this post — by “popular demand” — I will post a bibliography of
>suggested reading on evolution and ID.
>
>**********************************************
>
>Problem: Calculate the chances of a new species emerging from an earlier one.
>
>What We Need to Know:
>
>(1) the chance of getting a mutation;
>(2) the fraction of those mutations that provide a selective advantage (because
>many mutations are likely either to be injurious or irrelevant to the
>organism);
>(3) the number of replications in each step of the chain of cumulative
>selection;
>(4) the number of those steps needed to achieve a new species.
>
>If we get the values for the above parameters, we can calculate the chance of
>evolving a new species through Darwinian means.
Fairly typical so far. Not *good* mind you, but typical. Of course, it’s already going wrong. But since the interesting stuff is a bit later, I won’t waste my time on the intro 🙂
Right after this is where this version of this argument turns particularly sad. The author doesn’t just make the usual big-numbers argument; they recognize that the argument is weak, so they need to go through some rather elaborate setup in order to stack things to produce an even more unreasonably large phony number.
It’s not just a big-numbers argument; it’s a big-numbers *strawman* argument.
>Assumptions:
>
>(1) we will reckon the odds of evolving a new horse species from an earlier
>horse species.
>
>(2) we assume only random copying errors as the source of Darwinian variation.
>Any other source of variation — transposition, e.g., — is non-random and
>therefore NON-DARWINIAN.
This is a reasonable assumption, you see, because we’re not arguing against *evolution*; we’re arguing against the *strawman* “Darwinism”, which arbitrarily excludes real live observed sources of variation because, while it might be something that really happens, and it might be part of real evolution, it’s not part of what we’re going to call “Darwinism”.
Really, there are a lot of different sources of variation/mutation. At a minimum, there are point mutations, deletions (a section getting lost while copying), insertions (something getting inserted into a sequence during copying), transpositions (something getting moved), reversals (something getting flipped so it appears in the reverse order), fusions (things that were separate getting merged – e.g., chromosomes in humans vs. in chimps), and fissions (things that were a single unit getting split).
In fact, this restriction *a priori* makes horse evolution impossible; because the modern species of horses have *different numbers of chromosomes*. Since the only change he allows is point mutation, there is no way that his strawman Darwinism can do the job. Which, of course, is the point: he *wants* to make it impossible.
>(3) the average mutation rate for animals is 1 error every 10^10 replications
>(Darnell, 1986, “Molecular Cell Biology”)
Nice number, shame he doesn’t understand what it *means*. That’s what happens when you don’t bother to actually look at the *units*.
So, let’s double-check the number, and discover the unit. Wikipedia reports the human mutation rate as 1 in 10^8 mutations *per nucleotide* per generation.
He’s going to build his argument on 1 mutation in every 10^10 reproductions *of an animal*, when the rate is *per nucleotide*, *per cell generation*.
So what does that tell us if we’re looking at horses? Well, according to a research proposal to sequence the domestic horse genome, it consists of 3×10^9 nucleotides. So if we go by wikipedia’s estimate of the mutation rate, we’d expect somewhere around 30 mutations per individual *in the fertilized egg cell*. Using the numbers by the author of this wretched piece, we’d still expect to see 1 out of every three horses contain at least one unique mutation.
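The per-individual expectation is just the genome size times the per-nucleotide rate; a sketch using the figures above:

```python
genome_size = 3e9    # nucleotides in the horse genome (per the proposal)
wiki_rate = 1e-8     # mutations per nucleotide per generation (wikipedia)
author_rate = 1e-10  # the rate the author misapplies per *animal*

print(genome_size * wiki_rate)    # ~30 new mutations per foal
print(genome_size * author_rate)  # ~0.3 -- about 1 horse in 3, even by his number
```

Even using his misread rate correctly, mutation is routine, not rare.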
The fact is, pretty damned nearly every living thing on earth – each and every human being, every animal, every plant – each contains some unique mutations, some unique variations in their genetic code. Even when you start with a really big number – like one error in every 10^10 copies – it adds up.
>(4) To be part of a typical evolutionary step, the mutation must: (a) have a
>positive selective value; (b) add a little information to the genome ((b) is a
>new insight from information theory. A new species would be distinguished from
>the old one by reason of new abilities or new characteristics. New
>characteristics come from novel organs or novel proteins that didn’t exist in
>the older organism; novel proteins come from additions to the original genetic
>code. Additions to the genetic code represent new information in the genome).
I’ve ripped apart enough bullshit IT arguments, so I won’t spend much time on that, other than to point out that *deletion* is as much of a mutation, with as much potential for advantage, as *addition*.
A mutation also does not need to have an immediate positive selective value. It just needs to *not* have negative value, and it can propagate through a subset of the population. *Eventually*, you’d usually (but not always! drift *is* an observed phenomenon) expect to see some selective value. But that doesn’t mean that *at the moment the mutation occurs*, it must represent an *immediate* advantage for the individual.
>(5) We will also assume that the minimum mutation — a point mutation — is
>sufficient to cause (a) and (b). We don’t know if this is in fact true. We don’t
>know if real mutations that presumably offer positive selective value and small
>information increases can always be of minimum size. But we shall assume so
>because it not only makes the calculation possible, but it also makes the
>calculation consistently Darwinian. Darwinians assume that change occurs over
>time through the accumulation of small mutations. That’s what we shall assume,
>as well.
Note the continued use of the strawman. We’re not talking about evolution here; We’re talking about *Darwinism* as defined by the author. Reality be damned; if it doesn’t fit his Darwinism strawman, then it’s not worth thinking about.
>Q: How many small, selective steps would we need to make a new species?
>
>A: Clearly, the smaller the steps, the more of them we would need. A very
>famous Darwinian, G. Ledyard Stebbins, estimated that to get to a new species
>from an older species would take about 500 steps (1966, “Processes of Organic
>Evolution”).
>
>So we will accept the opinion of G. Ledyard Stebbins: It will take about 500
>steps to get a new species.
Gotta love the up-to-date references, eh? Considering how much the study of genetics has advanced in the last *40 years*, it would be nice to cite a book younger than *me*.
But hey, no biggie. 500 selective steps between speciation events? Sounds reasonable. That’s 500 generations. Sure, we’ve seen speciation in less than 500 generations, but it seems like a reasonable guesstimate. (But do notice the continued strawman; he reiterates the “small steps” gibberish.)
>Q: How many births would there be in a typical small step of evolution?
>
>A: About 50 million births / evolutionary step. Here’s why:
>
>George Gaylord Simpson, a well known paleontologist and an authority on horse
>evolution estimated that the whole of horse evolution took about 65 million
>years. He also estimated there were about 1.5 trillion births in the horse
>line. How many of these 1.5 trillion births could we say represented 1 step in
>evolution? Experts claim the modern horse went through 10-15 genera. If we say
>the horse line went through about 5 species / genus, then the horse line went
>through about 60 species (that’s about 1 million years per species). That would
>make about 25 billion births / species. If we take 25 billion and divided it by
>the 500 steps per species transition, we get 50 million births / evolutionary
>step.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
Here we see some really stupid mathematical gibberish. This is really pure doubletalk – it’s an attempt to generate *another* large number to add into the mix. There’s no purpose in it: we’ve *already* worked out the mutation rate and the number of mutations per speciation. This gibberish is an alternate formulation of essentially the same thing; a way of gauging how long it will take to go through a sequence of changes leading to speciation. So we’re adding a redundant (and meaningless) factor in order to inflate the numbers.
>Q: What’s the chance that a mutation in a particular nucleotide will occur and
>take over the population in one evolutionary step?
>
>A: The chance of a mutation in a specific nucleotide in one birth is 10^-10.
>Since there are 50 million births / evolutionary step, the chance of getting at
>least one mutation in the whole step is 50 million x 10^-10, or 1-in-200
>(1/200). For the sake of argument we can assume that there is an equal chance
>that the base will change to any one of the other three (not exactly true in
>the real world, but we can assume to make the calculation easier – you’ll see
>that this assumption won’t influence things so much in the final calculation);
>so the chance of getting specific change in a specific nucleotide is 1/3rd of
>1/200 or 1-in-600 (1/600).
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
>1/600 chance of a point mutation taking over the population in 1 evolutionary
>step (derived from numbers by Darnell in his standard reference book)
This is pure gibberish. It’s so far away from being a valid model of things that it’s laughable. But worse, again, it’s redundant. Because we’ve already introduced a factor based on the mutation rate; and then we’ve introduced a factor which was an alternative formulation of the mutation rate; and now, we’re introducing a *third* factor which is an even *worse* alternative formulation of the mutation rate.
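For what it's worth, the raw arithmetic in that quoted step does come out as claimed – it's the model, not the multiplication, that's broken. A quick check:

```python
from fractions import Fraction

births_per_step = 50_000_000
per_birth = Fraction(1, 10**10)          # his misread "mutation rate" per birth

per_step = births_per_step * per_birth   # chance of the mutation per step
print(per_step)        # 1/200
print(per_step / 3)    # 1/600, after the one-of-three-bases fudge
```

Correct arithmetic on meaningless quantities still produces a meaningless result.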
>Q: What would the “selective value” have to be of each mutation?
>
>A: According to the population-genetics work of Sir Ronald Fisher, the chances
>of survival for a mutant is about 2 x (selective value).
>”Selective Value” is a number that is ASSIGNED by a researcher to a species in
>order to be able to quantify in some way its apparent fitness. Selective Value
>is the fraction by which its average number of surviving offspring exceeds that
>of the population norm. For example, a mutant whose average number of surviving
>offspring is 0.1% higher than the rest of the population would have a Selective
>Value = 0.1% (or 0.001). If the norm in the population were such that 1000
>offspring usually survived from the original non-mutated organism, 1001
>offspring would usually survive from the mutated one. Of course, in real life,
>we have no idea how many offspring will, IN FACT, survive any particular
>organism – which is the reason that Survival Value is not something that you go
>into the jungle and “measure.” It’s a special number that is ASSIGNED to a
>species; not MEASURED in it (like a species’ average height, weight, etc.,
>which are objective attributes that, indeed, can we can measure).
>
>Fisher’s statistical work showed that a mutant with a Selective Value of 1% has
>a 2% chance of survival in a large population. A chance of 2-in-100 is that
>same as a chance of 1-in-50. If the Selective Value were 1/10th of that, or
>0.1%, the chance would be 1/10th of 2%, or about 0.2%, or 1-in-500. If the
>Selective Value were 1/100th of 1%, the chance of survival would be 1/100th of
>2%, or 0.02%, or 1-in-5000.
>
>We need a Selection Value for our calculation because it tells us what the
>chances are that a mutated species will survive. What number should we use? In
>the opinion of George Gaylord Simpson, a frequent value is 0.1%. So we shall
>use that number for our calculation. Remember, that’s a 1-in-500 chance of
>survival.
>
>So far we have:
>
>500 evolutionary steps/new species (as per Stebbins)
>50 million births/evolutionary step (derived from numbers by G. G. Simpson)
>1/600 chance of a point mutation taking over the population in 1 evolutionary
>step (derived from numbers by Darnell in his standard reference book)
>1/500 chance that a mutant will survive (as per G. G. Simpson)
And, once again, *another* meaningless, and partially redundant factor added in.
Why meaningless? Because this isn’t how selection works. He’s using his Darwinist strawman again: everything must have *immediate* *measurable* survival advantage. He also implicitly assumes that mutation is *rare*; that is, a “mutant” has a 1-in-500 chance of seeing its mutated genes propagate and “take over” the population. That’s not at all how things work. *Every* individual is a mutant. In reality, *every* *single* *individual* possesses some number of unique mutations. If they reproduce, and the mutation doesn’t *reduce* the likelihood of its offspring’s survival, the mutation will propagate through the generations to some portion of the population. The odds of a mutation propagating to some reasonable portion of the population over a number of generations is not 1 in 500. It’s quite a lot better.
Why partially redundant? Because this, once again, factors in something which is based on the rate of mutation propagating through the population. We’ve already included that twice; this is a *third* variation on that.
>Already, however, the numbers don’t crunch all that well for evolution.
>
>Remember, probabilities multiply. So the probability, for example, that a point
>mutation will BOTH occur AND allow the mutant to survive is the product of the
>probabilities of each, or 1/600 x 1/500 = 1/300,000. Not an impossible number,
>to be sure, but it’s not encouraging either … and it’s going to get a LOT
>worse. Why? Because…
**Bzzt. Bad math alert!**
No, these numbers *do not multiply*. Probabilities multiply *when they are independent*. These are *not* independent factors.
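Here's a concrete illustration of why dependence matters. Draw two cards from a full deck: each draw is red with probability 1/2, but the events aren't independent, so "both red" is not (1/2)^2:

```python
from fractions import Fraction

p_first_red = Fraction(26, 52)            # 1/2
p_second_red_given_red = Fraction(25, 51) # one red card already gone

both = p_first_red * p_second_red_given_red
print(both)                   # 25/102 -- a little under 1/4
print(Fraction(1, 2) ** 2)    # 1/4 -- the "independent" (wrong) answer
```

The gap here is small; stack up 500 wrongly-independent factors and it becomes enormous.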
>V.
>
>Q. What are the chances that (a) a point mutation will occur, (b) it will add
>to the survival of the mutant, and (c) the last two steps will occur at EACH of
>the 500 steps required by Stebbins’ statement that the number of evolutionary
>steps between one species and another species is 500?
See, this is where he’s been going all along.
* He created the Darwinian strawman to allow him to create bizarre requirements.
* Then he added a ton of redundant factors.
* Then he combined probabilities as if they were independent when they weren’t.
* and *now* he adds a requirement for simultaneity which has no basis in reality.
>A: The chances are:
>
>The product of 1/600 x 1/500 multiplied by itself 500 times (because it has to
>happen at EACH evolutionary step). Or,
>
>Chances of Evolutionary Step 1: 1/300,000 x
>Chances of Evolutionary Step 2: 1/300,000 x
>Chances of Evolution Step 3: 1/300,000 x …
>. . . Chances of Evolution Step 500: 1/300,000
>
>Or,
>
>1/300,000^500
*Giggle*, *snort*. I seriously wonder if he actually believes this gibberish. But this is just silly. For the reasons mentioned above: this is taking the redundant factors that he already pushed into each step, inflating them by adding the simultaneity requirement, and then *exponentiating* them.
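Amusingly, the one step he does get roughly right is the final exponentiation; working in log space:

```python
import math

log10_p = -500 * math.log10(300_000)   # log10 of (1/300,000)^500
mantissa = 10 ** (log10_p + 2739)

print(log10_p)   # about -2738.6
print(mantissa)  # about 2.75, i.e. (1/300,000)^500 ~ 2.75 x 10^-2739
```

That's in the same ballpark as his quoted 2.79×10^-2739 – a correctly exponentiated pile of garbage inputs.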
>This is approximately equal to:
>
>2.79 x 10^-2,739
>
>A number that is effectively zero.
As I’ve said before: no one who understands math *ever* uses the phrase *effectively zero* in a mathematical argument. There is no such thing as effectively zero.
On a closing note, this entire thing, in addition to being both an elaborate strawman *and* a sloppy big numbers argument is also an example of another kind of mathematical error, which I call a *retrospective error*. A retrospective error is when you take the outcome of a randomized process *after* it’s done, treat it as the *only possible outcome*, and compute the probability of it happening.
A simple example of this is: shuffle a deck of cards. What are the odds of the particular ordering of cards that you got from the shuffle? 1/52! = 1/(8×10^67). If you then ask “What was the probability of a shuffling of cards resulting in *this order*?”, you get that answer: 1 in 8×10^67 – an incredibly unlikely event. But it *wasn’t* an unlikely event; viewed from the proper perspective, *some* ordering had to happen: any result of the shuffling process would have the same probability – but *one* of them had to happen. So the odds of getting a result whose *specific* probability is 1 in 8×10^67 was actually 1 in 1.
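The numbers check out, and the "some ordering had to happen" point can be stated the same way:

```python
import math
from fractions import Fraction

orderings = math.factorial(52)
print(orderings)   # ~8.07 x 10^67 possible shuffles

# Each specific ordering has probability 1/52!, but the orderings exhaust
# all possibilities, so their probabilities sum to exactly 1.
total = Fraction(1, orderings) * orderings
print(total)       # 1
```

Astonishment at the tiny per-outcome probability is the retrospective error in a nutshell.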
The entire argument that our idiot friend made is based on this kind of an error. It assumes a single unique path – a single chain of specific mutations happening in a specific order – and asks about the likelihood that *single chain* leading to a *specific result*.
But nothing ever said that the primitive ancestors of the modern horse *had* to evolve into the modern horse. If they weren’t to just go extinct, they would have to evolve into *something*; but demanding that the particular observed outcome of the process be the *only possibility* is simply wrong.