Category Archives: Numbers

Nullity – the Nonsense Number

Tons of folks have been writing to me this morning about the BBC story about an idiot math teacher who claims to have solved the problem of dividing by zero. This is an absolutely *infuriating* story, which does an excellent job of demonstrating what total innumerate idiots reporters are.

What this guy has done is invent a new number, which he calls “nullity”. This number is not on the number line, can’t be compared to other numbers by less than or greater than, etc. In other words, he’s given a name to the basic mathematical concept of “undefined”, and proclaimed that this is somehow a solution to a deep and important problem.

The thing is, there is no problem. We understand what division by zero means. You can’t do it. There is no number that meaningfully expresses the concept of what it means to divide by zero.

You can assign a name to the concept of “not a number”, which is what this bozo has done; but that’s not new! The standard floating point representation used by computer hardware manufacturers already includes a value called NaN (Not a Number). NaN works as you should expect a non-number to work: it can’t be compared to anything, and no arithmetic operation works on it – because comparisons and arithmetic only work on numbers.
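To see that this isn’t new, here’s a minimal sketch in Python (any language with IEEE 754 floats behaves the same way) showing how NaN already works:

```python
import math

nan = float("nan")        # the IEEE 754 "not a number" value

# NaN can't be meaningfully compared to anything - not even to itself.
print(nan == nan)              # False
print(nan < 1.0, nan > 1.0)    # False False

# Arithmetic on NaN just produces more NaN; it never turns back into a real answer.
print(nan + 1.0)               # nan
print(math.isnan(nan * 0.0))   # True

# 0.0/0.0 is exactly the "nullity" case; IEEE 754 already gives it a NaN result.
# (In Python, the / operator raises ZeroDivisionError instead, so we produce the
# same NaN explicitly: inf - inf is NaN by the standard.)
print(math.isnan(math.inf - math.inf))   # True
```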

What he’s done is to take projective geometry – which (as I mentioned in the Steiner post a few weeks back) gives you some useful ways of using infinity – add the concept of a NaN value called nullity, and redefine the multiplication and division operators so that they’re able to produce nullity.

What good is it? Well, the crank behind it claims two things:

  1. That currently, dividing by zero on a computer causes problems because division by zero is undefined. But if computers adopted nullity, then division by zero errors could be gotten rid of, because they’ll produce nullity. Except of course that modern floating point hardware already does have a NaN value, and it doesn’t help with the kind of problem he’s talking about! Because the result is not a number; whether you call it undefined, or you define it as a value that’s outside the set of real numbers, it doesn’t help – because the calculation you’re performing can’t produce a meaningful result! He says if your pacemaker’s software divides by zero, you’ll die, because it will stop working; how will returning nullity instead of signalling a divide by zero error make it work?
  2. That it provides a well-defined value for 0⁰, which he claims is a 1,200-year-old problem. Except that again, it’s not a problem. It’s a meaningless expression! If you’re raising 0 to the 0th power in a calculation, then something’s wrong with that calculation. Modifying basic math so that the answer is defined as NaN doesn’t help that. (There’s a short sketch after this list showing how floating point hardware already handles both of these cases.)
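As a concrete illustration of both points, here’s a hedged sketch using numpy (assumed installed), which follows the IEEE 754 rules that hardware already implements:

```python
import numpy as np

# IEEE 754 arithmetic (which numpy follows) already defines results for these
# cases; it just calls the non-number result NaN instead of "nullity".
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / np.float64(0.0))    # inf
    dose = np.float64(0.0) / np.float64(0.0)    # nan - the "nullity" case
print(dose)                                     # nan

# The pacemaker point: a NaN (or a "nullity") just propagates. Everything
# computed downstream of the bad division is still meaningless.
print(dose * 2.0 + 5.0)                         # nan - renaming it doesn't help

# And 0**0: IEEE-style pow() already picks a conventional value (1.0), which is
# exactly the kind of "just define it" move that doesn't fix anything either.
print(np.float64(0.0) ** np.float64(0.0))       # 1.0
```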

Basically, he’s defined a non-solution to a non-problem. And by teaching it to his students, he’s doing them a great disservice. They’re going to leave his class believing that he’s a great genius who’s solved a supposed fundamental problem of math, and believing in this silly nullity thing as a valid mathematical concept.

It’s not like there isn’t already enough stuff in basic math for kids to learn; there’s no excuse for taking advantage of a passive audience to shove this nonsense down their throats as an exercise in self-aggrandizement.

To make matters worse, this idiot is a computer science professor! No one who’s studied CS should be able to get away with believing that re-inventing the concept of NaN is something noteworthy or profound; and no one who’s studied CS should think that defining meaningless values can somehow magically make invalid computations produce meaningful results. I’m ashamed for my field.

Egyptian Fractions

While I was researching yesterday’s post on Archimedes’ integration, one of the things I read reminded me of one of the stranger things about Greek and earlier math. They had a notion that the only valid fractions were *unit* fractions; that is, fractions whose numerator is 1. A fraction written with a numerator larger than one was considered *wrong*. Even today, if you look in a lot of math books, they use the term “vulgar fraction” for non-unit fractions.
Obviously, there *are* fractions other than *1/n*. The way that they represented them is now known as *Egyptian fractions*. An Egyptian fraction is expressed as the sum of a finite set of distinct unit fractions. So, for example, instead of writing the vulgar fraction 2/3, the Greeks would write
“1/2 + 1/6”.
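Here’s a quick sketch in Python of the greedy method – usually credited to Fibonacci rather than the Egyptians – which expands any fraction between 0 and 1 into a sum of distinct unit fractions:

```python
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Greedy expansion of a fraction 0 < frac < 1 into distinct unit fractions."""
    parts = []
    while frac > 0:
        # Take the largest unit fraction not exceeding what's left.
        denom = ceil(1 / frac)
        parts.append(Fraction(1, denom))
        frac -= Fraction(1, denom)
    return parts

print(egyptian(Fraction(2, 3)))   # [Fraction(1, 2), Fraction(1, 6)]
print(egyptian(Fraction(5, 7)))   # [Fraction(1, 2), Fraction(1, 5), Fraction(1, 70)]
```

The greedy expansion isn’t always the shortest possible one, but it always terminates, and the unit fractions it produces are always distinct.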

Continue reading

A Bit About Number Bases

After my binary fingermath stuff, a few people wrote to me to ask about just how binary really works. For someone who does the kinds of crazy stuff that I do, the idea of different number bases is so fundamental that it’s easy to forget that most people really don’t understand the idea of using different bases.
To start off with, I need to explain how our regular number system works. Most people understand how our numbers work, but don’t necessarily understand *why* they’re set up that way.
Our number system is *positional* – that is, what a given digit of a number means is dependent on its position in the number. The positions each represent one power of ten, increasing as you go from right to left, which is why our number system is called base-10. You can do positional numbers using any number as the base – each digit will correspond to a power of the base.
So suppose we have a number like 729328. The first position from the right, containing the digit “8”, is called position zero, because it tells us how many 10⁰=1s we have. The next position is position 1, or the 10¹=10s column, because it says how many tens there are (2). And then we have the digit 3 in position 2, meaning that there are 3 10²s = 3 100s, etc. So the number really means 7×10⁵ + 2×10⁴ + 9×10³ + 3×10² + 2×10¹ + 8×10⁰ = 7×100,000 + 2×10,000 + 9×1,000 + 3×100 + 2×10 + 8×1. Now let’s look at the same kind of thing in base 3, using the number 102120. That’s 1×3⁵ + 0×3⁴ + 2×3³ + 1×3² + 2×3¹ + 0×3⁰ = 1×243 + 0×81 + 2×27 + 1×9 + 2×3 + 0×1 = 243 + 54 + 9 + 6 = 312 in base 10.
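Here’s a small sketch of that positional rule in code – nothing clever, just the sum-of-powers definition from the paragraph above, in Python:

```python
def from_base(digits, base):
    """Interpret a string of digits as a number in the given base.
    (Only digits 0-9 here; bigger bases need letters - see the hex discussion below.)"""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit) * base ** position   # each position is one power of the base
    return value

print(from_base("729328", 10))   # 729328
print(from_base("102120", 3))    # 312
print(from_base("1011", 2))      # 11
```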
Normally, to show that we’re using a different base, we’d write the numbers with a subscript showing the base – so we’d normally write the binary for 11 as 1011₂.
To write numbers in bases larger than 10, we have one very small problem: we only have numerals for the digits 0 through 9, but hex has digits from 0 to 15. The way that we work around that is by using letters of the alphabet: A=10, B=11, C=12, D=13, E=14, F=15, and so on for even larger bases. So base-16 uses 0-9 and A-F as digits.
Base-8 (octal) and base-16 (hexadecimal, or hex for short) are special to a lot of computer people. The reason for that is that because 8 and 16 are powers of 2, each digit in octal and hexadecimal corresponds exactly to a specific number of binary digits. Every digit in hexadecimal is something between 0 and 15, which corresponds to exactly 4 binary digits. Base-10 doesn’t have this property – a digit in base ten takes *between* three and four binary digits – and that means that you *can’t* divide up a string of binary into parts that correspond to decimal digits without looking at the whole number. With hexadecimal, every hex digit is exactly 4 binary digits, which is remarkably convenient.
Why would that make a difference? Well, we store data in memory in 8-bit groups called bytes. How big a number can you store in one byte? 255. That’s a strange looking number. But what about in hexadecimal? It’s the largest number you can write in 2 digits: FF₁₆. It gets worse as numbers get bigger. If you’ve got a computer that numbers its bytes of memory using 4-byte addresses, what’s the largest memory address? In hex, it’s easy to write: FFFFFFFF₁₆. In decimal? 4294967295. The hex numbers are a lot more natural to work with when you’re doing something close to the hardware. In modern programming, we rarely use hex anymore, but a few years ago, it was absolutely ubiquitous. I used to be able to do arithmetic in hex as naturally as I can in decimal. Not anymore; I’ve gotten rusty.
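Here’s a small Python sketch of that digit-for-digits correspondence – each hex digit maps to exactly four bits, so you can convert between hex and binary one digit at a time:

```python
def hex_to_binary(hex_string):
    """Convert hex to binary one digit at a time - each hex digit is exactly 4 bits."""
    return " ".join(format(int(d, 16), "04b") for d in hex_string)

print(hex_to_binary("FF"))         # 1111 1111  (255, the biggest value in one byte)
print(hex_to_binary("2A"))         # 0010 1010  (42)
print(hex_to_binary("FFFFFFFF"))   # eight groups of 1111 - the biggest 4-byte address

# There's no digit-by-digit trick like this for decimal: you have to convert the whole number.
print(format(255, "b"), format(4294967295, "x"))   # 11111111 ffffffff
```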
Among computer geeks, there’s an old argument that number systems based on powers of two are more natural than others, but I’m not going to get into *that* basket of worms. If you’re interested, take a look at the [Intuitor Hex Headquarters](http://www.intuitor.com/hex/).

A Glance at Hyperreal Numbers

Since we talked about the surreals, I thought it would be interesting to take a *very* brief look at an alternative system that also provides a way of looking at infinities and infinitesimals: the *hyperreal* numbers.
The hyperreal numbers are not a construction like the surreals; instead they’re defined by axiom. The basic idea is that for the normal real numbers, there is a set of basic statements that we can make – statements of first-order logic; and there is a basic structure to the set: it’s an *ordered field*.
Hyperreals add the “number” ω, an infinite number larger than every real, so that you can construct numbers using ω, like ω+1, 1/ω, etc.; but they are constrained by axiom so that the set of hyperreals is *still* an ordered field, and all statements that are true in first-order predicate logic over the reals are true in first-order predicate logic over the hyperreals.
For notation, we write the real field ℜ, and the hyperreal field ℜ*.
The Structure of Reals and Hyperreals: What is an ordered field?
——————————————————————-
If you’ve been a long-time reader of GM/BM, you’ll remember the discussion of group theory. If not, you might want to take a look at it; there’s a link in the sidebar.
A field is a commutative ring, where the multiplicative identity and the additive identity are not equal; where all numbers have an additive inverse, and all numbers except 0 have a multiplicative inverse.
Of course, for most people, that statement is completely worthless.
In abstract algebra, we study things about the basic structure of the sets where algebra works. The most basic structure is a *group*. A group is basically a set of values with a single operation, “×”, called the *group operator*. The “×” operation is *closed* over the set, meaning that for any values x and y in the set, x × y produces a value that is also in the set. The group operator also must be associative – that is, for any values x, y, and z, x×(y×z) = (x×y)×z. The set contains an *identity element* for the group, generally written “1”, which has the property that for every value x in the group, “x×1=x”. And finally, for any value x in the set, there must be a value x⁻¹ such that x×x⁻¹=1. We often write a group as (G,×) where G is the set of values, and × is the group operator.
So, for example, the integers with the “+” operation form a group, (Z,+). The real numbers *almost* form a group with multiplication, except that “0” has no inverse. If you take the real numbers without 0, then you get a group.
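Here’s a tiny, hedged sketch – just a brute-force check in Python – that makes the definition concrete by verifying the group axioms for addition modulo 5:

```python
from itertools import product

def is_group(elements, op):
    """Brute-force check of the group axioms for a finite set with operation op."""
    elements = list(elements)
    # Closure: every product lands back in the set.
    if any(op(x, y) not in elements for x, y in product(elements, repeat=2)):
        return False
    # Associativity: x op (y op z) == (x op y) op z for all triples.
    if any(op(x, op(y, z)) != op(op(x, y), z)
           for x, y, z in product(elements, repeat=3)):
        return False
    # Identity: some e with e op x == x op e == x for every x.
    identities = [e for e in elements
                  if all(op(e, x) == x and op(x, e) == x for x in elements)]
    if not identities:
        return False
    e = identities[0]
    # Inverses: every x has some y with x op y == e.
    return all(any(op(x, y) == e for y in elements) for x in elements)

print(is_group(range(5), lambda x, y: (x + y) % 5))   # True: (Z mod 5, +) is a group
print(is_group(range(5), lambda x, y: (x * y) % 5))   # False: 0 has no inverse
```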
If the group operator is also commutative (x×y = y×x for all x and y), then it’s called an *abelian group*. The integers with “+” form an abelian group.
A *ring* (R,+,×) is a set with two operations. (R,+) must be an abelian group; × must be associative, and it must distribute over +: x×(y+z) = x×y + x×z. If × is commutative, then the ring is called a *commutative* ring.
A *field* (F,+,×) is a commutative ring with two operators “+” and “×”; where the identity value for “+” is written 0, and the identity for “×” is written 1; all values have additive inverses, all values except 0 have multiplicative inverses; and 0 ≠ 1. A *subfield* (S,+,×) of a field (F,+,×) is a field with the same operations as F, and where its set of values is a subset of the values of F.
*Finally*, an *ordered* field is a field with a total order “≤”: for any two values x and y, either “x ≤ y” or “y ≤ x”, and if x ≤ y ∧ y ≤ x then x=y. The total order must also respect the two operations: if a ≤ b, then a + x ≤ b + x; and if 0 ≤ a and 0 ≤ b then 0 ≤ a×b.
The real numbers are the canonical example of an ordered field.
*(The definitions above were corrected to remove several errors pointed out in the comments by readers “Dave Glasser” and “billb”. As usual, thanks for the corrections!)*
One of the things we need to ensure for the hyperreal numbers to work is that they form an ordered field; and that the real numbers are an ordered subfield of the hyperreals.
The Transfer Principle
————————
To do the axiomatic definition of the hyperreal numbers, we need something called *the transfer principle*. I’m not going to go into the full details of the transfer principle, because it’s not a simple thing to fully define it, and prove that it works. But the intuitive idea of it isn’t hard.
What the transfer principle says is: *For any **first order** statement L that’s true for the ordered field of real numbers, L is also true for the ordered field of hyperreal numbers*.
So for example: ∀ x ∈ ℜ, ∃ y ∈ ℜ : x ≤ y. Therefore, for any hyperreal number x ∈ ℜ*, ∃ y ∈ ℜ* : x ≤ y.
Defining the Hyperreal Numbers
——————————–
To define the hyperreal numbers so that they do form an ordered field with the right properties, we need to do two things:
1. Define at least one hyperreal number that is not a real number.
2. Show that the *transfer principle* applies.
So we define ω as a hyperreal number such that ∀ r ∈ ℜ, r < ω.
What we *should* do next is prove that the transfer principle applies. But that’s well beyond the scope of this post.
What we end up with is very similar to what we have with the surreal numbers. We have infinitely large numbers. And because of the transfer principle, since for any real number x there’s a number x+1, there’s also an ω+1. Since multiplication works (by transfer), there is a number 2×ω. Since the hyperreals are a field, ω has a multiplicative inverse, the infinitesimal 1/ω, and an additive inverse, -ω.
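This isn’t the hyperreals – building those properly takes an ultrafilter construction – but here’s a small Python/sympy sketch of a *toy* non-Archimedean ordered field (rational functions in ω, ordered by their behavior as ω grows) that shows what it feels like to compute with an infinite ω, its infinitesimal inverse 1/ω, ω+1, 2×ω and -ω:

```python
import sympy

w = sympy.symbols('w')   # our stand-in for the infinite number omega

def is_positive(expr):
    """True if expr(w) > 0 for all sufficiently large w (eventual dominance)."""
    expr = sympy.cancel(sympy.together(expr))
    if expr == 0:
        return False
    num, den = sympy.fraction(expr)
    # The sign for large w is the sign of the leading coefficients' product.
    return bool(sympy.Poly(num, w).LC() * sympy.Poly(den, w).LC() > 0)

def less_than(a, b):
    """a < b in the eventual-dominance order on rational functions of w."""
    return is_positive(sympy.sympify(b) - sympy.sympify(a))

print(less_than(10**100, w))                    # True: w is bigger than any real we test
print(less_than(0, 1/w),
      less_than(1/w, sympy.Rational(1, 10**100)))   # True True: 1/w is a positive infinitesimal
print(less_than(w, w + 1), less_than(w + 1, 2*w))   # True True: w+1 and 2*w behave as expected
```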
There is, of course, a catch. Not quite everything can transfer from ℜ to ℜ*. We are constrained to *first order* statements. What that means is that we are limited to simple direct statements; we can’t make statements that are quantified over other statements.
So for example, we can say that for any real number N, the series 1, 1+1, 1+1+1, 1+1+1+1, … will eventually reach a point where every element after that point is larger than N.
But that’s not a first order statement. The *series* 1, 1+1, 1+1+1, … is a *second order* statement: it isn’t talking about a simple single statement. It’s talking about a *series* of statements. So the transfer principle fails.
That does end up being a fairly serious limit. There are a lot of things that you can’t say using first-order statements. But in exchange for that limitation, you get the ability to talk about infinitely large and infinitely small values, which can make some problems *much* easier to understand.

Arithmetic with Surreal Numbers

Last Thursday, I introduced the construction of John Conway’s beautiful surreal numbers. Today I’m going to show you how to do arithmetic using surreals. It’s really quite amazing how it all works! If you haven’t read the original post introducing surreals, you’ll definitely want to [go back and read that][surreals] before looking at this post!

Continue reading

Introducing the Surreal Numbers

Surreal numbers are a beautiful, simple, set-based construction that allows you to create and represent all real numbers, so that they behave properly; and in addition, it allows you to create infinitely large and infinitely small values, and have them behave and interact in a consistent way with the real numbers in their surreal representation.

The surreals were invented by John Horton Conway (yes, that John Conway, the same guy who invented the “Life” cellular automaton, and did all that amazing work in game theory). The name for surreal numbers was created by Don Knuth (yes, that Don Knuth!).

Surreal Numbers are something that has come up in discussions in the comments on a number of posts around here. Before they were first mentioned in a comment back when this blog was on Blogger, I had never heard of them. After someone said a little about them, I thought they sounded so cool that I rushed out and got the two main books that have been written about them: Conway’s “On Numbers and Games”, and Knuth’s “Surreal Numbers”.

The Basic Rules

The basic idea of surreal numbers is that you can represent any real number *N* using two sets: a set L which contains numbers less than N, and a set R which contains numbers greater than N. To make it easy to write, surreals are generally written N = { L | R }, where L is the definition of the set of values less than N, and R is the definition of the set of values greater than N.

There are two fundamental rules for surreal numbers:

  1. The Construction Rule: If L and R are two sets of surreal numbers, and ∀ l ∈ L, r ∈ R : l < r, then { L | R } is a valid surreal number.
  2. The Comparison Rule: If x = {Xₗ | Xᵣ} and y = {Yₗ | Yᵣ} then x ≤ y if/f:
    ¬∃ xᵢ ∈ Xₗ : y ≤ xᵢ (no member of the left set of x is greater than or equal to y), and ¬∃ yᵢ ∈ Yᵣ : yᵢ ≤ x (no member of the right set of y is less than or equal to x).

In addition, we need two basic axioms: finite induction, and equality. Finite induction is basically the induction rule from the Peano axioms, so I won’t repeat it here. The equality rule for surreals is also very simple: x = y if/f x ≤ y ∧ y ≤ x.
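Those rules translate almost directly into code. Here’s a minimal Python sketch (representing a surreal as a pair of tuples of previously constructed surreals) of the comparison and equality rules:

```python
def leq(x, y):
    """Surreal comparison: x <= y, per the comparison rule."""
    xl, xr = x
    yl, yr = y
    # No member of x's left set may be >= y, and
    # no member of y's right set may be <= x.
    return (not any(leq(y, xi) for xi in xl) and
            not any(leq(yi, x) for yi in yr))

def eq(x, y):
    """Surreal equality: x = y if/f x <= y and y <= x."""
    return leq(x, y) and leq(y, x)

zero = ((), ())            # 0 = { | }
one = ((zero,), ())        # 1 = { 0 | }
neg_one = ((), (zero,))    # -1 = { | 0 }
half = ((zero,), (one,))   # 1/2 = { 0 | 1 }

print(leq(zero, one), leq(one, zero))    # True False
print(leq(neg_one, half))                # True
print(eq(half, half))                    # True
```

The recursion always bottoms out, because the left and right sets of any surreal only contain surreals that were constructed earlier.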

Constructing Integers

With the rules set, we can start creating numbers. The first one we create is 0: we say that by definition, 0 is the number with the empty set as its L and R values: 0 = { ∅ | ∅ }, also written { | }.

Next we want to define the basic unit values, 1 and -1. So how do we define 1? Well, the only number we have so far is 0. So we define 1 as the value with 0 in its left set, and an empty right set: 1 = {0|}. -1 is pretty much the same deal, except that it uses 0 as its right set: -1 = {|0}.

We follow the same basic pattern for all of the integers: the positive integer i is a surreal number with all of the non-negative integers less than i in its left set, and an empty right set: i = { 0, …, i-2, i-1 | }. Similarly for the negatives: a negative number -i = { | -i+1, -i+2, … 0}.

Now, if we look at the possibilities for surreals so far – we’re always using a null set for either the left, the right, or both sets. What happens if, for a positive number i, we leave out, say, i-2 in its left set?

What happens is nothing. It’s effectively exactly the same number. Just think of the definition above, and look at an example: 4 = {0, 1, 2, 3 |}, and {2, 3|}. Is {0, 1, 2, 3|} ≤ {2, 3|}? Yes. Is {2, 3|} ≤ {0, 1, 2, 3|}? Yes. So they’re the same. So each integer is actually an equivalence class. Each positive integer i is the equivalence class of the surreal numbers { {l|} : l ⊆ { 0, …, i-1 } ∧ i-1 ∈ l }.
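Using the comparison sketch from the rules section above, that equivalence is easy to check mechanically:

```python
# Reusing leq/eq and zero, one from the earlier sketch.
two = ((zero, one), ())
three = ((zero, one, two), ())
four_full = ((zero, one, two, three), ())   # 4 = {0, 1, 2, 3 | }
four_sparse = ((two, three), ())            # the same value written as {2, 3 | }

print(eq(four_full, four_sparse))           # True - they're the same surreal number
```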

At this point, we’ve got the ability to represent integers in the surreal form. But we haven’t even gotten to fractions, much less to infinities and infinitesimals.

Constructing Fractions

The way we construct fractions is by dividing the range between two other numbers in half. So, for example, we can create the fraction 1/2, by saying it’s the simplest number between 0 and 1: {0 | 1}. We can write 3/4 by saying it’s halfway between 1/2 and 1: {1/2 | 1}.

This is, obviously, sort of limited. Given a finite left and right set for a number, we can only represent fractions whose denominators are powers of 2. So 1/3 is out of the question. We can create a value that is as close to 1/3 as we want, but we can’t get it exactly with finite left and right sets. (Fortunately, we aren’t restricted to finite left and right sets. So we’ll let ourselves be lazy and write 1/3, with the understanding that it’s really something like a continued fraction in surreal numbers.)

Happy Birthday to 2

One thing that’s important in understanding the surreals is the idea of a birthday.

We started creating surreals with the number 0 = {|}. We say that 0 has the integer 0 as its birthday.

Then we created 1 and -1. It took one step to get from the set of surreals we knew about to the new set consisting of {|0}, {|}, and {0|} – that is, -1, 0, and 1. So -1 and 1 have a birthday of 1.

Once we had -1, 0, and 1, we could create -2, -1/2, 1/2, and 2 in one step. So they had a birthday of 2.

What we’re doing here is creating an idea of ordinals: the ordinal of a surreal number is its birthday – the number of steps it took until we could create that number.

Given any surreal number {L|R}, the value that it represents is the value between the largest value in L and the smallest value in R with the earliest birthday (or, equivalently, the lowest ordinal).

So if we had, for example, {1, 2, 3|} (a surreal for 4), and {23|} (a surreal for 24), then the surreal number { {1, 2, 3|} | {23|} } would be 5 – because it’s the surreal between 4 and 24 with the earliest birthday.
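For finite dyadic bounds, that “earliest birthday” rule is easy to compute. Here’s a hedged Python sketch (the name `simplest_between` is mine, and it only handles the case where both a left bound and a right bound are given, as dyadic rationals):

```python
from fractions import Fraction
from math import floor, ceil

def simplest_between(a, b):
    """The value of earliest birthday strictly between dyadic rationals a < b."""
    a, b = Fraction(a), Fraction(b)
    assert a < b
    if a < 0 < b:
        return Fraction(0)                      # 0 is the oldest number of all
    lo, hi = floor(a) + 1, ceil(b) - 1          # integers strictly inside (a, b)
    if lo <= hi:
        return Fraction(lo if a >= 0 else hi)   # the integer closest to zero
    # No integer fits: the oldest value is the dyadic with the smallest
    # power-of-two denominator in the interval (it's unique).
    k = 1
    while True:
        m = floor(a * 2**k) + 1                 # smallest multiple of 1/2**k above a
        if Fraction(m, 2**k) < b:
            return Fraction(m, 2**k)
        k += 1

print(simplest_between(4, 24))                  # 5, as in the example above
print(simplest_between(0, 1))                   # 1/2
print(simplest_between(Fraction(1, 2), 1))      # 3/4
```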

What about infinity?

I promised that you could talk about infinity in interesting ways.

Let’s call the size of the set of natural numbers ω. What can we create with birthday ω + 1? Things with birthday ω+1 are things that require infinite L and R sets. So, for example, in step ω+1, we can create a precise version of 1/3.

How?

  • Let’s give a name to the set of all surreal numbers with birthday N. We’ll call it SN.
  • The left set of 1/3 is: { x/2^y ∈ Sω : 3x < 2^y }
  • The right set of 1/3 is: { x/2^y ∈ Sω : 3x > 2^y }
  • So 1/3 = { { x/2^y ∈ Sω : 3x < 2^y } | { x/2^y ∈ Sω : 3x > 2^y } }, and its birthday is ω+1.
  • Using pretty much the same trick, we can create a number that represents the size of the set of natural numbers, which we’ll call infinity. ∞ = { { x : x ∈ Sω } | }, with birthday ω+1. And then, obviously, ∞+1 = { ∞ | }, with birthday ω+2. And ∞/2 = { 0 | ∞ }, also with birthday ω+2.

Infinitesimals are more of the same. To write an infinitely small number, smaller than any real fraction, what we do is say: { 0 | { 1/2, 1/4, 1/8, 1/16, … } }, with birthday ω+1.

What about irrationals? No problem! They’re also in Sω+1. We use a powers-of-2 version of continued fractions.

π = {3, 25/8, 201/64, … | …, 101/32, 51/16, 13/4, 7/2, 4}, birthday=ω+1.

Coming Soon

Either tomorrow or next Monday (depending on how much time I have), I’ll show you how to do arithmetic with surreal numbers.

Roman Numerals and Arithmetic

I’ve always been perplexed by roman numerals.
First of all, they’re just *weird*. Why would anyone come up with something so strange as a way of writing numbers?
And second, given that they’re so damned weird, hard to read, hard to work with, why do we still use them for so many things today?

Continue reading

π

How can you talk about interesting numbers without bringing up π?
History
———
The oldest value we know for π comes from the Babylonians. (Man, but those guys were impressive mathematicians; almost any time you look at the history of fundamental numbers and math, you find the Babylonians at the roots.) They tended to work in ratios, and the approximation that they used was 25/8 (3.125), which is not a terribly bad approximation. Especially when you realize *when* they came up with this approximation: around 1900 BC!
The next best approximation came from Egypt, around the time of Pharaoh Amenemhat in the mid 17th century BC, where it had been refined to 256/81 (3.1605). Which isn’t such a great step forward; it’s actually a hair *farther* from the true value of π than the Babylonian approximation.
We don’t see any real progress until we get to the Greeks. Archimedes (yes, *that* Archimedes) worked out a better approximation. He used a *really* neat trick. He worked out how to compute the perimeter of a regular 96-sided polygon; then he computed the perimeter of the largest 96-gon that could be drawn inside the circle, and of the smallest 96-gon that the circle could be drawn inside. Here’s a quick diagram using octagons to give you a clearer idea of what he did:

(figure: circle-inscribe.jpg – a circle with an inscribed and a circumscribed octagon)

That gives you an approximation of π as the average of 223/71 and 22/7 – or 3.14185.
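Here’s a hedged sketch of the same idea in Python. It cheats by using modern trig functions (which Archimedes obviously didn’t have – he bounded the perimeters by hand with square-root estimates), but it shows how the inscribed and circumscribed polygons squeeze π from both sides:

```python
import math

def polygon_bounds(sides):
    """Lower/upper bounds on pi from inscribed and circumscribed regular polygons
    around a circle of diameter 1 (so its circumference is exactly pi)."""
    angle = math.pi / sides
    inscribed = sides * math.sin(angle)        # perimeter of the inscribed polygon
    circumscribed = sides * math.tan(angle)    # perimeter of the circumscribed polygon
    return inscribed, circumscribed

for n in (8, 96, 1000):
    lo, hi = polygon_bounds(n)
    print(n, lo, hi)
# 96 sides gives roughly 3.1410 < pi < 3.1427, matching Archimedes' 223/71 < pi < 22/7.
```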
And next, we find progress in India, where the mathematician Madhava worked out a power series definition of π, which allowed him to compute π to *13* decimal places. *13* decimal places, computing a power series completely by hand! Astounding!
Even better, during the same century, when this work made its way to the great Persian and Arabic mathematicians, they worked it out to 9 digits in base-60 (base-60 was an inheritance from the Babylonians). 9 digits in base 60 is roughly 16 digits in decimal!
And finally, we get back to Europe; around the turn of the 17th century, van Ceulen pushed Archimedes’ polygon method far enough to work out 35 decimal places of π. Alas, it was published on his tombstone.
Then we get to the 19th century, when William Rutherford calculated *208* decimal places of π. The real pity of that is that he made an error in the 153rd digit, and so only the first 152 digits were correct. (Can you imagine the amount of time he wasted?)
That was pretty much it until the first computers came along, and once that happened, the fun went out of trying to calculate it, since any bozo could write a program to do it. There’s a website that will let you look at its computation of [the first *2 hundred million* digits of π][digits].
The *name* of π came from Euler (he of the great equation, e^(iπ) + 1 = 0). It’s an abbreviation for *perimeter* in Greek.
There’s also one bit of urban myth about π that is, alas, not true. The story goes that some state in the American midwest (Indiana, Iowa, Ohio, Illinois in various versions) passed a law that π=3. Didn’t happen.
What is π?
—————–
Pretty much everyone is familiar with what π is. Take a circle on a plane. Measure the distance around the outside of it, which is called the circumference. Divide that by the diameter of the circle. That’s π.

(figure: circle-ratio.jpg – the circumference and diameter of a circle)

It also shows up in almost anything else involving measurements of circles and angles, from things like the sin function to the area of a circle to the volume of a sphere.
Where it gets interesting is when you start to ask about how to compute it. You get the relatively obvious things – like equations based on integrals to calculate the area of a circle. But then, you get the surprising ones. After all, π is a fundamental geometric number; it comes from the circumference of a circle.
So why in the world is the ratio of a circle’s circumference to its diameter related to an infinite sum of the reciprocals of the odd numbers? It is.
π/4 is the sum of the infinite series 1/1 – 1/3 + 1/5 – 1/7 + 1/9 – 1/11 + 1/13 …., or more formally:
π/4 = ∑ (n=0 to ∞) (-1)ⁿ/(2n+1)
Or how about this? π/2 =
π/2 = (2/1)×(2/3)×(4/3)×(4/5)×(6/5)×(6/7)×… = ∏ (n=1 to ∞) (2n/(2n-1))×(2n/(2n+1))
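Neither of those converges quickly, but you can watch them creep toward π with a few lines of Python:

```python
import math

# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
leibniz = 4 * sum((-1) ** n / (2 * n + 1) for n in range(1_000_000))

# Wallis product: pi/2 = (2/1)(2/3)(4/3)(4/5)(6/5)(6/7)...
wallis = 2.0
for n in range(1, 1_000_000):
    wallis *= (2 * n) / (2 * n - 1) * (2 * n) / (2 * n + 1)

print(leibniz, wallis, math.pi)   # both agree with pi to roughly 5-6 digits
```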
What about a bit of probability? Pick an integer at random. What’s the probability that it has no perfect-square factor other than 1 – that is, that it’s squarefree? 6/π².
How about a connection between circles and prime numbers? Take the set of all of the prime numbers P; then the *product* over all p in P of the factors (1 - 1/p²) is… 6/π².
What’s the average number of ways to write an integer as the sum of two perfect squares? π/4.
There’s also a funny little trick for memorizing π, called a piem (argh!). A piem is a little poem where the length of each word is a digit of π.
How I need a drink, (3 1 4 1 5)
alcoholic of course, (9 2 6)
after the heavy lectures (5 3 5 8)
involving quantum mechanics. (9 7 9)
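If you want to check a piem (or write your own), the word-length-to-digit rule is a couple of lines of Python:

```python
import math

piem = ("How I need a drink, alcoholic of course, "
        "after the heavy lectures involving quantum mechanics.")

# Each word's length (ignoring punctuation) gives one digit of pi.
digits = "".join(str(len(word.strip(",."))) for word in piem.split())
print(digits)              # 314159265358979
print(f"{math.pi:.14f}")   # 3.14159265358979
```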
It’s a strange, fascinating little number.
[digits]: http://www.angio.net/pi/piquery

A Brilliant φ link

In the comments on my post about φ, Polymath (whose [blog][polymath] is well worth checking out) provided a really spectacular [link involving φ][desert]. It’s an excerpt from a book called “[Mathematical Gems 2][gems]”, showing a problem that came from John Conway, called the “Sending Scouts into the Desert” problem.
The problem is:
Suppose you’re given a checkerboard with all of the squares on the bottom filled. You’re allowed to do standard checkers jumps (jump over a man and remove it), but you can’t jump diagonally, only up, left, or right. How far *up* can you get a man? How many men do you need to move to get that far?
To get to the first row above your men, you need to jump one man. To get two rows, you need to jump four men; three rows, you’ll need 8 men; four rows, you’ll need 20 men. How about five rows?
You *can’t* get five rows doing up and sideways jumps. Even if you had an infinitely wide checkerboard, with as many full rows of men as you want.
The proof, along with a nice Java applet to let you try the solvable ones, is at the link. And φ is an inextricable part of the proof!
[polymath]: http://polymathematics.typepad.com/
[desert]: http://www.cut-the-knot.org/proofs/checker.shtml
[gems]: http://www.amazon.com/gp/redirect.html?link_code=ur2&tag=goodmathbadma-20&camp=1789&creative=9325&location=%2Fgp%2Fproduct%2F0883853027%2Fsr%3D8-2%2Fqid%3D1155392829%2Fref%3Dpd_bbs_2%3Fie%3DUTF8

Metamath and the Peano Induction Axiom

In email, someone pointed me at an automated proof system called [Metamath][metamath]. Metamath generates proofs of mathematical statements using ZF set theory. The proofs are actually relatively easy to follow, which is quite unusual for an automated theorem prover. I’ll definitely write more about Metamath some other time, but I thought it would be interesting today to point to [Metamath’s proof of the fifth axiom of Peano arithmetic][meta-peano], the principle of induction. Here’s a screenshot of the first 15 steps; follow the link to see the whole thing.
(screenshot: peano5-proof.jpg – the first 15 steps of the Metamath proof)
[metamath]: http://us.metamath.org/mpegif/mmset.html#overview
[meta-peano]: http://us.metamath.org/mpegif/peano5.html