The first step in Gödel’s incompleteness proof was finding a way of taking logical statements and encoding them numerically. Looking at this today, it seems sort-of obvious. I mean, I’m writing this stuff down in a text file – that text file is a stream of numbers, and it’s trivial to convert that stream of numbers into a single number. But when Gödel was doing it, it wasn’t so obvious. So he created a really clever mechanism for numerical encoding. The advantage of Gödel’s encoding is that it makes it much easier to express properties of the encoded statements arithmetically. (Arithmetically means something very specific here; we’ll see what in a later post.)

Before we can look at how Gödel encoded his logic into numbers, we need to look at the logic that he used. Gödel worked with the specific logic variant used by the Principia Mathematica. The Principia logic is minimal and a bit cryptic, but it was built for a specific purpose: to have a minimal syntax, and a complete but minimal set of axioms.

The whole idea of the Principia logic is to be purely syntactic. The logic is expected to have a valid model, but you shouldn’t need to know anything about the model to use the logic. Russell and Whitehead were deliberately building a pure logic where you didn’t need to know what anything meant to use it. I’d really like to use Gödel’s exact syntax – I think it’s an interestingly different way of writing logic – but I’m working from a translation, and the translator updated the syntax. I’m afraid that switching between the older Gödel syntax, and the more modern syntax from the translation would just lead to errors and confusion. So I’m going to stick with the translation’s modernization of the syntax.

The basic building blocks of the logic are variables. Already this is a bit different from what you’re probably used to in a logic. When we think of logic, we usually consider predicates to be a fundamental thing. In this logic, they’re not. A predicate is just a shorthand for a set, and a set is represented by a variable.

Variables are *stratified*. Again, it helps to remember where Russell and Whitehead were coming from when they were writing the Principia. One of their basic motivations was avoiding self-referential statements like Russell’s paradox. In order to prevent that, they thought that they could create a stratified logic, where on each level, you could only reason about objects from the level below. A first-order predicate would be a second-level object, which could only reason about first-level objects. A second-order predicate would be a third-level object, which could reason about second-level objects. No predicate could ever reason about itself or anything on its own level. This leveling property is a fundamental property built into their logic. The way the levels work is:

- Type-1 variables, which range over simple atomic values, like specific single natural numbers. Type-1 variables are written a_{1}, b_{1}, ….
- Type-2 variables, which range over sets of atomic values, like sets of natural numbers. A predicate about specific natural numbers, like *IsOdd*, would be represented as a type-2 variable. Type-2 variables are written a_{2}, b_{2}, ….
- Type-3 variables, which range over sets of sets of atomic values. For example, in set-theoretic terms, an ordered pair is a set of sets: the ordered pair (1, 4) would be represented by the set { {1}, {1, 4} }, which is a type-3 object. A function – a set of ordered pairs, which is a set of sets of sets of values – would in turn be represented by a type-4 variable.
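To make that set-theoretic representation of ordered pairs concrete, here’s a quick sketch in Python (my own illustration, not anything from the Principia), using `frozenset` so that sets can contain sets:

```python
def pair(a, b):
    """Kuratowski encoding of an ordered pair: (a, b) = { {a}, {a, b} }."""
    return frozenset([frozenset([a]), frozenset([a, b])])

# The ordered pair (1, 4) becomes { {1}, {1, 4} }:
assert pair(1, 4) == frozenset([frozenset([1]), frozenset([1, 4])])

# Order matters: (1, 4) and (4, 1) encode to different sets,
# because {1} is a member of one but not the other.
assert pair(1, 4) != pair(4, 1)
```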

Using variables, we can form simple atomic expressions, which in Gödel’s terminology are called *signs*. As with variables, the signs are divided into stratified levels:

- Type-1 signs are variables, and successor expressions. Successor expressions are just Peano numbers written with “succ”: 0, succ(0), succ(succ(0)), succ(a_{1}), etc.
- Signs of any type greater than 1 are just variables of that type/level.

Once you have signs, you can assemble the basic signs into formulae. Gödel explained how to build formulae in classic logician’s form, which I think is hard to follow, so I’ve converted it into a grammar:

elementary_formula → sign_{n+1}(sign_{n})

formula → elementary_formula

formula → ¬(formula)

formula → (formula) ∨ (formula)

formula → ∀ sign_{n} (formula)

That’s the entire logic! It’s tiny, but it’s enough. Everything else from predicate logic can be defined in terms of combinations of these basic formulae. For example, you can define logical “and” in terms of negation and logical “or”: *(a ∧ b) ⇔ ¬ (¬ a ∨ ¬ b)*.
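That equivalence is easy to check mechanically by enumerating all the truth assignments. A quick sketch in Python (my own illustration, not part of the Principia system):

```python
# Verify (a ∧ b) ⇔ ¬(¬a ∨ ¬b) over every possible truth assignment.
for a in (False, True):
    for b in (False, True):
        assert (a and b) == (not ((not a) or (not b)))
```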

With the syntax of the system set, the next thing we need is the basic axioms of logical inference in the system. The way I think of it, these axioms include both “true” axioms, and the inference rules defining how the logic works. There are five families of axioms.

- First, there’s the Peano axioms, which define the natural numbers.
- ¬(succ(x_{1}) = 0): 0 is a natural number, and it’s not the successor of anything.
- succ(x_{1}) = succ(y_{1}) ⇒ x_{1} = y_{1}: successors are unique.
- (x_{2}(0) ∧ ∀ x_{1} (x_{2}(x_{1}) ⇒ x_{2}(succ(x_{1})))) ⇒ ∀ x_{1} (x_{2}(x_{1})): induction works on the natural numbers.

- Next, we’ve got a set of basic inference rules about simple propositions. These are defined as axiom schemata, which can be instantiated for any formulae *p*, *q*, and *r*: for example, *p ∨ p ⇒ p*, *p ⇒ p ∨ q*, *p ∨ q ⇒ q ∨ p*, and *(p ⇒ q) ⇒ (r ∨ p ⇒ r ∨ q)*.
- Axioms that define inference over quantification. Here *v* is a variable, *a* is any formula, *b* is any formula in which *v* is not a free variable, and *c* is a sign of the same level as *v* which doesn’t have any free variables that would be bound if it were inserted as a replacement for *v*.
- ∀ v (a) ⇒ a[v := c]: if formula *a* is true for all values of *v*, then you can substitute any specific value *c* for *v* in *a*, and *a* must still be true.
- ∀ v (b ∨ a) ⇒ b ∨ ∀ v (a): you can move a quantifier across a disjunct in which the quantified variable doesn’t occur free.

- The Principia’s version of the set theory axiom of comprehension: ∃ u ∀ v (u(v) ⇔ a), where *a* is any formula in which *u* does not occur free.

- And last but not least, an axiom defining set equivalence: ∀ x_{1} (x_{2}(x_{1}) ⇔ y_{2}(x_{1})) ⇒ x_{2} = y_{2}.

So, now, finally, we can get to the numbering. This is quite clever. We’re going to use the simplest encoding: for every possible string of symbols in the logic, we’re going to define a representation as a number. So in this representation, we are *not* going to get the property that every natural number is a valid formula: lots of natural numbers won’t be. They’ll be strings of nonsense symbols. (If we wanted to, we could make every number be a valid formula, by using a parse-tree based numbering, but it’s much easier to just let the numbers be strings of symbols, and then define a predicate over the numbers to identify the ones that are valid formulae.)

We start off by assigning numbers to the constant symbols:

Symbol | Numeric Representation
---|---
0 | 1
succ | 3
¬ | 5
∨ | 7
∀ | 9
( | 11
) | 13

Variables will be represented by powers of prime numbers, for prime numbers greater than 13. For a prime number *p*, *p* will represent a type-1 variable, *p*^{2} will represent a type-2 variable, *p*^{3} will represent a type-3 variable, etc.

Using those symbol encodings, we can take a formula written as a sequence of symbols *x*_{1}*x*_{2}*x*_{3}…*x*_{n}, and encode it numerically as the product 2^{x_1} 3^{x_2} 5^{x_3} … *p*_{n}^{x_n}, where *p*_{n} is the *n*th prime number.
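The whole scheme is small enough to implement directly. Here’s a sketch in Python (the names `SYMBOLS`, `var`, and `encode` are mine, just for illustration):

```python
# Constant-symbol codes from the table above.
SYMBOLS = {'0': 1, 'succ': 3, '¬': 5, '∨': 7, '∀': 9, '(': 11, ')': 13}

def primes():
    """Generate 2, 3, 5, 7, ... by trial division against earlier primes."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def var(p, level):
    """A variable of a given type/level: a prime > 13 raised to that level."""
    assert p > 13, "variable primes must be greater than 13"
    return p ** level

def encode(codes):
    """Gödel-encode a symbol-code sequence as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for p, c in zip(primes(), codes):
        n *= p ** c
    return n

# The formula ∀ x1 ( y2 ( x1 ) ∨ x2 ( x1 ) ),
# with x1 = 17, y2 = 19^2, x2 = 17^2:
seq = [SYMBOLS['∀'], var(17, 1), SYMBOLS['('], var(19, 2), SYMBOLS['('],
       var(17, 1), SYMBOLS[')'], SYMBOLS['∨'], var(17, 2), SYMBOLS['('],
       var(17, 1), SYMBOLS[')'], SYMBOLS[')']]
assert seq == [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]
n = encode(seq)
```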

For example, suppose I wanted to encode the formula: *∀ x_{1} (y_{2}(x_{1}) ∨ x_{2}(x_{1}))*.

First, I’d need to encode each symbol:

- “∀” = 9
- “x_{1}” = 17
- “(” = 11
- “y_{2}” = 19^{2} = 361
- “(” = 11
- “x_{1}” = 17
- “)” = 13
- “∨” = 7
- “x_{2}” = 17^{2} = 289
- “(” = 11
- “x_{1}” = 17
- “)” = 13
- “)” = 13

The formula would thus be turned into the sequence: [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]. That sequence would then get turned into a single number 2^{9} 3^{17} 5^{11} 7^{361} 11^{11} 13^{17} 17^{13} 19^{7} 23^{289} 29^{11} 31^{17} 37^{13} 41^{13}, which according to Hugs is the number (warning: you need to scroll to see it. a lot!):

1,821,987,637,902,623,701,225,904,240,019,813,969,080,617,900,348,538,321,073,935,587,788,506,071,830,879,280,904,480,021,357,988,798,547,730,537,019,170,876,649,747,729,076,171,560,080,529,593,160,658,600,674,198,729,426,618,685,737,248,773,404,008,081,519,218,775,692,945,684,706,455,129,294,628,924,575,925,909,585,830,321,614,047,772,585,327,805,405,377,809,182,961,310,697,978,238,941,231,290,173,227,390,750,547,696,657,645,077,277,386,815,869,624,389,931,352,799,230,949,892,054,634,638,136,137,995,254,523,486,502,753,268,687,845,320,296,600,343,871,556,546,425,114,643,587,820,633,133,232,999,109,544,763,520,253,057,252,248,982,267,078,202,089,525,667,161,691,850,572,084,153,306,622,226,987,931,223,193,578,450,852,038,578,983,945,920,534,096,055,419,823,281,786,399,855,459,394,948,921,598,228,615,703,317,657,117,593,084,977,371,635,801,831,244,944,526,230,994,115,900,720,026,901,352,169,637,434,441,791,307,175,579,916,097,240,141,893,510,281,613,711,253,660,054,258,685,889,469,896,461,087,297,563,191,813,037,946,176,250,108,137,458,848,099,487,488,503,799,293,003,562,875,320,575,790,915,778,093,569,309,011,025,000,000,000.
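Because prime factorization is unique, the encoding is reversible: factor the number, and the exponent of the *n*th prime is the *n*th symbol’s code. Here’s a decoding sketch in Python (again, my own illustration; it relies on every position having a nonzero code, which is true for any actual symbol string):

```python
def decode(n):
    """Recover the symbol-code sequence from a Gödel number:
    the exponent of the k-th prime is the k-th symbol's code."""
    codes = []
    p = 2
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        codes.append(e)
        # Advance p to the next prime by trial division.
        p += 1
        while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
            p += 1
    return codes

# Round-trip the example formula's sequence:
seq = [9, 17, 11, 361, 11, 17, 13, 7, 289, 11, 17, 13, 13]
n = 1
p = 2
for c in seq:
    n *= p ** c
    p += 1
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1
assert decode(n) == seq
```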

Next, we’re going to look at how you can express interesting mathematical properties in terms of numbers. Gödel used a property called primitive recursion as an example, so we’ll walk through a definition of primitive recursion, and show how Gödel expressed primitive recursion numerically.

In your axiom defining set equivalence, shouldn’t the y sub i actually be x sub i?

I’m reminded of that old joke where a mathematician, a physicist, and an engineer prove that all odd numbers are prime…

… and now I see that I misread the construction; the exponents do not need to be prime. Oh well.

Fascinating post! From my crude understanding of the subject, one inconsistency (*ahem*) seems to pop out — wouldn’t the commutativity of multiplication cause problems later on? The mapping from statements to numbers then ceases to be one-to-one.

It seems that the same number can represent a jumbled formula with the same symbols in a different order (though I’m sure there’s a pretty good reason why this shouldn’t matter for the final proof, and I’d love to know why).

Nope, each symbol is the power of a different prime in the final numbering. So the first symbol in the formula is the power of 2, the second symbol is the power of 3, etc. So the ordering of the symbols is preserved.

Right, I totally missed that crucial detail! Thank you. It’s an ingenious coding scheme; and the impossibly large numbers that result only add to the awe of Gödel’s construction.

you need to explain this better:

elementary_formula → signn+1(signn)

formula → ¬(elementary_formula)

formula → (elementary_formula) or (elementary_formula)

formula → ∀ signn (elementary_formula)

it’s indecipherable

for example, you say a2, b2 are type 2 variables, then you use xi+1, is this supposed to be a type i+1 variable?

What’s one of them?

You also do this xi+1(xi). What do the brackets mean? Is xi+1 an i+1 order predicate? What does that mean?

Am I supposed to already know the conventions you are using? There’s some irony, given what you are trying to explain.