
Why we need formality in mathematics

The comment thread from my last Cantor crankery post has continued in a way that demonstrates a common issue when dealing with bad math, so I thought it was worth taking the discussion and promoting it to a proper top-level post.

The defender of the Cantor crankery tried to show what he alleged to be the problem with Cantor, by presenting a simple proof:

If we have a unit line, then this line will have an infinite number of points in it. Some of these points will be an irrational distance away from the origin and some will be a rational distance away from the origin.

Premise 1.

To have more irrational points on this line than rational points (plus 1), it is necessary to have at least two irrational points on the line so that there exists no rational point between them.

Premise 2.

It is not possible to have two irrational points on a line so that no rational point exists between them.

Conclusion.

It is not possible to have more irrational points on a line than rational points (plus 1).

This contradicts Cantor’s conclusion, so Cantor must have made a mistake in his reasoning.

(I’ve done a bit of formatting of this to make it look cleaner, but I have not changed any of the content.)

This is not a valid proof. It looks nice on the surface – it intuitively feels right. But it’s not. Why?

Because math isn’t intuition. Math is a formal system. When we’re talking about Cantor’s diagonalization, we’re working in the formal system of set theory. In most modern math, we’re specifically working in the formal system of Zermelo-Fraenkel (ZF) set theory. And that “proof” relies on a premise which is simply not correct in ZF set theory. I pointed this out in verbose detail, to which the commenter responded:

I can understand your desire for a proof to be in ZFC, Peano arithmetic and FOPL, it is a good methodology but not the only one, and I am certain that it is not a perfect one. You are not doing yourself any favors if you let any methodology trump understanding. For me it is far more important to understand a proof, than to just know it “works” under some methodology that simply manipulates symbols.

This is the point I really wanted to get to here. It’s a form of complaint that I’ve seen over and over again – not just in the Cantor crankery, but in nearly all of the math posts.

There’s a common belief among crackpots of various sorts that scientists and mathematicians use symbols and formalisms just because we like them, or because we want to obscure things and make simple things seem complicated, so that we’ll look smart.

That’s just not the case. We use formalisms and notation because they are absolutely essential. We can’t do math without the formalisms; we could do it without the notation, but the notation makes things clearer than natural language prose.

The reason for all of that is simple: we want to be correct.

If we’re working with a definition that contains any vagueness – even the most subtle unintentional kind (or, actually, especially the most subtle unintentional kind!) – then we can easily produce nonsense. There’s a simple “proof” that we’ve discussed before that shows that 0 is equal to 1. It looks correct when you read it. But it contains a subtle error. If we aren’t careful and formal, that kind of mistake can easily creep in – and once you allow one single, innocuous-looking error into a proof, the entire proof falls apart. The reason for all the formalism and all the notation is to give us a way to unambiguously, precisely state exactly what we mean. The reason that we insist on detailed, logical, step-by-step proofs is that that’s the only way to make sure that we aren’t introducing errors.

We can’t rely on intuition, because our intuition is frequently not correct. That’s why we use logic. We can’t rely on informal statements, because informal statements lack precision: they can mean many different things, some of which are true, and some of which are not.

In the case of Cantor’s diagonalization, when we’re being carefully precise, we’re not talking about the size of things: we’re talking about the cardinality of sets. That’s an important distinction, because “size” can mean many different things. Cardinality means one, very precise thing.

Similarly, we’re talking about the cardinality of the set of real numbers compared to the cardinality of the set of natural numbers. When I say that, I’m not just hand-waving the real numbers: the real numbers means something very specific: it’s the unique complete totally ordered field (R, +, *, <) up to isomorphism. To understand that, we’re implicitly referencing the formal definition of a field (with all of its sub-definitions) and the formal definitions of the addition, multiplication, and ordering operations.

I’m not just saying that to be pedantic. I’m saying that because we need to know exactly what we’re talking about. It’s very easy to put together an informal definition of the real numbers that’s different from the proper mathematical set of real numbers. For example, you can define a number system consisting of all the numbers whose digits can be generated by a finite computer program running forever. Intuitively, it might seem like that’s just another way of describing the real numbers – but it describes a very different set.
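Just to make that concrete, here’s a minimal sketch in Python (my own illustration, not part of the original argument) of one number in that system: a finite program that runs forever, emitting the decimal digits of the square root of 2. Every number you can define this way is a computable real, and the set of computable reals is countable – which is why this informal definition picks out a very different set than the real numbers.

 from itertools import islice
 from math import isqrt

 def sqrt2_digits():
     """Yield the decimal digits of sqrt(2), one at a time, forever."""
     scale, prev = 1, 0
     while True:
         cur = isqrt(2 * scale * scale)  # floor(sqrt(2) * 10^k), where scale = 10^k
         yield cur - 10 * prev           # the k-th decimal digit
         prev, scale = cur, scale * 10

 # The first ten digits: 1, 4, 1, 4, 2, 1, 3, 5, 6, 2
 print(list(islice(sqrt2_digits(), 10)))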

Beyond just definitions, we insist on using formal symbolic logic for a similar reason. If we can reduce the argument to symbolic reasoning, then we’ve abstracted away anything that could bias or deceive us. The symbolic logic makes every statement absolutely precise, and every reasoning step pure, precise, and unbiased.

So what’s wrong with the “proof” above? It’s got two premises. Let’s just look at the first one: “To have more irrational points on this line than rational points (plus 1), it is necessary to have at least two irrational points on the line so that there exists no rational point between them.”

If this statement is true, then Cantor’s proof must be wrong. But is this statement true? The commenter’s argument is that it’s obviously intuitively true.

If we weren’t doing math, that might be OK. But this is math. We can’t just rely on our intuition, because we know that our intuition is often wrong. So we need to ask: can you prove that that’s true?

And how do you prove something like that? Well, you start with the basic rules of your proof system. In a discussion of a set theory proof, that means ZF set theory and first order predicate logic. Then you add in the definitions you need to talk about the objects you’re interested in: so Peano arithmetic, rational numbers, real number theory, and the definition of irrational numbers in real number theory. That gives you a formal system that you can use to talk about the sets of real numbers, rational numbers, and natural numbers.

The problem for our commenter is that you can’t prove that premise using ZF set theory, FOPL, and real number theory. It’s not true. In fact, between any two distinct points on the line there is always a rational point – the rationals are dense – and yet the irrationals still have strictly larger cardinality; density and cardinality are simply different properties. The premise is based on a faulty understanding of the behavior of infinite sets. It’s taking an assumption that comes from our intuition, which seems reasonable, but which isn’t actually true within the formal system of mathematics.

In particular, it’s trying to say that in set theory, the cardinality of the set of real numbers is equal to the cardinality of the set of natural numbers – but doing so by saying “Ah, why are you worrying about that set theory nonsense? Sure, it would be nice to prove this statement about set theory using set theory, but you’re just being picky by insisting on that.”

Once you really see it in these terms, it’s an absurd statement. It’s as ridiculous as saying that you don’t need to conjugate verbs when you speak English, because in Chinese, spoken words don’t change for conjugation.

One plus one equals Two?

My friend Dr24hours sent me a link via Twitter to a new piece of mathematical crackpottery. It’s the sort of thing that’s so trivial that I might just ignore it – but it’s also a good example of something that someone commented on in my previous post.

This comes from, of all places, Rolling Stone magazine, in a puff-piece about an actor named Terrence Howard. When he’s not acting, Mr. Howard believes that he’s a mathematical genius who’s caught on to the greatest mathematical error of all time. According to Mr. Howard, the product of one times one is not one, it’s two.

After high school, he attended Pratt Institute in Brooklyn, studying chemical engineering, until he got into an argument with a professor about what one times one equals. “How can it equal one?” he said. “If one times one equals one that means that two is of no value because one times itself has no effect. One times one equals two because the square root of four is two, so what’s the square root of two? Should be one, but we’re told it’s two, and that cannot be.” This did not go over well, he says, and he soon left school. “I mean, you can’t conform when you know innately that something is wrong.”

I don’t want to harp on Mr. Howard too much. He’s clueless, but sadly, he’s not an atypical product of American schools. I’ll take a couple of minutes to talk about what’s wrong with his claims, in the context of a discussion of where I think this kind of thing comes from.

In American schools, math is taught largely by rote. When I was a kid, set theory came into vogue, but by and large math teachers didn’t understand it – so they’d draw a few meaningless Venn diagrams, and then switch into pure procedure.

An example of this from my own life involves my older brother. My brother is not a dummy – he’s a very smart guy. He’s at least as smart as I am, but he’s interested in very different things, and math was never one of his interests.

I barely ever learned math in school. My father noticed pretty early on that I really enjoyed math, and so he did math with me for fun. He taught me stuff – not as any kind of “they’re not going to teach it right in school”, but just purely as something fun to do with a kid who was interested. So I learned a lot of math – almost everything up through calculus – from him, not from school. My brother didn’t – because he didn’t enjoy math, and so my dad did other things with him.

When we were in high school, my brother got a job at a local fast food joint. At the end of the year, he had to do his taxes, and my dad insisted that he do it himself. When he needed to figure out how much tax he owed on his income, he needed to compute a percentage. I don’t know the numbers, but for the sake of the discussion, let’s say that he made $5482 that summer, and the tax rate on that was 18%. He wrote down a pair of ratios:

\frac{18}{100} = \frac{x}{5482}

And then he cross-multiplied, getting:

 18 \times 5482 = 100 \times x

 98676 = 100 \times x

and so x = 986.76.

My dad was shocked by this – it’s such a laborious way of doing it. So he started pressing my brother. He asked him: if you went to a store, and they told you there was a 20% off sale on a pair of jeans that cost $18, how much of a discount would you get? He didn’t know. The only way he knew to figure it out was to write the ratios, cross-multiply, and solve. If you told him that 20% off of $18 was $5, he would have believed you. Because percentages just didn’t mean anything to him.
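For contrast, here’s what understanding buys you, as a tiny Python sketch (using the made-up numbers from the story): a percentage is just a fraction with a denominator of 100, so “18% of x” is simply (18/100) times x – no ratios, no cross-multiplication.

 income = 5482
 tax = (18 / 100) * income      # 18% of $5482 = $986.76
 jeans = 18
 discount = (20 / 100) * jeans  # 20% off an $18 pair of jeans = $3.60 off
 print(tax, discount)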

Now, as I said: my brother isn’t a dummy. But none of his math teachers had ever taught him what percentages meant. He had no concept of their meaning: he knew a procedure for getting the value, but it was a completely blind procedure, devoid of meaning. And that’s what everything he’d learned about math was like: meaningless procedures performed by rote, without any comprehension.

That’s where nonsense like Terrence Howard’s stuff comes from: math education that never bothered to teach students what anything means. If anyone had attempted to teach any form of meaning for arithmetic, the ridiculousness of Mr. Howard’s supposed mathematics would have been obvious.

For understanding basic arithmetic, I like to look at a geometric model of numbers.

Put a dot on a piece of paper. Label it “0”. Draw a line starting at zero, and put tick-marks on the line separated by equal distances. Starting at the first mark after 0, label the tick-marks 1, 2, 3, 4, 5, ….

In this model, the number one is the distance from 0 (the start of the line) to 1. The number two is the distance from 0 to 2. And so on.

What does addition mean?

Addition is just stacking lines, one after the other. Suppose you wanted to add 3 + 2. You draw a line that’s 3 tick-marks long. Then, starting from the end of that line, you draw a second line that’s 2 tick-marks long. 3 + 2 is the length of the resulting line: by putting it next to the original number-line, we can see that it’s five tick-marks long, so 3 + 2 = 5.

[figure: addition as stacking line segments]
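As a toy sketch of that model (mine, purely illustrative): addition is counting off tick-marks for the first segment, and then continuing the count for the second.

 def add(a, b):
     """Addition in the number-line model: lay a segment of length a
     and a segment of length b end to end, and measure the result."""
     position = 0
     for _ in range(a):   # walk a tick-marks from 0
         position += 1
     for _ in range(b):   # then walk b more from where we stopped
         position += 1
     return position

 print(add(3, 2))  # 5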

Multiplication is a different process. In multiplication, you’re not putting lines tip-to-tail: you’re building rectangles. If you want to multiply 3 * 2, you draw a rectangle whose width is 3 tick-marks long, and whose height is 2 tick-marks long. Now divide that into squares that are one tick-mark by one tick-mark. How many squares fit into that rectangle? 6. So 3*2 = 6.

[figure: multiplication as a rectangle of unit squares]

Why does 1 times 1 equal 1? Because if you draw a rectangle that’s one tick-mark wide and one tick-mark high, it forms exactly one 1×1 square. 1 times 1 can’t be two: it forms one square, not two.
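The same model, as another toy sketch of my own: multiplication is counting the unit squares in the rectangle.

 def multiply(width, height):
     """Multiplication in the rectangle model: count the 1x1 squares
     in a width-by-height rectangle."""
     squares = [(x, y) for x in range(width) for y in range(height)]
     return len(squares)

 print(multiply(3, 2))  # 6
 print(multiply(1, 1))  # 1 - a 1x1 rectangle is exactly one square, not two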

If you think through the repercussions of the idea that 1*1=2 – as long as you’re clear about meanings – it’s pretty obvious that it has a disastrous impact on math: it turns all of math into a pile of gibberish.

What’s 1*2? 2. So if 1*1=2 and 1*2=2, then 1*1=1*2, and cancelling the common factor gives 1=2. If 1=2, then adding one to both sides gives 2=3, then 3=4, 4=5: all integers are equal. If that’s true, then… well, numbers are, quite literally, meaningless. Which is quite a serious problem, unless you already believe that numbers are meaningless anyway.

In my last post, someone asked why I was so upset about the error in a math textbook. This is a good example of why. The new Common Core math curriculum, for all its flaws, does a better job of teaching understanding of math. But when the book teaches “facts” that are wrong, it does exactly the opposite. The material doesn’t make sense – if you actually try to understand it, you just get more confused.

That teaches you one of two things. Either it teaches you that understanding this stuff is futile: that all you can do is just learn to blindly reproduce the procedures that you were taught, without understanding why. Or it teaches you that no one really understands any of it, and that therefore nothing that anyone tells you can possibly be trusted.

Bad Math Books and Cantor Cardinality

A bunch of readers sent me a link to a tweet this morning from Professor Jordan Ellenberg:

The tweet links to the following image:

(And yes, this is real. You can see it in context here.)

This is absolutely infuriating.

This is a photo of a problem assignment in a math textbook published by an imprint of McGraw-Hill. And it’s absolutely, unquestionably, trivially wrong. No one who knew anything about math looked at this before it was published.

The basic concept underneath this is fundamental: it’s the cardinality of sets from Cantor’s set theory. It’s an extremely important concept. And it’s a concept that’s at the root of a huge amount of misunderstandings, confusion, and frustration among math students.

Cardinality, and the notion of cardinality relations between infinite sets, are difficult concepts, and they lead to some very un-intuitive results. Infinity isn’t one thing: there are different sizes of infinities. That’s a rough concept to grasp!

Here on this blog, I’ve spent more time dealing with people who believe that it must be wrong – a subject that I call Cantor crackpottery – than with any other bad math topic. This error teaches students something deeply wrong, and it encourages Cantor crackpottery!

Let’s review.

Cantor said that two collections of things are the same size if it’s possible to create a one-to-one mapping between the two. Imagine you’ve got a set of 3 apples and a set of 3 oranges. They’re the same size. We know that because they both have 3 elements; but we can also show it by pairing them off – one apple with one orange – until we’ve got exactly three pairs, with nothing left over.

The same idea applies when you look at infinitely large sets. The set of positive integers and the set of negative integers are the same size. They’re both infinite – but we can show how you can create a one-to-one relation between them: you can take any positive integer i, and map it to exactly one negative integer, 0 - i.

That leads to some unintuitive results. For example, the set of all natural numbers and the set of all even natural numbers are the same size. That seems crazy, because the set of all even natural numbers is a strict subset of the set of natural numbers: how can they be the same size?

But they are. We can map each natural number i to exactly one even natural number 2i. That’s a perfect one-to-one map between natural numbers and even natural numbers.
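Here’s a minimal sketch of both of those maps in Python (my illustration): every positive integer pairs with exactly one negative integer, and every natural number pairs with exactly one even natural.

 # Two one-to-one maps: i <-> -i pairs the positive integers with the
 # negative integers; i <-> 2*i pairs the naturals with the even naturals.
 for i in range(1, 6):
     print(i, "<->", -i)
 for i in range(5):
     print(i, "<->", 2 * i)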

Where it gets uncomfortable for a lot of people is when we start thinking about real numbers. The set of real numbers is infinite. Even the set of real numbers between 0 and 1 is infinite! But it’s also larger than the set of natural numbers, which is also infinite. How can that be?

The answer is that Cantor showed that for any possible one-to-one mapping between the natural numbers and the real numbers between 0 and 1, there’s at least one real number that the mapping omitted. No matter how you do it, all of the natural numbers are mapped to one value in the reals, but there’s at least one real number which is not in the mapping!
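You can see the shape of that argument in a little Python sketch (mine – a cartoon of the diagonalization, not a formal proof). Represent a claimed enumeration as a function table(n, k) that returns the k-th decimal digit of the n-th listed real; then we can always write down the digits of a real that the enumeration missed:

 def missed_real_digits(table, k_digits=10):
     """Build a real that differs from the n-th listed real at its
     n-th digit, so it can't appear anywhere in the list. Using only
     the digits 5 and 6 avoids the 0.4999... = 0.5000... ambiguity."""
     return [5 if table(n, n) != 5 else 6 for n in range(k_digits)]

 # A toy "enumeration": the n-th real has every digit equal to n % 10.
 table = lambda n, k: n % 10
 print(missed_real_digits(table))  # differs from real n at position n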

In Cantor set theory, that means that the size of the set of real numbers between 0 and 1 is strictly larger than the set of all natural numbers. There’s an infinity bigger than infinity.

I think that this is what the math book in question meant to say: that there’s no possible mapping between the natural numbers and the real numbers. But it’s not what they did say: what they said is that there’s no possible map between the integers and the fractions. And that is not true.

Here’s how you generate the mapping between the integers and the rational numbers (fractions) between 0 and 1, written as a small Python program:

 from itertools import count
 from math import gcd

 i = 0
 print("%d => 0" % i)  # map 0 itself first
 i += 1
 for denom in count(1):  # denom = 1, 2, 3, ...
     for num in range(1, denom + 1):
         if gcd(num, denom) == 1:  # skip non-reduced duplicates like 2/4
             print("%d => %d/%d" % (i, num, denom))
             i += 1

It produces a mapping (0 => 0, 1 => 1/1, 2 => 1/2, 3 => 1/3, 4 => 2/3, 5 => 1/4, 6 => 3/4, …). It’ll never finish running – but you can easily show that for any possible fraction, there’ll be exactly one integer that maps to it.

That means that the set of all rational numbers between 0 and 1 is the same size as the set of all natural numbers. There’s a similar way of producing a mapping between the set of all fractions and the set of natural numbers – so the set of all fractions is the same size as the set of natural numbers. But both are smaller than the set of all real numbers, because there are many, many real numbers that cannot be written as fractions. (For example, \pi. Or the square root of 2. Or e. )

This is terrible on multiple levels.

  1. It’s a math textbook written and reviewed by people who don’t understand the basic math that they’re writing about.
  2. It’s teaching children something incorrect about something that’s already likely to confuse them.
  3. It’s teaching something incorrect about a topic that doesn’t need to be covered at all in the textbook. This is an Algebra 2 textbook. You don’t need to cover Cantor’s infinite cardinalities in Algebra 2. It’s not wrong to cover it – but it’s not necessary. If the authors didn’t understand cardinality, they could have just left it out.
  4. It’s obviously wrong. Plenty of bright students are going to come up with the mapping between the fractions and the natural numbers. They’re going to come away believing that they’ve disproved Cantor.

I’m sure some people will argue with that last point. My evidence in support of it? I came up with a proof of that in high school. Fortunately, my math teacher was able to explain why it was wrong. (Thanks Mrs. Stevens!) Since I write this blog, people assume I’m a mathematician. I’m not. I’m just an engineer who really loves math. I was a good math student, but far from a great one. I’d guess that every medium-sized high school has at least one math student every year who’s better than I was.

The proof I came up with is absolutely trivial, and I’d expect tons of bright math-geek kids to come up with something like it. Here goes:

  1. The set of fractions is a strict subset of the set of ordered pairs of natural numbers.
  2. So: if there’s a one-to-one mapping between the set of ordered pairs and the naturals, then there must be a one-to-one mapping between the fractions and the naturals.
  3. On a two-d grid, put the natural numbers across, and then down.
  4. Zigzag diagonally through the grid, forming pairs of the horizontal position and the vertical position: (0,0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), (3, 0), (2, 1), (1, 2), (0, 3).
  5. This will produce every possible ordered pair of natural numbers. For each pair, map its position in the list to the pair. So (0, 0) is 0, (2, 0) is 3, etc.

As a proof, it’s sloppy – but it’s correct. And plenty of high school students will come up with something like it. How many of them will walk away believing that they just disproved Cantor?
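For the curious, here’s that zigzag enumeration as a small Python program – in the same spirit as the mapping program above, an illustration rather than anything from the textbook:

 from itertools import count, islice

 def pairs():
     """Enumerate all ordered pairs of naturals along the diagonals:
     (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ..."""
     for total in count(0):  # each diagonal holds the pairs with x + y == total
         for y in range(total + 1):
             yield (total - y, y)

 # Position in the list <-> pair: a one-to-one map between N and N x N.
 for n, p in enumerate(islice(pairs(), 10)):
     print(n, "=>", p)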

Arabic numerals have nothing to do with angle counting!

There’s an image going around that purports to explain the origin of the arabic numerals. It’s cute. It claims to show why the numerals that we use look the way that they do. Here it is:

According to this, the shapes of the numerals were derived from a notation in which each numeral contains its own number of angles. It’s a really interesting idea, and it would be really interesting if it were true. The problem is, it isn’t.

Look at the numerals in that figure. Just by looking at them, you can see quite a number of problems with them.

For a couple of obvious examples:

  • Look at the 7. The crossed seven is a recent invention, made up to compensate for the fact that in cursive roman lettering, it can be difficult to distinguish ones from sevens; the crossbar was added to clarify which was which. The serifed foot on the 7 is even worse: there’s absolutely no tradition of writing a serifed foot on the 7; it’s just a font decoration. The 7’s serifed foot is no more a part of the numeral than the serifed foot on a lowercase letter l is a basic feature of that letter.
  • Worse is the curlicue on the 9: the only time that curly figures like that appear in writing is in calligraphic documents, where they’re an aesthetic flourish. That curly thing has never been a part of the numeral 9. But if you want to claim this angle-counting nonsense, you’ve got to add angles to the 9 somewhere. It’s not enough to just add a serifed foot – that won’t get you enough angles. So you need the curlicue, no matter how obviously ridiculous it is.

You don’t even need to notice details like that to see that this is rubbish. We actually know quite a lot about the history of Arabic numeral notation. We know what early forms of the Arabic numerals looked like. For example, this Wikipedia image shows an early variant, properly called the Bakhshali numerals, from a manuscript dated somewhere between the 3rd and 7th centuries AD:

[figure: Bakhshali numerals]

It’s quite fascinating to study the origins of our numeric notation. It’s true that we – “we” meaning the scholarly tradition that grew out of Europe – learned the basic numeric notation from the Arabs. But they didn’t invent it – it predates them by a fair bit. The notation originally came from India, where Hindu scholars, who wrote in an alphabet derived from Sanskrit, used a Sanskrit-based numeric notation called the Brahmi numerals (which, in turn, were derived from an earlier notation, the Kharosthi numerals, which weren’t used quite like the modern numbers – so the Brahmi numerals are considered the earliest “true” ancestors of the Arabic numerals). That notation moved westward, and was adopted by the Persians, who spread it to the Arabs. As the Arabs adopted it, they changed the shapes to work with their calligraphic style, producing the Bakhshali form.

In the Brahmi numerals, the numbers 1 through 4 are written in counting-based forms: one is written as one horizontal line; 2 as two lines; 3 as three lines. Four is written as a pair of crossed lines, giving four quadrants. 5 through 9 are written using Sanskrit characters: their “original” forms had nothing to do with counting angles or lines.

The real history of numerical notation is really interesting. It crosses through many different cultures, and the notation was reshaped each time it migrated, keeping the same essential semantics, but making dramatic changes in the written forms of individual numerals. It’s so much more interesting – and the actual numeral forms are so much more beautiful – than you’d ever suspect from the nonsense of angle-counting.