Category Archives: Bad Math

Understanding Global Warming Scale Issues

Aside from the endless stream of Cantor cranks, the next biggest category of emails I get is from climate “skeptics”. They all ask pretty much the same question. For example, here’s one I received today:

My personal analysis, and natural sceptisism tells me, that there are something fundamentally wrong with the entire warming theory when it comes to the CO2.

If a gas in the atmosphere increase from 0.03 to 0.04… that just cant be a significant parameter, can it?

I generally ignore it, because… let’s face it, the majority of people who ask this question aren’t looking for a real answer. But this one was much more polite and reasonable than most, so I decided to answer it. And once I went to the trouble of writing a response, I figured that I might as well turn it into a post as well.

The current figures – you can find them in a variety of places, from Wikipedia to the US NOAA – are that atmospheric CO2 has risen from around 280 parts per million in 1850 to 400 parts per million today.

Why can’t that be a significant parameter?

There are a couple of things you need to grasp to understand global warming: how much energy carbon dioxide can trap in the atmosphere, and how much carbon dioxide there actually is in the atmosphere. Put those two facts together, and you realize that we’re talking about a massive quantity of carbon dioxide trapping a massive amount of energy.

The problem is scale. Humans notoriously have a really hard time wrapping our heads around scale. When numbers get big enough, we aren’t able to really grasp them intuitively and understand what they mean. The difference between two numbers like 280 and 400ppm seems tiny; we can’t really grasp how it could be significant, because we aren’t good at taking that small difference, and realizing just how ridiculously large it actually is.

If you actually look at the math behind the greenhouse effect, you find that some gases are very effective at trapping heat. The earth is only habitable because of the carbon dioxide in the atmosphere – without it, earth would be too cold for life. Small amounts of it provide enough heat-trapping effect to move us from a frozen rock to the world we have. Increasing the quantity of it increases the amount of heat it can trap.

Let’s think about what the difference between 280 and 400 parts per million actually means at the scale of earth’s atmosphere. You hear a number like 400ppm – that’s four one-hundredths of one percent – and it seems like nothing, right? How could that have such a massive effect?!

But like so many other mathematical things, you need to put that number into the appropriate scale. The earth’s atmosphere masses roughly 5 times 10^21 grams. 400ppm of that scales to 2 times 10^18 grams of carbon dioxide. That’s 2 quadrillion kilograms – about 2 trillion metric tons – of CO2. Compared to the 280ppm of 1850, that’s roughly 6 times 10^17 grams, or about 600 trillion kilograms, of carbon dioxide added to the atmosphere. That’s a really, really massive quantity of carbon dioxide! Scaled to the number of particles, that’s something around 10^40 (plus or minus a couple of powers of ten – at this scale, who cares?) additional molecules of carbon dioxide in the atmosphere. It’s a very small percentage, but it’s a huge quantity.
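If you want to sanity-check those numbers yourself, here’s a rough back-of-the-envelope sketch of my own, using round figures and the same crude ppm-as-mass-fraction simplification as above, so treat it as order-of-magnitude only:

```python
atmosphere_grams = 5e21                      # total mass of the atmosphere, in grams
ppm_now, ppm_1850 = 400e-6, 280e-6

co2_now_g = atmosphere_grams * ppm_now               # ~2e18 g of CO2 today
added_g = atmosphere_grams * (ppm_now - ppm_1850)    # ~6e17 g added since 1850

AVOGADRO = 6.022e23        # molecules per mole
CO2_MOLAR_MASS_G = 44.0    # grams per mole of CO2
added_molecules = added_g / CO2_MOLAR_MASS_G * AVOGADRO

print(f"CO2 in the atmosphere today: {co2_now_g:.1e} g ({co2_now_g / 1e3:.1e} kg)")
print(f"CO2 added since 1850:        {added_g:.1e} g")
print(f"Added molecules:             {added_molecules:.0e}")   # ~8e39, i.e. around 10^40
```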

When you talk about trapping heat, you also have to remember that there are scaling issues there, too. We’re not talking about adding 100 degrees to the earth’s temperature. It’s a massive increase in the quantity of energy in the atmosphere, but because the atmosphere is so large, it doesn’t look like much: just a couple of degrees. That can be very deceptive – a couple of degrees celsius doesn’t sound like a huge temperature difference. But if you think of the quantity of extra energy that has to be absorbed by the atmosphere to produce that difference, it’s pretty damned huge. A figure like 2 degrees celsius doesn’t necessarily look like much – but in terms of the quantity of additional energy being trapped by the atmosphere, it’s very significant.
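To put a rough number on that, here’s another back-of-the-envelope sketch (again mine, with round figures – and it only counts warming the air itself, which is just a fraction of where the extra heat actually ends up):

```python
ATMOSPHERE_MASS_KG = 5e18     # ~5e21 grams
SPECIFIC_HEAT_AIR = 1005      # joules per kilogram per degree C, at constant pressure
delta_t = 2                   # warming, in degrees C

energy_joules = ATMOSPHERE_MASS_KG * SPECIFIC_HEAT_AIR * delta_t
print(f"{energy_joules:.1e} joules")   # ~1e22 J

# For a sense of scale: total worldwide human energy consumption is very roughly
# 6e20 joules per year, so this is on the order of 15+ years of everything
# humanity uses -- just to nudge the air a couple of degrees.
```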

Calculating just how much energy a molecule of CO2 can absorb is a lot trickier than calculating the mass-change of the quantity of CO2 in the atmosphere. It’s a complicated phenomenon which involves a lot of different factors – how much infrared is absorbed by a molecule, how quickly that energy gets distributed to the other molecules that it interacts with… I’m not going to go into detail on that. There are a ton of places, like here, where you can look up a detailed explanation. But when you consider the scale issues, it should be clear that a small percentage-wise increase in the quantity of CO2 means a pretty damned massive increase in the atmosphere’s capacity to absorb energy.

Polls and Sampling Errors in the Presidential Debate Results

My biggest pet peeve is press coverage of statistics. As someone who is mathematically literate, I’m constantly infuriated by it. Basic statistics isn’t that hard, but people can’t be bothered to actually learn a tiny bit in order to understand the meaning of the things they’re covering.

My twitter feed has been exploding with a particularly egregious example of this. After Monday night’s presidential debate, there’s been a ton of polling about who “won” the debate. One conservative radio host named Bill Mitchell has been on a rampage about those polls. Here’s a sample of his tweets:

Let’s start with a quick refresher about statistics, why we use them, and how they work.

Statistical analysis has a very simple point. We’re interested in understanding the properties of a large population of things. For whatever reason, we can’t measure the properties of every object in that population.

The exact reason can vary. In political polling, we can’t ask every single person in the country who they’re going to vote for. (Even if we could, we simply don’t know who’s actually going to show up and vote!) For a very different example, my first exposure to statistics was through my father, who worked in semiconductor manufacturing. They’d produce a run of 10,000 chips for use in satellites. They needed to know when, on average, a chip would fail from exposure to radiation. If they measured that in every chip, they’d end up with nothing to sell.

Anyway: you can’t measure every element of the population, but you still want to take measurements. So what you do is randomly select a collection of representative elements from the population, and you measure those. Then you can say that with a certain probability, the result of analyzing that representative subset will match the result that you’d get if you measured the entire population.

How close can you get? If you’ve really selected a random sample of the population, then the answer depends on the size of the sample. We measure that using something called the “margin of error”. “Margin of error” is actually a terrible name for it, and that’s the root cause of one of the most common problems in reporting about statistics. The margin of error is a probability measurement that says “there is an N% probability that the value for the full population lies within the margin of error of the measured value of the sample.”

Right away, there’s a huge problem with that. What is that N doing in there? The margin of error measures the probability that the full population value is within a confidence interval around the measured sample value. If you don’t say what the confidence interval is, the margin of error is worthless. Most of the time – but not all of the time – we’re talking about a 95% confidence interval.

But there are several subtler issues with the margin of error, most of them rooted in its name.

  1. The “true” value for the full population is not guaranteed to be within the margin of error of the sampled value. It’s just a probability. There is no hard bound on the size of the error: just a high probability of it being within the margin.
  2. The margin of error only includes errors due to sample size. It does not incorporate any other factor – and there are many! – that may have affected the result.
  3. The margin of error is deeply dependent on the way that the underlying sample was taken. It’s only meaningful for a random sample. That randomness is critically important: all of sampled statistics is built around the idea that you’ve got a randomly selected subset of your target population.

Let’s get back to our friend the radio host, and his first tweet, because he’s doing a great job of illustrating some of these errors.

The quality of a sampled statistic is entirely dependent on how well the sample matches the population. The sample is critical. It doesn’t matter how big the sample size is if it’s not random. A non-random sample cannot be treated as a representative sample.

So: an internet poll, where a group of people has to deliberately choose to exert the effort to participate, cannot be a valid sample for statistical purposes. It’s not random.

It’s true that the set of people who show up to vote isn’t a random sample. But that’s fine: the purpose of an election isn’t to try to divine what the full population thinks. It’s to count what the people who chose to vote think. It’s deliberately measuring a full population: the population of people who chose to vote.

But if you’re trying to statistically measure something about the population of people who will go and vote, you need to take a randomly selected sample of people who will go to vote. The set of voters is the full population; you need to select a representative sample of that population.

Internet polls do not do that. At best, they measure a different population of people. (At worst, with ballot stuffing, they measure absolutely nothing, but we’ll give them this much benefit of the doubt.) So you can’t take much of anything about the sample population and use it to reason about the full population.

And you can’t say anything about the margin of error, either. Because the margin of error is only meaningful for a representative sample. You cannot compute a meaningful margin of error for a non-representative sample, because there is no way of knowing how that sampled population compares to the true full target population.

And that brings us to the second tweet. A properly sampled random population of 500 people can produce a high quality result, with a margin of error of roughly 4.5% at a 95% confidence interval. (I’m doing a back-of-the-envelope calculation here, so that’s not precise.) That means that if the population were randomly sampled, then in 19 out of 20 polls of that size, the full population value would be within +/- 4.5% of the value measured by the poll. For a non-randomly selected sample of 10 million people, the margin of error cannot be measured, because it’s meaningless. The random sample of 500 people gives us a reasonable estimate based on data; the non-random sample of 10 million people tells us nothing.
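If you want to see where that back-of-the-envelope number comes from, here’s a minimal sketch of mine, assuming a simple random sample, a proportion near 50% (where the margin is largest), and a 95% confidence level:

```python
from math import sqrt

def margin_of_error(sample_size: int, proportion: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a proportion estimated from a simple random sample."""
    return z * sqrt(proportion * (1 - proportion) / sample_size)

print(f"n=500:        +/- {margin_of_error(500):.1%}")          # about +/- 4.4%
print(f"n=1000:       +/- {margin_of_error(1000):.1%}")         # about +/- 3.1%
print(f"n=10,000,000: +/- {margin_of_error(10_000_000):.3%}")   # tiny -- but only if the sample is random
```

Notice that the formula only knows about the sample size: it has no way of noticing that the 10-million-person sample was self-selected, which is exactly why quoting a margin of error for it is meaningless.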

And with that, on to the third tweet!

In a poll like this, the margin of error only tells us one thing: what’s the probability that the sampled population will respond to the poll in the same way that the full population would?

There are many, many things that can affect a poll beyond the sample size. Even with a truly random and representative sample, there are many things that can affect the outcome. For a couple of examples:

How, exactly, is the question phrased? For example, if you ask people “Should police shoot first and ask questions later?”, you’ll get a very different answer from “Should police shoot dangerous criminal suspects if they feel threatened?” – even though both of those questions are trying to measure very similar things. The phrasing of the questions dramatically affects the outcome.

What context is the question asked in? Is this the only question asked? Or is it asked after some other set of questions? The preceding questions can bias the answers. If you ask a bunch of questions about how each candidate did with respect to particular issues before you ask who won, those preceding questions will bias the answers.

When you’re looking at a collection of polls that asked different questions in different ways, you expect a significant variation between them. That doesn’t mean that there’s anything wrong with any of them. They can all be correct even though their results vary by much more than their margins of error, because the margin of error has nothing to do with how you compare their results: they used different samples, and measured different things.

The problems with the reporting are the same things I mentioned up above. The press treats the margin of error as an absolute bound on the error in the computed sample statistics (which it isn’t); and the press pretends that all of the polls are measuring exactly the same thing, when they’re actually measuring different (but similar) things. They don’t tell us what the polls are really measuring; they don’t tell us what the sampling methodology was; and they don’t tell us the confidence interval.

Which leads to exactly the kind of errors that Mr. Mitchell made.

And one bonus. Mr. Mitchell repeatedly rants about how many polls show a “bias” by “over-sampling” democratic party supporters. This is a classic mistake by people who don’t understand statistics. As I keep repeating, for a sample to be meaningful, it must be random. You can report on all sorts of measurements of the sample, but you cannot change it.

If you’re randomly selecting phone numbers and polling the respondents, you cannot screen the respondents based on their self-reported party affiliation. If you do, you are biasing your sample. Mr. Mitchell may not like the results, but that doesn’t make them invalid: people report what they report.

In the last presidential election, we saw exactly this notion in the idea of “unskewing” polls, where a group of conservative folks decided that the polls were all biased in favor of the democrats for exactly the reasons cited by Mr. Mitchell. They recomputed the poll results based on shifting the samples to represent what they believed to be the “correct” breakdown of party affiliation in the voting population. The results? The actual election results closely tracked the supposedly “skewed” polls, and the unskewers came off looking like idiots.

We also saw exactly this phenomenon going on in the Republican primaries this year. Randomly sampled polls consistently showed Donald Trump crushing his opponents. But the political press could not believe that Donald Trump would actually win – and so they kept finding ways to claim that the poll samples were off: for example, that the polls relied on land-lines and so oversampled older people, and that if you corrected for that sampling error, Trump wasn’t actually winning. Nope: the randomly sampled polls were correct, and Donald Trump is the Republican nominee.

If you want to use statistics, you must work with random samples. If you don’t, you’re going to screw up the results, and make yourself look stupid.

Why we need formality in mathematics

The comment thread from my last Cantor crankery post has continued in a way that demonstrates a common issue when dealing with bad math, so I thought it was worth taking the discussion and promoting it to a proper top-level post.

The defender of the Cantor crankery tried to show what he alleged to be the problem with Cantor, by presenting a simple proof:

If we have a unit line, then this line will have an infinite number of points in it. Some of these points will be an irrational distance away from the origin and some will be a rational distance away from the origin.

Premise 1.

To have more irrational points on this line than rational points (plus 1), it is necessary to have at least two irrational points on the line so that there exists no rational point between them.

Premise 2.

It is not possible to have two irrational points on a line so that no rational point exists between them.

Conclusion.

It is not possible to have more irrational points on a line than rational points (plus 1).

This contradicts Cantor’s conclusion, so Cantor must have made a mistake in his reasoning.

(I’ve done a bit of formatting of this to make it look cleaner, but I have not changed any of the content.)

This is not a valid proof. It looks nice on the surface – it intuitively feels right. But it’s not. Why?

Because math isn’t intuition. Math is a formal system. When we’re talking about Cantor’s diagonalization, we’re working in the formal system of set theory. In most modern math, we’re specifically working in the formal system of Zermelo-Fraenkel (ZF) set theory. And that “proof” relies on two premises, which are not correct in ZF set theory. I pointed this out in verbose detail, to which the commenter responded:

I can understand your desire for a proof to be in ZFC, Peano arithmetic and FOPL, it is a good methodology but not the only one, and I am certain that it is not a perfect one. You are not doing yourself any favors if you let any methodology trump understanding. For me it is far more important to understand a proof, than to just know it “works” under some methodology that simply manipulates symbols.

This is the point I really wanted to get to here. It’s a form of complaint that I’ve seen over and over again – not just in the Cantor crankery, but in nearly all of the math posts.

There’s a common belief among crackpots of various sorts that scientists and mathematicians use symbols and formalisms just because we like them, or because we want to obscure things and make simple things seem complicated, so that we’ll look smart.

That’s just not the case. We use formalisms and notation because they are absolutely essential. We can’t do math without the formalisms; we could do it without the notation, but the notation makes things clearer than natural language prose.

The reason for all of that is because we want to be correct.

If we’re working with a definition that contains any vagueness – even the most subtle unintentional kind (or, actually, especially the most subtle unintentional kind!) – then we can easily produce nonsense. There’s a simple “proof” that we’ve discussed before that shows that 0 is equal to 1. It looks correct when you read it. But it contains a subtle error. If we weren’t being careful and formal, that kind of mistake could easily creep in – and once you allow one, single, innocuous looking error into a proof, the entire proof falls apart. The reason for all the formalism and all the notation is to give us a way to unambiguously, precisely state exactly what we mean. The reason that we insist on detailed logical step-by-step proofs is because that’s the only way to make sure that we aren’t introducing errors.

We can’t rely on intuition, because our intuition is frequently not correct. That’s why we use logic. We can’t rely on informal statements, because informal statements lack precision: they can mean many different things, some of which are true, and some of which are not.

In the case of Cantor’s diagonalization, when we’re being carefully precise, we’re not talking about the size of things: we’re talking about the cardinality of sets. That’s an important distinction, because “size” can mean many different things. Cardinality means one, very precise thing.

Similarly, we’re talking about the cardinality of the set of real numbers compared to the cardinality of the set of natural numbers. When I say that, I’m not just hand-waving the real numbers: the real numbers means something very specific: it’s the unique complete totally ordered field (R, +, *, <) up to isomorphism. To understand that, we’re implicitly referencing the formal definition of a field (with all of its sub-definitions) and the formal definitions of the addition, multiplication, and ordering operations.

I’m not just saying that to be pedantic. I’m saying that because we need to know exactly what we’re talking about. It’s very easy to put together an informal definition of the real numbers that’s different from the proper mathematical set of real numbers. For example, you can define a number system consisting of the set of all numbers whose digits can be generated by a finite, non-terminating computer program. Intuitively, it might seem like that’s just another way of describing the real numbers – but it describes a very different set: there are only countably many finite programs, so that set (the computable numbers) is countable, while the set of real numbers is not.
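To make that alternative definition concrete, here’s a small sketch of my own (not part of the original argument) of what “a number generated by a finite, non-terminating program” looks like – a few lines of code that stream the decimal digits of the square root of 2 forever:

```python
from math import isqrt
from itertools import islice

def sqrt2_digits():
    """Yield the decimal digits of sqrt(2) = 1.41421356..., one at a time, without end."""
    n, emitted = 1, 0
    while True:
        digits = str(isqrt(2 * 10 ** (2 * n)))  # the first n+1 digits of sqrt(2)
        yield from digits[emitted:]             # emit only the digits we haven't produced yet
        emitted = len(digits)
        n += 1

print("".join(islice(sqrt2_digits(), 10)))  # 1414213562
```

Every number you can produce this way corresponds to some finite program text, and there are only countably many of those – which is exactly why this intuitively plausible “definition of the reals” describes a much smaller set.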

Beyond just definitions, we insist on using formal symbolic logic for a similar reason. If we can reduce the argument to symbolic reasoning, then we’ve abstracted away anything that could bias or deceive us. The symbolic logic makes every statement absolutely precise, and every reasoning step pure, precise, and unbiased.

So what’s wrong with the “proof” above? It’s got two premises. Let’s just look at the first one: “To have more irrational points on this line than rational points (plus 1), it is necessary to have at least two irrational points on the line so that there exists no rational point between them.”

If this statement is true, then Cantor’s proof must be wrong. But is this statement true? The commenter’s argument is that it’s obviously intuitively true.

If we weren’t doing math, that might be OK. But this is math. We can’t just rely on our intuition, because we know that our intuition is often wrong. So we need to ask: can you prove that that’s true?

And how do you prove something like that? Well, you start with the basic rules of your proof system. In a discussion of a set theory proof, that means ZF set theory and first order predicate logic. Then you add in the definitions you need to talk about the objects you’re interested in: so Peano arithmetic, rational numbers, real number theory, and the definition of irrational numbers in real number theory. That gives you a formal system that you can use to talk about the sets of real numbers, rational numbers, and natural numbers.

The problem for our commenter is that you can’t prove that premise using ZF set theory, FOPL, and real number theory. It’s not true. It’s based on a faulty understanding of the behavior of infinite sets. It’s taking an assumption that comes from our intuition, which seems reasonable, but which isn’t actually true within the formal system of mathematics.

In particular, it’s trying to say that in set theory, the cardinality of the set of real numbers is equal to the cardinality of the set of natural numbers – but doing so by saying “Ah, why are you worrying about that set theory nonsense? Sure, it would be nice to prove this statement about set theory using set theory, but you’re just being picky by insisting on that.”

Once you really see it in these terms, it’s an absurd statement. It’s equivalent to something as ridiculous as saying that you don’t need to conjugate verbs when you speak English, because in Chinese, the spoken words don’t change for conjugation.

Cantor Crankery is Boring

Sometimes, I think that I’m being punished.

I’ve written about Cantor crankery so many times. In fact, it’s one of the largest categories in this blog’s index! I’m pretty sure that I’ve covered pretty much every anti-Cantor argument out there. And yet, not a week goes by when another idiot doesn’t pester me with their “new” refutation of Cantor. The “new” argument is always, without exception, a variation on one of the same old boring ones.

But I haven’t written any Cantor stuff in quite a while, and yet another one landed in my mailbox this morning. So, what the heck. Let’s go down the rabbit-hole once again.

We’ll start with a quick refresher.

The argument that the cranks hate is called Cantor’s diagonalization. Cantor’s diagonalization is an argument that, according to the axioms of set theory, the cardinality (size) of the set of real numbers is strictly larger than the cardinality of the set of natural numbers.

The argument is based on the set theoretic definition of cardinality. In set theory, two sets are the same size if and only if there exists a one-to-one mapping between the two sets. If you try to create a mapping between set A and set B, and in every possible mapping, every A is mapped onto a unique B, but there are leftover Bs that no element of A maps to, then the cardinality of B is larger than the cardinality of A.

When you’re talking about finite sets, this is really easy to follow. If A is the set {1, 2, 3}, and B is the set {4, 5, 6, 7}, then it’s pretty obvious that there’s no one to one mapping from A to B: there are more elements in B than there are in A. You can easily show this by enumerating every possible mapping of elements of A onto elements of B, and then showing that in every one, there’s an element of B that isn’t mapped to by an element of A.

With infinite sets, it gets complicated. Intuitively, you’d think that the set of even natural numbers is smaller than the set of all natural numbers: after all, the set of evens is a strict subset of the naturals. But your intuition is wrong: there’s a very easy one to one mapping from the naturals to the evens: {n → 2n }. So the set of even natural numbers is the same size as the set of all natural numbers.
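To make that pairing concrete, here’s a trivial sketch of it (mine, purely for illustration):

```python
# Pair each natural number n with the even number 2n. Every natural gets exactly
# one even number, and every even number 2n is hit by exactly one natural (n).
for n in range(8):
    print(n, "->", 2 * n)
```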

To show that one infinite set has a larger cardinality than another infinite set, you need to do something slightly tricky. You need to show that no matter what mapping you choose between the two sets, some elements of the larger one will be left out.

In the classic Cantor argument, what he does is show you how, given any purported mapping between the natural numbers and the real numbers, to find a real number which is not included in the mapping. So no matter what mapping you choose, Cantor will show you how to find real numbers that aren’t in the mapping. That means that every possible mapping between the naturals and the reals will omit members of the reals – which means that the set of real numbers has a larger cardinality than the set of naturals.
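Here’s a minimal sketch of that construction (mine, not Cantor’s notation): given a toy list of reals in (0, 1), written out by their decimal digits, build a new real whose k-th digit differs from the k-th digit of the k-th listed number. (Using only the digits 5 and 6 avoids the 0.4999… = 0.5 ambiguity.)

```python
def diagonal_counterexample(listed_digits):
    """Given a finite chunk of a purported enumeration of reals (as digit strings),
    return a real that differs from the k-th listed real in its k-th digit."""
    new_digits = []
    for k, digits in enumerate(listed_digits):
        new_digits.append('5' if digits[k] != '5' else '6')
    return "0." + "".join(new_digits)

listed = ["141592653", "500000000", "333333333", "718281828", "123456789",
          "999999999", "414213562", "161803398", "577215664"]
print(diagonal_counterexample(listed))   # differs from the k-th entry in digit k,
                                         # so it can't appear anywhere in this list
```

The code only handles a finite chunk, of course, but the same rule defines every digit of a genuine real number for any infinite list – and that number can’t be anywhere in the list, because it disagrees with the k-th entry at position k.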

Cantor’s argument has stood since it was first presented in 1891, despite the best efforts of people to refute it. It is an uncomfortable argument. It violates our intuitions in a deep way. Infinity is infinity. There’s nothing bigger than infinity. What does it even mean to be bigger than infinity? That’s a non sequitur, isn’t it?

What it means to be bigger than infinity is exactly what I described above. It means that if you have two infinitely large sets of objects, and there’s no possible way to map from one to the other without missing elements, then one is bigger than the other.

There are legitimate ways to dispute Cantor. The simplest one is to reject set theory. The diagonalization is an implication of the basic axioms of set theory. If you reject set theory as a basis, and start from some other foundational axioms, you can construct a version of mathematics where Cantor’s proof doesn’t work. But if you do that, you lose a lot of other things.

You can also argue that “cardinality” is a bad abstraction. That is, that the definition of cardinality as size is meaningless. Again, you lose a lot of other things.

If you accept the axioms of set theory, and you don’t dispute the definition of cardinality, then you’re pretty much stuck.

Ok, background out of the way. Let’s look at today’s crackpot. (I’ve reformatted his text somewhat; he sent this to me as plain-text email, which looks awful in my wordpress display theme, so I’ve rendered it into formatted HTML. Any errors introduced are, of course, my fault, and I’ll correct them if and when they’re pointed out to me.)

We have been told that it is not possible to put the natural numbers into a one to one with the real numbers. Well, this is not true. And the argument, to show this, is so simple that I am absolutely surprised that this argument does not appear on the internet.

We accept that the set of real numbers is unlistable, so to place them into a one to one with the natural numbers we will need to make the natural numbers unlistable as well. We can do this by mirroring them to the real numbers.

Given any real number (between 0 and 1) it is possible to extract a natural number of any length that you want from that real number.

Ex: From π-3 we can extract the natural numbers 1, 14, 141, 1415, 14159 etc…

We can form a set that associates the extracted number with the real number that it was extracted from.

Ex: 1 → 0.14159265…

Then we can take another real number (in any arbitrary order) and extract a natural number from it that is not in our set.

Ex: 1 → 0.14159266… since 1 is already in our set we must extract the next natural number 14.

Since 14 is not in our set we can add the pair 14 → 0.14159266… to our set.

We can do the same thing with some other real number 0.14159267… since 1 and 14 is already in our set we will need to extract a 3 digit natural number, 141, and place it in our set. And so on.

So our set would look something like this…

A) 1 → 0.14159265…
B) 14 → 0.14159266…
C) 141 → 0.14159267…
D) 1410 → 0.141
E) 14101 → 0.141013456789…
F) 5 → 0.567895…
G) 55 → 0.5567891…
H) 555 → 0.555067891…

Since the real numbers are infinite in length (some terminate in an infinite string of zero’s) then we can always extract a natural number that is not in the set of pairs since all the natural numbers in the set of pairs are finite in length. Even if we mutate the diagonal of the real numbers, we will get a real number not on the list of real numbers, but we can still find a natural number, that is not on the list as well, to correspond with that real number.

Therefore it is not possible for the set of real numbers to have a larger cardinality than the set of natural numbers!

This is a somewhat clever variation on a standard argument.

Over and over and over again, we see arguments based on finite prefixes of real numbers. The problem with them is that they’re based on finite prefixes. There is an obvious correspondence between the natural numbers and the set of all finite prefixes of real numbers – but that still doesn’t mean that there are no real numbers missing from the list.

In this argument, every finite prefix of π corresponds to a natural number. But π itself does not. In fact, every real number that actually requires an infinite number of digits has no corresponding natural number.

This piece of it is, essentially, the same thing as John Gabriel’s crankery.

But there’s a subtler and deeper problem. This “refutation” of Cantor contains the conclusion as an implicit premise. That is, it’s actually using the assumption that there’s a one-to-one mapping between the naturals and the reals to prove the conclusion that there’s a one-to-one mapping between the naturals and the reals.

If you look at his procedure for generating the mapping, it requires an enumeration of the real numbers. You need to take successive reals, and for each one in the sequence, you produce a mapping from a natural number to that real. If you can’t enumerate the real numbers as a list, the procedure doesn’t work.

If you can produce a sequence of the real numbers, then you don’t need this procedure: you’ve already got your mapping. 0 to the first real, 1 to the second real, 2 to the third real, 3 to the fourth real, and so on.

So, once again: sorry Charlie: your argument doesn’t work. There’s no Fields medal for you today.

One final note. Like so many other bits of bad math, this is a classic example of what happens when you try to do math with prose. There’s a reason that mathematicians have developed formal notations, formal language, detailed logical inference, and meticulous citation. It’s because in prose, it’s easy to be sloppy. You can accidentally introduce an implicit premise without meaning to. If you need to carefully state every premise, and cite evidence of its truth, it’s a whole lot harder to make this kind of mistake.

That’s the real problem with this whole argument. It’s built on the fact that the premise “you can enumerate the real numbers” is built in. If you wrote it in careful formal mathematics, you wouldn’t be able to get away with that.

UD Creationists and Proof

A reader sent me a link to a comment on one of my least favorite major creationist websites, Uncommon Descent (No link, I refuse to link to UD). It’s dumb enough that it really deserves a good mocking.

Barry Arrington, June 10, 2016 at 2:45 pm

daveS:
“That 2 + 3 = 5 is true by definition can be verified in a purely mechanical, absolutely certain way.”

This may be counter intuitive to you dave, but your statement is false. There is no way to verify that statement. It is either accepted as self-evidently true, or not. Think about it. What more basic steps of reasoning would you employ to verify the equation? That’s right; there are none. You can say the same thing in different ways such as || + ||| = ||||| or “a set with a cardinality of two added to a set with cardinality of three results in a set with a cardinality of five.” But they all amount to the same statement.

That is another feature of a self-evident truth. It does not depend upon (indeed cannot be) “verified” (as you say) by a process of “precept upon precept” reasoning. As WJM has been trying to tell you, a self-evident truth is, by definition, a truth that is accepted because rejection would be upon pain of patent absurdity.

2+3=5 cannot be verified. It is accepted as self-evidently true because any denial would come at the price of affirming an absurdity.

It’s absolutely possible to verify the statement “2 + 3 = 5”. It’s also absolutely possible to prove that statement. In fact, both of those are more than possible: they’re downright easy, provided you accept the standard definitions of arithmetic. And frankly, only a total idiot who has absolutely no concept of what verification or proof mean would ever claim otherwise.

We’ll start with verification. What does that mean?

Verification is the process of testing a hypothesis to determine if it correctly predicts the outcome. Here’s how you verify that 2+3=5:

  1. Get two pennies, and put them in a pile.
  2. Get three pennies, and put them in a pile.
  3. Put the pile of 2 pennies on top of the pile of 3 pennies.
  4. Count the resulting pile of pennies.
  5. If there are 5 pennies, then you have verified that 2+3=5.

Verification isn’t perfect. It’s the result of a single test that confirms what you expect. But verification is repeatable: you can repeat that experiment as many times as you want, and you’ll always get the same result: the resulting pile will always have 5 pennies.

Proof is something different. Proof is a process of using a formal system to demonstrate that within that formal system, a given statement necessarily follows from a set of premises. If the formal system has a valid model, and you accept the premises, then the proof shows that the conclusion must be true.

In formal terms, a proof operates within a formal system called a logic. The logic consists of:

  1. A collection of rules (called syntax rules or formation rules) that define how to construct a valid statement in the logical language.
  2. A collection of rules (called inference rules) that define how to use true statements to determine other true statements.
  3. A collection of foundational true statements called axioms.

Note that “validity”, as mentioned in the syntax rules, is a very different thing from “truth”. Validity means that the statement has the correct structural form. A statement can be valid, and yet be completely meaningless. “The moon is made of green cheese” is a valid sentence, which can easily be rendered in valid logical form, but it’s not true. The classic example of a meaningless statement is “Colorless green ideas sleep furiously”, which is syntactically valid, but utterly meaningless.

Most of the time, when we’re talking about logic and proofs, we’re using a system of logic called first order predicate logic, and a foundational system of axioms called ZFC set theory. Built on those, we define numbers using a collection of definitions called Peano arithmetic.

In Peano arithmetic, we define the natural numbers (that is, the set of non-negative integers) by defining 0 (the cardinality of the empty set), and then defining the other natural numbers using the successor function. In this system, the number zero can be written as z; one is s(z) (the successor of z); two is the successor of one, s(s(z)); and so on.

Using Peano arithmetic, addition is defined recursively:

  1. For any number x, 0 + x = x.
  2. For any numbers x and y: s(x)+y=x+s(y).

So, using Peano arithmetic, here’s how we can prove that 2+3=5:

  1. In Peano arithmetic form, 2+3 means s(s(z)) + s(s(s(z))).
  2. From rule 2 of addition, we can infer that s(s(z)) + s(s(s(z))) is the same as s(z) + s(s(s(s(z)))). (In numerical syntax, 2+3 is the same as 1+4.)
  3. Using rule 2 of addition again, we can infer that s(z) + s(s(s(s(z)))) = z + s(s(s(s(s(z))))) (1+4=0+5); and so, by transitivity, that 2+3=0+5.
  4. Using rule 1 of addition, we can then infer that 0+5=5; and so, by transitivity, 2+3=5.
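If you like, you can even execute that proof sketch. Here’s a minimal sketch of mine (not from the original post) of Peano numerals in Python, using exactly the two addition rules above:

```python
from dataclasses import dataclass

class Nat:
    pass

@dataclass(frozen=True)
class Z(Nat):      # zero
    pass

@dataclass(frozen=True)
class S(Nat):      # successor: S(n) represents n + 1
    pred: Nat

def add(x: Nat, y: Nat) -> Nat:
    if isinstance(x, Z):
        return y                 # rule 1: 0 + y = y
    return add(x.pred, S(y))     # rule 2: s(x) + y = x + s(y)

def to_int(n: Nat) -> int:
    count = 0
    while isinstance(n, S):
        count, n = count + 1, n.pred
    return count

two = S(S(Z()))
three = S(S(S(Z())))
print(to_int(add(two, three)))   # 5
```

Each recursive call of add corresponds to one application of rule 2 (2+3 → 1+4 → 0+5), and the base case is rule 1.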

You could try to get around this by pointing out that it’s certainly not a proof from first principles. But I’d argue that if you’re talking about the statement “2+3=5” in the terms of the quoted discussion, you’re clearly already living in the world of FOPL with some axioms that support Peano arithmetic: if you weren’t, then the statement “2+3=5” wouldn’t have any meaning at all. For you to be able to argue that it’s true but unprovable, you must be living in a world in which arithmetic works, and that means that the statement is both verifiable and provable.

If you want to play games and argue about axioms, then I’ll point at the Principia Mathematica. The Principia was an ultimately misguided effort to put mathematics on a perfect, sound foundation. It started with a minimal form of predicate logic and a tiny set of inarguably true axioms, and attempted to derive all of mathematics from nothing but those absolute, unquestionable first principles. It took a ton of work, but using that foundation, you can derive all of number theory – and that’s what they did. It took them 378 pages of dense logic, but they ultimately built a rock-solid model of the natural numbers, used that to demonstrate the validity of Peano arithmetic, and then in turn used that to prove, once and for all, that 1+1=2. Using the same proof technique, you can show, from first principles, that 2+3=5.

But in a world in which we don’t play semantic games, and we accept the basic principles of Peano arithmetic as given, it’s a simple proof – one that can be found in almost any textbook on foundational mathematics or logic. But note how Arrington responds to it: by playing word games, rephrasing the question in a couple of different ways to show off how much he knows, while completely avoiding the point.

What does it take to conclude that you can’t verify or prove something like 2+3=5? Profound, utter ignorance. Anyone who’s spent any time learning math should know better.

But over at Uncommon Descent? They don’t think they need to actually learn stuff. They don’t need to refer to ungodly things like textbooks. They’ve got their God, and that’s all they need to know.

To be clear, I’m not anti-religion. I’m a religious Jew. Uncommon Descent and their rubbish don’t annoy me because I have a grudge against theism. I have a grudge against ignorance. And UD is a huge promoter of arrogant, dishonest ignorance.

Elon Musk’s Techno-Religion

A couple of people have written to me asking me to say something about Elon Musk’s simulation argument.

Unfortunately, I haven’t been able to find a verbatim quote from Musk about his argument, and I’ve seen a couple of slightly different arguments presented as being what Musk said. So I’m not really going to focus so much on Musk, but instead, just going to try to take the basic simulation argument, and talk about what’s wrong with it from a mathematical perspective.

The argument isn’t really all that new. I’ve found a couple of sources that attribute it to a paper published in 2003. That 2003 paper may have been the first academic publication, and it might have been the first to present the argument in formal terms, but I definitely remember discussing this in one of my philosophy classes in college in the late 1980s.

Here’s the argument:

  1. Any advanced technological civilization is going to develop massive computational capabilities.
  2. With immense computational capabilities, they’ll run very detailed simulations of their own ancestors in order to understand where they came from.
  3. Once it is possible to run simulations, they will run many of them to explore how different parameters will affect the simulated universe.
  4. That means that advanced technological civilizations will run many simulations of universes where their ancestors evolved.
  5. Therefore the number of simulated universes with intelligent life will be dramatically larger than the number of original non-simulated civilizations.

If you follow that reasoning, then the odds are, for any given form of intelligent life, it’s more likely that they are living in a simulation than in an actual non-simulated universe.

As an argument, it’s pretty much the kind of crap you’d expect from a bunch of half drunk college kids in a middle-of-the-night bullshit session.

Let’s look at a couple of simple problems with it.

The biggest one is a question of size and storage. The heart of this argument is the assumption that for an advanced civilization, nearly infinite computational capability will effectively become free. If you actually try to look at that assumption in detail, it’s not reasonable.

The problem is, we live in a quantum universe. That is, we live in a universe made up of discrete entities. You can take an object, and cut it in half only a finite number of times, before you get to something that can’t be cut into smaller parts. It doesn’t matter how advanced your technology gets; it’s got to be made of the basic particles – and that means that there’s a limit to how small it can get.

Again, it doesn’t matter how advanced your computers get; it’s going to take more than one particle in the real universe to simulate the behavior of a particle. To simulate a universe, you’d need a computer bigger than the universe you want to simulate. There’s really no way around that: you need to maintain state information about every particle in the universe. You need to store information about everything in the universe, and you need to also have some amount of hardware to actually do the simulation with that state information. So even with the most advanced technology that you can possibly imagine, you can’t possibly do better than one particle in the real universe containing all of the state information about a particle in the simulated universe. If you did, then you’d be guaranteeing that your simulated universe wasn’t realistic, because its particles would have less state than particles in the real universe.

This means that to simulate something in full detail, you effectively need something bigger than the thing you’re simulating.

That might sound silly: we do lots of things with tiny computers. I’ve got an iPad in my computer bag with a couple of hundred books on it: it’s much smaller than the books it simulates, right?

The “in full detail” is the catch. When my iPad simulates a book, it’s not capturing all the detail. It doesn’t simulate the individual pages, much less the individual molecules that make up those pages, the individual atoms that make up those molecules, etc.

But when you’re talking about perfectly simulating a system well enough to make it possible for an intelligent being to be self-aware, you need that kind of detail. We know, from our own observations of ourselves, that the way our cells operate is dependent on incredibly fine-grained sub-molecular interactions. To make our bodies work correctly, you need to simulate things on that level.

You can’t simulate the full detail of a universe bigger than the computer that simulates it. Because the computer is made of the same things as the universe that it’s simulating.

There’s a lot of handwaving you can do about what things you can omit from your model. But at the end of the day, you’re looking at an incredibly massive problem, and you’re stuck with the simple fact that you’re talking, at least, about building a computer that can simulate an entire planet and its environs. And you’re trying to do it in a universe just like the one you’re simulating.

But OK, we don’t actually need to simulate the whole universe, right? I mean, you’re really interested in developing a single species like yourself, so you only care about one planet.

But to make that planet behave absolutely correctly, you need to be able to correctly simulate everything observable from that planet. Its solar system, you need to simulate pretty precisely. The galaxy around it needs less precision, but it still needs a lot of work. Even getting very far away, you’ve got an awful lot of stuff to simulate, because your simulated intelligences, from their little planet, are going to be able to observe an awful lot.

To simulate a planet and its environment with enough precision to get life and intelligence and civilization, and to do it at a reasonable speed, you pretty much need to have a computer bigger than the planet. You can cheat a little bit, and maybe abstract parts of the planet; but you’ve got to do pretty good simulations of lots of stuff outside the planet.
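Just to put a rough number on “bigger than the planet”, here’s a hedged back-of-the-envelope sketch (my own figures, order-of-magnitude only, assuming an average atomic mass of around 40 atomic mass units for the Earth’s bulk):

```python
EARTH_MASS_KG = 5.97e24
MEAN_ATOMIC_MASS_KG = 40 * 1.66e-27   # assumed average mass per atom, in kilograms

atoms_in_earth = EARTH_MASS_KG / MEAN_ATOMIC_MASS_KG
print(f"{atoms_in_earth:.0e} atoms")  # roughly 10^50

# If the simulator needs at least one real particle of storage per simulated
# particle, the machine itself has to contain on the order of 10^50 particles --
# planet-scale hardware, before you even worry about running it fast.
```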

It’s possible, but it’s not particularly useful. Because you need to run that simulation. And since it’s made up of the same particles as the things it’s simulating, it can’t move faster than the universe it simulates. To get useful results, you’d need to build it to be massively parallel. And that means that your computer needs to be even larger – something like a million times bigger.

If technology were to get good enough, you could, in theory, do that. But it’s not going to be something you do a lot of: no matter how advanced technology gets, building a computer that can simulate an entire planet and its people in full detail is going to be a truly massive undertaking. You’re not going to run large numbers of simulations.

You can certainly wave your hands and say that the “real” people live in a universe without the kind of quantum limit that we live with. But if you do, you’re throwing other assumptions out the window. You’re not talking about ancestor simulation any more. And you’re pretending that you can make predictions based on our technology about the technology of people living in a universe with dramatically different properties.

This just doesn’t make any sense. It’s really just techno-religion. It’s based on the belief that technology is going to continue to develop computational capability without limit. That the fundamental structure of the universe won’t limit technology and computation. Essentially, it’s saying that technology is omnipotent. Technology is God, and just as in any other religion, its adherents believe that you can’t place any limits on it.

Rubbish.

One plus one equals Two?

My friend Dr24hours sent me a link via twitter to a new piece of mathematical crackpottery. It’s the sort of thing that’s so trivial that I might just ignore it – but it’s also a good example of something that someone commented on in my previous post.

This comes from, of all places, Rolling Stone magazine, in a puff-piece about an actor named Terrence Howard. When he’s not acting, Mr. Howard believes that he’s a mathematical genius who’s caught on to the greatest mathematical error of all time. According to Mr. Howard, the product of one times one is not one, it’s two.

After high school, he attended Pratt Institute in Brooklyn, studying chemical engineering, until he got into an argument with a professor about what one times one equals. “How can it equal one?” he said. “If one times one equals one that means that two is of no value because one times itself has no effect. One times one equals two because the square root of four is two, so what’s the square root of two? Should be one, but we’re told it’s two, and that cannot be.” This did not go over well, he says, and he soon left school. “I mean, you can’t conform when you know innately that something is wrong.”

I don’t want to harp on Mr. Howard too much. He’s clueless, but sadly, he’s not an atypical product of American schools. I’ll take a couple of minutes to talk about what’s wrong with his stuff, but in the context of a discussion of where I think this kind of stuff comes from.

In American schools, math is taught largely by rote. When I was a kid, set theory came into vogue, but by and large math teachers didn’t understand it – so they’d draw a few meaningless Venn diagrams, and then switch into pure procedure.

An example of this from my own life involves my older brother. My brother is not a dummy – he’s a very smart guy. He’s at least as smart as I am, but he’s interested in very different things, and math was never one of his interests.

I barely ever learned math in school. My father noticed pretty early on that I really enjoyed math, and so he did math with me for fun. He taught me stuff – not as any kind of “they’re not going to teach it right in school” thing, but just purely as something fun to do with a kid who was interested. So I learned a lot of math – almost everything up through calculus – from him, not from school. My brother didn’t – because he didn’t enjoy math, and so my dad did other things with him.

When we were in high school, my brother got a job at a local fast food joint. At the end of the year, he had to do his taxes, and my dad insisted that he do it himself. When he needed to figure out how much tax he owed on his income, he needed to compute a percentage. I don’t know the numbers, but for the sake of the discussion, let’s say that he made $5482 that summer, and the tax rate on that was 18%. He wrote down a pair of ratios:

\frac{18}{100} = \frac{x}{5482}

And then he cross-multiplied, getting:

 18 \times 5482 = 100 \times x

 98676 = 100 \times x

and so x = 986.76.

My dad was shocked by this – it’s such a laborious way of doing it. So he started pressing my brother. He asked him: if you went to a store, and they told you there was a 20% off sale on a pair of jeans that cost $18, how much of a discount would you get? He didn’t know. The only way he knew to figure it out was to write out the ratios, cross-multiply, and solve. If you told him that 20% off of $18 was $5, he would have believed you. Because percentages just didn’t mean anything to him.
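The point my dad was trying to get across is that “N% of X” just means (N/100) times X – no cross-multiplication machinery needed. A trivial sketch of mine, using the numbers from the story:

```python
income, tax_rate = 5482, 0.18
print(f"{income * tax_rate:.2f}")        # 986.76 -- the same answer as the ratio method

jeans, discount_rate = 18, 0.20
print(f"{jeans * discount_rate:.2f}")    # 3.60 -- so 20% off of $18 is $3.60, not $5
```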

Now, as I said: my brother isn’t a dummy. But none of his math teachers had ever taught him what percentages meant. He had no concept of their meaning: he knew a procedure for getting the value, but it was a completely blind procedure, devoid of meaning. And that’s what everything he’d learned about math was like: meaningless procedures performed by rote, without any comprehension.

That’s where nonsense like Terrence Howard’s stuff comes from: math education that never bothered to teach students what anything means. If anyone had attempted to teach any form of meaning for arithmetic, the ridiculousness of Mr. Howard’s supposed mathematics would be obvious.

For understanding basic arithmetic, I like to look at a geometric model of numbers.

Put a dot on a piece of paper. Label it “0”. Draw a line starting at zero, and put tick-marks on the line separated by equal distances. Starting at the first mark after 0, label the tick-marks 1, 2, 3, 4, 5, ….

In this model, the number one is the distance from 0 (the start of the line) to 1. The number two is the distance from 0 to 2. And so on.

What does addition mean?

Addition is just stacking lines, one after the other. Suppose you wanted to add 3 + 2. You draw a line that’s 3 tick-marks long. Then, starting from the end of that line, you draw a second line that’s 2 tick-marks long. 3 + 2 is the length of the resulting line: by putting it next to the original number-line, we can see that it’s five tick-marks long, so 3 + 2 = 5.

[Figure: addition on the number line]

Multiplication is a different process. In multiplication, you’re not putting lines tip-to-tail: you’re building rectangles. If you want to multiply 3 * 2, what you do is draw a rectangle whose width is 3 tick-marks long, and whose height is 2 tick-marks long. Now divide that into squares that are 1 tick-mark by 1 tick-mark. How many squares can you fit into that rectangle? 6. So 3*2 = 6.

[Figure: multiplication as a rectangle of unit squares]

Why does 1 times 1 equal 1? Because if you draw a rectangle that’s one tick-mark wide, and one tick-mark high, it forms exactly one 1×1 square. 1 times 1 can’t be two: it forms one square, not two.
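You can even turn the rectangle model into a few lines of code – a toy sketch (mine), counting the unit squares directly:

```python
def rect_product(width: int, height: int) -> int:
    """Count the 1x1 squares that tile a width-by-height rectangle."""
    squares = [(x, y) for x in range(width) for y in range(height)]
    return len(squares)

print(rect_product(3, 2))   # 6
print(rect_product(1, 1))   # 1 -- exactly one square, not two
```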

If you think about the repercussions of the idea that 1*1=2, as long as you’re clear about meanings, it’s pretty obvious that it has a disastrous impact on math: it turns all of math into a pile of gibberish.

What’s 1*2? 2. 1*1=2 and 1*2=2, therefore 1=2. If 1=2, then 2=3, 3=4, 4=5: all integers are equal. If that’s true, then… well, numbers are, quite literally, meaningless. Which is quite a serious problem, unless you already believe that numbers are meaningless anyway.

In my last post, someone asked why I was so upset about the error in a math textbook. This is a good example of why. The new common core math curriculum, for all its flaws, does a better job of teaching understanding of math. But when a textbook teaches “facts” that are wrong, it accomplishes the opposite. Wrong facts don’t make sense – if you actually try to understand them, you just get more confused.

That teaches you one of two things. Either it teaches you that understanding this stuff is futile: that all you can do is just learn to blindly reproduce the procedures that you were taught, without understanding why. Or it teaches you that no one really understands any of it, and that therefore nothing that anyone tells you can possibly be trusted.