One of the oddest things about how people understand math is numbers. It’s
astonishing to see how many people don’t really understand what numbers are, or what different kinds of numbers there are. It’s particularly amazing to listen to people arguing
vehemently about whether certain kinds of numbers are really “real” or not.
Today I’m going to talk about two of the most basic kinds of numbers: the naturals and the integers. This is sort of an advanced basics article; to explain things like natural numbers and integers, you can either write two boring sentences, or you can go a bit more formal. The formal
stuff is more fun. If you don’t want to bother with that, here are the two boring sentences:
- The natural numbers (written N) are zero and the numbers greater than
zero that can be written without fractions.
- The integers (written Z) are all of the numbers, both larger and smaller than
zero, that can be written without fractions.
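Since we’ll be playing with Haskell in later posts, here’s the distinction made concrete in code. `Natural` (from `Numeric.Natural` in GHC’s base library) is the naturals; `Integer` is the integers. Nothing below is from any particular library beyond those two types:

```haskell
import Numeric.Natural (Natural)

-- A natural: zero or a whole number greater than zero. A negative
-- Natural isn't a representable value at all.
three :: Natural
three = 3

-- An integer: any whole number - positive, negative, or zero.
minusFive :: Integer
minusFive = -5

main :: IO ()
main = do
  print (three + 4)      -- the naturals are closed under addition
  print (minusFive * 2)  -- the integers are closed under multiplication
```

The payoff of having both types is that "this quantity can never be negative" becomes something the type checker enforces, instead of a comment.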
The margin of error is the most widely misunderstood and misleading concept in statistics. It’s positively frightening to people who actually understand what it means to see how it’s commonly used in the media, in conversation, sometimes even by other scientists!
The basic idea of it is very simple. Most of the time when we’re doing statistics, we’re doing statistics based on a sample – that is, the entire population we’re interested in is difficult to study; so what we try to do is pick a representative subset called a sample. If the subset is truly representative, then the statistics you generate using information gathered from the sample will be the same as information gathered from the population as a whole.
But life is never simple. We never have perfectly representative samples; in fact, it’s impossible to select a perfectly representative sample. So we do our best to pick good samples, and we use probability theory to work out a prediction of how confident we can be that the statistics from our sample are representative of the entire population. That’s basically what the margin of error represents: how well we think that the selected sample will allow us to predict things about the entire population.
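To make that concrete, here’s a small Haskell sketch of the most common textbook case: the 95% margin of error for a sampled proportion, which is 1.96 standard errors, assuming a simple random sample. The function name is my own, not from any library:

```haskell
-- 95% margin of error for a proportion p observed in a sample of size n,
-- under the usual simple-random-sample assumption:
--   1.96 * sqrt(p * (1 - p) / n)
marginOfError :: Double -> Double -> Double
marginOfError p n = 1.96 * sqrt (p * (1 - p) / n)

main :: IO ()
main =
  -- 52% support in a poll of 1000 people: roughly +/- 3.1 points.
  print (marginOfError 0.52 1000)
```

Note what the formula does and doesn’t depend on: the sample size n matters enormously, while the size of the underlying population doesn’t appear at all.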
Yet another reader forwarded me a link to a rather dreadful article. This one seems to be by
someone who knows better, but prefers to stick with his political beliefs rather than an honest
exploration of the facts.
He’s trying to help provide cover for the anti-global warming cranks. Now, in light of all of the
data that we’ve gathered, and all of the different kinds of analyses that have been used
on that data, for anyone in the real world, it’s pretty undeniable that global warming is
a real phenomenon, and that at least part of it is due to humanity.
I decided that for today, I’d show the most thoroughly evil programming language ever
devised. This is a language so thoroughly evil that it’s named Malbolge after a circle
of hell. It’s so evil that its own designer was not able to write a hello world program! In
fact, the only way that anyone managed to write a “Hello World” was by designing a genetic algorithm
to create one. This monstrosity is so thoroughly twisted that I decided to put it in the “Brain and Behavior” category on ScienceBlogs, because it’s a demonstration of what happens when you take a brain, and twist it until it breaks.
Yet another reader sent me a great bad math link. (Keep ’em coming guys!) This one is an astonishingly nasty sleight of hand, and a great example of how people misuse statistics to support a political agenda. It’s by someone
named “Dr. Deborah Schurman-Kauflin”, and it’s an attempt to paint illegal
immigrants as a bunch of filthy criminal lowlifes. It’s titled “The Dark Side of Illegal Immigration: Nearly One Million Sex Crimes Committed by Illegal Immigrants in the United States.”
With a title like that, you’d think that she has actual data showing that nearly one million sex crimes were committed by illegal immigrants, wouldn’t you? Well, you’d be wrong.
I’ve gotten complaints from a bunch of commenters about problems with comments getting thrown into the moderation queue by the spam filter. Comments with too many links, or with certain text properties, were getting caught even though they were clearly not spam.
In order to get around this, I’ve re-enabled TypeKey authentication. You don’t have to log in via TypeKey to post comments – it’s entirely voluntary. But you’re welcome to if you want, and if you do, your posts will be almost guaranteed to get posted without being pushed into the mod queue. (If you write a post containing links to viagra-selling websites, you’ll still get trapped by the spam filter. But anything less egregious than that should go right through.)
When we look at the data for a population, often the first thing we do
is look at the mean. But even if we know that the distribution
is perfectly normal, the mean isn’t enough to understand what it’s telling us about the population. We also need
to know something about how the data is spread out around the mean – that is, how wide the bell curve is around the mean.
There’s a basic measure that tells us that: it’s called the standard deviation. The standard deviation describes the spread of the data,
and is the basis for how we compute things like the degree of certainty,
the margin of error, etc.
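Here’s a minimal Haskell sketch of both measures – the mean, and the (population) standard deviation as the square root of the average squared deviation from the mean. The function names are my own:

```haskell
-- Arithmetic mean of a non-empty list.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Population standard deviation: sqrt of the mean squared
-- deviation from the mean.
stdDev :: [Double] -> Double
stdDev xs = sqrt (mean [(x - m) ^ 2 | x <- xs])
  where m = mean xs

main :: IO ()
main = do
  let narrow = [9, 10, 10, 11]   -- tight bell curve
      wide   = [0, 5, 15, 20]    -- spread-out bell curve
  print (mean narrow, stdDev narrow)
  print (mean wide, stdDev wide)
```

Both lists have exactly the same mean (10), but wildly different standard deviations – which is precisely the information the mean alone can’t give you.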
Another piece of junk that I received: “The Invisible Link
Between Mathematics and Theology”, by a guy named “Ladislav Kvasz”,
published in a rag called “Perspectives on Science and Christian Faith”. (I’m
not going to quote much from this, because the way that the PDF is formatted,
it requires a huge amount of manual editing.) This is a virtual masterwork of
goofy clueless Christian arrogance – everything truly good must be Christian, so
the author had to find some way of saying that mathematics is intrinsically tied to Christianity.
This article actually reminds me rather a lot of George
Shollenberger. His arguments are similar to George’s: that there’s some
intrinsic connection between the concept of infinity and the Christian god.
But Kvasz goes further: it’s the nature of monotheism in general, and
Christianity in particular, which gave us the idea of using
quantifiers in predicate logic. Because, you see, the idea of
quantifiers comes from the idea that existence is not a predicate, and the
idea that existence is not a predicate comes from a debate over an invalid
proof for the existence of god.
A reader sent me a link to this, thinking that it would be of interest to me, and he was absolutely right. I actually needed to let it sit overnight before writing anything because it made me so angry.
> I’ve come to realize that probably one reason I struggled with algebra,
> geometry et al., was that it seemed to me that these were basically
> reactionary academic disciplines, useful for designing weaponry or
> potentially repressive computer technology, but not with any obvious
> humanistic or social positive uses.
>
> If I’m wrong about this, I’d appreciate it if people could show me how this
> discipline can have progressive uses.
>
> I also feel this could be useful in developing better ways of teaching
> higher mathematics if such uses could be found.
Leaving aside the incredible irony of an alleged “progressive” participating in a discussion with a community of people he would never
have been able to reach without the products of that “reactionary” discipline, I have one basic response to this kind of babble.
One thing that we’ve seen already in Haskell programs is type
classes. Today, we’re going to take our first real look
at them in detail – both how to use them, and how to define them. This still isn’t the entire picture of type classes; we’ll come back for another look at them later. But this is a beginning – enough to
really understand how to use them in basic Haskell programs, and enough to give us the background we’ll need to attack Monads, which are the next big topic.
Type classes are Haskell’s mechanism for managing parametric polymorphism. Parametric polymorphism
is a big fancy term for code that works with type parameters. The idea of type classes is to provide a
mechanism for building constrained polymorphic functions: that is, functions whose types involve a type parameter, but which need some constraint to limit the types that can be used to instantiate them – to specify what properties their type parameters must have. In essence, this does for types very much the same thing that parameter type declarations do for values. A type declaration lets us say “the value of this parameter can’t be just any value – it must be a value which is a member of this particular type”; a type class declaration lets us say “the type parameter for instantiating this function can’t be just any type – it must be a type which is a member of this type class.”