Monthly Archives: July 2008

Nonsense Pretending: Probability as a Disguise

Once again, you, my readers, have come through with some really high-grade crackpottery. This one was actually sent to me by its author, but I didn’t really look at it until several readers sent me the same link because they thought it was my kind of material. With your recommendations, I took a look, and was rewarded. In a moment of hubris, the author titled it A Possible Proof of God’s Existence from Multiverse Assumptions.

This article is basically a version of the classic big-numbers probabilistic argument for God. What makes this one different is that it doesn’t just line up a bunch of fake numbers and say “Presto! Look at that great big probability: that means that it’s impossible for the universe/life/everything to exist without God!”. Instead, it takes a more scientific-looking approach. It dresses the probability argument up using lots of terms and ideas from modern physics, and presents it as “If we knew the values of these variables, we could compute the probability” – with a clear bias towards the idea that the unvalued variables must have values that produce the desired result of this being a created universe.

Aside from being an indirect version of the big-numbers argument, this is also a nice example of what I call obfuscatory mathematics. See, you want to make some argument. You’re dead sure that it’s right. But it doesn’t sound convincing. So you dress it up. Don’t just assume your axioms – make up explanations for them in terms of math, so that it sounds all formal and mathy. Then your crappy assumptions will look convincing!

With that said, on to his argument!

Continue reading

Solving Tic-Tac-Toe: Game Tree Basics


Moving on from simple zero-sum games, there are a bunch of directions in which
we can go. So far, the games we’ve looked at are very restrictive. Beyond the
zero-sum property, they’re built on a set of fundamental properties which ultimately
reduce to the idea that no player ever has an information advantage over any other
player: the complete payoff matrix is known by all players; no player gets
to see the other players’ strategies before selecting their own; and so on.

Moving on to more interesting games, we can relax those assumptions, and allow information to be hidden. Perhaps each player can see a different part of the
payoff matrix. Perhaps they take turns, so that one player gets to see the other’s
strategy before selecting his own. Perhaps the game isn’t zero-sum.

Non-zero sum games turn out to be disappointing from a game theory point of
view. Given a suitable set of restrictions, you can convert a non-zero-sum game to a
zero-sum game with an additional player. In the cases where you can’t do that,
you’re pretty much stuck – the mathematical tools that work well for analyzing
zero-sum games often simply don’t work once you relax the zero-sum requirement.

The more interesting ways of exploring different realms of games come when you
allow things to get more complex. This comes about when you allow a player’s strategy
selection to alter the game. This generally takes place in a turn-taking
game, where each player’s strategy selection alters the game for the other player. A
simple example of this is the game of tic-tac-toe. The set of strategies of the game
for a given player at any point in time is the set of open squares on the board.
Each time a player makes a move, the game is altered for the other player.

This makes things much more interesting. The easiest way to think of it is
that now, instead of a simple matrix for the game, we end up with a tree. Each move
that a player can make creates a new game for the other player. By making each game position a tree node, and adding child nodes for each position that can follow it, you can build a tree describing the complete set of possible game positions, and thus the complete set of ways that the game could play out.
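To make the tree idea concrete, here’s a minimal sketch of my own (not from the original post) that walks tic-tac-toe’s game tree recursively. Each recursive call is one tree node, each open square is one available strategy, and every path that ends in a win or a full board is one complete game:

```python
# Sketch: recursively explore the tic-tac-toe game tree.
# Each call is a node; each legal move creates a child node
# (a new game for the other player), exactly as described above.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    """Count the leaves of the game tree below this position."""
    if winner(board) is not None or all(s is not None for s in board):
        return 1  # terminal node: one complete way the game played out
    total = 0
    for i in range(9):
        if board[i] is None:          # each open square is a strategy
            board[i] = player
            total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = None           # undo the move (backtrack)
    return total

print(count_games([None] * 9, 'X'))  # distinct complete games from an empty board
```

Run from the empty board, this counts 255,168 distinct complete games – a big tree, but small enough that a computer can explore it exhaustively, which is why tic-tac-toe is fully solvable.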

Continue reading

Teaching Multiplication: Is it repeated addition?

I’ve been getting peppered with requests to comment on a recent argument that’s
been going on about math education, particularly with respect to multiplication.
We’ve got a fairly prominent guy named Keith Devlin ranting that
“multiplication is not repeated addition”
. I’ve been getting mail from both
sides of this – from people who basically say “This guy’s an idiot – of
course it’s repeated addition”, and from people who say “Look how stupid
these people are that they don’t understand that multiplication isn’t repeated
addition”.

In general, I’m mostly inclined to agree with him, with some major caveats. But since he sidesteps the real fundamental issue here, I’m rather annoyed with him.

Continue reading

Numeric Pareidolia and God in Π


There’s one kind of semi-mathematical crackpottery that people frequently send to me, but which I generally don’t write about. Given my background, I call it gematria – but it covers a much wider range than what’s really technically meant by that term. Another good name for it would be numeric pareidolia. It’s been a long time since I’ve written about this kind of stuff, and someone just sent me a pretty typical example, so what the hell. It revolves around a mess that he put together as an image, which is pretty much a classic example of obsessive silliness.

The general idea of this kind of silliness is finding some kind of numeric
pattern, and convincing yourself that there’s some deep, profound truth behind that pattern. There are a couple of typical kinds of this: number/letter correspondence (classical gematria, which uses the fact that the Hebrew characters are used both as letters and numbers, so a word can be interpreted as a number, and vice versa), distance coding (like the infamous “Torah codes”,
where you find words “hidden” in a text by picking out characters according
to some pattern and using them to form words), and simple numeric patterning (where you take numbers – generally some sort of constant – and find
some sort of pattern supposedly hidden in its digits). Today’s crackpottery
is the third kind – it’s written by a guy who believes that there are mystic secrets encoded into π and the square root of two that were put there by God, and that the existence of those patterns is proof of the existence of God.

This little bundle of rubbish – like all of the kinds of things I described
above – are examples of pareidolia involving numbers. As
I’ve written about before, we humans are amazingly good at finding patterns. We’ve
got a strong natural talent for looking at things, and finding structures and
patterns. That ability serves us well in many of our ordinary endeavors. The
problem with it is that there are apparent patterns in lots of things. In fact, if
you look at things mathematically, the odds of any text or constant not
containing interesting patterns are effectively nil. If you’re willing to consider
all sorts of patterns, then you can find patterns in absolutely everything. The question that you need to ask is whether the pattern is simply the result of our ability to find patterns in noise, or whether it’s something deliberate.

Continue reading

Utility Functions

Before we move beyond zero-sum games, it’s worth taking a deeper look
at the idea of utilities. As I mentioned before, in a game, the scores in
the matrix are given by something called a utility function.

Utility is an idea for how to mathematically describe preferences in terms
of a game or lottery. For a game to be valid (that is, for a game to have a meaningful analysis and solution), there must be a valid utility function that
describes the players’ preferences.

But what do we have to do to make a valid utility function? It’s
simple, but as usual, we’ll make it all formal and explicit.

Continue reading

Friday Random Ten, July 18

  1. The Flower Kings, “Underdog”: a neo-progressive track with the lead played by a bagpipe and a steel guitar. How can you not love that?
  2. Broken Social Scene, “Ibi Dreams of a Pavement”: A post-rock
    track with vocals. Very good stuff – very dense. Like I said, it’s got vocals, but they’re not the dominant part of it – they’re actually almost in the background.
  3. Marillion, “Heart of Lothian”: a track off of my favorite Fish-era Marillion album. It’s hard to take this in isolation – the whole album is really one continuous piece of music – with recurring themes, lyrical motifs, etc. This is really just a continuation of what came before it – it’s not a standalone. But it’s amazing – the kind of music that can give you chills even the hundredth time you’ve listened to it.
  4. Naftule’s Dream, “Speed Klez”: Rollicking progressive Klezmer,
    played by a mix of instruments including clarinet, trombone, electric guitar,
    and who knows what else. Amazing, dazzling stuff.
  5. Magma, “Ork Alarm”: a track that really straddles the line between modern classical and progressive rock. Very complicated stuff. Not the easiest listen – it’s very strange, and takes a few listens before you really
    understand it enough to enjoy it. But like modern classical music, it’s worth the effort. This group is really one of the most amazing ensembles in progressive rock.
  6. Happy the Man, “Stepping Through Time”: a piece off of the reunion album of the great American progressive band. I’m a huge fan of HtM’s old work. This new album isn’t bad – it’s interesting, complex music, with nice melodies, time changes, and amazing musicianship. But it’s strangely lacking something. It’s soulless. It just feels very mechanical.
  7. Isis, “All Out of Time, All Into Space”: more post-rock. Very atmospheric, dark.
  8. Lunasa, “Island Paddy”: quite a transition from the last one; this
    is a straightforward traditional Irish showoff piece. Bouncy fun that makes you want to get up and dance.
  9. Sonic Youth, “Queen Anne Chair”: a wonderful little snippet. This
    is off of Sonic Youth’s album “The Destroyed Room”, which is assembled from
    experimental studio clips. It sounds like Sonic Youth playing post-rock.
  10. Hawkwind, “You Shouldn’t Do That”: Amazing strange but wonderful
    stuff from early Hawkwind. Depending on who you ask, this is either early progressive or psychedelia.

Back to Math: Solving Zero-Sum Games

When I last wrote about game theory, we were working up to how to find
the general solution to an iterated two-player zero-sum game. Since it’s been
a while, I’ll take a moment and refresh your memory a bit.

A zero-sum game is described as a matrix. One player picks a row, and one player picks a column. Those selections are called strategies. The intersection
of the strategies in the game matrix describes what one player pays to the other. The
matrix is generally written from the point of view of one player. So if we call
our two players A and B, and the matrix is written from the viewpoint of A, then
an entry of $50 means that B has to pay $50 to A; an entry of $-50 means that A has to pay $50 to B.

In an iterated game, you’re not going to just play once – you’re going to play it repeatedly. To maximize your winnings, you may want to change strategies. It turns out that the optimal strategy is probabilistic – you’ll assign probabilities to your
strategy choices, and use that probability assignment to randomly select your
strategy each iteration. That probability assignment that dictates how you’ll
choose a strategy each iteration is called a grand strategy.

John von Neumann proved that for any two-player zero-sum game, there is an optimal grand strategy based on probability. In a game where both players know exactly what the payoffs/penalties are, the best strategy is based on constrained randomness – because any deliberate system for choosing strategies can be cracked by your opponent, resulting in his countering you. The best outcome comes from assessing potential wins and losses, and developing a probabilistic scheme for optimizing the way that you play.

Once you make that fundamental leap, realizing that it’s a matter
of probability, it’s not particularly difficult to find the grand strategy: it’s just a simple optimization task, solvable via linear programming. The solution is very elegant: once you see it, once you see how to
formulate the game as a linear optimization process, it just seems completely obvious.
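For the special case of a 2×2 game, the linear program collapses into a closed-form answer, which makes for a nice sketch (my own illustration – the general n×n case really does need an LP solver):

```python
def solve_2x2(m):
    """Optimal grand strategy for the row player of a 2x2 zero-sum game.

    m = [[a, b], [c, d]] is the payoff matrix from the row player's
    viewpoint. Returns (p, v): p is the probability of playing row 1
    (None if a pure strategy is optimal), and v is the value of the game.
    """
    (a, b), (c, d) = m
    # If maximin == minimax there's a saddle point: a single pure
    # strategy is optimal, and no randomization is needed.
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:
        return None, maximin
    # Otherwise, the optimal grand strategy mixes the rows so that the
    # column player's two responses pay off equally - leaving the
    # opponent nothing to exploit.
    denom = a - b - c + d
    p = (d - c) / denom
    v = (a * d - b * c) / denom
    return p, v

# Matching pennies: play each row with probability 1/2; the value is 0.
print(solve_2x2([[1, -1], [-1, 1]]))
```

For matching pennies, any deterministic pattern of choices can be learned and countered, so the optimal grand strategy is a 50/50 coin flip – exactly what the formula produces.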

Continue reading

Sizzle: A Review of the latest from Randy Olsen

Back in May, we here at ScienceBlogs got an offer to get an advance screener copy of Randy Olson’s new movie, “Sizzle”, if we promised to review it. I hadn’t seen any of Olson’s movies before, but I’ve been involved in a few discussions with him as part of the Great Framing Wars, and while I frequently disagree with him, he seemed to be a bright and interesting guy, so I was interested in seeing what he’s been working on. So I signed up for the review, telling the people from the production company that I’d review it from the viewpoint of a mathy guy – expecting that it was really a science
movie, and knowing how badly a lot of popular science stuff really screws up the math. Little did I know what I was getting into….

After signing up for the review, his production company mailed me a DVD at the beginning of the month. The packaging makes it clear that what I saw is not the final version of the movie. The soundtrack, color balance, and editing are all likely to change before the real final cut of the movie, so what I saw is definitely a preliminary version.

Finally, last weekend, I sat down to watch it. I don’t think Randy is going to be terribly happy with this review, because I really didn’t like it.

From the title, you might think that it’s a movie about global warming. It’s definitely not that. At times, it wants to be a movie about the debate over global warming. But it doesn’t succeed at that. And at times, it wants to be a straightforward comedy. But it doesn’t even succeed at that. It does a dreadful job of balancing those different goals. It comes off as a mean-spirited, glib, pointless mess of a movie.

Continue reading

The PZ Cracker Mess

So my fellow SBer PZ is in all sorts of hot water with Catholics over a blog post. I didn’t really want to poke my nose into this, but there’s been so much noise about it, that it’s really unavoidable. But I think I’ve got a rather different opinion on this than most bloggers I’ve seen so far. And I’m pretty sure that I’m not going to be making any friends by posting this. But people keep asking, so I’m going to open my big mouth, and tell you what I think.

You see, I think that both sides are assholes. Obviously, the people making threats take the prize as the biggest assholes, by a huge margin. But this isn’t a situation where a bunch of wackos went on an unprovoked rampage against a blogger; PZ deliberately provoked this mess.

Continue reading

B-Trees – Balanced Search Trees for Slow Storage

Another cool, but frequently overlooked, data structure in the tree family is called the B-tree. A B-tree is a search tree, very similar to a BST in concept, but optimized differently.

BSTs provide logarithmic time operations, where the performance
is fundamentally bounded by the number of comparisons. B-trees also
provide logarithmic performance with a logarithmic number of
comparisons – but the performance is worse by a constant factor. The
difference is that B-trees are designed around a different tradeoff. The B-tree is designed to minimize the number of tree nodes that need to be examined, even if that comes at the cost of doing significantly more
comparisons.

Why would you design it that way? It’s a different performance tradeoff.
The B-tree is a data structure designed not for use in memory, but instead for
use as a structure in hard storage, like a disk drive. B-trees are the
basic structure underlying most filesystems and databases: they provide
an efficient way of building rapidly searchable stored structures. But
retrieving nodes from disk is very expensive. In comparison to
retrieving from disk, doing comparisons is very nearly free. So the design
goal for a B-tree is to minimize disk access; and when disk access is necessary, to localize it as much as possible – to minimize the number of retrievals, and even more importantly, to minimize the number of nodes on disk that need to be updated when something is inserted.
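To illustrate that tradeoff, here’s a toy sketch of my own (not production code) showing the search side of a B-tree: within one node, we do a cheap linear scan over many keys; it’s only when we descend to a child that we pay what would be a disk read in a real implementation:

```python
class BTreeNode:
    """A toy B-tree node. In a real B-tree, one node is sized to fill a
    disk block, so it holds many keys (often hundreds)."""
    def __init__(self, keys, children=None):
        self.keys = keys                 # sorted keys within this node
        self.children = children or []   # len(children) == len(keys) + 1

def btree_search(node, key):
    """Search for key; each descent to a child is one (simulated) disk read."""
    i = 0
    # Comparisons within a node are cheap in-memory work...
    while i < len(node.keys) and key > node.keys[i]:
        i += 1
    if i < len(node.keys) and node.keys[i] == key:
        return True
    if not node.children:   # leaf node: the key isn't anywhere in the tree
        return False
    # ...but following this child pointer is the expensive disk access
    # that the B-tree's wide, shallow shape is designed to minimize.
    return btree_search(node.children[i], key)

# A small example tree: one root node with three leaf children.
root = BTreeNode([10, 20], [BTreeNode([1, 5]),
                            BTreeNode([12, 15]),
                            BTreeNode([25, 30])])
print(btree_search(root, 15))  # True
```

Even in this tiny example, finding any key touches at most two nodes; with realistically wide nodes, a tree holding millions of keys needs only three or four node fetches per search.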

Continue reading