Friday Random Ten, June 30

It’s that time of the week again, when I bore you with my bizarre taste in music. Quite an eclectic mix this week.

  1. Spock’s Beard, “Thoughts”. A track from an oldish Spock’s Beard album. SB is an American neoprog band, which sounds something like a blend of old Genesis, Kansas, and Rush. Very good band. This isn’t my favorite of their albums (that would be “V”).
  2. Gentle Giant, “Way of Life”. A classic song off of a classic album.
  3. Whirligig, “Mister Fox”. An interesting little ballad by a wonderful NYC based Irish band.
  4. Peter Gabriel, “San Jacinto”. Peter Gabriel at his absolute best. He’s never done anything to match the “Security” album, and this is one of my favorite tracks off of there. Starts off mellow and kind of mysterious sounding, and gradually builds, and then fades.
  5. The Clogs, “Lady Go”. A track with vocals from one of those “post-rock ensembles” that I love so much. Very strange sounding: partly a cappella falsetto, with lots of dark rhythmic stuff in other parts.
  6. Broadside Electric, “Tam Lin”. The old traditional ballad performed by a really cool local electric folk band. (And one of the members of the band is actually a math professor at SUNY Stony Brook! But she hadn’t yet joined the band when this album was recorded.)
  7. Mel Brooks & the Broadway cast of “The Producers”, “Springtime for Hitler”. The original film of “The Producers” is one of my all-time favorite comedies. I still haven’t managed to get in to see the show. But the soundtrack is absolutely brilliant.
  8. Psychograss, “Looks like a Duck”. Psychograss is a thoroughly amazing band: Tony Trischka, David Grier, Mike Marshall, Darol Anger, and Todd Phillips. They’re mostly bluegrass, but with various strange influences mixed in. This track has some of the most subtly amazing banjo playing you’ll ever hear, not to mention a knockout fiddle bit at the end.
  9. John Corigliano (performed by Stanley Drucker), “Clarinet Concerto, movement ii: Antiphonal Toccata”. I’m actually a classically trained clarinetist. I used to think that I didn’t like Stan Drucker’s playing. Then I heard this. I’ve since learned that while his performances of some of the old classical standards for clarinet (Mozart’s Clarinet Concerto, the Weber concertos, etc.) are rather uninspired, he is utterly magnificent when it comes to modern music. He clearly loves playing the newer stuff, and it shows. This is also the most technically challenging piece for clarinet that I’ve ever heard.
  10. Vasen, “Sluken”. Vasen is a Swedish folk band. The lead player plays a peculiar instrument called the Nyckelharpa – it’s a violin with a keyboard. They’re a great band, especially if you get to see them live.

One more plug for DonorsChoose

This is the last time I’m going to bug folks to remind them to donate to the SB challenges.
The DonorsChoose fundraiser here at ScienceBlogs is just about over. Three more days for you to help some kids get a good education in math and science. The GoodMath/BadMath challenge is here; and Janet has a rundown on the challenges that are close to their goals. (If the challenge is met, DonorsChoose will add in an extra 5% bonus.)
As an extra incentive, for the next 10 people who donate to the GM/BM challenge, if you send me a copy of your DonorsChoose receipt, I’ll let you pick one topic for me to write a post about. The only restriction is that the topic be related to some kind of math – good or bad – and that it’s legit. (So don’t send me some good math and ask me to write a bad math post about it.)
Please, do go over to DC, and donate whatever you can afford. As I said at the very beginning of our drive, I’ve taught kids from the kinds of schools that we’re trying to help. There’s something deeply wrong with a school where you have kids who want to learn, but they don’t really get the chance, because they don’t have the things they need – like textbooks.

Dishonest Dembski: the Universal Probability Bound

One of the dishonest things that Dembski frequently does, and that really bugs me, is to take bogus arguments and dress them up in mathematical terminology and verbosity to make them look more credible.
An example of this is Dembski’s *universal probability bound*. Dembski’s definition of the UPB from the [ISCID online encyclopedia][upb-icsid] is:
>A degree of improbability below which a specified event of that probability
>cannot reasonably be attributed to chance regardless of whatever
>probabilistic resources from the known universe are factored in. Universal
>probability bounds have been estimated anywhere between 10^-50 (Emile Borel)
>and 10^-150 (William Dembski).
He’s quantified it in several different ways. I’ve found three different versions of the calculation of the UPB: two of them from Wikipedia, and one from a message thread at ISCID which the author claims is a quote from one of Dembski’s books.
Let’s look at Dembski’s own words first:
>Specifically, within the known physical universe there are estimated to be no
>more than 10^80 elementary particles. Moreover, the properties of matter are
>such that transitions from one state to another cannot occur at a rate faster
>than 10^45 times per second. Finally, the universe itself is about a billion
>times younger than 10^25 seconds (assuming the universe is around 10 to 20
>billion years old). ….these cosmological constraints imply that the total
>number of specified events throughout cosmic history cannot exceed
>10^80 × 10^45 × 10^25 = 10^150.
He goes on to assert that this is the “maximum number of trials” that could have occurred since the beginning of the universe, and that if an observed event has probability lower than 1 in 10^150, it is not reasonable to say it was caused by chance.
Wikipedia presents this definition, and a more recent one which lowers the UPB, but as they don’t provide all of the details of the equation, I’ll skip it for now. Wikipedia’s explanation of this original form of the UPB is:
>Dembski’s original value for the universal probability bound is 1 in 10^150,
>derived as the inverse of the product of the following approximate
>quantities:[11]
>
> * 10^80, the number of elementary particles in the observable
> universe.
> * 10^45, the maximum rate per second at which transitions in
> physical states can occur (i.e., the inverse of the Planck time).
> * 10^25, a billion times longer than the typical estimated age of
> the universe in seconds.
>
>Thus, 10^150 = 10^80 × 10^45 × 10^25.
>Hence, this value corresponds to an upper limit on the number of physical
>events that could possibly have occurred since the big bang.
Here’s the fundamental dishonesty: None of those numbers have *anything* to do with what he’s supposedly trying to prove. He’s trying to create a formal-sounding version of the big-number problem by throwing together a bunch of fancy-sounding numbers, multiplying them together, and claiming that they somehow suddenly have meaning.
But they don’t.
It’s actually remarkably easy to show what utter nonsense this is. I’ll do a fancy one first, and a trivial one second.
Let’s create an incredibly simplified model of a region of space. Let’s say we have a cube of space, 1 kilometer on a side. Further, let’s suppose that this space contains 1000 particles, and they are all electrons. And further, let’s suppose that each one-meter cube within this cubic kilometer can hold at most one electron, so there are 10^9 possible positions.
This is a model which is so much simpler than reality that it’s downright silly. But everything about the real world would make it more complex, and it’s sufficient for our purposes.
Now: consider the probability of any *configuration* of the electrons in the region of space. A configuration is a selection of the set of one-meter cubes that contain electrons. The number of different configurations of this region of space is (10^9)! / (1000! × (10^9 - 1000)!). That works out to (10^9 × (10^9 - 1) × (10^9 - 2) × … × (10^9 - 999)) / 1000!.
1000! is roughly 4×10^2567 according to my scheme interpreter. We’ll be generous and round that up to 10^2568, to make things easier. To estimate the numerator, we can underestimate it as 10^9 × (10^8)^999, which is 10^8001. So the probability of any particular configuration within that cube is less than 1 in 10^5433.
So any state of particles within that cube is an event with probability considerably smaller than 1 in 10^5433. So what Dembski is saying is that *every* possible configuration of matter in space in the entire universe is impossible without intelligent intervention.
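If you want to check the magnitudes yourself, here’s a quick Python sketch of the same toy model. The hand estimate above is deliberately loose; the exact count is even more extreme:

```python
import math

# Toy model from above: 10^9 one-meter cells in the cube, 1000 electrons,
# with at most one electron per cell.
cells = 10**9
electrons = 1000

# Exact number of configurations: C(10^9, 1000).
configurations = math.comb(cells, electrons)

# math.log10 accepts arbitrarily large Python ints, so this won't overflow.
print(f"log10(configurations) ≈ {math.log10(configurations):.0f}")
# Prints roughly 6432: any single configuration has probability about
# 1 in 10^6432, which dwarfs Dembski's bound of 1 in 10^150.
```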
And the trivial one? Grab two decks of distinguishable cards. Shuffle them together, and lay them out for a game of spider solitaire. What’s the probability of that particular lay of cards? 1 in 104!, or, very roughly, 1 in 10^166. Is God personally arranging my cards every time I play spider?
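And a couple of lines of Python to confirm that one:

```python
import math

# Two 52-card decks shuffled together: 104 distinguishable cards, so there are
# 104! possible orderings, each with probability 1 in 104!.
print(f"log10(104!) ≈ {math.log10(math.factorial(104)):.0f}")  # roughly 166
```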
Anyone who’s ever taken a class on probability *knows* this stuff. One college-level intro course, and you know that routine daily events can have incredibly small probabilities – far smaller than his alleged UPB. But Dembski calls himself a mathematician, and he writes about probability quite frequently. As much as I’ve come to believe that he’s an idiot, things like this just don’t fit: he *must* know that this is wrong, but he continues to publish it anyway.
[upb-icsid]: http://www.iscid.org/encyclopedia/Universal_Probability_Bound

Skewing Statistics for Politics

As I’ve frequently said, statistics is an area which is poorly understood by most people, and as a result, it’s an area which is commonly used to mislead people. The thing is, when you’re working with statistics, it’s easy to find a way of presenting some value computed from the data that will appear to support a predetermined conclusion – regardless of whether the data as a whole supports that conclusion. Politicians and political writers are some of the worst offenders at this.
Case in point: over at [powerline][powerline], they’re reporting:
>UPI reports that Al Gore’s movie, An Inconvenient Truth, hasn’t done so well
>after a promising start:
>
>> Former U.S. vice-President Al Gore’s documentary “An Inconvenient Truth”
>>has seen its ticket sales plummet after a promising start.
>>
>>After Gore’s global warming documentary garnered the highest average per play
>>ever for a film documentary during its limited Memorial Day weekend opening,
>>recent theater takes for the film have been less than stellar, Daily Variety
>>reports.
>>
>> The film dropped from its record $70,333 per play to $12,334 during its
>>third week and its numbers have continued to fall as the film opens in smaller
>>cities and suburbs across the country.
>
>It’s no shock, I suppose, that most people aren’t interested in seeing
>propaganda films about the weather. But the topic is an interesting and
>important one which we wrote about quite a few years ago, and will try to
>return to as time permits.
So: they’re quoting a particular figure: *dollars per screen-showing*, as a measure of how the movie is doing. The thing is, that’s a pretty darn weird statistic. Why would they use dollars/screen-showing, instead of total revenue?
Because it’s the one statistic that lets them support the conclusion that they wanted to draw. What are the real facts? Official box office statistics for gross per weekend (rounded to the nearest thousand):
* May 26: $281,000 (in 4 theaters)
* June 2: $1,356,000 (in 77 theaters)
* June 9: $1,505,000 (in 122 theaters)
* June 16: $1,912,000 (in 404 theaters)
* June 23: $2,016,000 (in 514 theaters)
Each weekend, it has made more money than the previous weekend. (Those are per weekend numbers, not cumulative. The cumulative gross for the movie is $9,630,000.)
But the per showing gross has gone down. Why? Because when it was first released, it was being shown in a small number of showings in a small number of theaters. When it premiered in 4 theaters, the showings sold out – standing room only – so the gross per showing was very high. Now, four weeks later, it’s showing in over 500 theaters, and the individual showings aren’t selling out anymore. But *more people* are seeing it – every weekend, the number of people seeing it has increased!
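To make the contrast concrete, here’s a small Python sketch using the weekend figures quoted above; it just divides each weekend’s gross by the number of theaters:

```python
# Weekend grosses and theater counts for "An Inconvenient Truth", from the
# figures listed above (grosses rounded to the nearest thousand).
weekends = [
    ("May 26",  281_000,    4),
    ("June 2",  1_356_000,  77),
    ("June 9",  1_505_000,  122),
    ("June 16", 1_912_000,  404),
    ("June 23", 2_016_000,  514),
]

for date, gross, theaters in weekends:
    print(f"{date:>7}: total ${gross:>9,}   per theater ${gross // theaters:>7,}")

# Total gross rises every weekend, while the per-theater average falls, simply
# because the theater count grows much faster than any one theater's audience.
```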
The Powerline article (and the UPI article it cites) is playing games with numbers to skew the results. They *want* to say that Al Gore’s movie is tanking in the theaters, so they pick a bizarre statistic to support that, even though it’s highly misleading. In fact, it’s one of the best performing documentaries *ever*. It’s currently the number seven grossing documentary of all time, and it’s about $600,000 away from becoming number five.
What was the per-theater (note, not per showing, but per theater) gross for the last Star Wars movie four weeks into its showing? $4,500/theater (at 3,322 theaters), according to [Box Office Mojo][bom]. So, if we want to use reasoning a lot like Powerline’s, we can argue that Al Gore’s movie is doing *as well as Star Wars* on a dollars/theater-weekend basis.
But that would be stupid, wouldn’t it?
[bom]: http://www.boxofficemojo.com/movies/?page=weekend&id=starwars3.htm
[powerline]: http://powerlineblog.com/archives/014530.php

Good Math, Bad Math might be in trouble later this week

I just received some email that would seriously worry me if it weren’t for the fact that I’m not an idiot.

>WARNING!
>TERRORISTS are going to ATTACK NEW YORK!
>There is NO TIME to waste! Go read http://www.truebiblecode.com/nyc.html!!!!!!

Going there, I find:

>We are now 98% confident that the UN Plaza will be hit by a terrorist nuclear bomb between Thursday evening June 29th and Tuesday evening July 4th, 2006
>
>It is certainly true that: No nukes is good nukes! But just because we got the date wrong (3 times) does not mean that the scriptural threat has evaporated. It is still there in black and white in bible symbolism. So we still have the almost impossible task of persuading a typical New Yorker with faith in God, that the Bible predicts the very day and place of the first terrorist nuke. There is obviously a massive credibility gap between: “Here endeth the lesson” and “Here endeth NYC”. But every journey, however long, begins with one small step. So here is our attempt to fill that gap.
>
>Firstly we again strongly advise anyone in New York City with any faith in God, whatever his religion or whatever his distrust of organised religion, to take the last Thursday in June off and to get out of NYC for that weekend and not come back until the evening of July 4 if nothing happens.
>
>You can then study this fascinating article outside NYC at your leisure, during that weekend or more to the point, after that weekend! It is going to be hard to find the necessary time during the next few days, given the busy schedule of every New Yorker, to sit down and fully analyze the fruits of 14 years of bible decoding and reach a rational decision about such a momentous prediction. So the sensible course of action might be to judge for yourself whether we are sincere in our efforts to decode the bible. And if you see that we are sincere, then rather than taking an intellectual walk from basic faith to accurate bible prophecy, just rely on all of the work that we have so far done and on the basis of your faith and our sincerity, take the weekend out of the city.

You gotta love this. Not only is it a splendid example of the worst kind of pseudo-numerological gibberish, but they admit to having been wrong three times already. But this time, this time!, they’ve really got it right! We should trust them and get the hell out of NYC this weekend, no matter what it costs!

Categories: Products, Exponentials, and the Cartesian Closed Categories

Before I dive into the depths of today’s post, I want to clarify something. Last time, I defined categorical products. Alas, I neglected to mention one important point, which led to a bit of confusion in the comments, so I’ll restate the important omission here.
The definition of categorical product defines what the product looks like *if it’s in the category*. There is no requirement that a category include the products for all, or indeed for any, of its members. Categories are *not closed* with respect to categorical product.
That point leads up to the main topic of this post. There’s a special group of categories called the **Cartesian Closed Categories** that *are* closed with respect to product; and they’re a very important group of categories indeed.
However, before we can talk about the CCCs, we need to build up a bit more.
Cartesian Categories
——————–
A *cartesian category* C (note **not** cartesian *closed* category) is a category:
1. With a terminal object t, and
2. ∀ a, b ∈ Obj(C), the objects and arrows of the categorical product a×b are in C.
So, a cartesian category is a category that is closed with respect to product. Many of the common categories are cartesian: the category of sets and the category of enumerable sets, for example. And of course, what is the meaning of the categorical product in Set? The cartesian product of sets.
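To make that concrete, here’s a tiny Python sketch of how the product works out in Set: the projection arrows, and the mediating arrow ⟨f, g⟩ given by the universal property. (This is just an illustration in code, not part of the formal definition.)

```python
# Categorical product in Set, read as plain Python: the product object is the
# set of pairs A x B, fst and snd are the projection arrows, and pairing(f, g)
# builds the unique mediating arrow <f, g> : Z -> A x B.

def fst(pair):             # projection pi_1 : A x B -> A
    return pair[0]

def snd(pair):             # projection pi_2 : A x B -> B
    return pair[1]

def pairing(f, g):         # mediating arrow <f, g> : Z -> A x B
    return lambda z: (f(z), g(z))

# Example with Z = int, A = str, B = bool:
f = lambda n: str(n)
g = lambda n: n % 2 == 0
h = pairing(f, g)

assert fst(h(7)) == f(7)   # pi_1 composed with <f, g> gives back f
assert snd(h(7)) == g(7)   # pi_2 composed with <f, g> gives back g
```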
Categorical Exponentials
————————
Like the categorical product, a categorical exponential is not *required* to be included in a category. Given two objects x and y from a category C, their *categorical exponential* x^y, if it exists in the category, is defined by a set of values:
* An object x^y,
* An arrow eval_{y,x} : x^y × y → x, called the *evaluation map*, and
* ∀ z ∈ Obj(C), an operation Λ_C : (z×y → x) → (z → x^y). (That is, an operation mapping from arrows to arrows.)
These values must have the following properties:
1. ∀ f : z×y → x: eval_{y,x} ∘ (Λ_C(f) × 1_y) = f
2. ∀ g : z → x^y: Λ_C(eval_{y,x} ∘ (g × 1_y)) = g
To make that a bit easier to understand, let’s turn it into a diagram.
(Diagram: exponent.jpg – the evaluation/currying diagram described below.)
You can also think of it as a generalization of a function space. x^y is the set of all functions from y to x. The evaluation map is a simple description, in categorical terms, of the operation that applies a function from a to b (an arrow) to a value from a, resulting in a value from b.
(I added the following section after this was initially posted: a commenter asked a question, and I realized that I hadn’t explained things well enough here, so I’ve added this explanation.)
So what does the categorical exponential mean? I think it’s easiest to explain in terms of sets and functions first, and then just step it back to the more general case of objects and arrows.
If X and Y are sets, then X^Y is the set of functions from Y to X.
Now, look at the diagram:
* The top part says, basically, that g is a function from Z to X^Y: so g takes a member of Z, and uses it to select a function from Y to X.
* The arrow for f says: given the pair (z,y), f maps (z,y) directly to a value in X.
* The vertical arrow going down is almost like currying: it takes the pair (z,y) to the pair (g(z), y), where g = Λ(f) is the curried form of f.
* Per the top part of the diagram, g(z) selects a function from Y to X. (That is, a member of X^Y.)
* So, at the end of the vertical arrow, we have a pair (g(z), y).
* The “eval” arrow maps from the pair of a function and a value to the result of applying the function to the value.
Now – the abstraction step is actually kind of easy: all we’re doing is saying that there is a structure of mappings from object to object here. This particular structure has the essential properties of what it means to apply a function to a value. The internal values and precise meanings of the arrows connecting the values can end up being different things, but no matter what, it will come down to something very much like function application.
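Since the Set version of all this is just currying and function application, it’s easy to write it down directly. Here’s a minimal Python sketch of Λ (currying) and eval, with a pointwise spot-check of the two exponential laws from the definition above (a single test case, which is all plain code can do):

```python
# Lambda (currying): turn an arrow f : Z x Y -> X into an arrow Z -> X^Y.
def curry(f):
    return lambda z: lambda y: f(z, y)

# eval_{Y,X} : X^Y x Y -> X: apply a function to an argument.
def ev(pair):
    func, y = pair
    return func(y)

f = lambda z, y: z * 10 + y      # an arrow f : Z x Y -> X
g = curry(f)                     # g = Lambda(f) : Z -> X^Y
z, y = 3, 4

# Law 1: eval composed with (Lambda(f) x 1_Y) equals f.
assert ev((g(z), y)) == f(z, y)

# Law 2: Lambda(eval composed with (g x 1_Y)) equals g, spot-checked at (z, y).
g2 = curry(lambda a, b: ev((g(a), b)))
assert g2(z)(y) == g(z)(y)
```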
Cartesian Closed Categories
—————————-
With exponentials and products, we’re ready for the cartesian closed categories (CCCs):
A Cartesian closed category is a category that is closed with respect to both products and exponentials.
Why do we care? Well, the CCCs are in a pretty deep sense equivalent to the simply typed lambda calculus. That means that the CCCs are deeply tied to the fundamental nature of computation. The *structure* of the CCCs – with its closure WRT product and exponential – is an expression of the basic capability of an effective computing system.
We’re getting close to being able to get to some really interesting things. Probably one more set of definitions; and then we’ll be able to do things like show a really cool version of the classic halting problem proof.

A Great Quote about Methods

On my way home from picking up my kids from school, I heard a story on NPR that included a line from one of the editors of the New England Journal of Medicine, which I thought was worth repeating here.
They were discussing an article in this month’s NEJM about [vioxx/rofecoxib][nejm]. The article is a correction to an earlier NEJM article that concluded that the cardiac risks of Vioxx were not really significant until around 18 months of continued use. With more data available, it appears that the 18-month threshold was just an artifact, not a real phenomenon, and the data appears to show that the cardiac risks of Vioxx start very quickly.
[nejm]: http://content.nejm.org/cgi/reprint/NEJMc066260v1.pdf
As usual for something like this, the authors and the corporate sponsors of the work are all consulted in the publication of a correction or retraction. But in this case, the drug maker objected to the correction, because they wanted the correction to include an additional analysis, which appears to show an 18-month threshold for cardiac risk.
The editor’s response, paraphrased (since I heard it on the radio and don’t have the exact quote):
“That’s not how you do science. You don’t get all of the data, and then
look for an analysis that produces the results you want. You agree on the
analysis you’re going to do *before* you start the study, and then you use
the analysis that you said you were going to use.”
Hey [Geiers][geier], you listening?
[geier]: http://goodmath.blogspot.com/2006/03/math-slop-autism-and-mercury.html