Monads and Programming Languages

One of the questions that a ton of people sent me when I said I was going to write about category theory was “Oh, good, can you please explain what the heck a monad is?”

The short version is: a monad is a functor from a category to itself, together with a bit of extra structure (we'll get to the formal definition below). The way that this works in a programming language is that you can view many things in programming languages in terms of monads. In particular, you can take things that involve mutable state, and magically hide the state.

How? Well – the state (the set of bindings of variables to values) is an object in a category, State. The monad is built on a functor from State → State. Since the functor maps the category to itself, the value of the state is implicit – the states are just the objects at the start and end points of each arrow. From the viewpoint of code outside of the monad, the states are indistinguishable – they’re just something in the category. For the code inside the monad, the value of the state is accessible.

So, in a language like Haskell with a State monad, you can write functions inside the State monad; and they are strictly functions from State to State; or you can write functions outside the state monad, in which case the value inside the state is completely inaccessible. Let’s take a quick look at an example of this in Haskell. (This example came from an excellent online tutorial which, sadly, is no longer available.)

Here’s a quick declaration of a State monad in Haskell:

class Monad m => MonadState m s | m -> s where
  get :: m s
  put :: s -> m ()

instance MonadState (State s) s where
  get   = State $ \s -> (s,s)
  put s = State $ \_ -> ((),s)

This is Haskell syntax saying we’re defining a state as an object which stores one value. It has two functions: get, which retrieves the value from a state; and put, which updates the value hidden inside the state.

Now, remember that Haskell has no actual assignment statement: it’s a pure functional language. So what “put” actually does is create a new state with the new value in it.

How can we use it? We can only access the state from a function that’s inside the monad. In the example, they use it for a random number generator; the state stores the last random number generated, which will be used as a seed for the next. Here we go:

getAny :: (Random a) => State StdGen a
getAny = do g <- get
            (x, g') <- return $ random g
            put g'
            return x

Now – remember that the only functions that exist *inside* the monad are "get" and "put". "do" is syntactic sugar for chaining a sequence of monadic actions. What actually happens inside of a do is that *each expression* in the sequence is an action from State to State; each action implicitly receives the state produced by the previous one. "getAny" starts from an initial state, and the state is then implicitly passed from expression to expression.

"return" is the only way *out* of the monad; it basically says "evaluate this expression outside of the monad". So, "return $ randomR bounds g" is saying, roughly, "evaluate randomR bounds g" outside of the monad; then apply the monad constructor to the result. The return is necessary there because the full expression on the line *must* take and return an instance of the monad; if we just say "(x,g') <- randomR bounds g", we'd get an error, because we're inside of a monad construct: the monad object is going be be inserted as an implicit parameter, unless we prevent it using "return". But the resulting value has to be injected back into the monad – thus the "$", which is a composition operator. (It's basically the categorical º). Finally, "return x" is saying "evaluate "x" outside of the monad – without the "return", it would treat "x" as a functor on the monad.

The really important thing here is to recognize that each line inside of the "do" is an action from State to State; and since the start and end states are implicit in the structure of the monad itself, you don't need to write them. So the state is passed down the sequence of instructions – each of which maps State back to State.
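The same state-threading trick can be sketched in Python, with the plumbing made explicit. This is only an analogy to Haskell's State monad, not a translation: the `State` class, `unit`, and the toy linear-congruential generator are all inventions for illustration.

```python
class State:
    """Wraps a function: state -> (value, new_state)."""
    def __init__(self, run):
        self.run = run

    def bind(self, f):
        # Sequence two stateful steps, threading the state behind the scenes.
        def stepped(s):
            value, s2 = self.run(s)
            return f(value).run(s2)
        return State(stepped)

def unit(x):
    # Haskell's `return`: wrap a plain value, leaving the state untouched.
    return State(lambda s: (x, s))

get = State(lambda s: (s, s))        # read the hidden state
def put(new_s):                      # replace the hidden state
    return State(lambda _: (None, new_s))

def next_random(seed):
    # A toy linear congruential generator: the state is the seed.
    new_seed = (1103515245 * seed + 12345) % (2 ** 31)
    return new_seed, new_seed

# The analogue of getAny: read the seed, generate, store the new seed.
get_any = get.bind(lambda g:
          unit(next_random(g)).bind(lambda pair:
          put(pair[1]).bind(lambda _:
          unit(pair[0]))))

value, final_state = get_any.run(42)   # the caller supplies the initial state
```

Each `bind` plays the role of one line of the `do` block: it takes the state coming out of one step and feeds it into the next, so no step ever mentions the state explicitly.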

Let's get to the formal part of what a monad is. There's a bit of funny notation we need to define for it. (You can't do anything in category theory without that never-ending stream of definitions!)

  1. Given a category C, 1C is the *identity functor* from C to C.
  2. For a category C, if T is a functor C → C, then T2 is the functor TºT. (And similarly T3 = TºTºT, and so on.)
  3. For a given functor T, the identity natural transformation T → T is written 1T.

Suppose we have a category, C. A *monad on C* is a triple (T,η,μ), where T is a functor from C → C, and η and μ are natural transformations; η: 1C → T, and μ: (TºT) → T. (1C is the identity functor for C in the category of categories.) These must have the following properties:

First, μ º Tμ = μ º μT. Or in diagram form:

*(diagram: monad-prop1.jpg)*

Second, μ º Tη = μ º ηT = 1T. In diagram form:

*(diagram: monad-prop2.jpg)*

Basically, what these really come down to is an associativity property ensuring that T behaves properly over composition, and an identity property ensuring that the identity transformation behaves as we would expect. Together, these two properties mean that any order of applications of T will behave properly, preserving the structure of the category underlying the monad.
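To make the two laws concrete, here's a quick sanity check (my addition, not part of the formal definition) using the most familiar monad of all: T is the list functor, η wraps a value in a singleton list, and μ flattens one level of nesting.

```python
# Checking the monad laws for the list monad: T = list, eta(x) = [x],
# mu = flatten-one-level. Purely an illustrative sketch.

def eta(x):
    return [x]

def mu(xss):
    # Flatten exactly one level of nesting.
    return [x for xs in xss for x in xs]

def fmap(f, xs):
    # The list functor's action on arrows.
    return [f(x) for x in xs]

lll = [[[1, 2], [3]], [[4]]]   # an element of T(T(T(X)))

# Associativity: mu . T(mu) == mu . mu_T  -- both flatten T^3 down to T.
assert mu(fmap(mu, lll)) == mu(mu(lll)) == [1, 2, 3, 4]

l = [1, 2, 3]                  # an element of T(X)

# Identity: mu . T(eta) == mu . eta_T == the identity on T(X).
assert mu(fmap(eta, l)) == l
assert mu(eta(l)) == l
```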

PEAR yet again: the theory behind paranormal gibberish (repost from blogger)

This is a repost from GM/BM’s old home; the original article appeared
[here][old]. I’m reposting because someone is attempting to respond to this
article, and I’d rather keep all of the ongoing discussions in one place. I also
think it’s a pretty good article, which some of the newer readers here may not
have seen. As usual for my reposts, I’ve fixed the formatting and made a few
minor changes. This article was originally posted on May 29.
I’ve been looking at PEAR again. I know it may seem sort of like beating a dead
horse, but PEAR is, I think, something special in its way: it’s a group of
people who pretend to use science and mathematics in order to support all sorts
of altie-woo gibberish. This makes them, to me, particularly important targets
for skeptics: if they were legit, and they were getting the kinds of results
that they present, they’d be demonstrating something fascinating and important.
But they’re not: they’re trying to use the appearance of science to undermine
science. And they’re incredibly popular among various kinds of crackpottery:
what led me back to them this time is the fact that I found them cited as a
supporting reference in numerous places:
1. Two different “UFOlogy” websites;
2. Eric Julien’s dream-prophecy of a disastrous comet impact on earth (which was supposed to have happened back in May; he’s since taken credit for *averting* said comet strike by raising consciousness);
3. Three different websites where psychics take money in exchange for psychic predictions or psychic healing;
4. Two homeopathy information sites;
5. The house of thoth, a general clearinghouse site for everything wacky.
Anyway, while looking at the stuff that all of these wacko sites cited from
PEAR, I came across some PEAR work which isn’t just a rehash of the random
number generator nonsense, but instead an attempt to define, in mathematical
terms, what “paranormal” events are, and what they mean.
It’s quite different from their other junk; and it’s a really great example of
one of the common ways that pseudo-scientists misuse math. The paper is called
“M*: Vector Representation of the Subliminal Seed Regime of M5”, and you can
find it [here][pear-thoth].
The abstract gives you a pretty good idea of what’s coming:
>A supplement to the M5 model of mind/matter interactions is proposed
>wherein the subliminal seed space that undergirds tangible reality and
>conscious experience is characterized by an array of complex vectors whose
>components embody the pre-objective and pre-subjective aspects of their
>interactions. Elementary algebraic arguments then predict that the degree of
>anomalous correlation between the emergent conscious experiences and the
>corresponding tangible events depends only on the alignment of these
>interacting vectors, i. e., on the correspondence of the ratios of their
>individual "hard" and "soft" coordinates. This in turn suggests a
>subconscious alignment strategy based on strong need, desire, or shared purpose
>that is consistent with empirical experience. More sophisticated versions of
>the model could readily be pursued, but the essence of the correlation process
>seems rudimentary.
So, if we strip out the obfuscation, what does this actually say?
Umm… “*babble babble* complex vectors *babble babble babble* algebra *babble babble* ratios *babble babble* correlation *babble babble*.”
Seriously: that’s a pretty good paraphrase. That entire paragraph is *meaningless*. It’s a bunch of nonsense mixed in with a couple of pseudo-mathematical terms in order to make it sound scientific. There is *no* actual content in that abstract. It reads like a computer-generated paper from
[SCIgen][scigen] .
(For contrast, here’s a SCIgen-generated abstract: “The simulation of randomized algorithms has deployed model checking, and current trends suggest that the evaluation of SMPs will soon emerge. In fact, few statisticians would disagree with the refinement of Byzantine fault tolerance. We confirm that although multicast systems [16] can be made homogeneous, omniscient, and autonomous, the acclaimed low-energy algorithm for the improvement of DHCP [34] is recursively enumerable.”)
Ok, so the abstract is the pits. To be honest, a *lot* of decent technical papers have really lousy abstracts. So let’s dive in, and look at the actual body of the paper, and see if it improves at all.
They start by trying to explain just what their basic conceptual model is. According to the authors, the world is fundamentally built on consciousness; most events start in a pre-conscious realm of ideas called the “*seed region*”; and as they emerge from the seed region into experienced reality, they manifest in two different ways: as “events” in the material domain, and as “experiences” or “perceptions” in the mental domain. They then claim that in order for something from the seed region to manifest, it requires an interaction of at least two seeds.
Now, they try to start using pseudo-math to justify their gibberish.
Suppose we have two of these seed beasties, S1 and S2. Now, suppose we have a mathematical representation of them as “vectors”. They write that as [S].
A “normal” event, according to them, is one where the events combine in what they call a “linear” way (scare-quotes theirs): [S1] + [S2] = [S1 + S2]. On the other hand, events that are perceived as anomalous are events for which that’s not true: [S1] + [S2] ≠ [S1 + S2].
We’re already well into the land of pretend mathematics here. We have two non-quantifiable “seeds”; but we can add them together… We’re pulling group-theory type concepts and notations, and applying them to things that absolutely do not have any of the prerequisites for those concepts to be meaningful.
But let’s skip past that for a moment, because it gets infinitely sillier shortly.
They draw a cartesian graph with four quadrants, and label them (going clockwise from the first quadrant): T (for tangible), I (for intangible – aka, not observable in tangible reality), U (for unconscious), and C (conscious). So the upper-half is what they consider to be observable, and the bottom half is non-observable; and the left side is mind and the right side is matter. Further, they have a notion of “hard” and “soft”; objective is hard, and subjective is soft. They proceed to give a list of ridiculous pairs of words which they claim are different ways of expressing the fundamental “hard/soft” distinction, including “masculine/feminine”, “particulate/wavelike”, “words/music”, and “yang/yin”.
Once they’ve gotten here, they get to my all-time favorite PEAR statement; one which is actually astonishingly obvious about what they’re really up to:
>It is then presumed that if we appropriate and pursue some established
>mathematical formalism for representing such components and their interactions,
>the analytical results may retain some metaphoric relevance for the emergence
>of anomalous mind/matter manifestations.
I love the amount of hedging involved in that sentence! And the admission that
they’re just “appropriating” a mathematical formalism for no other purpose than
to “retain some metaphoric relevance”. I think that an honest translation of
that sentence into non-obfuscatory english is: “If we wrap this all up in
mathematical symbols, we can make it look as if this might be real science”.
So, they then proceed to say that they can represent the seeds as complex numbers: S = s + iσ. But “s” and “σ” can’t just be simply “pre-material” and “pre-mental”, because that would be too simple. Instead, they’re “hard” and “soft”; even though we’ve just gone through the definition which categorized hard/soft as a better characterization of material and mental. Oh, and they have to make sure that this looks sufficiently mathematical, so instead of just saying that it’s a complex number, they present it in *both* rectangular and polar coordinates, with the equation for converting between the two notations written out inside the same definition area. No good reason for that, other than to have something more impressive looking.
Then they want to define how these “seeds” can propagate up from the very lowest reaches of their non-observable region into actual observable events, and for no particular reason, they decide to use the conjugate product equation randomly selected from quantum physics. So they take a random pair of seeds (remember that they claim that events proceed from a combination of at least two seeds), and add them up. They claim that the combined seed is just the normal vector addition (which they proceed to expand in the most complex looking way possible); and they also take the “conjugate products” and add them up (again in the most verbose and obfuscatory way possible); and then take the difference between the two different sums. At this point, they reveal that for some reason, they think that the simple vector addition corresponds to “[S1] + [S2]” from earlier; and the conjugate is “[S1+S2]”. No reason for this correspondence is given; no reason for why these should be equal for “non-anomalous” events; it’s just obviously the right thing to do according to them. And then, of course, they repeat the whole thing in polar notation.
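The only real mathematics hiding under their "anomaly" is the elementary cross term of complex multiplication: |S1 + S2|² differs from |S1|² + |S2|² by 2·Re(S1·S2*). A quick sketch (the seed values are arbitrary numbers of my choosing):

```python
# PEAR's "non-linearity" is just the cross term of complex multiplication.

S1 = 3 + 4j      # "hard" + i*"soft", in PEAR's terminology; values arbitrary
S2 = 1 + 2j

# Their "[S1 + S2]": the conjugate product of the summed seeds.
combined = ((S1 + S2) * (S1 + S2).conjugate()).real

# Their "[S1] + [S2]": the sum of the individual conjugate products.
separate = (S1 * S1.conjugate() + S2 * S2.conjugate()).real

cross_term = 2 * (S1 * S2.conjugate()).real

# The discrepancy they attribute to anomalous mind/matter interaction
# is exactly this cross term -- a triviality of complex arithmetic.
assert abs((combined - separate) - cross_term) < 1e-9
```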
It just keeps going like this: randomly pulling equations out of a hat for no particular reason, using them in bizarrely verbose and drawn out forms, repeating things in different ways for no reason. After babbling onwards about these sums, they say that “Also to be questioned is whether other interaction recipes beyond the simple addition S1,2 = S1 + S2 could profitably be explored.”; they suggest multiplication; but decide against it just because it doesn’t produce the results that they want. Seriously! In their words “but we show that this doesn’t generate similar non-linearities”: that is, they want to see “non-linearities” in the randomly assembled equations, and since multiplying doesn’t have that, it’s no good to them.
Finally, we’re winding down and getting to the end: the “summary”. (I was taught that when you write a technical paper, the summary or conclusion section should be short and sweet. For them, it’s two full pages of tight text.) They proceed to restate things, complete with repeating the gibberish equations in yet another, slightly different form. And then they really piss me off. Statement six of their summary says “Elementary complex algebra then predicts babble babble babble”. Elementary complex algebra “predicts” no such thing. There is no real algebra here, and nothing about algebra would remotely suggest anything like what they’re claiming. It’s just that this is a key step in their reasoning chain, and they absolutely cannot support it in any meaningful way. So they mask it up in pseudo-mathematical babble, and claim that the mathematics provides the link that they want, even though it doesn’t. They’re trying to use the credibility and robustness of mathematics to keep their nonsense above water, even though there’s nothing remotely mathematical about it.
They keep going with the nonsense math: they claim that the key to larger anomalous effects resides in “better alignment” of the interacting seed vectors (because the closer the two vectors are, in their framework, the larger the discrepancy between their two ways of “adding” vectors); and that alignments are driven by “personal need or desire”. And it goes downhill from there.
This is really wretched stuff. To me, it’s definitely the most offensive of the PEAR papers. The other PEAR stuff I’ve seen is abused statistics from experiments. This is much more fundamental – instead of just using sampling errors to support their outcome (which is, potentially, explainable as incompetence on the part of the researchers), this is clear, deliberate, and fundamental misuse of mathematics in order to lend credibility to nonsense.
[old]: http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.html
[pear-thoth]: http://goodmath.blogspot.com/2006/05/pear-yet-again-theory-behind.html
[scigen]: http://pdos.csail.mit.edu/scigen/

Subtraction: Math Too Hard for a Conservative Blogger

This has been written about [elsewhere][lf], but I can’t let such a perfect example of the fundamental innumeracy of so many political pundits pass me by without commenting.
Captain Ed of [Captain’s Quarters][cq] complains about a speech by John Edwards in which Edwards mentions 37 million people below the poverty line:
>Let’s talk about poverty. Where did John Edwards get his numbers? The US Census
>Bureau has a ready table on poverty and near-poverty, and the number 37 million
>has no relation to those below the poverty line. If his basis is worry, well,
>that tells us nothing; what parent doesn’t worry about putting food on the
>table and clothes on the children, except for rich personal-injury attorneys?
>That threshold is meaningless.
Now, let’s look at the very figures that our brilliant captain links to:

Table 6. People Below 125 Percent of Poverty Level and the Near Poor: 1959 to 2004
(Numbers in Thousands)
____________________________________________________________
                        Below 1.25           Between 1.00 - 1.25
Year       Total        Number   Percent     Number   Percent
____________________________________________________________
2004.....  290,605      49,666   17.1        12,669   4.4
Ok… So, approximately 50 million people below 1.25 * the poverty line… And approximately 13 million people above the poverty line, but below 1.25 times it…
Now, what kind of brilliant mathematician does it take to figure out how many people are below the poverty line from this table? What kind of sophisticated math do we need to use to figure it out? College calculus? No. High school algebra? No. 3rd grade subtraction? There we are.
50 – 13 = ?
My daughter, who is in *kindergarten*, can do this using her *fingers*. But apparently math like this is completely beyond our Captain. (As much as I hate to admit it, this isn’t a phenomenon of people on the political right being innumerate; this kind of innumeracy is widespread on both ends of the political spectrum.)
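Just to spell the arithmetic out, using the figures straight from the Census table above:

```python
# Census Table 6 figures, in thousands of people.
below_125 = 49_666             # below 1.25x the poverty line
between_100_and_125 = 12_669   # above the line, but below 1.25x it

# Below 1.25x, minus the slice between 1.00x and 1.25x,
# leaves the people below the poverty line itself.
below_poverty_line = below_125 - between_100_and_125
# 36,997 thousand: about 37 million, exactly Edwards's number.
```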
[lf]: http://lawandpolitics.blogspot.com/2006_07_01_lawandpolitics_archive.html#115250909380587878
[cq]: http://www.captainsquartersblog.com/mt/archives/007436.php

Yoneda's Lemma

So, at last, we can get to Yoneda’s lemma, as I [promised earlier][yoneda-promise]. What Yoneda’s lemma does is show us how for many categories (in fact, most of the ones that are interesting) we can take the category C, and understand it using a structure formed from the functors from C to the category of sets. (From now on, we’ll call the category of sets **Set**.)
So why is that such a big deal? Because the functors from C to **Set** define a *structure* formed from sets that represents the properties of C. Since we have a good intuitive understanding of sets, that means that Yoneda’s lemma
gives us a handle on how to understand all sorts of difficult structures by looking at the mapping from those structures onto sets. In some sense, this is what category theory is really all about: we’ve taken the intuition of sets and functions; and used it to build a general way of talking about structures. Our knowledge and intuition for sets can be applied to all sorts of structures.
As usual for category theory, there’s yet another definition we need to look at, in order to understand the categories for which Yoneda’s lemma applies.
If you recall, a while ago, I talked about something called *[small categories][smallcats]*: a small category is a category for which the class of objects is a set, and not a proper class. Yoneda’s lemma applies to a class of categories slightly less restrictive than the small categories, called the *locally small categories*.
The definition of locally small categories is based on something called the Hom-classes of a category. Given a category C, the hom-classes of C are a partition of the morphisms in the category. Given any two objects a and b in Obj(C), the hom-class **Hom**(a,b) is the class of all morphisms f : a → b. If **Hom**(a,b) is a set (instead of a proper class), then it’s called the hom-set of a and b.
A category C is *locally small* iff all of the hom-classes of C are sets: that is, if for every pair of objects in Obj(C), the morphisms between them form a set, and not a proper class.
So, on to the lemma.
Suppose we have a locally small category C. Then for each object a in Obj(C), there is a natural functor from C to **Set**. This is called the hom-functor of a, and it’s generally written: *h*a = **Hom**(a,-). *h*a is a functor which maps an object x in C to the set of morphisms **Hom**(a,x).
If F is a functor from C to **Set**, then for all a ∈ Obj(C), the set of natural transformations from *h*a to F has a one-to-one correspondence with the elements of F(a): that is, the natural transformations – the structure preserving mappings – from the hom-functor of a to F correspond exactly to the elements of F(a).
So the hom-functors – which are built from nothing but sets and functions – capture all of the structure of C.
Yesterday, we saw a way how mapping *up* the abstraction hierarchy can make some kinds of reasoning easier. Yoneda says that for some things where we’d like to use our intuitions about sets and functions, we can also *map down* the abstraction hierarchy.
(If you saw my posts on group theory back at GM/BMs old home, this is a generalization of what I wrote about [the symmetric groups][symmetry]: the fact that every group G is isomorphic to a subgroup of the symmetric group on G.)
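In fact, the group-theory special case is easy to check directly; here's a quick Python sketch (a toy example of my choosing) for the integers mod 3 under addition. Every element g acts on the group by left-addition, that action is a permutation, and the assignment g ↦ (its permutation) is a homomorphism: that's the embedding of G into the symmetric group on G.

```python
# Cayley's theorem, checked for Z/3Z under addition.

elements = [0, 1, 2]

def left_action(g):
    # The permutation of the group induced by adding g on the left.
    return tuple((g + x) % 3 for x in elements)

perms = {g: left_action(g) for g in elements}

# Each element induces a genuine permutation (every value hit exactly once)...
assert all(sorted(p) == elements for p in perms.values())

def compose(p, q):
    # (p o q)(i) = p(q(i)), with permutations as lookup tuples.
    return tuple(p[i] for i in q)

# ...and the embedding is a homomorphism: the permutation of (g + h)
# is the composition of the permutations of g and h.
assert all(perms[(g + h) % 3] == compose(perms[g], perms[h])
           for g in elements for h in elements)
```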
Coming up next: why do computer science geeks like me care about this abstract nonsense? What does all of this gunk have to do with programming and programming languages? What the heck is a monad? And more.
[symmetry]: http://goodmath.blogspot.com/2006/04/permutations-and-symmetry-groups.html
[yoneda-promise]: http://scienceblogs.com/goodmath/2006/06/category_theory_natural_transf.php
[smallcats]: http://scienceblogs.com/goodmath/2006/06/more_category_theory_getting_i.php

Using Natural Transformations: Recreating Closed Cartesian Categories

Today’s contribution on category theory is going to be short and sweet. It’s an example of why we really care about [natural transformations][nt]. Remember the trouble we went through working up to define [cartesian categories and cartesian closed categories][ccc]?
As a reminder: a [functor][functor] is a structure preserving mapping between categories. (Functors are the morphisms of the category of small categories); natural transformations are structure-preserving mappings between functors (and are morphisms in the category of functors).
Since we know that the natural transformation can be viewed as a kind of arrow, then we can take the definitions of iso-, epi-, and mono-morphisms, and apply them to natural transformations, resulting in *natural isomorphisms*, *natural monomorphisms*, and *natural epimorphisms*.
Expressed this way, a cartesian category is a category C where:
1. C contains a terminal object t; and
2. (∀ a,b ∈ Obj(C)), C contains a product object a×b; and
3. a *natural isomorphism* Δ, which maps each pair of arrows (x → a, x → b) to an arrow (x → (a×b)).
What this really says is: if we look at categorical products, then for a cartesian category, there’s a structure-preserving transformation between the arrows to a pair of values (a,b) and the arrows to the product (a×b).
The closed cartesian category is just the same exact trick using the exponential: A CCC is a category C where:
1. C is a cartesian category, and
2. (∀ a,b ∈ Obj(C)), C contains an exponential object b^a, and a natural isomorphism Λ, where (∀ y ∈ Obj(C)) Λ : (y×a → b) → (y → b^a).
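For programmers, the natural isomorphism Λ is an old friend: it's currying. A function out of a pair (y×a → b) corresponds one-to-one with a function returning a function (y → (a → b)). A quick sketch (the function names are mine):

```python
# The CCC natural isomorphism Lambda, seen as currying.

def curry(f):
    # Lambda:  (y x a -> b)  =>  (y -> (a -> b))
    return lambda y: lambda a: f(y, a)

def uncurry(g):
    # The inverse: (y -> (a -> b))  =>  (y x a -> b)
    return lambda y, a: g(y)(a)

def pair_fn(y, a):
    return y * 10 + a

# Round-tripping through the isomorphism preserves behavior:
assert curry(pair_fn)(3)(7) == pair_fn(3, 7) == 37
assert uncurry(curry(pair_fn))(3, 7) == 37
```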
Look at these definitions; then go back and look at the old definitions that we used without the new constructions of the natural transformation. That will let you see what all the work to define natural transformations buys us. Category theory is all about structure; with categories, functors, and natural transformations, we have the ability to talk about extremely sophisticated structures and transformations using a really simple, clean abstraction.
[functor]: http://scienceblogs.com/goodmath/2006/06/more_category_theory_getting_i.php
[nt]: http://scienceblogs.com/goodmath/2006/06/category_theory_natural_transf.php
[ccc]: http://scienceblogs.com/goodmath/2006/06/categories_products_exponentia_1.php

Lying with Statistics: Abortion Rates

Via [Feministe][feministe], we see a wingnut named Tim Worstall [trying to argue something about sexual education][worstall]. It’s not entirely clear just what the heck he thinks his argument is; he wants to argue that sexual education “doesn’t work”; his argument about this is based on abortion rates. This
is an absolutely *classic* example of how statistics are misused in political arguments. So let’s take a look, and see what’s wrong.
[feministe]: http://www.feministe.us/blog/archives/2006/07/10/lies-damn-lies-and-statistics/
[worstall]: http://timworstall.typepad.com/timworstall/2006/07/sex_education_w.html#comment-19323490
He quotes an article from the Telegraph, a UK newspaper. The Telegraph article cites statistics from the UK Department of Health. Here’s what Worstall has to say:
>Yup, gotta hand it to them, the campaigners are right. Sex education obviously works
>
>Abortions have reached record levels, and nearly a third of women who have an abortion have had one
>or more before.
>
>Department of Health statistics reveal that abortions in England and Wales rose by more than 700 in
>2005, from 185,713 in 2004 to 186,416.
>…
>Some 31 per cent of women had one or more previous abortions, a figure that rises to 43 per cent
>among black British women.
>
>The ever increasing amount of sex education, the ever easier provision of contraception is clearly
>driving down the number of unwanted pregnancies.
Clearly, Worstall and the author of the telegraph piece want us to believe that there’s a significant *increase* in the number of abortions in the UK; and that this indicates some problem with the idea of sex-ed.
So what’s wrong with this picture?
First, let’s just look at those numbers, shall we? We’re talking about a year over year increase of *700* abortions from a base of *185,000*. How significant is that? Well, do the math: 0.37%. Yes, about one third of one percent. Statistically significant? Probably not. (Without knowing exactly how those numbers are gathered, including whether or not there’s a significant possibility of abortions being underreported, there’s no way to be absolutely sure, but 1/3 of 1% from a population of 185,000 or so is not likely to be significant.)
But it gets worse. Take a good look at those statistics: what do they measure? They’re a raw number of abortions. But what does that number actually mean? Statistics like that taken out of context are very uninformative. Let’s put them in context. From the [statistics for England and Wales][stats]:
[stats]: http://www.johnstonsarchive.net/policy/abortion/ab-ukenglandwales.html
In the year 2003, there were 621,469 live births, and 190,660 abortions. In 2004, there were 639,721 live births, and 194,179 abortions. Now, these stats are from the UK Office of National Statistics. Note that the numbers *do not match* the numbers cited earlier. In fact, taken as bare statistics, these numbers show a *much larger* increase in abortions: about 1.8%.
But, put in context… Take the number of abortions as a percentage of non-miscarried pregnancies (which we need to do because the miscarriage statistics for the years 2003 and 2004 are not available), and we find that
the number of abortions per 1000 pregnancies actually *declined* from 292/1000 in 2003 to 290/1000 in 2004. And that number from 2003 was a decline from 2002, which was a decline from 2001. So for the last four years for which statistics are available, the actual percentage of pregnancies ending in abortions has been nearly constant; but closely studying the numbers shows that the number has been *declining* for those four years.
In fact, if we look at abortion statistics overall, what we find is that from the legalization of abortion in the UK, there was a consistent increase until about 1973 (when the number of abortions reached 167,000), and since then, the number has ranged upwards and downwards with no consistent pattern.
So – what we’ve got here is a nut making an argument that’s trying to use statistics to justify his political stance. However, the *real* statistics, in context, don’t say what he wants them to say. So – as usual for a lying slimebag – he just selectively misquotes them to make it *look like* they say what he wants them to.

Comments, Typekey, Etc.

Just so folks know:
ScienceBlogs is experimenting with some new anti-spam stuff, which should do away with the need
for typekey. I’ve disabled typekey for Goodmath/Badmath, and we’ll see how it goes. If you’ve got cookies or cached data for the site, you might have a bit of trouble with comments for a day or two; if you do, please drop me an email (see the contact tab), and I’ll see what I can do.
I’m also trying to figure out the right settings for the spam filter on the blog; if you post a comment and it doesn’t appear immediately, it’s probably because I don’t have the settings right. Don’t worry; it just means your message is sitting in the moderation queue until I get around to releasing it.

An Update on the Bible Code Bozos

About 10 days ago, I wrote a post about a group of bozos who believe they’ve found a secret code in the bible, and that according to them, there was going to be a nuclear attack on the UN building in NYC by terrorists. This was their fourth attempt to predict a date based on their oh-so-marvelous code.
Well, obviously, they were wrong again. But, do they let that stop them? Of course not! That’s the beauty of using really bad math for your code: you can always change the result when it doesn’t work. If you get the result you want, then you can say your code was right; if you don’t get things right, it’s never the fault of the code: it’s just that you misinterpreted. I thought it would be amusing to show their excuse:

We made another mistake. The monthly Sabbath of 2006Tammuz is not 30 individual daily Sabbaths, but is one month long Sabbath. Our new predicted date for a Nuclear Attack on the UN in New York City launched from the Sea or a great River is Sundown Tuesday July 25th – Sundown Thursday July 27th.

Yeah, they got days and months mixed up. That’s the ticket! So now it’s another three weeks off. But they’re still right! They still know the truth, and want to save us!

Who would have guessed? Dick Cheney can do the math

Or at least his financial advisers can.
Kiplinger’s, via MSN Money, is [reporting that Dick Cheney is betting that the economy is going to tank][cheney-invest]. When you take a look at the numbers: the deficit, the state of the dollar, the price of energy, stagnant wages, and the way that the economy is only being propped up by consumer spending, it’s hard to be optimistic about the economy. And apparently, despite what he says, Cheney’s not betting his own money on the success of his and George’s economic policies.
[cheney-invest]: http://articles.moneycentral.msn.com/Investing/Extra/CheneysBettingonBadNews.aspx
He’s put *at least* 10 million in a municipal bonds fund that will only do really well if interest rates keep rising; at least another mil in a money market fund that also depends on rising interest rates; and at least 2 million in “inflation protected” securities. Inflation protected securities are basically bonds and bond-like securities that pay a low interest rate, but that are structured to ensure that the principal grows with inflation. They’re really only a good investment if you believe that inflation is on the rise and the dollar is going to sink.
Overall, our vice president has somewhere between 13 and 40 million dollars invested in things whose performance is based on interest rates and inflation rising, and the dollar tanking.
According to the same public disclosure documents from which this information was originally taken, his net worth is somewhere between 30 and 100 million. What that means is that it looks like the majority of his liquid assets are solidly bet against the success of the policies of the government he is a part of.
Not pretty. But what did you really expect from a corrupt, power-hungry
asshole who considers the government to be a great big racket for rewarding
his buddies?
(See also [Attu sees All][attu]’s take on this.)
[attu]: http://attu.blogspot.com/2006/07/cheneys-betting-on-bad-news.html

Friday Random Ten, July 7

It’s Friday again, so it’s time for a random ten. So out comes my iPod, and the results are:
1. **Bela Fleck and the Flecktones, “Latitude”**: mediocre tune off of the latest Flecktones album. This album was a 3-CD set. Unfortunately, it really should have been a single CD; they just didn’t bother to separate the good stuff from the not-so-good stuff. Very disappointing – they’re an amazing group of guys (well, except for Jeff…), and this just isn’t up to the quality they should be able to produce.
2. **Marillion, “Man of a Thousand Faces”**: a really fantastic Marillion tune. It ends with a very Yes-like layering buildup.
3. **Tony Trischka Band, “Sky is Sleeping”**: a track off of the TTB’s first album. Tony doesn’t disappoint: brilliant playing, great chemistry between the band members. Features some truly amazing back-and-forth between banjo and sax.
4. **Sonic Youth, “Helen Lundeberg”**: something from Sonic Youth’s latest. I love this album.
5. **Peter Hammill, “Our Oyster”**: live Hammill — wonderful, strange, dark, depressing. It’s a tune about Tiananmen Square.
6. **Flower Kings, “Fast Lane”**: typical FK – aka amazing neo-progrock.
7. **Broadside Electric, “Sheath and Knife”**: a modern rendition of a very gruesome old medieval ballad about incest.
8. **Stuart Duncan, “Thai Clips”**: a nice little bluegrass tune by one of the best bluegrass fiddlers around. Don’t ask why it’s called “Thai Clips”, nothing about it sounds remotely Thai.
9. **Dirty Three, “Ember”**: how many times do I need to rave about how much I love the Dirty Three?
10. **Lunasa, “Spoil the Dance”**: nice flute-heavy traditional Irish by Lunasa. For once, it’s not played so insanely fast. I’d guess around 130 bpm, rather than Lunasa’s usual 170 to 180. Lunasa’s a great band, and I love all their recordings; but Irish music like this is supposed to be *dance* music; you can’t dance at 180 bpm.