There is at least a little bit of interesting bad math
to learn from in the whole financial mess going on now. A couple
of commenters beat me to it, but I’ll go ahead and write about it anyway.
One of the big questions that comes up again and again is: how did they get away with this? How could they find a way of
taking things that were worthless, and turning them into something that could be represented as safe?
The answer is that they cheated in the math.
The way that you assess risk for something like a mortgage bond is based on working out the probability of the underlying loans failing, and using that to compute the likelihood that the
entire bond package will end up losing money.
The biggest problem is that the whole system of ratings and
insurance for mortgage (and other) bonds is based on probability
computations of how likely it is for the underlying loans to
default. The problem is in how they computed the probability of
default. They made the same mistake that we constantly see
creationists making in some of their stupid arguments: false
independence. They build up assessments of risk on the
assumption that, for a given set of loans, the probabilities of
different loans failing are completely independent of one another.
Quick refresher on probability. Take two events – like
two loans defaulting. Say the probability of the first
loan defaulting is p1, and the probability of the second
loan defaulting is p2. If the two events are independent, then the probability of both occurring – the probability of both loans defaulting – is p1×p2. But if they’re not
independent, that doesn’t work. Then the computation gets a
lot messier – because you need to work out the math describing
the relationship between the events – which generally involves lots of analysis,
and lots and lots of use of Bayes’ theorem.
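To make that concrete, here’s a trivial sketch in Python. The 60% conditional probability is just a number I made up for illustration:

```python
# Two loans, each with a 10% chance of defaulting on its own.
p1, p2 = 0.10, 0.10

# If the defaults are independent, both defaulting is rare:
p_both_independent = p1 * p2          # 0.01 - a 1% chance

# If they're dependent - say one loan defaulting means the other
# defaults 60% of the time (a made-up number for illustration) -
# you need the conditional probability P(second | first) instead:
p2_given_1 = 0.60
p_both_dependent = p1 * p2_given_1    # 0.06 - six times more likely

print(p_both_independent, p_both_dependent)
```

Same two loans, same individual probabilities – but the dependence makes the joint failure six times more likely than the independence assumption predicts.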
It’s easiest to describe how this works by using an example.
Suppose we’ve got a package of 100 mortgages, each of which
borrowed $100,000. So we’ve got 10 million dollars worth of
mortgages. Suppose, for simplicity, that the total interest earned
on the loans was going to be 150% – so at the end of the loans’ 30-year
term, the bonds were expected to pay $25 million.
Now, suppose that these were really lousy loans – they expected
that each loan had a 10% probability of defaulting, and that they’d lose the entire amount of the loan on a default.
They assumed that the probability of default for each loan is independent of the probability of default for any other loan. Under that assumption, they could build an argument that probability predicts that only about 10% of the loans will fail, and that it’s incredibly unlikely for the rate of failure to reach higher than 20%.
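Under the independence assumption, that tail probability is just a binomial calculation. Here’s a quick sketch of the arithmetic (mine, not theirs), using the numbers from the example:

```python
from math import comb

def prob_at_least(n, p, k):
    """P(at least k of n independent events occur): exact binomial tail."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# 100 loans, each with an (assumed independent) 10% chance of default.
p_more_than_20 = prob_at_least(100, 0.10, 21)  # well under 1%
p_at_least_50 = prob_at_least(100, 0.10, 50)   # astronomically small
print(p_more_than_20, p_at_least_50)
```

With independence, more than 20 defaults comes out to roughly a tenth of a percent, and 50 defaults is so improbable it’s effectively impossible. That’s the kind of number the pretty risk assessments were built on.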
Based on that, they say that by using tranching to separate risk, they can claim that they’re being extra careful, and put 80% of
the mortgage bonds into a top tranche fund, which is supposed to be
safe – because more than 20% of the loans would have to default before the top tranche could lose anything.
But the probabilities of default aren’t independent. Sure, there are random failures, where someone gets sick and can’t work, and ends up defaulting on a loan. That kind of default generally
really is an independent event. But that’s not the story behind most defaults on low-quality loans. The garbage loans are almost always variable interest rate, and the most common cause of default is an interest rate change, which causes the loan payments
to become too large for the borrowers to pay. When that happens,
the defaults aren’t independent events. The same thing that causes one
loan to fail causes others to fail. Huge numbers fail at the same time, for the same cause. Depending on how the numbers work out,
you can get very different results for what’s likely to happen.
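You can see the effect with a toy Monte Carlo simulation. Every number in this model is my own invention for illustration – but it’s tuned so that each loan still defaults 10% of the time on average, exactly as in the example above:

```python
import random

def mass_default_rate(n_loans=100, n_trials=20_000, seed=1):
    """Toy model of correlated defaults: a shared cause (a rate spike)
    drives most failures. All the specific numbers are made up, but the
    average per-loan default rate is still ~10%:
    0.08 * 0.60 + 0.92 * 0.0565 = ~0.10."""
    rng = random.Random(seed)
    bad_outcomes = 0
    for _ in range(n_trials):
        if rng.random() < 0.08:       # 8% chance: interest rates spike...
            p_default = 0.60          # ...and each loan is likely to fail
        else:
            p_default = 0.0565        # otherwise, only idiosyncratic failures
        defaults = sum(rng.random() < p_default for _ in range(n_loans))
        if defaults > 20:             # enough losses to reach the top tranche
            bad_outcomes += 1
    return bad_outcomes / n_trials

print(mass_default_rate())  # roughly 0.08
```

Each individual loan still looks like a 10% risk. But because the failures share a common cause, more than 20% of the loans fail about 8% of the time – versus roughly a tenth of a percent under the independence assumption. Same loans, same individual risk, wildly different answer.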
It’s not simple – there’s no one answer. And I don’t have the information that I would need to do the math. This isn’t back-of-the-envelope stuff.
But without doing anything complicated, I can say that they did it wrong. As I argued above, when you’ve got a huge collection of high-risk loans with variable interest rates, the pattern of default isn’t going to be independent. While the probability of any given loan defaulting, looked at individually, is only around 10%, the probability of 50% of the loans defaulting
isn’t 1 in 10^(something really big) – because the same condition that causes one loan to fail is likely to cause many others to fail.
As a result of this, tranching didn’t work. They set up the tranches so that the top tranche was safe given the assumption of independence. But if independence
doesn’t hold – and in these cases it doesn’t – then even an extremely conservatively structured tranching package doesn’t guarantee
the safety of the top tier.
Plenty of people knew that this was wrong. But they were able to write up impressive looking risk assessments, full of pretty math showing how unlikely it was for anything bad to happen. The math was dreadful – but it looked good. And the fact that it looked good provided enough of an excuse for everyone to pretend that they believed it. (And for lots of people to actually really believe it; there’s no shortage of people who invested in this stuff without understanding it, who assumed that a “AAA” rating actually meant that someone had really checked to make sure that it was safe.)