# Truth in Type Theory

Now, we’re getting to the heart of type theory: judgements. Judgements are the basic logical statements that we want to be able to prove about computations in type theory. There’s a basic set of judgements that we want to be able to make.

I keep harping on this, but it’s the heart of type theory: type theory is all about understanding logical statements as specifications of computations. Or, more precisely, in computer science terms, they’re about understanding true logical statements as halting computations.

In this post, we’ll see the ultimate definition of truth in type theory: every logical proposition is a set, and the proposition is true if the set has any members. A non-halting computation is a false statement – because you can never get it to resolve an expression to a canonical value.

So remember as we go through this: judgements are based on the idea of logical statements as specifications of computations. So when we talk about a predicate P, we’re using its interpretation as a specification of a computation. When we look at an expression 3+5, we understand it not as a piece of notation that describes the number 8, but as a description of a computation that adds 3 to 5. “3+5” is not the same computation as “2*4” or “2+2+2+2”, but as we’ll see, they’re equal because they evaluate to the same thing – that is, they each perform a computation that results in the same canonical value – the number 8.
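A minimal sketch of that idea in Python (the encoding is my own invention, not the post’s notation): two expression trees that are distinct as computations, but equal because they evaluate to the same canonical value.

```python
def evaluate(expr):
    """Reduce an expression tree to its canonical value (an int)."""
    if isinstance(expr, int):          # already canonical
        return expr
    op, left, right = expr             # a tuple like ('+', 3, 5)
    l, r = evaluate(left), evaluate(right)
    return l + r if op == '+' else l * r

three_plus_five = ('+', 3, 5)
two_times_four = ('*', 2, 4)

# Different computations as notation...
assert three_plus_five != two_times_four
# ...but equal, because they evaluate to the same canonical value.
assert evaluate(three_plus_five) == evaluate(two_times_four) == 8
```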

In this post, we’re going to focus on a collection of canonical atomic judgement types:

$A\ \text{set}$
This judgement says that $A$ is a set.
$A = B$
$A$ and $B$ are equal sets.
$a \in A$
$a$ is an element of the set $A$.
$a_1 == a_2 \in A$
$a_1$ and $a_2$ are equal members of the set $A$.
$A\ \text{prop}$
$A$ is a proposition.
$A\ \text{true}$
The proposition $A$ is true.

The definition of the meanings of judgements is, necessarily, mutually recursive, so we’ll have to go through all of them before any of them is complete.

## An object $A$ is a Set

When I say that $A$ is a set in type theory, that means that:

• I know the rules for how to form the canonical expressions for the set;
• I’ve got an equivalence relation that says when two canonical members of the set are equal.

## Two Sets are Equal

When I say that $A$ and $B$ are equal sets, that means:

• $A$ and $B$ are both sets.
• If $a$ is a canonical member of $A$, then $a$ is also a canonical member of $B$, and vice versa.
• If $a$ and $b$ are canonical members of $A$, and they’re also equal in $A$, then $a$ and $b$ are also equal canonical members of $B$ (and vice versa).

The only tricky thing about the definition of this judgement is the fact that it defines equality as a property of a set, not of the elements of the set. By this definition, it’s possible for two expressions to be members of two different sets, and to be equal in one, but not equal in the other. (We’ll see in a bit that this possibility gets eliminated, but this stage of the definition leaves it open!)
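As a loose analogue of that open possibility (ordinary Python, not type theory, and names of my own invention): think of pairs of integers read both as raw pairs and as fractions. The same two objects are unequal under the pair-set’s equivalence relation, but equal under the rational-set’s.

```python
a, b = (1, 2), (2, 4)   # the pairs 1/2 and 2/4

def equal_as_pairs(p, q):
    """Equality in the set of pairs: componentwise."""
    return p == q

def equal_as_rationals(p, q):
    """Equality in the set of rationals: cross-multiplication."""
    return p[0] * q[1] == q[0] * p[1]

assert not equal_as_pairs(a, b)     # unequal in one set...
assert equal_as_rationals(a, b)     # ...but equal in the other
```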

## An object is a member of a set

When I say $a \in A$, that means that if I evaluate $a$, the result will be a canonical member of $A$.
## Two members of a set are equal

If $a \in A$ and $b \in A$, then the judgement $a == b \in A$ means that when you evaluate $a$, you’ll get a canonical expression, and when you evaluate $b$, you’ll get a canonical expression. For $a == b \in A$ to be true, those two canonical expressions – the results of the evaluations – must also be equal.

This nails down the problem back in set equivalence: since membership equivalence is defined in terms of evaluation as canonical values, and every expression evaluates to exactly one canonical expression (that’s the definition of canonical!), then if two objects are equal in a set, they’re equal in all sets that they’re members of.

## An object $A$ is a proposition

Here’s where type theory really starts to depart from the kind of math that we’re used to. In type theory, a proposition is a set. That’s it: to be a proposition, $A$ has to be a set.
## The proposition $A$ is true

And the real meat of everything so far: if we have a proposition $A$, and $A$ is true, what that means is that $A$ has at least one element. If a proposition is a non-empty set, then it’s true. If it’s empty, it’s false.

Truth in type theory really comes down to membership in a set. This is, subtly, different from the predicate logic that we’re familiar with. In predicate logic, a quantified proposition can, ultimately, be reduced to a set of values, but it’s entirely reasonable for that set to be empty. I can, for example, write a logical statement that “All dogs with IQs greater than 180 will turn themselves into cats.” A set-based interpretation of that is the collection of objects for which it’s true. There aren’t any, because there aren’t any dogs with IQs greater than 180. It’s empty. But logically, in terms of predicate logic, I can still say that it’s “true”. In type theory, I can’t: to be true, the set has to have at least one value, and it doesn’t.
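Here’s a sketch of “truth is inhabitation” in Python (an illustration of the idea, not a real type-theory implementation): a proposition is a set, and it’s true exactly when we can exhibit a member.

```python
def is_true(proposition: set) -> bool:
    """A proposition-as-set is true iff it's inhabited."""
    return len(proposition) > 0

# "There is an even number between 1 and 10": the set of witnesses.
evens = {n for n in range(1, 10) if n % 2 == 0}
assert is_true(evens)             # inhabited, so true

# "There is a dog with an IQ over 180": no witnesses, so the set is empty.
genius_dogs = set()
assert not is_true(genius_dogs)   # empty, so false in type theory
```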

In the next post, we’ll take these atomic judgements, and start to expand them into the idea of hypothetical judgements. In the type-theory sense, that means statements that require some other set of prior judgements before they can be judged. In perhaps more familiar terms, we’ll be looking at type theory’s equivalent of the contexts in classical sequent calculus – those funny little $\Gamma$s that show up in all of the sequent rules.

# Weekend Recipe: 3-cup chicken

This is a traditional chinese dish that my wife grew up eating in Taiwan. For some reason, she never told me about it, until she saw an article with a recipe in the NY Times. Of course, I can’t leave recipes alone; I always put my own spin on it. And the recipe in the article had some (in my opinion) glaring problems. For example, it called for cooking with sesame oil. Sesame oil is a seasoning, not a cooking oil. It’s got a very strong flavor, and it burns at stir-fry temperature, which makes any dish cooked in it taste absolutely awful. You cook in neutral oils with high smoke points, like peanut, canola, or soybean; and then you add a drop of sesame as part of the sauce, so that it’s moderated and doesn’t burn. Anyway, below is my version of the dish.

• 2 pounds of chicken thighs, cut into bite-sized pieces.
• About 8 large cloves of garlic, thickly sliced.
• About a 1-inch section of fresh ginger, cut into disks.
• 5 whole dried szechuan chili peppers (or more, if you like those lovely things!)
• A good bunch of thai basil leaves, removed from the stems, but left whole. (About a cup, if it’s packed pretty tight. Don’t skimp – these are the best part of the dish!)
• 4 scallions, thinly sliced, whites and greens separated.
• 1/3 cup soy sauce.
• 1/4 cup mirin
• 1/2 cup sake
• 1 tablespoon sugar
• 1 teaspoon cornstarch, dissolved in water.
• 1/4 teaspoon sesame oil (just a drop, for flavor).
• Enough canola oil (or similarly bland, high-smoke-point cooking oil) to cook – a couple of tablespoons at most.
1. Get your wok smoking hot. Add enough oil to coat the bottom, and swirl it around.
2. Add in half of the chicken, and cook until it’s nicely browned, then remove it. (It won’t be cooked all the way through yet, don’t worry!)
3. Repeat with the other half of the chicken.
4. Make sure there’s enough oil in the bottom of the wok, then toss in the garlic, ginger, chili peppers, and scallion whites. Stir fry them until the garlic starts to get just a little bit golden.
5. Add the chicken back in, and add the soy, mirin, sake, and sugar. Get it boiling, and keep stirring things around until the chicken is cooked through.
6. Add the basil and scallions, and keep stirring until the basil wilts, and the whole thing smells of that wonderful thai basil fragrance.
7. Add the cornstarch and sesame oil, and cook until the sauce starts to thicken.
8. Remove it from the heat, and serve on a bed of white rice, along with some simple stir-fried vegetables. (I used a batch of beautiful sugar-snap peas, quickly stir fried with just a bit of garlic, and a bit of soy sauce.)

A couple of notes on ingredients:

• This is a dish where the soy sauce matters. Don’t use cheap generic american soy sauce; that stuff is just saltwater with food coloring. For some things, that’s actually OK. But in this dish, it’s the main flavor of the sauce, so it’s important to use something with a good flavor. Get a good quality chinese soy (I like Pearl River Bridge brand), or a good japanese shoyu.
• For the sugar, if you’ve got turbinado (or even better, real chinese rock sugar), use that. If not, white sugar is Ok.
• Definitely try to get thai basil. It’s very different from italian basil – the leaves are thinner (which makes them much easier to eat whole, as you do in this dish), and they’ve got a very different flavor – almost like italian basil mixed with a bit of anise and a bit of menthol. It’s one of my favorite herbs, and it’s actually gotten pretty easy to find.
• Szechuan peppers can be hard to find – you pretty much need to go to an Asian grocery. They’re worth it. They’ve got a very distinctive flavor, and I don’t know of any other dried pepper that works in a sauce like them. You don’t actually eat the peppers – the way you cook them, they actually burn a bit – but they bloom their flavor into the oil that you use to cook the rest of the dish, and that totally changes the sauce.

# Canonical Expressions in Type Theory

Sorry for the gap in posts. I’ve been trying to post more regularly, and was just hitting a rhythm, when my son brought home a particularly vicious bug, and I got sick. I’ve spent the last couple of weeks being really, really sick, and then trying to get caught up at work. I’m mostly recovered, except for some lingering asthma, so I’m trying to get back to that twice-per-week posting schedule.

In the last couple of posts, we looked at Martin-Löf’s theory of expressions. The theory of expressions is purely syntactic: it’s a way of understanding the notation of expressions. Everything that we do in type theory will be written with expressions that follow the syntax, the arity, and the definitional equivalency rules of expression theory.

The next step is to start to understand the semantics of expressions. In type theory, when it comes to semantics, we’re interested in two things: evaluation and judgements. Evaluation is the process by which an expression is reduced to its simplest form. It’s something that we care about, but it’s not really a focus of type theory: type theory largely waves its hands in the air and says “we know how to do this”, and opts for normal-order evaluation. Judgements are where things get interesting.

Judgements are provable statements about expressions and the values that they represent. As software people, when we think about types and type theory, we’re usually thinking about type declarations: type declarations are judgements about the expressions that they apply to. When you write a type declaration in a programming language, what you’re doing is asserting the type theory judgement. When the compiler “type-checks” your program, what it’s doing in type theory terms is checking that your judgements are proven by your program.

For example, we’d like to be able to make the judgement A set – that is, that A is a set. In order to make the judgement that A is a set in type theory, we need to know two things:

1. How are canonical instances of the set A formed?
2. Given two canonical instances of A, how can we determine if they’re equal?

To understand those two properties, we need to take a step back. What is a canonical instance of a set?

If we think about how we use predicate logic, we’re always given some basic set of facts as a starting point. In type theory, the corresponding concept is a primitive constant. The primitive constants include base values and primitive functions. For example, if we’re working with lispish expressions, then cons(1, cons(2, nil)) is an expression, and cons, nil, 1 and 2 are primitive constants; cons is the head of the expression, and 1 and cons(2, nil) are the arguments.

A canonical expression is a saturated, closed expression whose head is a primitive constant.

The implications of this can be pretty surprising, because it means that a canonical expression can contain unevaluated arguments! The expression has to be saturated and closed – so its arguments can’t have unbound variables, or be missing parameters. But it can contain unevaluated subexpressions. For example, if we were working with Peano arithmetic in type theory, succ(2+3) is canonical, even though “2+3” hasn’t been evaluated.

In general, in type theory, the way that we evaluate an expression is called normal order evaluation – what programming language people call lazy evaluation: that’s evaluating from the outside in. Given a non-canonical expression, we evaluate from the outside in until we get to a canonical expression, and then we stop. A canonical expression is considered the result of a computation – so we can see succ(2+3) as a result!

A canonical expression is the evaluated form of an expression, but not the fully evaluated form. The fully evaluated form is when the expression and all of its saturated parts are fully evaluated. So in our previous example, the saturated part 2+3 wasn’t evaluated, so it’s not fully evaluated. To get it to be fully evaluated, we’d need to evaluate 2+3, giving us succ(5); then, since succ(5) is saturated, it’s evaluated to 6, which is the fully evaluated form of the expression.
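The distinction between canonical and fully evaluated can be sketched in a few lines of Python (a toy Peano-ish encoding of my own, not Martin-Löf’s notation): an expression is canonical once its head is a primitive constant, even if its argument is still unevaluated.

```python
def head(expr):
    return expr[0] if isinstance(expr, tuple) else expr

def step_to_canonical(expr):
    """Normal-order: reduce from the outside in, stop at a canonical head."""
    while isinstance(expr, tuple) and head(expr) == '+':
        _, a, b = expr
        expr = fully_evaluate(a) + fully_evaluate(b)
    return expr

def fully_evaluate(expr):
    """Keep going past canonical form, evaluating every saturated part."""
    expr = step_to_canonical(expr)
    if isinstance(expr, tuple) and head(expr) == 'succ':
        return fully_evaluate(expr[1]) + 1
    return expr

e = ('succ', ('+', 2, 3))      # succ(2+3): canonical as-is, head is 'succ'
assert step_to_canonical(e) == e
assert fully_evaluate(e) == 6  # evaluate 2+3 to get succ(5), then 6
```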

Next post (coming monday!), we’ll use this new understanding of canonical expressions, and start looking at judgements, and what they mean. That’s when type theory starts getting really fun and interesting.

# The Bad Logic of Good People Can’t be Sexists

One of my constant off-topic rants around here is about racism and sexism. This is going to be a nice little post that straddles the line. It’s one of those off-topic-ish rants about sexism in our society, but it’s built around a core of bad logic – so there is a tiny little bit of on-topicness.

We live in a culture that embodies a basic conflict. On one hand, racism and sexism are a deeply integrated part of our worldview. But on the other hand, we’ve come to believe that racism and sexism are bad. This puts us into an awkward situation. We don’t want to admit to saying or doing racist things. But there’s so much of it embedded in every facet of our society that it takes a lot of effort and awareness to even begin to avoid saying and doing racist things.

The problem there is that we can’t stop being racist/sexist until we admit that we are. We can’t stop doing sexist and racist things until we admit that we do sexist and racist things.

And here’s where we hit the logic piece. The problem is easiest to explain by looking at it in formal logical terms. We’ll look at it from the viewpoint of sexism, but the same argument applies for racism.

1. We’ll say $\text{Sexist}(x)$ to mean that “x” is sexist.
2. We’ll say $\text{Bad}(x)$ to mean that x is bad, and $\text{Good}(x)$ to mean that x is good.
3. We’ll have an axiom that bad and good are logical opposites: $\text{Bad}(x) \Leftrightarrow \lnot \text{Good}(x)$.
4. We’ll have another axiom that sexism is bad: $\forall x: \text{Sexist}(x) \Rightarrow \text{Bad}(x)$.
5. We’ll say $\text{Does}(p, x)$ means that person $p$ does an action $x$.

The key statement that I want to get to is: We believe that people who do bad things are bad people: $\forall p, x: \text{Does}(p, x) \land \text{Bad}(x) \Rightarrow \text{Bad}(p)$.

That means that if you do something sexist, you are a bad person:

• $s$ is a sexist action: $\text{Sexist}(s)$.
• I do something sexist: $\text{Does}(\textbf{markcc}, s)$.
• By axiom 4 above, that means that $s$ is bad: $\text{Bad}(s)$.
• By the key statement, since I did something bad, I am a bad person: $\text{Bad}(\textbf{markcc})$.

We know that we aren’t bad people: I’m a good person, right? So we reject that conclusion. I’m not bad; therefore, I can’t be sexist, therefore whatever I did couldn’t have been sexist.

This looks shallow and silly on the surface. Surely mature adults, mature educated adults couldn’t be quite that foolish!

Now go read this.

> If his crime was to use the phrase “boys with toys”, and that is your threshold for sexism worthy of some of the abusive responses above, then ok – stop reading now.
>
> My problem is that I have known Shri for many years, and I don’t believe that he’s even remotely sexist. But in 2015 can one defend someone who’s been labeled sexist without a social media storm?
>
> Are people open to the possibility that actually Kulkarni might be very honourable in his dealings with women?

In an interview a week or so ago, Professor Shri Kulkarni said something stupid and sexist. The author of that piece believes that Professor Kulkarni couldn’t have said something sexist, because he knows him, and he knows that he’s not sexist, because he’s a good guy who treats women well.

The thing is, that doesn’t matter. He messed up, and said something sexist. It’s not a big deal; we all do stupid things from time to time. He’s not a bad person because he said something sexist. He just messed up. People are, correctly, pointing out that he messed up: you can’t fix a problem until you acknowledge that it exists! When you say something stupid, you should expect to get called on it, and when you do, you should accept it, apologize, and move on with your life, using that experience to improve yourself, and not make the mistake again.

The thing about racism and sexism is that we’re immersed in it, day in and day out. It’s part of the background of our lives – it’s inescapable. Living in that society means that we’ve all absorbed a lot of racism and sexism without meaning to. We don’t have to like that, but it’s true. In order to make things better, we need to first acknowledge the reality of the world that we live in, and the influence that it has on us.

In mathematical terms, the problem is that good and bad, sexist and not sexist, are absolutes. When we render them into pure two-valued logic, we’re taking shades of gray, and turning them into black and white.

There are people who are profoundly sexist or racist, and that makes them bad people. Just look at the MRAs involved in Gamergate: they’re utterly disgusting human beings, and the thing that makes them so despicably awful is the awfulness of their sexism. Look at a KKKer, and you find a terrible person, and the thing that makes them so terrible is their racism.

But most people aren’t that extreme. We’ve just absorbed a whole lot of racism and sexism from the world we’ve lived our lives in, and that influences us. We’re a little bit racist, and that makes us a little bit bad – we have room for improvement. But we’re still, mostly, good people. The two-valued logic creates an apparent conflict where none really exists.

Where do these sexist/racist attitudes come from? Picture a scientist. What do you see in your mind’s eye? It’s almost certainly a white guy. It is for me. Why is that?

1. In school, from the time that I got into a grade where we had a dedicated science teacher, every science teacher that I had was a white guy. I can still name ’em: Mr. Fidele, Mr. Schwartz, Mr. Remoli, Mr. Laurie, Dr. Braun, Mr. Hicken, etc.
2. On into college, in my undergrad days, where I took a ton of physics and chemistry (I started out as an EE major), every science professor that I had was a white guy.
3. Growing up, my brother and I used to watch a ton of trashy sci-fi movies on TV. In those movies, every time there was a character who was a scientist, he was a white guy.
4. My father was a physicist working in semiconductor manufacturing for satellites and military applications. From the time I was a little kid until the day he retired, he had exactly one coworker who wasn’t a white man. (And everyone on his team complained bitterly that the black guy wasn’t any good, that he only got and kept the job because he was black, and if they tried to fire him, he’d sue them. I really don’t believe that my dad was a terrible racist person; I think he was a wonderful guy, the person who is a role model for so much of my life. But looking back at this? He didn’t mean to be racist, but I think that he was.)

In short, in all of my exposure to science, from kindergarten to graduate school, scientists were white men. (For some reason, I encountered a lot of women in math and comp sci, but not in the traditional sciences.) So when I picture a scientist, it’s just natural that I picture a man. There’s a similar story for most of us who’ve grown up in the current American culture.

When you consider that, it’s both an explanation of why we’ve got such a deeply embedded sexist sense about who can be a scientist, and an explanation how, despite the fact that we’re not deliberately being sexist, our subconscious sexism has a real impact.

I’ve told this story a thousand times, but during the time I worked at IBM, I ran the internship program for my department one summer. We had a departmental quota of how many interns each department could pay for. But we had a second program that paid for interns who were women or minorities – so they didn’t count against the quota. The first choice intern candidate of everyone in the department was a guy. When we ran out of slots, the guy across the hall from me ranted and raved about how unfair it was. We were discriminating against male candidates! It was reverse sexism! On and on. But the budget was what the budget was. Two days later, he showed up with a resume for a young woman, all excited – he’d found a candidate who was a woman, and she was even better than the guy he’d originally wanted to hire. We hired her, and she was brilliant, and did a great job that summer.

The question that I asked my office-neighbor afterwards was: Why didn’t he find the woman the first time through the resumes? He went through the resumes of all of the candidates before picking the original guy. The woman that he eventually hired had a resume that was clearly better than the guy’s. Why’d he pass over her resume to settle on the guy? He didn’t know.

That little story demonstrates two things. One, it demonstrates the kind of subconscious bias we have. We don’t have to be mustache-twirling black-hatted villains to be sexists or racists. We just have to be human. Two, it demonstrates the way that these low-level biases actually harm people. Without our secondary budget for women/minority hires, that brilliant young woman would never have gotten an internship at IBM; without that internship, she probably wouldn’t have gotten a permanent job at IBM after graduation.

Professor Kulkarni said something silly. He knew he was saying something he shouldn’t have, but he went ahead and did it anyway, because it was normal and funny and harmless.

It’s not harmless. It reinforces that constant flood of experience that says that all scientists are men. If we want to change the culture of science to get rid of the sexism, we have to start with changing the deep attitudes that we aren’t even really aware of, but that influence our thoughts and decisions. That means that when we do something sexist or racist, we need to be called on it. And when we get called on it, we need to admit that we did something wrong, apologize, and try not to make the same mistake again.

We can’t let the black and white reasoning blind us. Good people can be sexists or racists. Good people can do bad things without meaning to. We can’t allow our belief in our essential goodness prevent us from recognizing it when we do wrong, and making the choices that will allow us to become better people.

# Bad Comparisons with Statistics

When a friend asks me to write about something, I try to do it. Yesterday, a friend of mine from my Google days, Daniel Martin, sent me a link, and asked me to write about it. Daniel isn’t just a former coworker of mine, but he’s a math geek with the same sort of warped sense of humor as me. He knew my blog before we worked at Google, and on my first Halloween at Google, he came to introduce himself to me. He was wearing a purple shirt with his train ticket on a cord around his neck. For those who know any abstract algebra, get ready to groan: he was purple, and he commuted. He was dressed as an Abelian grape.

Anyway, Daniel sent me a link to this article, and asked me to write about the error in it.

The real subject of the article involves a recent twitter-storm around a professor at Boston University. This professor tweeted some things about racism and history, and she did it in very blunt, not-entirely-professional terms. The details of what she did aren’t something I want to discuss here. (Briefly, I think it wasn’t a smart thing to tweet like that, but plenty of white people get away with worse every day; the only reason that she’s getting as much grief as she is is because she dared to be a black woman saying bad things about white people, and the assholes at Breitbart used that to fuel the insatiable anger and hatred of their followers.)

But I don’t want to go into the details of that here. Lots of people have written interesting things about it, from all sides. Just by posting about this, I’m probably opening myself up to yet another wave of abuse, but I’d prefer to avoid as much of that as I can. Instead, I’m just going to rip out the introduction to this article, because it makes a kind of incredibly stupid mathematical argument that requires correction. Here are the first and second paragraphs:

> There aren’t too many African Americans in higher education.
>
> In fact, black folks only make up about 4 percent of all full time tenured college faculty in America. To put that in context, only 14 out of the 321—that’s about 4 percent—of U.S. astronauts have been African American. So in America, if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor.

Statistics and probability can be a difficult field of study. But… a lot of its everyday uses are really quite easy. If you’re going to open your mouth and make public statements involving probabilities, you probably should make sure that you at least understand the first chapter of “probability for dummies”.

This author doesn’t appear to have done that.

The most basic fact of understanding how to compare pretty much anything numeric in the real world is that you can only compare quantities that have the same units. You can’t compare 4 kilograms to 5 pounds, and conclude that 5 pounds is bigger than 4 kilograms because 5 is bigger than 4.

That principle applies to probabilities and statistics: you need to make sure that you’re comparing apples to apples. If you compare an apple to a grapefruit, you’re not going to get a meaningful result.

The proportion of astronauts who are black is 14/321, or a bit over 4%. That means that out of every 100 astronauts, you’d expect to find four black ones.

The proportion of college professors who are black is also a bit over 4%. That means that out of every 100 randomly selected college professors, you’d expect 4 to be black.

So far, so good.

But from there, our intrepid author takes a leap, and says “if you’re black, you’ve got about as good a chance of being shot into space as you do getting a job as a college professor”.

Nothing in the quoted statistic in any way tells us anything about anyone’s chances to become an astronaut. Nothing at all.

This is a classic statistical error which is very easy to avoid. It’s a unit error: he’s comparing two things with different units. The short version of the problem is: he’s comparing black/astronaut with astronaut/black.

You can’t derive anything about the probability of a black person becoming an astronaut from the ratio of black astronauts to astronauts.

Let’s pull out some numbers to demonstrate the problem. These are completely made up, to make the calculations easy – I’m not using real data here.

Suppose that:

• the US population is 300,000,000;
• black people are 40% of the population, which means that there are 120,000,000 black people.
• there are 1000 universities in America, and there are 50 faculty per university, so there are 50,000 university professors.
• there are 50 astronauts in the US.
• If 4% of astronauts and 4% of college professors are black, that means that there are 2,000 black college professors, and 2 black astronauts.

In this scenario, as in reality, the percentage of black college professors and the percentage of black astronauts are equal. What about the probability of a given black person being a professor or an astronaut?

The probability of a black person being a professor is 2,000/120,000,000 – or 1 in 60,000. The probability of a black person becoming an astronaut is just 2/120,000,000 – or 1 in 60 million. Even though the probability of a random astronaut being black is the same as the probability of a random college professor being black, the probability of a given black person becoming a college professor is 1,000 times higher than the probability of a given black person becoming an astronaut.
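The made-up numbers from the scenario above, worked out in a few lines of Python:

```python
population = 300_000_000
black_population = population * 40 // 100   # 40% -> 120,000,000
professors = 1_000 * 50                     # 50,000 professors
astronauts = 50

black_professors = professors * 4 // 100    # 4% of 50,000 = 2,000
black_astronauts = astronauts * 4 // 100    # 4% of 50 = 2

# Same ratio of black members within each profession...
assert black_professors / professors == black_astronauts / astronauts

# ...but wildly different chances for a given black person.
p_professor = black_professors / black_population   # 1 in 60,000
p_astronaut = black_astronauts / black_population   # 1 in 60,000,000
assert round(p_professor / p_astronaut) == 1000
```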

This kind of thing isn’t rocket science. My 11 year old son has done enough statistics in school to understand this problem! It’s simple: you need to compare like to like. If you can’t understand that, if you can’t understand your statistics enough to understand their units, you should probably try to avoid making public statements about statistics. Otherwise, you’ll wind up doing something stupid, and make yourself look like an idiot.

(In the interests of disclosure: an earlier version of this post used the comparison of apples to watermelons. But given the racial issues discussed in the post, that had unfortunate unintended connotations. When someone pointed that out to me, I changed it. To anyone who was offended: I am sorry. I did not intend to say anything associated with the racist slurs; I simply never thought of it. I should have, and I shouldn’t have needed someone to point it out to me. I’ll try to be more careful in the future.)

# Expressions and Arity (Part 2): Equivalence

Continuing where I left off: we were talking about arity in Martin-Löf’s theory of expressions. There are two basic problems that arity solves: it makes certain kinds of impossible-to-evaluate expressions be invalid in the theory; and it helps enable some way of comparing expressions for equality. Arity solves both of those problems by imposing a simple type system over expressions.

At the end of the last post, I started giving a sketch of what arities look like. Now we’re going to dive in, and take a look at how to determine the arity of an expression. It’s a fairly simple system of rules.

Before diving in, I want to stress the most important thing about the way that these rules work is that the expressions are totally syntactic and static. This is something that confused me the first time I tried to read about expression theory. When you see an expression, you think about how it’s evaluated. But expression theory is a purely syntactic theory: it’s about analyzing expressions as syntactic entities. There are, deliberately, no evaluation rules. It’s about understanding what the notations mean, and how to determine when two expressions are equivalent.

If, under the rules of Martin-Löf’s expression theory, two expressions are equivalent, then if you were to choose a valid set of evaluation rules, the two expressions will evaluate to the same value. But expression equivalence is stronger: expressions are equivalent only if you can prove their equivalence from their syntax.

That clarified, let’s start by looking at the rules of arity in expressions.

Variables and Constants
Every variable and every primitive constant has a pre-defined arity; if $x$ is a variable or primitive constant with arity $\alpha$, then the expression $x$ has arity $\alpha$.
Definitions
In a definition $D := e$, the arity of the defined name $D$ is the same as the arity of the expression $e$.
Applications
If $a$ is an expression of arity $\alpha \twoheadrightarrow \beta$, and $b$ is an expression of arity $\alpha$, then $a(b)$ is an expression of arity $\beta$.
Abstractions
If $e$ is an expression of arity $\beta$ and $x$ is a variable of arity $\alpha$, then $(x)e$ is an expression of arity $\alpha \twoheadrightarrow \beta$.
Combinations
If $e_1$ is an expression of arity $\alpha_1$, $e_2$ is an expression of arity $\alpha_2$, …, and $e_n$ is an expression of arity $\alpha_n$, then a combination expression $e_1, e_2, ..., e_n$ is an expression of arity $\alpha_1 \otimes \alpha_2 \otimes \ldots \otimes \alpha_n$.
Selections
If $e$ is an expression of arity $\alpha_1 \otimes \alpha_2 \otimes \ldots \otimes \alpha_n$ where $n \ge 2$, then $(e).i$ is an expression of arity $\alpha_i$.

Let’s try working through an example: $x^2 + 3x + 7$.

1. As we saw in this post, this is equivalent to the simple AST-form: $(x)+(+(*(x,x), *(3, x)),7)$.
2. “$x$” is a variable of arity 0; “3” and “7” are constants of arity 0; “$+$” and “$*$” are constants of arity $(0 \otimes 0) \twoheadrightarrow 0$.
3. From the combination rule, since $x$ and $3$ both have arity 0, $(x, x)$ and $(3, x)$ each have arity $0 \otimes 0$.
4. Since $(x, x)$ has arity $0 \otimes 0$, and $*$ has arity $(0 \otimes 0) \twoheadrightarrow 0$, $*(x, x)$ has arity 0. The same thing works for $*(3, x)$.
5. Since the arities of $*(x, x)$ and $*(3, x)$ are both 0, the combination of the pair (the arguments to the inner “+”) has arity $0 \otimes 0$, and the arity of the inner sum expression is thus 0.
6. Since 7 has arity 0, the combination of it with the inner sum is $0 \otimes 0$, and the arity of the outer sum is 0.
7. Since $x$ is a variable of arity 0, and the outer sum expression has arity 0, the abstraction has arity $0 \twoheadrightarrow 0$.

If you’re familiar with type inference in the simply typed lambda calculus, this should look pretty familiar; the only real difference is that arity tracks nothing but applicability and parameter counting.

Just from this much, we can see how this prevents problems. If you try to compute the arity of $3.1$ (that is, the selection of the first element from 3), you find that you can’t: there is no arity rule that would allow you to do that. The selection rule only works on a product-arity, and 3 has arity 0.

The other reason we wanted arity was to allow us to compare expressions. Intuitively, it should be obvious that the expression $e$ and the expression $(x)e(x)$ are in some sense equal. But we need some way of being able to actually precisely define that equality.

The kind of equality that we’re trying to get at here is called definitional equality. We’re not trying to define equality where expressions $a$ and $b$ evaluate to equal values – that would be easy. Instead, we’re trying to get at something more subtle: we want to capture the idea that the expressions are different ways of writing the same thing.

We need arity for this, for a simple reason. Let’s go back to that first example expression: $(x)+(+(*(x,x), *(3, x)),7)$. Is that equivalent to $(y)+(+(*(y,y), *(3, y)),7)$? Or to $8x+1$? If we apply them to the value 3, and then evaluate them using standard arithmetic, then all three expressions evaluate to 25. So are they all equivalent? We want to be able to say that the first two are equivalent expressions, but the last one isn’t. And we’d really like to be able to say that structurally – that is, instead of saying something evaluation-based like “forall values of x, eval(f(x)) == eval(g(x)), therefore f == g”, we want to be able to do something that says $f \equiv g$ because they have the same structure.

Using arity, we can work out a structural definition of equivalence for expressions.

In everything below, we’ll write $a: \alpha$ to mean that $a$ has arity $\alpha$, and $a \equiv b : \alpha$ to mean that $a$ and $b$ are equivalent expressions of arity $\alpha$. We’ll define equivalence in a classic inductive form by structure:

Variables and Constants
If $x$ is a variable or constant of arity $\alpha$, then $x \equiv x : \alpha$. This is the simplest identity rule: variables and constants are equivalent to themselves.
Definitions
If $a := b$ is a definition, and $b: \alpha$, then $a \equiv b : \alpha$. This is a slightly more complex form of an identity rule: if $a$ is defined to be $b$, then $a$ and $b$ are equivalent.
Application Rules
1. If $a \equiv a': \alpha \twoheadrightarrow \beta$ and $b \equiv b': \alpha$, then $a(b) \equiv a'(b'): \beta$. If an applyable expression $a$ is equivalent to another applyable expression $a'$, then applying $a$ to an expression $b$ is equivalent to applying $a'$ to an expression $b'$, provided that $b$ is equivalent to $b'$. That’s a mouthful, but it’s simple: if you have two function application expressions, they’re equivalent if both the function expressions and the argument expressions are equivalent.
2. If $x$ is a variable of arity $\alpha$, and $a$ is an expression of arity $\alpha$ and $b$ is an expression of arity $\beta$, then $((x)b)(a) \equiv b[x := a]: \beta$. This is arity’s version of the classic beta rule of lambda calculus: applying an abstraction to an argument means substituting the argument for all references to the abstracted parameter in the body of the abstraction.
Abstraction Rules
1. If $x$ is a variable of arity $\alpha$, and $b \equiv b': \beta$, then $(x)b \equiv (x)b': \alpha \twoheadrightarrow \beta$. If two expressions are equivalent, then abstracting over the same variable in each of them yields equivalent abstractions.
2. If $x$ and $y$ are both variables of arity $\alpha$, and $b$ is an expression of arity $\beta$, then $(x)b \equiv (y)(b[x := y]): \alpha \twoheadrightarrow \beta$, provided $y$ is not free in $b$.
Basically, renaming the variable bound by an abstraction doesn’t matter, as long as the new variable isn’t already free in the body. So $(x)(3+4y)$ is equivalent to $(z)(3+4y)$, but it’s not equivalent to $(y)(3+4y)$, because $y$ is a free variable in $3+4y$, and the abstraction would capture it.

3. This is arity’s version of the eta-rule from lambda calculus: if $x$ is a variable of arity $\alpha$, and $b$ is an expression of arity $\alpha \twoheadrightarrow \beta$, then $(x)(b(x)) \equiv b: \alpha \twoheadrightarrow \beta$. This is a fancy version of an identity rule: abstraction and application cancel.

Combination Rules
1. If $a_1 \equiv a_1'$, $a_2 \equiv a_2'$, …, $a_n \equiv a_n'$, then $a_1, a_2, ..., a_n \equiv a_1', a_2', ..., a_n'$. This one is simple: if you have two combination expressions with the same arity, then they’re equivalent if their elements are pairwise equivalent.
2. If $e: \alpha_1 \otimes \alpha_2 \otimes ... \otimes \alpha_n$, then $e.1, e.2, ..., e.n \equiv e : \alpha_1 \otimes \alpha_2 \otimes ... \otimes \alpha_n$. Another easy one: if you take a combination expression, and you decompose it using selections, and then recombine those selection expressions into a combination, it’s equivalent to the original expression.
Selection Rules
1. If $a \equiv a': \alpha_1 \otimes \alpha_2 \otimes \ldots \otimes \alpha_n$, then $a.i \equiv a'.i : \alpha_i$. This is the reverse of combination rule one: if you have two equivalent tuples, then their corresponding elements are equivalent.
2. If $a_1: \alpha_1, a_2: \alpha_2, ..., a_n: \alpha_n$, then $(a_1, a_2, ..., a_n).i \equiv a_i : \alpha_i$. An element of a combination is equivalent to itself outside of the combination.
Reflexivity
If $a: \alpha$, then $a \equiv a: \alpha$.
Symmetry
If $a \equiv b: \alpha$, then $b \equiv a: \alpha$.
Transitivity
If $a \equiv b: \alpha$, and $b \equiv c: \alpha$, then $a \equiv c: \alpha$.

Jumping back to our example: is $x^2 + 3x + 7$ equivalent to $y^2 + 3y + 7$? If we convert them both into their canonical AST forms, then yes. They’re identical, except for one thing: the variable name in their abstraction. By abstraction rule 2, then, they’re equivalent.
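Here’s a rough Python sketch of the part of equivalence that this example exercises: structural identity up to renaming of bound variables. Again, the tuple encoding and helper names are my own illustration, not anything from the theory; a complete implementation would also need the beta, eta, and combination/selection rules:

```python
# A rough sketch (my own encoding) of structural equivalence up to renaming of
# bound variables -- abstraction rule 2, plus the identity rules.

def equiv(e1, e2, ren=None):
    """True iff e1 and e2 are identical up to renaming of bound variables.
    ren maps e1's bound variables to the corresponding bound variables of e2."""
    ren = ren or {}
    if e1[0] != e2[0]:
        return False
    kind = e1[0]
    if kind == "var":
        if e1[1] in ren:                     # bound: must map to e2's binder
            return ren[e1[1]] == e2[1]
        # free: names must match, and e2's name must not be a binder there --
        # this enforces the "y is not free in b" proviso of abstraction rule 2
        return e1[1] == e2[1] and e2[1] not in ren.values()
    if kind == "const":
        return e1[1] == e2[1]
    if kind == "apply":
        return equiv(e1[1], e2[1], ren) and equiv(e1[2], e2[2], ren)
    if kind == "abstract":                   # (x)b vs (y)c: rename x to y
        return equiv(e1[2], e2[2], {**ren, e1[1]: e2[1]})
    if kind == "combine":
        return len(e1[1]) == len(e2[1]) and all(
            equiv(a, b, ren) for a, b in zip(e1[1], e2[1]))
    if kind == "select":
        return e1[2] == e2[2] and equiv(e1[1], e2[1], ren)
    return False

def app(f, *args):
    return ("apply", ("const", f), ("combine", list(args)))

def poly(v):                                 # (v)+(+(*(v,v), *(3,v)), 7)
    x = ("var", v)
    return ("abstract", v,
            app("+", app("+", app("*", x, x), app("*", ("const", "3"), x)),
                     ("const", "7")))

print(equiv(poly("x"), poly("y")))           # True: only the bound variable differs

# (x)y is NOT equivalent to (y)y: renaming x to y would capture the free y
print(equiv(("abstract", "x", ("var", "y")),
            ("abstract", "y", ("var", "y"))))  # False
```

The capture check in the `var` case is exactly what blocks the bad renaming in the $(x)(3+4y)$ versus $(y)(3+4y)$ example from the abstraction rules.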

# Free-riding Insurance

Pardon me, while I go off on a bit of a rant. There is a bit of
math content to this, but there’s more politics.

There’s a news story that’s been going around this week about a guy who’s bitterly angry. His name is Luis Lang. Mr. Lang is going blind because of complications of diabetes. The only way to save his eyesight is to get very expensive eye surgery, but Mr. Lang can’t afford it. He doesn’t have insurance, and now that he needs it, he can’t buy it. According to him, this is all the fault of President Obama and the unjust Obamacare insurance system which is denying him access to insurance when he really needs it.

In the US, we’re in the early days of some big changes to our health-care system. Up until a couple of years ago, most people got insurance via their employers. If they didn’t get it through work, then they needed to buy policies on their own. Privately purchased policies were typically extremely expensive, and they came with a “pre-existing condition” exclusion (PECE). The pre-existing exclusion meant that if you had a medical condition that required care before you purchased the policy, the policy wouldn’t pay for the care.

In the new system, most people still get insurance through work. But the people who don’t get to go to government-run insurance exchanges to buy policies. The policies in the exchanges are much cheaper than the old private coverage used to be, and rules like the pre-existing condition exclusion are prohibited in policies on the exchange. In addition, if you make less than a certain income, the government will subsidize your coverage to make it affordable. Under this system, you’re required to buy insurance if it’s possible for you to buy it with the government subsidies; if you choose to go without, you have to pay a penalty. If you’re too poor to buy even with the subsidies, then you’re supposed to be able to get Medicaid under an expanded Medicaid program. But the Medicaid expansions needed to be done by the states, and many states refused to do it, even though the federal government would cover nearly all of the costs (100% for the first few years; at least 90% thereafter).

I’m not a fan of the new system. Personally, I believe that for-profit insurance is fundamentally immoral. But that’s not the point of this post. We’ve got a system now that should make it possible for people to get coverage. So why is this poor guy going blind, and unable to get insurance?

The answer is simple: because he very deliberately put himself into a terrible situation, and now he’s paying the price for that. And there are very good reasons why people who put themselves into his situation can’t get covered when they really need it.

First, I’ll run through how he got into this mess. Then, I’ll explain why, as sad as it is for this dumbass, he’s stuck.

Our alleged victim of unjust government policies is around 50 years old. He owns a nice home, and runs his own business. He had the opportunity to buy insurance, but he chose not to, because he has a deeply held philosophical/political belief that he should pay his own bills, and so he always paid for his medical care out of his own pocket. When Obamacare came down the pike, he was strongly opposed to it because of that philosophy, and so he paid the penalty rather than buy insurance, and stayed uninsured. It’s important to note here that he made a deliberate choice to remain uninsured.

Mr. Lang isn’t a paragon of good health. He’s a smoker, and he’s had diabetes for a couple of years. He admits that he hasn’t been very good about managing his diabetes. (I’m very sympathetic to his trouble managing diabetes: there’s a lot of diabetes in my family – my mother, her brother, my grandfather, every one of my grandfather’s siblings, and his father all had diabetes. I’ve seen members of my family struggle with it. Diabetes is awful. It’s hard to manage. Most people struggle with it, and many don’t ultimately do it well enough before they wind up with complications.) That’s what happened to Mr. Lang: he developed complications.

Specifically, he had a series of small strokes, ruptured blood vessels in his cornea, and a detached retina. Combined, these conditions will cause him to go blind without surgery. (This is exactly what happened to my uncle – he lost his vision due to diabetes.) Mr. Lang’s condition has gotten bad enough that he’s unable to work because of these problems, so he can’t afford to pay for the surgery. So now, he wants to buy insurance. And he can’t.

Why not?

To really see why, we need to take a step back, and look at just what insurance really is.

Reduced to its basics, the idea of insurance is that there’s a chance of an unlikely event happening that you can’t afford to pay for. For example, say that there’s a 1 in 1000 chance of you needing major surgery that will cost $100,000. You can’t afford to pay $100,000 if it happens. So, you get together with 999 other people. Each of you puts $100 into the pot. Then if you end up being unlucky, and you need the surgery, you can draw on the $100,000 in the pot to pay for it.

The overwhelming majority of people who put money into the pot are getting nothing concrete for their money. But the people who needed medical care that they couldn’t afford on their own were able to get it. You and the other people who all bought in to the insurance pot were buying insurance against a risk, so that in case something happened, you’d be covered. You know that you’re probably going to lose money on the deal, but you do it to cover the unlikely case that you’ll need it. You’re sharing your risks with a pool of other people. In exchange for taking on a share of the risk (putting your money into the pool without knowing whether you’ll get it back), you take a share of the resource (the right to draw money out of the pool if you need it).

In the modern insurance system, it’s gotten a lot more complicated. But the basic idea is still the same. You’ve got a huge number of people all putting money into the pot, in the form of insurance premiums. When you go to the doctor, the insurance company pays for your care out of the money in that pot. The way that the insurance company sets premiums is complicated, but it comes down to collecting more from each buyer than it expects to need to pay for their medical care. It does that by mathematically analyzing risks.

This system is very easy to game if you can buy insurance whenever you want. You simply don’t buy insurance until something happens, and you need insurance to pay for it. Then you buy coverage. So you weren’t part of the shared risk pool until you knew that you needed more than you were going to pay in to the pool. You’re basically taking a share of the community resources in the insurance pool, without taking a share of the community risk. In philosophical circles, that’s called the free-rider problem.

Insurance can’t work without doing something to prevent free-riders from exploiting the system.

Before Obamacare, the way that the US private insurance system worked was that you could buy insurance any time you want, but when you did, you were only covered for things that developed after you bought it. Any medical condition that required care that developed before you bought insurance wasn’t covered. PECEs prevented the free-rider problem by blocking people from joining the benefits pool without also joining the risk pool: any conditions that developed while you were outside the risk pool weren’t covered. So before Obamacare, Mr. Lang could have gone out and bought insurance when he discovered his medical problems – but that insurance wouldn’t cover the surgery that he needs, because it developed while he was uninsured.

Without PECEs, it’s very easy to exploit the insurance system by free-riding. If you allowed some people to stay out of the insurance system until they needed the coverage, then you’d need to get more money from everyone else who bought insurance. Each year, you’d still need to have a pool of money big enough to cover all of the expected medical care costs for that year. But that pool wouldn’t just need to be big enough to cover the people who bought in at the beginning of the year – it would need to be large enough to cover everyone who bought insurance at the beginning of the year, and everyone who jumped in only when they needed it.

Let’s go back to our example. There’s only one problem that can happen, and it happens to 1 person in 1000 per year, and it costs $100,000 to treat. We’ve got a population of 2000 people. 1000 of them bought into the insurance system. In an average year, 2 people will become ill: one with insurance, and one without. The one with insurance coverage becomes ill, and they get to take the $100,000 they need to cover their care. The person without insurance is stuck: they need to pay $100,000 for their own care, or go without. In order to cover the expenses, each of the insured people would need to have paid $100.

If people can buy in to the insurance system at any time, without PECEs, then the uninsured person can wait until he gets sick, and buy insurance then. Now the insurance pool needs to cover $200,000 worth of expenses, but it’s only got one additional member. In order to cover that, they need to double the cost per insured person per year to $200. Everyone in the pool needs to pay double premiums in order to accommodate the free-riders!
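The arithmetic here is simple enough to sketch: a break-even premium is just the expected annual claims spread across the pool. The little function below is purely a toy illustration of the example above, not anything resembling real actuarial practice:

```python
# Toy arithmetic for the example above (illustration only, not actuarial math):
# the break-even premium is the expected annual claims spread across the pool.

def premium(pool_size, expected_claims, cost_per_claim):
    """Break-even premium per member per year."""
    return expected_claims * cost_per_claim / pool_size

# 1000 members, 1 expected claim per year at $100,000:
print(premium(1000, 1, 100_000))     # 100.0

# the free-rider waits until sick, then joins: 1001 members, 2 claims per year
print(premium(1001, 2, 100_000))     # 199.80..., roughly double
```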

This leads to a situation that some economists call a death spiral: you need to raise insurance premiums on healthy people in order to have enough money to cover the people who only sign up when they’re unhealthy. But raising your premiums means that more people can’t afford to buy coverage, and so you have more people not buying insurance until they need it. And that causes you to need to raise your premiums even more, and so it goes, circling around and around.
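A toy simulation shows the shape of that dynamic. The dropout behavior here is entirely my own made-up assumption for illustration – each year, the fraction of healthy members who leave is proportional to that year’s premium – so the specific numbers mean nothing; only the accelerating climb does:

```python
# A toy death-spiral model. The dropout rule is an assumption for illustration:
# each year, the fraction of healthy members who leave is proportional to that
# year's premium. Only the shape of the result matters, not the numbers.

def spiral(healthy, sick, cost_per_claim, dropout_per_dollar, years):
    """Yearly break-even premiums as healthy members drop out of the pool."""
    premiums = []
    for _ in range(years):
        pool = healthy + sick
        p = sick * cost_per_claim / pool     # premium needed to cover claims
        premiums.append(round(p, 2))
        # assumed behavior: higher premiums push out more healthy members
        healthy = max(0, int(healthy * (1 - dropout_per_dollar * p)))
    return premiums

# 1000 healthy members, 2 chronically sick free-riders, assumed dropout rate:
print(spiral(1000, 2, 100_000, 0.001, 5))    # premiums climb faster every year
```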

The only alternative to PECEs that really works to prevent free-riders is to, essentially, forbid people from being free-riders. You can do that by requiring everyone to be covered, or by limiting when they can join the pool.

In the age of PECEs, there was one way of getting insurance without a PECE, and it’s exactly what I suggested in the previous paragraph. Large companies provided their employees with insurance coverage without PECEs. The reason that they could do it was because they were coming to an insurance company once a year with a large pool of people. The costs of the employer-provided insurance were determined by the average expected cost of coverage for that pool of people, divided by the size of the pool. But in the employer-based non-PECE coverage, you still couldn’t wait to get coverage until you needed it: each year, at the beginning of the year, you needed to either opt in or out of coverage. If, in January, you decided to decline coverage, and then in July, you discovered that you needed surgery, you couldn’t change your mind and opt in to insurance in order to get that covered. You had to wait until the following year. So again, you were avoiding free-riders by a combination of two mechanisms. First, you made it so that you had to go out of your way to refuse coverage – so nearly everyone was part of the company’s insurance plan. And second, you prevented free-riding by making it much harder to delay getting insurance until you needed it.

The Obamacare system bans PECEs. In order to avoid the free-rider problem, it does two things. It requires everyone to either buy insurance, or pay a fine; and it requires that you buy insurance for the whole year starting at the beginning of the year. You might think that’s great, or you might think it’s terrible, but either way, it’s one way of making insurance affordable without PECEs.

Mr. Lang wants to be a free-rider. He’s refused to be part of the insurance system, even though he knew that he had a serious medical condition that was likely to require care. Even though he was a regular smoker, and knew of the likelihood of developing serious medical problems as a result. He didn’t want to join the risk pool, and he deliberately opted out, refusing to get coverage when he had the chance.

That was his choice, and under US law, he had every right to make it for himself.

What Mr. Lang does not have the right to do is to be a free-rider.

He made a choice. Now he’s stuck with the results of that choice. As people like Mr. Lang like to say when they’re talking about other people, it’s a matter of personal responsibility. You can’t wait until you need coverage to join the insurance system.

Mr. Lang can buy insurance next year. And he’ll be able to get an affordable policy with government subsidies. And when he gets it, it will cover all of his medical problems. Before the Obamacare system that he loathes and blames, that wouldn’t have been true.

It’s not the fault of President Obama that he can’t buy insurance now. It’s not the fault of congress, or the democratic party, or the republican party. There’s only one person who’s responsible for the fact that he can’t get the coverage that he needs in order to get the surgery that would save his eyesight. And that’s the same person who he can’t see in the mirror anymore.