As usual, I’m going in several different directions. I’m going to continue doing data structure posts, but at the same time I also want to get back to the type theory stuff that I was writing about before I put the blog on hiatus.
So let’s get back to a bit of Martin-Löf type theory! (The stuff I’m writing about today corresponds, roughly, to chapter 2 of the Nordström/Petersson/Smith text.)
One of the key ideas of Martin-Löf’s type theory is that a logical statement is exactly the same thing as a specification of a computation. When you define a predicate like “Even”, the definition specifies both the set of numbers that satisfy the predicate, and the computation that tests a number for membership in the set of even numbers. If you haven’t provided enough information to fully specify the computation, then in Martin-Löf type theory, you haven’t defined a predicate or a set.
When you say “2 is even”, what you’re really saying in terms of the type theory is that “The computation for ‘even(2)’ will succeed”. The computation and the logical statement are the same thing.
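To make that concrete, here’s a little Haskell sketch of my own (an illustration, not notation from any of the texts): one definition gives you both the set of evens and the computation that tests for membership, and “even(n) succeeds” means the computation produces evidence rather than failing.

```haskell
-- Hypothetical sketch: the definition of "Even" is a computation.
-- `evenWitness n` succeeds exactly when n is in the set of even
-- naturals, and its result is the evidence: a k with n == 2 * k.
evenWitness :: Integer -> Maybe Integer
evenWitness n
  | n >= 0 && even n = Just (n `div` 2)  -- "even(n)" succeeds, with a witness
  | otherwise        = Nothing           -- the computation fails
```

Under this reading, “2 is even” is literally the claim that `evenWitness 2` succeeds (here it returns `Just 1`), while `evenWitness 3` fails.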
In functional programming, we like to say that the program is the proof. Martin-Löf type theory is where that came from – and today we’re going to take a first look in detail at exactly what it means. In the world of type theory, the program is the proof, and the proof doesn’t exist without the program.
This creates an interesting set of equivalent interpretations. When you see a statement like “x : T” (or “x ∈ T”), that could be interpreted in the following ways, all of which are really equivalent in type theory.
- Set theoretic: x is a member of the set T.
- Intuitionistic: x is a proof object for the proposition T.
- Computational: x is a program that satisfies the specification T.
- Abstract: x is a solution for the problem T.
In the rest of this post, I’m going to focus on those four interpretations, and explain how each of them makes sense in this version of type theory.
The set theoretic interpretation is obvious – as the name suggests, it’s nothing but what we all learned from basic set theory. An object is a member of a set – which means, from set theory, that the object satisfies some predicate in first order predicate logic – because that’s what it means to be a member of a set.
The intuitionistic interpretation is almost the same as the set theoretic, but rewritten for intuitionistic logic. In intuitionistic logic, the predicate over the set is written as a proposition T, and if we know that x is a member of the set T, then that means that we have a proof object x that demonstrates that T is true.
The computational interpretation takes the intuitionistic one, and rephrases it in computational terms. A logical proposition, rendered into a computational setting, is just a specification of a program; and a proof of the proposition is a program that satisfies the specification.
Finally, the abstract interpretation just rephrases the computational one into terms that aren’t tied to a computing device. A predicate is a problem that needs to be solved; anything that provides a solution to the problem demonstrates a member of the set.
The key takeaway though is the basic idea here of what a type is. What we’re talking about as a type here is something that goes far beyond any programming language’s idea of what a type is. In intuitionistic type theory, a type is a specification of a computation. If we had this in a real language, that would mean that any program that typechecked would be guaranteed to work: asserting that x has type T means, precisely, that x is a computation that matches the specification T!
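No mainstream language goes that far, but Haskell’s GADTs give a taste of types-as-specifications. This is a sketch of my own, not anything from Martin-Löf: the type of a vector records its length, so the type itself rules out a whole class of buggy programs.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A sketch, not full dependent types: the length of a vector is
-- tracked in its type, so the type acts as a partial specification.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type promises that a head exists: `vhead` only accepts
-- non-empty vectors, so "head of empty list" is a compile-time
-- type error rather than a runtime crash.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

A call like `vhead VNil` simply doesn’t typecheck – the specification in the type is enforced before the program ever runs.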
(Of course, that’s not the panacea that you might think the first time you hear that. The catch is simple: the type is a specification of the computation. That means that just writing a type is a form of programming! And that means that your type descriptions are going to have bugs. But we’ll come back to that in a much later post.)
What type theory is doing is taking something like set theory, and re-rendering it entirely in a computational world. It still has a mapping from the computations to the abstract concepts that we use when we’re thinking, but if we can talk about those abstract concepts in type theory, we’ll always do it by mapping them into some kind of computation.
In type theory, we’re not dealing in a world of pure mathematical objects that exist if we can describe them; instead, we’re building a world where everything is at least theoretically computable. That might seem constraining, but every proof already corresponds to a computation of some sort; the only additional constraint here is that we can’t play tricks like the axiom of choice, where we can “prove” the existence of some unattainable, intangible, nonsensical object.
To make that work, we’re going to take all of the constructs that we’re used to seeing in intuitionistic logic, and give them a meaning in terms of computations.
For example, in set theory, we can have a statement A ⊇ B – meaning that A is a superset of B, that every element of B is also necessarily an element of A. In type theory, since A and B are specifications of computations, that means that a member (or proof) of A ⊇ B is a computation that, given a proof of B, generates a proof of A – in short, that B implies A.
Now, suppose that we want to prove A ⊇ A. How could we do that? We need a program that given a proof of A generates a proof of A. That is, we need an implementation of the identity function: λx. x.
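In Haskell, that proof is a one-liner (written out here just to make the reading explicit):

```haskell
-- The proof object for "A implies A": given any evidence of A,
-- hand that same evidence back. This is λx. x as a program.
identity :: a -> a
identity x = x
```

The polymorphic type `a -> a` is exactly the claim being proved: for any proposition A, A implies A.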
In fact, using the computational interpretation of things, we can interpret A ⊇ B as being the type of a function that takes an instance of B, and generates an instance of A – that is, that if f : A ⊇ B, then f is a function from an instance of B to an instance of A!
The only trick to that is understanding that in type theory, saying that x is an element of A means that x is a proof of A. Using the same interpretation, that means that f : A ⊇ B means that f is a proof of A ⊇ B – which means the same thing as saying that given an example of an element of B (a proof of B), f will produce an element of A (a proof of A). The statement A ⊇ B is exactly the same thing as the logical implication B → A, which is exactly the same thing as the type of a function from B to A.
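A small Haskell sketch of that correspondence (my own illustration, with hypothetical names): using a proof of an implication is just function application, and chaining implications is just function composition.

```haskell
-- Read a value of type b -> a as a proof that B implies A.

-- Modus ponens: from a proof of "B implies A" and a proof of B,
-- produce a proof of A. Computationally: apply the function.
modusPonens :: (b -> a) -> b -> a
modusPonens proofBtoA proofB = proofBtoA proofB

-- Transitivity of implication: from "B implies A" and "C implies B",
-- get "C implies A". Computationally: compose the functions.
chain :: (b -> a) -> (c -> b) -> (c -> a)
chain f g = f . g
```

Every rule of intuitionistic logic gets a computational reading this way – the logic and the programming language are the same artifact viewed from two angles.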
Notes on sources: I’m working from two main references in this series of posts, both of which are available in PDF form online.
- “Programming in Martin-Löf’s Type Theory”, by Nordström, Petersson, and Smith, which you can download here.
- “Type Theory and Functional Programming” by Simon Thompson (available here).
In addition, I first learned a lot of this from reading some papers by Phil Wadler and Simon Peyton Jones. The exact references to those are long-lost in the shadows of my memory, but any of their papers are well worth reading, so just read them all!