The Program is the Proof: Propositions in Type Theory

As usual, I’m going in several different directions. I’m going to continue doing data structure posts, but at the same time I also want to get back to the type theory stuff that I was writing about before I put the blog on hiatus.

So let’s get back to a bit of Martin-Löf type theory! (The stuff I’m writing about today corresponds, roughly, to chapter 2 of the Nordström/Petersson/Smith text.)

One of the key ideas of Martin-Löf’s type theory is that a logical statement is exactly the same thing as a specification of a computation. When you define a predicate like “Even”, the definition specifies both the set of numbers that satisfy the predicate, and the computation that tests a number for membership in the set of even numbers. If you haven’t provided enough information to fully specify the computation, then in Martin-Löf type theory, you haven’t defined a predicate or a set.

When you say “2 is even”, what you’re really saying in terms of the type theory is that “The computation for ‘even(2)’ will succeed”. The computation and the logical statement are the same thing.
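
To make that concrete, here’s a minimal sketch of my own in Lean 4 (a modern descendant of this style of type theory – this is not notation from the text, and the names Even, zero, and step are just my choices). The definition is simultaneously the predicate and the recipe for constructing evidence, and a proof object for “2 is even” is literally a successful run of that recipe:

    -- Evidence-style definition of evenness: the constructors are the only
    -- ways to build a proof that a number is even.
    inductive Even : Nat → Prop where
      | zero : Even 0                      -- 0 is even
      | step : ∀ n, Even n → Even (n + 2)  -- if n is even, so is n + 2

    -- “2 is even” is the claim that this construction succeeds;
    -- the term below is the proof object that witnesses it.
    example : Even 2 := Even.step 0 Even.zero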

In functional programming, we like to say that the program is the proof. Martin-Löf type theory is where that came from – and today we’re going to take a first look in detail at exactly what it means. In the world of type theory, the program is the proof, and the proof doesn’t exist without the program.

This creates an interesting set of equivalent interpretations. When you see a statement like “x : T” (or x \in T), that could be interpreted in the following ways, all of which are really equivalent in type theory.

  1. Set theoretic: x is a member of the set T.
  2. Intuitionistic: x is a proof object for the proposition T.
  3. Computational: x is a program that satisfies the specification T.
  4. Abstract: x is a solution for the problem T.

In the rest of this post, I’m going to focus on those four interpretations, and explain how each of them makes sense in this version of type theory.

The set theoretic interpretation is obvious – as the name suggests, it’s nothing but what we all learned from basic set theory. An object is a member of a set – which means, from set theory, that the object satisfies some predicate in first order predicate logic – because that’s what it means to be a member of a set.

The intuitionistic interpretation is almost the same as the set theoretic one, but rewritten for intuitionistic logic. In intuitionistic logic, the predicate over the set is written as a proposition T, and if we know that x is a member of the set T, then that means that x is a proof object demonstrating that T is true.

The computational interpretation takes the intuitionistic one, and rephrases it in computational terms. A logical proposition, rendered into a computational setting, is just a specification of a program; and a proof of the proposition is a program that satisfies the specification.

Finally, the abstract interpretation just rephrases the computational one into terms that aren’t tied to a computing device. A predicate is a problem that needs to be solved; anything that provides a solution to the problem demonstrates membership in the set.

The key takeaway though is the basic idea here of what a type is. What we’re talking about as a type here is something that goes far beyond any programming language’s idea of what a type is. In intuitionistic type theory, a type is a specification of a computation. If we had this in a real language, that would mean that any program that typechecked would be guaranteed to work: asserting that x has type T means, precisely, that x is a computation that matches the specification!

(Of course, that’s not the panacea that you might think the first time you hear that. The catch is simple: the type is a specification of the computation. That means that just writing a type is a form of programming! And that means that your type descriptions are going to have bugs. But we’ll come back to that in a much later post.)

What type theory is doing is taking something like set theory, and re-rendering it entirely in a computational world. It still has a mapping from the computations to the abstract concepts that we use when we’re thinking, but whenever we talk about those abstract concepts in type theory, we do it by mapping them into some kind of computation.

In type theory, we’re not dealing in a world of pure mathematical objects that exist if we can describe them; instead, we’re building a world where everything is at least theoretically computable. That might seem constraining, but every proof already corresponds to a computation of some sort; the only additional constraint here is that we can’t play tricks like the axiom of choice, where we can “prove” the existence of some unattainable, intangible, nonsensical object.

To make that work, we’re going to take all of the constructs that we’re used to seeing in intuitionistic logic, and give them a meaning in terms of computations.

For example, in set theory, we can have a statement A \supset B – meaning that A is a superset of B, that every element of B is also necessarily an element of A. In type theory, since A and B are specifications of computations, that means that a member (or proof) of A \supset B is a computation that, given a proof of A, generates a proof of B – in short, that A implies B.

Now, suppose that we want to prove A \supset A. How could we do that? We need a program that given a proof of A generates a proof of A. That is, we need an implementation of the identity function: \lambda a . a.
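
In Lean 4 (a quick sketch of my own, not notation from the text), that really is the entire proof:

    -- λa.a : for any proposition A, the identity program turns a proof of A
    -- into a proof of A, so it is a proof of A ⊃ A.
    example (A : Prop) : A → A := fun a => a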

In fact, using the computational interpretation of things, we can interpret A \supset B as being the type of a function that takes an instance of A, and generates an instance of B – that is, if f : (A \supset B), then f is a function from an instance of A to an instance of B!

The only trick to that is understanding that in type theory, saying that a is an element of A means that a is a proof of A. Using the same interpretation, that means that f: A \supset B means that f is a proof of A \supset B – which means the same thing as saying that given an example of an element of A (a proof of A), f will produce an element of B (a proof of B). The statement A \supset B is exactly the same thing as the logical implication A \rightarrow B, which is exactly the same thing as the type of a function from A to B.
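
Two more tiny sketches of that reading (again my own, in Lean 4): applying f to a proof of A is exactly modus ponens, and composing two such functions chains implications together:

    -- Modus ponens: from a proof of A ⊃ B and a proof of A, we get a proof of B.
    example (A B : Prop) (f : A → B) (a : A) : B := f a

    -- Chaining: proofs of A ⊃ B and B ⊃ C compose into a proof of A ⊃ C.
    example (A B C : Prop) (f : A → B) (g : B → C) : A → C :=
      fun a => g (f a)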


Notes on sources: I’m working from two main references in this series of posts, both of which are available in PDF form online.

  1. “Programming in Martin-Löf’s Type Theory”, by Nordström, Petersson, and Smith, which you can download here: http://www.cse.chalmers.se/research/group/logic/book/book.pdf
  2. “Type Theory and Functional Programming”, by Simon Thompson.

In addition, I first learned a lot of this from reading some papers by Phil Wadler and Simon Peyton Jones. The exact references are long-lost in the shadows of my memory, but any of their papers are well worth reading, so just read them all!

23 thoughts on “The Program is the Proof: Propositions in Type Theory”

  1. Brian Slesinsky

    I’m confused about the relationship between the superset operator and logical inference. It seems like a function can map from one type to another without it being a superset.

    For example, it’s my understanding that a function from booleans to ints should be interpreted as proving that if a boolean exists, then an int exists. But we wouldn’t say that booleans are a superset of ints; it makes more sense to say these sets are disjoint. Furthermore, the sizes are wrong: there are more ints than booleans. This doesn’t seem much like a subset at all.

    Of course the return type is rather loose; a particular function can only map a boolean to at most two integers. Perhaps functions from boolean to int don’t actually exist in this language?

    1. markcc Post author

      I think the problem is that you’re trying to think of these types as type declarations in a real programming language. But they’re far, far more expressive than that.

      Each type is a full specification of a computation. So a type isn’t just “Boolean” – it’s much more than that. When you say A \supset B, you’re describing a computation that converts every proof of A to a proof of B. If you think of it in that sense, you can make a sort of sense out of Boolean -> Int. “Int” describes a computation that produces an integer. If you’ve got a computation that produces a boolean, then you can use it to generate a computation that produces an integer.

      I started writing a long explanation – but decided that it should probably be a post of its own – I’ll post it when I finish it, either today or tomorrow. But a short version: think about a function like the sum of the integers from 1 to n. In type theory, we wouldn’t describe that as a function from Int -> Int. We’d describe it as a function from a natural number n to \Sigma_{x=1}^{n} x. The type fully specifies the expected computation. There are a bunch of different ways that you could implement that, but they only prove the type if they match the specification.
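
      Here’s a rough sketch of that idea in Lean 4 (a modern dependently typed language – the names sigma1ToN, SumSpec, and sumImpl are my own, not the book’s notation):

        -- Reference definition of \Sigma_{x=1}^{n} x, used only as the specification.
        def sigma1ToN : Nat → Nat
          | 0     => 0
          | n + 1 => sigma1ToN n + (n + 1)

        -- The type says: given n, return a number together with a proof that it
        -- equals the specified sum. Anything of this type matches the spec.
        abbrev SumSpec := (n : Nat) → { m : Nat // m = sigma1ToN n }

        -- One (deliberately trivial) implementation; a cleverer closed-form
        -- implementation would have to carry a real proof instead of rfl.
        def sumImpl : SumSpec := fun n => ⟨sigma1ToN n, rfl⟩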

      1. John Armstrong

        I still think you have this backwards; if [set theory] every B is an A [/set theory] then [type theory] given a proof of B we can get a proof of A [/type theory].

        Shouldn’t this go the same direction as the classic correspondence between naïve set theory and FOPL? Containment is implication, but in the other direction from the one you’ve stated.

      2. John Armstrong

        Here’s an example: let A be “is a natural number” and B be “is an even natural number”. On the set theory level we clearly have B contained in A. On the type theory level we have a proof that if B then A; that is, if we have a proof that (even natural number) then we can convert it into a proof that (natural number). The implication clearly doesn’t go the other way!

        1. markcc Post author

          Ok, so having gone back to the text and re-read and pondered it, I’m still convinced that I have it right 🙂

          Saying “a : A” means that “a is a proof of A”. Saying A \supset B means that I have a program that, given a proof of A, generates a proof of B.

          Note that A and B are, for the moment, just simple propositions – they don’t take any kind of parameters. I think that this is where the confusion comes from.

          So A \supset B means that a proof of A is effectively also a proof of B. The implication runs in that direction: IF I have a proof of A, THEN I have a proof of B. It doesn’t work in the opposite direction: a proof of B does not give me a proof of A.

          If you think of it in parametric terms, then A \supset B means something like \forall x \in A: x \in B – the membership of x in A implies the membership of x in B. That’s not quite correct, because it skips the fact that the statement A \supset B is a specification of a computation, and so the object that’s in B isn’t necessarily the same object that’s in A. But we’ll get to that in later posts.

          1. John Armstrong

            Yeah, that’s the part that still feels backwards. Set-containment means that membership in B implies membership in A, not the other way around. I’m willing to accept that this isn’t set theory and doesn’t mean the same thing as it does in set theory, but the intuition is REALLY screwy and confusing then.

  2. Michael Norrish

    Yeah, I’m also confident you have it backwards. If I have a proof of A, I can’t always turn that into a proof of B when A is a superset of B, because there will be members of A that aren’t members of B. I think the symbol you’re using for implication is causing part of the confusion.

    If you like, you could also think about how you define subset. A is a subset of B, if

    !x. x \in A ==> x \in B.

    So, notice how we have A \subseteq B iff there’s an implication with the arrow going from A to B.

  3. Brian Slesinsky

    Perhaps the confusion is between “for all” and “there exists”.

    Suppose we interpret “a proof of A” as meaning “a proof that a member of A exists”. Then a function from A -> B proves that if a member of A exists then a member of B exists.

    But this says nothing about whether A is a superset of B. If the function were onto then we could say that if all members of A exist then all members of B exist, but that doesn’t seem to be where this is going?

    1. markcc Post author

      We’re *not* talking about quantified statements. Saying A \supset B is *not* the same thing as saying \forall x \in A: x \in B.

      We’re *not* talking about a true superset relation – but about a super-*proof* relationship. We’ll get to quantified statements later, and that’s where the classical set-theoretic relationships will be defined.

      When we say A \supset B in type theory, we’re saying that A is a logical statement which is a stronger statement than B. It is exactly the proof-based definition of implication: if A is true, then B is true. If a proves A, and A logically implies B, then a also proves B.

      Again, the key thing to remember here is that we are not making a quantified statement, nor are we making a set theoretic statement. We’re saying that if we have a proof of a logical statement A, and A implies B, then a proof of A provides a proof of B.

    2. markcc Post author

      “a proof of A” means, exactly, a proof of A. We’re *not* talking about sets and set membership here: we’re talking about proofs and the statements that they prove.

  4. Brian Slesinsky

    Okay, got it. In that case, how does the “set theoretic” interpretation make any sense? It seems that making an analogy to set theory got us off on entirely the wrong track and we would have been better without it?

    1. markcc Post author

      In retrospect, it was a mistake.

      I’m trying to follow the explanation in the text that I’m working from, so that you guys can easily look into that text for details.

  5. Michael Norrish

    I think you had it right when you wrote that

    If you think of it in parametric terms, then A \supset B means something like \forall x \in A: x \in B – the membership of x in A implies the membership of x in B.

    The real source of confusion is the use of the superset symbol for implication, precisely because it’s pointing in the wrong direction. If you just use a proper implication arrow, I think everything would be better.

    1. markcc Post author

      I disagree for two reasons.

      (a) There is a set-like relationship there, and it’s important to understand that. The proof-set of A is a superset of the proof-set of B. In this context, when I say a: A, I’m asserting that a is a proof of the truth of A. If, taken as a logical statement, A implies B, then the set of proofs of A is a superset of the set of proofs of B: in this kind of constructivist/computational setting, that’s what implication means. A \Rightarrow B means that every proof of A provides a proof of B. But the classical set-interpretation isn’t true: there may be proofs of B that do not provide proofs of A.

      (b) I’m not the person inventing type theory – it’s a rich field of math that people smarter than me have been working on for decades. I want people who read my blog to be able to use what they learned here to start looking at the actual literature from the type theory community. If I change the notation because I don’t like it, that’s not going to work.

  6. Michael Norrish

    If, taken as a logical statement, A implies B, then the set of proofs of A is a superset of the set of proofs of B

    Draw the Venn diagram. By your assertion, the circle containing proofs of A will completely enclose the circle containing proofs of B. Right? That’s what being a superset would require graphically.

    But then, just by looking at the diagram, you will see that there exist proofs of A that aren’t proofs of B. But that’s impossible because A implies B. To get the implication to be right, the proofs of A are a subset of the proofs of B.

    So, your interpretation can’t be right.

    1. markcc Post author

      A Venn diagram of this doesn’t make sense, because we’re not doing traditional set theory.

      Let me try a slightly different angle, and see if this helps.

      In classical set theory, I say a \in A to mean that a is a member of the set A. In logical terms, that means that there is a predicate P_A such that P_A(a) is true.

      In type theory, we’re looking not at a set, but at a closed logical proposition, P. When I say p: P, what I’m saying is not that p is a member of a set. Most importantly, I am not saying that P(p) is true. P is an atomic proposition – it does not take p as a parameter: it doesn’t take parameters at all.

      For example, let P be the statement “For all n, the sum of the integers from 1 to n is n(n+1)/2.” We can have a proof of P, which we’ll call i: the classic proof by induction.

      Now, we can have a second statement, Q, that “the sum of the integers from 1 to 10 is 55”.

      In classic predicate logic, P implies Q. In type theory, i: P, because i is the inductive proof of the statement. i is also a proof of Q. In fact, anything which is a proof of P is also a proof of Q – no matter what, anything that proves \Sigma_{x=1}^{n} x = n(n+1)/2 is also going to prove that \Sigma_{x=1}^{10} x = 55.

      But the opposite direction – the direction that would be true if we were talking about a set theoretic statement like a \in A instead of a proof-driven statement like i: P – doesn’t work. A proof of Q is not a proof of P. I can prove that \Sigma_{x=1}^{10} x = 55 by expanding the sigma to 1+2+3+4+5+6+7+8+9+10, and then doing the arithmetic: 6 + 15 + 24 + 10 = 55. That proves Q – but it tells us nothing about P. The inclusion direction for sets of proofs is different from the inclusion direction for sets of values, because the underlying meaning is different.
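
      Here’s that asymmetry as a concrete sketch in Lean 4 (sumTo and the division-free restatement 2·sum = n·(n+1) are my own choices, not the text’s):

        def sumTo : Nat → Nat
          | 0     => 0
          | n + 1 => sumTo n + (n + 1)

        -- P is the general formula. Any proof p of P yields a proof of the
        -- n = 10 instance (Q) simply by instantiating it at 10.
        example (p : ∀ n, 2 * sumTo n = n * (n + 1)) :
            2 * sumTo 10 = 10 * 11 := p 10

        -- Q can also be proved by brute arithmetic, and that proof carries
        -- no information whatsoever about P.
        example : 2 * sumTo 10 = 10 * 11 := by decide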

      1. Michael Norrish

        Inclusion in sets doesn’t care what the elements are. A statement like “inclusion on sets of proofs is different from inclusion on sets of values” is fundamentally bogus. Your example demonstrates it perfectly.

        The proof of Q via arithmetic is not a proof of P: that’s absolutely right. Therefore: the set of proofs of P is not a superset of the set of the proofs of Q. In fact, the converse holds: the set of proofs of P is a subset of the proofs of Q.

        To deny the above does unspeakable violence to the notion of set and I refuse to believe that the source material really makes that claim.

  7. Brian Slesinsky

    Less arguing, more reading. I’ve only just started, but found this:

    “A proposition is interpreted as a set whose elements represent the proofs of the proposition. Hence, a false proposition is interpreted as the empty set and a true proposition as a non-empty set. Chapter 2 contains a detailed explanation of how the logical constants correspond to sets, thus explaining how a proposition could be interpreted as a set.”

    http://www.cse.chalmers.se/research/group/logic/book/book.pdf

  8. Brian Slesinsky

    I think I figured it out. In “Programming in Martin-Löf’s Type Theory”, section 2.1, they say:

    For implication and conjunction we get, in view of the explanations above,
    A ⊃ B is identified with A → B, the set of functions from A to B.
    and
    A & B is identified with A × B, the cartesian product of A and B.

    The symbol “⊃” is not defined but seems not to be intended as a superset operator. There is a set here, but it’s a set of functions, any of which could be used as a proof object for implication.

    1. Seth Buehler

      Indeed, the symbol ⊃ was the traditional notation for implication from at least Russell and Whitehead. As far as I am aware, the now-ubiquitous arrow notation got its start in the very paper you’re citing.

  9. Michael Norrish

    My internet connection may have flaked out just as I attempted a previous post. (Moderator: feel free to delete either if you see both).

    Just to say that I’m sorry for the unnecessarily hostile tone in my previous message. Type theory is a great subject to be introducing this way, and the set stuff is a minor tangent.

  10. EMF

    The problem here is that the symbol ⊃ is used for both implication and superset, and the two meanings are opposite: if A⊃B, then x∈B⊃x∈A.
    In other words, Russell and Whitehead gave us some confusing notation.

