In the next step of our exploration of type theory, we're going to take a step away from the set-based stuff. There are set-based interpretations of every statement in type theory. But what we really want to focus on is the interpretation of statements in computational terms.

What that means is that we’re going to take logical statements, and view them as computation tasks – that is, as formal logical specifications of a computation that we’d like to do. Under that interpretation, a proof of a statement $S$ is an implementation of the task specified by $S$.

This interpretation is much, much easier for computer science types like me than the set-based interpretations. We can walk through the interpretations of all of the statements of our intuitionistic logic in just a few minutes.

Conjunction
$A \land B$ is a specification for a program that produces a pair $(a, b)$ where $a$ is a solution for $A$, and $b$ is a solution for $B$.
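As a concrete sketch (in Haskell, which I'm using purely for illustration; any language with product types would do), a solution to a conjunction really is just a pair:

```haskell
-- Toy propositions: A = "there is an Int", B = "there is a String".
-- A solution to A /\ B is a pair of solutions, one for each conjunct.
proofOfConj :: (Int, String)
proofOfConj = (42, "forty-two")

-- From a solution to A /\ B we can recover a solution to either conjunct:
proofOfA :: Int
proofOfA = fst proofOfConj

proofOfB :: String
proofOfB = snd proofOfConj
```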
Disjunction
$A \lor B$ is a specification for a program that produces either a solution to $A$ or a solution to $B$, along with a way of identifying which of $A$ and $B$ it solved. We do that using the classical injection functions: a solution to $A \lor B$ is either $\text{inl}(a)$ (a solution $a$ to $A$, tagged as the left injection), or $\text{inr}(b)$ (a solution $b$ to $B$, tagged as the right injection).
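In the Haskell sketch, $\text{inl}$ and $\text{inr}$ correspond to the `Left` and `Right` constructors of `Either`, which carry exactly the tag described above:

```haskell
-- A solution to A \/ B is a tagged value: Left (inl) wraps a solution
-- to A; Right (inr) wraps a solution to B.
proofOfDisj :: Either Int String
proofOfDisj = Left 7   -- here we chose to solve the left disjunct

-- To consume a disjunction, a program must handle both possible tags:
describe :: Either Int String -> String
describe (Left n)  = "solved A with " ++ show n
describe (Right s) = "solved B with " ++ s
```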
Implication
$A \supset B$ is a specification for a program that produces a solution to $B$ given a solution to $A$; in lambda calculus terms, it's a function of the form $\lambda x. b(x)$, where $b(x)$ is a solution to $B$ whenever $x$ is a solution to $A$.
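Continuing the Haskell sketch: a solution to an implication is simply a function.

```haskell
-- A solution to A => B: a function taking any solution to A to a
-- solution to B. Toy reading: "an Int exists" implies "a String exists".
proofOfImpl :: Int -> String
proofOfImpl = \x -> show x   -- the lambda-term \x. b(x)
```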

Now, we can move on to quantified statements. They get a bit more complicated, but if you read the quantifier right, it’s not bad.

Universal
$(\forall x \in A) B(x)$ is a specification for a program which, when executed, yields a program of the form $\lambda x.b(x)$, where $b(x)$ is a solution to $B(x)$ for each solution $x$ of $A$. In other words, a universal statement is a program factory: it produces a program that turns one program into another program.

To me, the easiest way to understand this is to expand the quantifier. A quantified statement $\forall x \in A: B(x)$ can be read as $\forall x: x \in A \supset B(x)$. If you read it that way, and just follow the computational interpretation of implication, you get precisely the definition above.
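Haskell doesn't have full dependent types, so a $B(x)$ that depends on the *value* of $x$ can't be written directly; but quantification over types already shows the "program factory" flavor, and how instantiating the quantifier hands you back an ordinary implication:

```haskell
-- An approximation of (forall x in A) B(x): we quantify over types
-- rather than values. The signature has an implicit "forall a", so it
-- reads as "for all types a, a implies a" -- a factory of functions,
-- one for every type.
identity :: a -> a
identity x = x

-- Instantiating the quantifier at a particular type yields an
-- ordinary implication, Int => Int:
identityAtInt :: Int -> Int
identityAtInt = identity
```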

Existential
Existential quantification is easy. An existential statement $\exists x \in A: B(x)$ is a two-part problem: it needs a value $a$ (that is, a value of $x$ for which a proof exists that $x \in A$), and a proof that for that specific value, $B(a)$ holds. A solution, then, has two parts: it's a pair $(a, b)$, where $a$ is a value in $A$, and $b$ is a solution to the problem $B(a)$.
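The same idea in the Haskell sketch (again without dependent types, so the "proof" component here is faked with a computed Bool rather than genuine evidence):

```haskell
-- A solution to (exists x in A) B(x) is a pair: a witness, plus
-- evidence that B holds of that witness. Toy claim: "some Int is even."
witness :: Int
witness = 4

evidence :: Bool
evidence = even witness   -- stands in for a real proof of B(witness)

existentialProof :: (Int, Bool)
existentialProof = (witness, evidence)
```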

This is the perspective from which most of Martin-Löf's type theory pursues things. There's a reason why ML's type theory is so well-loved by computer scientists: what we're really doing here is taking a foundational theory of mathematics, and turning it into a specification language for describing computing systems.

That's the fundamental beauty of what Martin-Löf did: he found a way of formulating all of constructive mathematics so that it's one and the same thing as the theory of computation.

And that's why this kind of type theory is so useful as a component of programming languages: it lets you describe the semantics of your program in terms that are completely natural to the program. The type system is a description of the problem; and the program is the proof.

With a full-blown Martin-Löf type system, the types really are a full specification of the computation described by the program. We don't actually use the full expressiveness of type theory in real languages – among other things, type checking becomes undecidable! But we do use a constrained, limited logic with Martin-Löf's semantics. That's what types really are in programming languages: they're logical statements! As we get deeper into type theory, we'll see exactly how that works.
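Even in Haskell's limited logic, you can see "the type is the specification, the program is the proof" in miniature: a sufficiently precise polymorphic type can pin down its program almost uniquely.

```haskell
-- Ignoring nonterminating terms, the type (a, b) -> (b, a) has
-- essentially one implementation: since a and b are arbitrary, the
-- program has no choice but to swap the components. The type alone
-- specifies what the program must do.
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)
```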