Inference with Sets in Type Theory

It’s been quite a while since I did any meaningful writing on type theory. I spent a lot of time introducing the basic concepts, and trying to explain the intuition behind them. I also spent some time describing what I think of as the platform: stuff like arity theory, which we need for talking about type construction. But once we move beyond the basic concepts, I ran into a bit of a barrier – not so much in understanding type theory, but in finding ways of presenting it that will be approachable.

I’ve been struggling to figure out how to move forward in my exploration of type theory. The logical next step is working through the basics of intuitionistic logic with type theory semantics. The problem is, that’s pretty dry material. I’ve tried to put together a couple of approaches that skip over this, but it’s really necessary.

For someone like me, coming from a programming language background, type theory really hits its stride when we look at type systems and particularly type inference. But you can’t understand type inference without understanding the basic logic. In fact, you can’t describe the algorithm for type inference without referencing the basic inference rules of the underlying logic. Type inference is nothing but building type theoretic proofs based on a program.

So here we are: we need to spend some time looking at the basic logic of type theory. We’ve looked at the basic concepts that underlie the syntax and semantics, so what we need to do next is learn the basic rules that we use to build logical statements in the realm of type theory. (If you’re interested in more detail, this is material from chapter 5 of “Programming in Martin-Löf’s Type Theory”, which is the text I’m using to learn this material.)

Martin-Löf’s type theory is a standard intuitionistic predicate logic, so we’ll go through the rules using standard sequent notation. Each rule is a sequent, which looks sort of like a long fraction. The “numerator” section is a collection of things which we already know are true; the “denominator” is something that we can infer given those truths. Each statement has the form A\,[\Gamma], where A is a statement, and \Gamma is a set of assumptions. For example, F(a)\,[a \in A, \Delta] means that F(a) is true, provided we’re in a context that includes a \in A (along with whatever other assumptions are in \Delta).

Personally, I find that this stuff feels very abstract until you take the logical statements, and interpret them in terms of programming. So throughout this post, I’ll do that for each of the rules.

With that introduction out of the way, let’s dive into the first set of rules.

Simple Introductions

We’ll start off with a couple of really easy rules, which allow us to introduce a variable given a type, or a type given a variable.

Introducing Elements

\frac{A \,\text{type}}{a \in A \,[a \in A]}

This is an easy one. It says that if we know that A is a type, then we can introduce the statement that a \in A, and add that as an assumption in our context. What this means is also simple: our definition of type says that it’s only a type if it has an element, so if we know that A is a type, we know that there must be an element of A, and we can write statements using it.

If you think of this in programming terms, the statement A \text{type} is saying that A is a type. To be a valid type, there must be at least one value that belongs to the type. So you’re allowed to introduce a variable that can be assigned a value of the type.
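To make this concrete, here’s a minimal sketch in Lean 4, a proof assistant whose core is a Martin-Löf-style type theory. (Lean is my choice for illustration here, not the notation from the text.) Introducing an element of a type is just assuming a variable of that type:

```lean
-- A sketch of element introduction: given that A is a type, we may
-- assume a variable a : A in our context.
variable (A : Type) (a : A)

-- With that assumption in scope, we can make statements that use a.
example : A := a
```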

Introducing Propositions as Types

 \frac{a \in A \, []}{A \, \text{true}}

This is almost the mirror image of the previous. A type and a true proposition are the same thing in our type theory: a proposition is just a type, which is a set with at least one member. So if we know that there’s a member of the set A, then A is both a type and a true proposition.
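In Lean terms (again, an illustrative sketch rather than the book’s notation), a proposition is a type, and producing any element of it proves it:

```lean
-- Exhibiting an element a of A proves the proposition A.
example (A : Prop) (a : A) : A := a

-- Concretely: the proposition 1 + 1 = 2 is a type, and rfl is an
-- element of it, so the proposition is true.
example : 1 + 1 = 2 := rfl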

Equality Rules

We start with the three basic rules of equality: equality is reflexive, symmetric, and transitive.

Reflexivity

 \frac{a \in A}{a = a \in A}

 \frac{A \, \text{type}}{A = A}

If a is an element of a type A, then a is equal to itself in type A; and if A is a type, then A is equal to itself.

The only confusing thing about this is that when we talk about an object in a type, we make reference to the type that it’s a part of. This makes sense if you think in terms of programming: you need to declare the type of your variables. “3: Int” doesn’t necessarily mean the same object as “3: Real”; you need the type to disambiguate the statement. So within type theory, we always talk about values with reference to the type that they’re a part of.
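Both halves of the rule, and the role of the type annotation, show up in a Lean sketch like this (rfl and #check are Lean’s names, not notation from the text):

```lean
-- Reflexivity: every element is equal to itself, at its type.
example (A : Type) (a : A) : a = a := rfl

-- The same literal denotes different objects at different types.
#check (3 : Int)    -- 3 : Int
#check (3 : Float)  -- 3 : Float
```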

Symmetry

\frac{a = b \in A}{b = a \in A}

\frac{A = B}{B = A}

No surprises here – standard symmetry.

Transitivity

\frac{a = b \in A \quad b = c \in A}{a = c \in A}

\frac{A = B \quad B = C}{A = C}
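The symmetry and transitivity rules map directly onto lemmas in Lean’s library (Eq.symm and Eq.trans); a quick sketch:

```lean
-- Symmetry: from a = b, conclude b = a.
example (A : Type) (a b : A) (h : a = b) : b = a := h.symm

-- Transitivity: chain a = b and b = c into a = c.
example (A : Type) (a b c : A) (h1 : a = b) (h2 : b = c) : a = c :=
  h1.trans h2
```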

Type Equality

\frac{a \in A \quad A = B}{a \in B}

\frac{a = b \in A \quad A = B}{a = b \in B}

These are pretty simple, and follow from the basic equality rules. If we know that a is a member of the type A, and we know that the type A equals the type B, then obviously a is also a member of B. Along the same lines, if we know that a=b in type A, and A equals B, then a=b in the type B.
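In Lean, this is the cast operation from the core library, which transports an element along an equality of types; a sketch:

```lean
-- cast : A = B → A → B transports an element of A to an element of B.
example (A B : Type) (h : A = B) (a : A) : B := cast h a
```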

Substitution Rules

We’ve got some basic rules about how to formulate some simple meaningful statements in the logic of our type theory. We still can’t do any interesting reasoning; we haven’t built up enough inference rules. In particular, we’ve only been looking at simple, atomic statements using parameterless predicates.

We can use those basic rules to start building upwards, to get to parametric statements, by using substitution rules that allow us to take a parametric statement and reason with it using the non-parametric rules we talked about above.

For example, a parametric statement can be something like C(x) \, \text{type}\, [x \in A], which says that C(x) is a type whenever x is a member of type A. We can use that to produce new inference rules like the ones below.

 \frac{C(x) \, \text{type} [x \in A] \quad a \in A}{C(a) \, \text{type}}

This says that if we know that C(x) is a type for any x of type A, and we know that the value a is of type A, then C(a) is a type. In logical terms, it’s pretty straightforward; in programming terms it’s even clearer: if C is a function on type A, and we pass it a value of type A, it will produce a result. In other words, C(a) is defined for all values of type A.
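As a Lean sketch, C becomes a family of types indexed by elements of A, and the rule is just function application at the type level:

```lean
-- C is a family of types indexed by elements of A; applying it to a
-- particular a : A yields a type.
example (A : Type) (C : A → Type) (a : A) : Type := C a
```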

 \frac{C(x) \, \text{type}[x \in A] \quad a = b \in A}{C(a) = C(b)}

This is even simpler: if C is a function on type A, then given two values that are equal in type A, C will produce the same result for those values.

Of course, I’m lying a bit. In this stuff, C isn’t really a function. C(x) is a logical statement which includes the symbol x; when we write C(a), what we mean is the logical statement C, with the object a substituted for the symbol x. But I think the programming metaphors help clarify what it means.
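In Lean, the corresponding library lemma is congrArg, which says that applying the same function (or type family) to equal arguments gives equal results; a sketch:

```lean
-- congrArg turns a proof of a = b into a proof of C a = C b.
example (A : Type) (C : A → Type) (a b : A) (h : a = b) : C a = C b :=
  congrArg C h
```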

Using those two, we can generate more:

\frac{c(x) \in C(x) [x \in A] \quad a \in A}{c(a) \in C(a)}

This one becomes interesting. C(x) is a proposition which is parametric in x. Then c(x) is a proof-element: it’s an instance of C(x) which proves that C(x) is a type, and we can see c as a computation which, given an element of A, produces an instance of C(x). Then what this judgement says is that given an instance a of type A, we know that c(a) is an instance of type C(a). This will become very important later on, when we really get into type inference and quantification and parametric types.
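In Lean this is exactly a dependent function type: c maps each x : A to an element of C x, so applying it to a gives an element of C a. A sketch:

```lean
-- c is a dependent function: for each x : A, it produces an element
-- of C x; so c a is an element of C a.
example (A : Type) (C : A → Type) (c : (x : A) → C x) (a : A) : C a := c a
```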

\frac{c(x) \in C(x) [x \in A] \quad a = b \in A}{c(a) = c(b) \in C(a)}

This is just a straightforward application of equality to proof objects.

There are more of these kinds of rules, but I’m going to stop here. My goal isn’t to get you to know every single judgement in the intuitionistic logic of type theory, but to give you a sense of what they mean.

That brings us to the end of the basic inference rules. The next things we’ll need to cover are ways of constructing new types, or building types from existing ones. The two main tools for that are enumeration types (basically, types consisting of a group of ordered values), and cartesian products of multiple types. With those, we’ll be able to find ways of constructing most of the types we’ll want to use in programming languages.

18 thoughts on “Inference with Sets in Type Theory”

  1. Omar

    I don’t think this is true:

    “To be a valid type, there must be at least one value that belongs to the type.”

    It’s perfectly valid for a type to have no terms at all. In terms of sets, the empty set is a valid set. In terms of programming, “void” is a valid type.

    1. markcc Post author

      In terms of type theory, it’s only a type if it has a member.

      Type theory is different from set theory – it’s a different foundation for mathematical reasoning. The empty set makes sense in set theory – but the empty type does not make sense in type theory. If it’s empty, it’s not a type.

      In terms of programming, void is a valid type – but it means something slightly different than you might think: the “void” type is a type with exactly one element. So any place you see a void, you know exactly what belongs there – the one, unique void value – and therefore, in the actual implementation, you don’t need to pass it around.

      That’s why in many functional languages, instead of void, they call it something like “unit”.

      1. Anton

        I am fairly sure you are mistaken. Type theory has a close correspondence to category theory, with (for example) type theories with unit, binary product, and function types corresponding to Cartesian closed categories. The type corresponding to the initial object of a category would be a type with no members (closed terms).

        What you might mean is that when we interpret the type theory as a programming language, if the language is to be Turing complete, then every type will have a member. (I haven’t seen a formal proof of this, but it seems to be the case.) However, there are other programming languages like Agda which do not have this requirement, and where there are empty types.

        1. markcc Post author

          Entirely possible. I’m still jetlagged (I was in Sweden for work last week), but once I’m fully back on my feet, I’ll do some more research, and try to talk to some people who know more about this material than I do. When I’ve done that, I’ll either update the post or write a new one explaining where I went wrong. Hopefully it’ll be both interesting and entertaining for you all!

        1. markcc Post author

          I understand how it would be used. Think of it like “bottom” in Haskell: it’s an assertion that the function will never terminate.

      1. markcc Post author

        In programming, when we talk about “empty” types, we usually mean one of two different things.

        One is that we didn’t create any instances of it. I created a “student” type with subtypes “frosh”, “soph”, “junior”, and “senior”, and then in my data, I don’t actually have any frosh. In terms of type theory, that’s not an empty type. The type “Integer” contains every possible integer, regardless of whether a program actually uses any integer values.

        The other case is the “unit” type, or void. We think of it as empty, because it doesn’t contain any data. But in terms of type theory, there’s exactly one instance of it. Theoretically, there is a single, unique value named “void” or “unit” or “nil”.

        1. Omar

          The only type theory I’m actually familiar with is Homotopy Type Theory. There, having no terms is definitely allowed. They call a type that does have a term, inhabited. They call types that are either empty or contractible, propositions (and empty is allowed: it corresponds to “false”).

          I don’t think allowing empty toes is special to HoTT. Martin-Lof type Theory in general must allow them, otherwise things must be very awkward indeed. For example, given any two functions f, g : A–>B you should be allowed to form the equalizer, the type of a:A such that f(a)=g(a). That can certainly be empty (take f to be the identity on the natural numbers, and g to be the function that adds one).

          As I said, I’m only really familiar with that one type theory, and I guess it is conceivable you are studying an impoverished theory that does not allow for empty types, but then it must not allow many, many constructions which risk producing an empty type (another example is intersecting two subtypes of a given type). I really doubt it though, outlawing empty toes is very unnatural.

          1. Omar

            Damn autocorrect! All the “toes” in my last reply, should be “type”, of course.

            I’m very curious about why you think having a type with no terms makes no sense. Could you explain that? Or point to a reference (this would surprise me)?

            In the Curry-Howard correspondence, types are thought of as propositions, and terms of a type are thought of as proofs of a proposition. Are you claiming it makes no sense to have a proposition P with no proofs? If P is false I would certainly hope P has no proofs!

            For example, in some type theories there are equality types: given a type A and two terms a : A, b : A, there is a type a =_A b, which you can think of as the “collection of proofs that a is equal to b”. If you take, say, the natural numbers N, and two terms like 3 and 4, the type 3 =_N 4 is empty, because, as you’d expect, there is no proof that 3 is equal to 4!

            The nLab has a page about the empty type: https://ncatlab.org/nlab/show/empty+type

            So does Wikipedia, though there it is called the bottom type (another common name for it): https://en.wikipedia.org/wiki/Bottom_type

          2. markcc Post author

            For a proposition, the definition of a false proposition is one for which there can be no proofs. But a false proposition doesn’t define a type.

            For a reference, look at the textbook that I linked in my post. For Martin-Löf type theory, a type is never empty. In fact, in the next chapter that I’ll be covering, when we talk about enumeration types, we’ll discuss this a bit: bottom is “absurdity”, the empty set, which isn’t a type. In fact, in ML type theory, in the formal inference rules, there is no rule that allows you to introduce an empty type; there’s just one that allows you to eliminate it, because any time you arrive at an empty type, you’ve landed on an absurdity – a falsehood, which needs to be dropped.

        2. Steve Downey

          It was my understanding that unit and void were distinct. Unit being equivalent to the empty product type, of which there is exactly one element, and void being the empty sum type, which is uninhabited.
          Might, however, be a different branch of type theory.

  2. Omar

    I’m even more confused now. The book you linked to clearly states in Chapter 6 that the judgement “{} set” is valid, and that {} has no introduction rule because {} is not supposed to have any members. It also explains, as you and I have both said, that when thought of as a proposition it corresponds to absurdity or falsehood.

    I still have no idea what you mean when you say it is not a type.

  3. Robert Harper

    There seems to be some confusion here, may I contribute? Empty types most assuredly exist in type theory. A type does not have to be inhabited. The type void, the sum of no summands, is empty; it corresponds to falsehood as a proposition. The type void is internally known to be empty in that the nullary case maps it to every type. There can be types for which we don’t know an inhabitant, but they are not empty. The type corresponding to the P/NP conjecture would be such a one.

    1. markcc Post author

      OK, so I’m confused, and I’d appreciate help from those who know more than me. Once I understand this, I’ll either update the post, or add an entirely new post explaining where I went wrong.

      As I’ve said, I’m learning this stuff as I go. My reading of the Martin-Löf text was that a type is a true proposition, and that a true proposition must have elements.

      Similarly, my reading of the text is that the statement (in their notation) “A set” (asserting that A is a set) is proven by showing that A has at least one element.

      Are both of those readings incorrect?

      1. Robert Harper

        The general principle as regards propositions and types is that a proposition is identified with the type of evidence for it. False propositions have no evidence for them, and hence correspond to empty types; true propositions have evidence, corresponding to elements of the corresponding type. Because evidence is generally to be thought of as computable, one cannot expect to determine, for every proposition, whether it is empty or not; there are many for which we have neither evidence for them nor a refutation of them. Thus, we do not have evidence for the decidability of every proposition, the law of the excluded middle. However, it is consistent to postulate an oracle that does decide any given proposition, or even all propositions. Notice that Boolean true/false thinking is not suitable as an account of propositions in this setting, because a function from a type to the Booleans would be a decision procedure for that type/proposition, which you would not in general have. Thus the emphasis is on inhabitation, not Boolean decisions. A type may have an element (proof); you might be able to show that it has no element (refutation); or you may simply not know (open problem).

      2. Omar

        As I explained before, yes, both readings are incorrect.

        Let me explain again the “A set” bit with page numbers from the book you linked to. On page 41 it introduces the formation rule for sets: if you have n constants i_1, …, i_n, you can claim “{i_1, …, i_n} set”. It really should say *before* the formation rule the range of n allowed, but it is a little sloppy in that regard. However, immediately after, when it states the introduction rules for elements of the set, it does say “(n >= 0)”.

        Then on page 43 it says “If n=0 we get the empty set {} which, of course, has no introduction rule.” It’s saying explicitly you are allowed to take n=0 to get the empty set and its formation rule, but that, of course, there are no introduction rules for its elements (simply because it has no elements).

  4. Rimaan

    In your second to last conclusion in the text you write c(a) is element of C(A), shouldn’t this be c(a) is element of C(a) ?

