Types and Lambda Calculus

Most programmers are familiar with λ-calculus. It’s one of the most widely used tools in theoretical computer science and logic. It’s the basis of many programming languages, including Lisp, Haskell, ML, and Scala. It’s had a strong influence on many other programming languages that aren’t strictly based on it, including Java, C++, and JavaScript.

Motivation: the Problem

Modern programmers love the λ-calculus. A lot of the mathematicians who worked in what became computer science also loved it. But if you go back to its roots, there was a problem.

The λ-calculus in its untyped form, as it was originally defined, was inconsistent.

Haskell Curry eventually reduced the paradox to something quite simple:

 r = (\lambda x. ((x x) \Rightarrow y))

That is a function which returns y if (x x) is true.

To see the paradox, we need to change how we think of λ-calculus. Most people today who know λ-calculus are programmers, and we think of it primarily as something like a programming language. But the roots of λ-calculus were laid before computers as we know them existed. At the time, λ-calculus was a tool used by logicians. To them it wasn’t a computing machine, it was a logical model of computation.

When we look at the expression (r r) \Rightarrow y, what we see is executable logic that reads as “If applying r to r returns true, then return y”. And by using computational reasoning, we’d conclude that (r r) is a non-terminating computation, because to get the value of (r r), we need to evaluate (r r), and then apply r to that result. But to logicians like Haskell Curry, it read as “The statement (r r) implies y”. Rendered into simple English, it’s a statement like: “If this statement is true, then my hair is purple”. It’s purely a logical implication, and so even though we can’t actually evaluate (r r), in logical terms, we can still say “This is a well-formed logical statement, so is it true?”.

Is that true? Suppose it is. If it’s true, then whatever y asserts must be true. Without knowing what y is, we can’t be sure whether this is a true statement.

Suppose that it’s false. If (r r) is false, that means that the implication (r r) \Rightarrow y must be true, because FOPL says that if the antecedent of an implication is false, the entire implication is true. But (r r) reduces to exactly that implication, so if the implication is true, then (r r) is true after all.

It’s clearer when you look at the English: “If this sentence is true, then my hair is purple”. My hair isn’t purple, which means that the statement can’t be true. But if the statement isn’t true, then the implication is true, so the statement is true. We’re caught in a self-reference loop, just like what we saw in Russell’s paradox.

This was considered a very bad thing. It’s a bit subtler than the problem with naive set theory. In set theory, we had an unambiguous inconsistency: we had a set that was well-defined under the axioms of set theory, and without any question, that set was inconsistent. Here, we’ve got an expression which might be consistent, and then again, it might not. It depends on what the consequent – the “my hair is purple” part – of the implication is. If it’s something true, then we’re fine. If it’s not, then we’re stuck.

The problem is, no matter what you put into that “my hair is purple” part, you can use this to produce a proof that it’s true, even if it isn’t. And that’s a fundamental inconsistency, which can’t be tolerated.

Curry’s paradox meant that the logicians working with λ-calculus had a problem that they needed to solve. And following the fashion of the time, they solved it using types. Much like ST type theory attempted to preserve as much of set theory as possible while fixing the inconsistency, typed λ-calculus tried to fix the inconsistency of λ-calculus. The basic approach was also similar: the typed λ-calculus introduced a stratification which made it impossible to build the structures that led to inconsistency.

In ST, the solution was to build a logical system in which it was impossible to express self-reference. In λ-calculus, the basic goal was the same: to eliminate self-reference. But in λ-calculus, that restriction is stated differently. What the type system in λ-calculus does is make it impossible to construct an expression that is a fixed-point combinator.

If we look carefully at the paradoxical statement up above, it’s not really pure λ-calculus. It relies on the fact that we’re defining a function named r, and then applying r to itself using the name r. But that’s just syntactic sugar: in fact, the expression bound to r can’t reference the name r. You need some tool to let you apply an expression to itself: that tool is called a fixed-point combinator. The most common fixed-point combinator in λ-calculus is the Y combinator, which underlies how recursive computations usually work in λ-calculus.
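
For reference, here’s the standard definition of the Y combinator (this is the classic construction, not something new to this post):

 Y = \lambda f . (\lambda x . f (x x)) (\lambda x . f (x x))

Its defining property is that Y g = g (Y g) for any g, which is what makes general recursion expressible. Notice the self-application (x x) inside it: that’s exactly the construct that the type stratification described below rules out.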

The way that the simply typed λ-calculus gets around the Curry paradox is by making it impossible to build a well-typed fixed-point combinator. Without that, you can’t build the self-referential constructs that cause the inconsistency. The downside is that the simply typed λ-calculus, without a fixed-point combinator, is not Turing complete. The evaluation of every simply typed λ-calculus expression will eventually terminate.

(As an aside: this isn’t really a problem in practice. The self-referential expressions that cause the Curry paradox turn into non-terminating computations, so they don’t produce a paradox; they just don’t produce anything. They’re still an error, but instead of terminating with an inconsistent result, they just never terminate. Again, to the logicians at the time, the idea of non-termination was, itself, a deep problem that needed a solution.)

The Solution: Stratification by Types

The way that the simply typed λ-calculus fixed things was by creating a stratification using types. The type system created a restriction on the set of statements that were valid, well-formed logical statements. That restriction made it impossible to express a fixed point combinator or a general recursive computation of any kind.

It’s helpful to once again refer back to ST. In ST, we started with the type of atoms at level 0. To get to level 1, we defined predicates over level-0 objects, and the set of objects that matched the predicate was a level-1 type. Then we could define predicates over level-1 objects, and the set of level-1 objects that satisfied the predicate was a level-2 type, and so on. In the simply typed λ-calculus, we do the same thing, but with functions: we can build functions that operate on primitive atoms (also called base values), or on other functions. When we define a function, its parameter must be assigned a type, and that type must be at a lower level than the function being defined. You can’t ever pass a function to itself, because the function is an object at a higher level than the type of its parameter.

We start with base types. Every simply typed λ-calculus starts with a collection of primitive atomic values. The set of atomic values is partitioned into a collection of types, which are called the base types. Base types are usually named by single lower-case Greek letters. So, for example, we could have a type \sigma which consists of the set of natural numbers; a type \tau which corresponds to boolean true/false values; and a type \gamma which corresponds to strings.

Once we have base types, then we can talk about the type of a function. A function maps from a value of one type (the type of its parameter) to a value of a second type (the type of its return value). For a function that takes a parameter of type \gamma, and returns a value of type \delta, we write its type as \gamma \rightarrow \delta. The “\rightarrow” is called the function type constructor; it associates to the right, so \gamma \rightarrow \delta \rightarrow \epsilon is equivalent to \gamma \rightarrow (\delta \rightarrow \epsilon).

In every function declaration, we need to specify the type of the parameters and the type of the result. So:

  1. \sigma \rightarrow \sigma is a function which takes a natural number as a parameter, and returns a natural number as a result.
  2. (\sigma \rightarrow \sigma) \rightarrow \sigma is a function which takes a \sigma \rightarrow \sigma function as a parameter, and produces a natural number as a result.

As usual in λ-calculus, we don’t have multi-parameter functions: all functions are curried. So a function like addNatural would be a function that takes a natural number as a parameter, and returns a function that takes a natural number and returns a natural number. So the type of addNatural is \sigma \rightarrow \sigma \rightarrow \sigma.
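
To make the currying concrete, here’s a minimal sketch in Haskell, with Haskell’s Natural standing in for \sigma (addNatural is the function named above; addThree is just a made-up name for this sketch):

    import Numeric.Natural (Natural)

    -- addNatural has type σ → σ → σ, which associates as σ → (σ → σ).
    addNatural :: Natural -> Natural -> Natural
    addNatural x y = x + y

    -- Applying it to a single argument yields a σ → σ function.
    addThree :: Natural -> Natural
    addThree = addNatural 3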

How does this get around the self-reference problem? A function like the one in the Curry paradox takes an object of its own type as a parameter. There’s no way to write that in this type system. It’s a significant restriction: it makes it impossible to write general recursive expressions, and limits us to something close to primitive recursion. But it avoids the inconsistency: every valid expression written with this system of types in place is guaranteed to terminate with a consistent result.
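
As a quick illustration of the restriction, here’s a deliberately ill-typed sketch in Haskell, whose core is a typed λ-calculus; GHC rejects this definition:

    -- The parameter x would need a type t that is simultaneously the
    -- function type t -> b. The stratification forbids that, and the
    -- compiler reports an occurs-check error (it cannot construct the
    -- infinite type t ~ t -> b).
    selfApply x = x x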

Extending λ-calculus with types

Now, it’s time to get at least a little bit formal, to see how we integrate a stratified type system into the λ-calculus. There are two facets to that: the syntactic, and the analytic. The syntactic part shows how we extend λ-calculus to include type declarations, and the analytic part shows how to determine whether or not an expression with type declarations is valid.

The syntax part is easy. We add a “:” to the notation; the colon has an expression or variable binding on its left, and a type specification on its right. It asserts that whatever is on the left side of the colon has the type specified on the right side. A few examples:

\lambda x: \nu . x + 3
This asserts that the parameter x has type \nu, which we’ll use as the type name for the natural numbers. (In case it’s hard to tell, that’s a Greek letter “nu” for natural.) There is no assertion of the type of the result of the function; but since we know that “+” is a function with type \nu \rightarrow \nu \rightarrow \nu, we can infer that the result type of this function will be \nu.
(\lambda x . x + 3): \nu \rightarrow \nu
This is the same as the previous, but with the type declaration moved out, so that it asserts the type for the lambda expression as a whole. This time we can infer that x : \nu because the function type is \nu \rightarrow \nu, which means that the function parameter has type \nu.
\lambda x: \nu, y: \delta . \text{if } y \text{ then } x * x \text{ else } x
This is a two parameter function; the first parameter has type ν, and the second has type δ. We can infer the return type, which is ν. So the type of the full function is ν → δ → ν. This may seem surprising at first; but remember that λ-calculus really works in terms of single parameter functions; multi-parameter functions are a shorthand for currying. So really, this function is: λ x : ν . (λ y : δ . if y then x * x else x); the inner lambda is type δ → ν; the outer lambda is type ν → (δ → ν).
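
For comparison, that last example renders almost directly into Haskell (a hypothetical sketch, with Natural standing in for ν, Bool for δ, and squareIf as an invented name):

    import Numeric.Natural (Natural)

    -- ν → δ → ν, which really means ν → (δ → ν) after currying.
    squareIf :: Natural -> Bool -> Natural
    squareIf x y = if y then x * x else x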

To talk about whether a program is valid with respect to types (aka well-typed), we need to introduce a set of rules for checking the validity of the type declarations. Using those rules, we can verify that the program is type-consistent.

In type analysis, we’ll talk about judgements. When we can infer the type of an expression using an inference rule, we call that inference a type judgement. Type analysis allows us to use inference and judgements to reason about types in a lambda expression. If any part of an expression winds up with an inconsistent type judgement, then the expression is invalid. If we can show that all of the components of an expression have consistent type judgements, then we can conclude that the expression is well-typed, meaning that it’s valid with respect to the type system.

Type judgements are usually written in a sequent-based notation, which looks like a fraction: the numerator consists of statements that we know to be true, and the denominator is what we can infer from the numerator. In the numerator, we normally have statements using a context, which is a set of type judgements that we already know; it’s usually written as an uppercase Greek letter. If a type context \Gamma includes the judgement that x : \alpha, I’ll write that as \Gamma \vdash x : \alpha.

Rule 1: Type Identity

\frac{}{x : \alpha \vdash x : \alpha}

This is the simplest rule: if we have no other information except a declaration of the type of a variable, then we know that that variable has the type it was declared with.

Rule 2: Type Invariance

\frac{\Gamma \vdash x : \alpha, x \ne y}{\Gamma + y : \beta \vdash x : \alpha}

This rule is a statement of non-interference. If we know that x:\alpha, then inferring a type judgement about any other term cannot change our type judgement for x.

Rule 3: Function Type Inference

\frac{\Gamma + x : \alpha \vdash y : \beta}{\Gamma \vdash (\lambda x : \alpha . y) : \alpha \rightarrow \beta}

This statement allows us to infer function types given parameter types. If we know that the type of the parameter to a function is \alpha, and if, with our knowledge of the parameter type, we know that the type of the term that makes up the body of the function is \beta, then we know that the type of the function is \alpha \rightarrow \beta.

Rule 4: Function Application Inference

\frac{\Gamma \vdash x : \alpha \rightarrow \beta, \Gamma \vdash y : \alpha}{\Gamma \vdash (x y) : \beta}

This one is easy: if we know that we have a function that takes a parameter of type \alpha and returns a value of type \beta, then if we apply that function to a value of type \alpha, we’ll get a value of type \beta.

These four rules are it. If we can take a lambda expression, and come up with a consistent set of type judgements for every term in the expression, then the expression is well-typed. If not, then the expression is invalid.
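
To see how the four rules fit together, here’s a minimal sketch of a checker for the simply typed λ-calculus, written in Haskell. All of the names here (Type, Term, typeOf, and so on) are invented for this sketch, and terms carry explicit parameter annotations rather than having their types solved for:

    import qualified Data.Map as Map

    data Type = Base String          -- a base type, like ν
              | Arrow Type Type      -- a function type, α → β
              deriving (Eq, Show)

    data Term = Var String           -- a variable reference
              | Lam String Type Term -- λx:α . body
              | App Term Term        -- an application (x y)
              deriving Show

    -- A context Γ: the set of type judgements we already know.
    type Context = Map.Map String Type

    typeOf :: Context -> Term -> Either String Type
    -- Rules 1 and 2 (identity and invariance): a variable has whatever
    -- type the context assigns it, unaffected by other judgements.
    typeOf ctx (Var x) =
      case Map.lookup x ctx of
        Just t  -> Right t
        Nothing -> Left ("unbound variable: " ++ x)
    -- Rule 3 (function type inference): extend Γ with x:α, judge the
    -- body to be β, and conclude the lambda has type α → β.
    typeOf ctx (Lam x a body) =
      Arrow a <$> typeOf (Map.insert x a ctx) body
    -- Rule 4 (application inference): if x : α → β and y : α,
    -- then (x y) : β.
    typeOf ctx (App f arg) = do
      tf <- typeOf ctx f
      ta <- typeOf ctx arg
      case tf of
        Arrow a b | a == ta -> Right b
        Arrow a _           -> Left ("expected argument of type " ++ show a)
        _                   -> Left "applied a term that isn't a function"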

So let’s try taking a look at a simple λ-calculus expression, and seeing how inference works on it.

\lambda x y . y x

Without any type declarations on the parameters, we don’t know the exact type. But we do know that “x” has some type; we’ll call that “α”. And we know that “y” is a function that will be applied with “x” as a parameter, so it’s got parameter type α, but its result type is unknown. So using type variables, we can say “x:α, y:α→β”. We can figure out what “α” and “β” are by looking at a complete expression. So, let’s work out the typing of it with x = “3” and y = “λ a:ν . a*a”. We’ll assume that our type context already includes “*” as a function of type “ν→ν→ν”, where ν is the type of natural numbers.

  • “(λ x y . y x) 3 (λ a:ν . a * a)”: Since 3 is a literal natural number, we know its type: 3:ν.
  • By rule 4, we can infer that the type of the expression “a*a”, where “a:ν” and “*:ν→ν→ν”, is “ν”; therefore, by rule 3, the lambda expression has type “ν→ν”. So with type labelling, our expression is now: “(λ x y . y x) (3:ν) (λ a:ν.(a*a):ν) : ν→ν”.
  • Now we know that the parameter “x” of the first lambda must be “ν”, and “y” must be “ν→ν”; so by rule 4, the type of the application expression “y x” must be “ν”; and then by rule 3, the lambda has type “ν→(ν→ν)→ν”.
  • So, for this one, both α and β end up being “ν”, the type of natural numbers.
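
Running the checker sketched after the inference rules on the annotated form of this expression reproduces the same judgement. (The sketch checks explicit annotations rather than solving for α and β, so x and y carry their inferred types directly.)

    -- (λ x:ν . λ y:ν→ν . y x), with ν as a base type.
    example :: Term
    example = Lam "x" nu (Lam "y" (Arrow nu nu) (App (Var "y") (Var "x")))
      where nu = Base "nu"

    -- typeOf Map.empty example
    --   ==> Right (Arrow (Base "nu")
    --                    (Arrow (Arrow (Base "nu") (Base "nu")) (Base "nu")))
    -- i.e. ν → (ν → ν) → ν, matching the judgement above.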

So, now we have a simply typed λ-calculus. It’s called simply typed because the type treatment here is minimal: the only way of building new types is through the unavoidable \rightarrow constructor. Other typed lambda calculi include the ability to define parametric types, which are types expressed as functions ranging over types.

Programs are Proofs

Now we can get to the fun part. The mantra of type theory is the program is the proof. Here’s where we get our first glimpse of just what that really means!

Think about the types in the simply typed λ-calculus. Anything which can be formed from the following grammar is a λ-calculus type:

  • type ::= primitive | function | ( type )
  • primitive ::= α | β | δ | …
  • function ::= type→type

The catch with that grammar is that you can create type expressions which are valid type definitions, but for which you can’t write a single, complete, closed expression that actually has that type. (A closed expression is one with no free variables.) When there is an expression that has a type, we say that the expression inhabits the type, and that the type is an inhabited type. If there is no expression that can inhabit a type, we say it’s uninhabitable. Any expression which either can’t be typed using the inference rules, or which is typed to an uninhabitable type, is a type error.

So what’s the difference between an inhabitable type and an uninhabitable type?

The answer comes from something called the Curry-Howard isomorphism. For a typed λ-calculus, there is a corresponding intuitionistic logic. A type expression is inhabitable if and only if the type is a provable theorem in the corresponding logic.

The type inference rules in λ-calculus are, in fact, the same as logical inference rules in intuitionistic logic. A type \alpha \rightarrow \beta can be seen both as a statement that this is a function that maps from a value of type \alpha to a value of type \beta, and as a logical statement that if we’re given a fact \alpha, we could use that to infer the truth of a fact \beta.

If there’s a logical inference chain from an axiom (a given type assignment) to an inferred statement, then the inferred statement is an inhabitable type. If we have a type \alpha \rightarrow \alpha, then given an inhabited type \alpha, we know that \alpha \rightarrow \alpha is inhabitable, because if \alpha is a fact, then \alpha \rightarrow \alpha is also a fact.
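
A couple of tiny Haskell sketches make the correspondence concrete; each definition is a closed, well-typed term, and so a witness for the propositional formula its type spells out (read the type variables as inhabited base types, per the discussion above):

    -- The identity function inhabits α → α: if α is a fact, so is α.
    identity :: a -> a
    identity x = x

    -- Application is modus ponens: from α → β and α, conclude β.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- Composition witnesses the transitivity of implication:
    -- (β → γ) → ((α → β) → (α → γ)).
    compose :: (b -> c) -> (a -> b) -> (a -> c)
    compose g f = \x -> g (f x)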

On the other hand, think of a different case \alpha \rightarrow \beta. That’s not a theorem, unless there’s some other context that proves it. As a function type, that’s the type of a function which, without including any context of any kind, can take a parameter of type α, and return a value of a different type β. You can’t do that – there’s got to be some context which provides a value of type β – and to access the context, there’s got to be something to allow the function to access its context: a free variable. Same thing in the logic and the λ-calculus: you need some kind of context to establish “α→β” as a theorem (in the logic) or as an inhabitable type (in the λ-calculus).

What kind of context would make a type \alpha \rightarrow \beta inhabitable? A definition of a valid function that takes an α, and returns a β. If such a function exists, then that function is a proof of the inhabitability of the type. Literally, the program is the proof.

30 thoughts on “Types and Lambda Calculus”

  1. Tim

    Before I start nitpicking, thanks for the nice post! On to the nitpicking. Rule 1-3 don’t render very well as “:-” is hard to read once it has been typeset into something like “x:a:-x:a”. The way latex spaces it this looks like it should be grouped as “(x:a):(-x:a)”, which is nonsense. Maybe use “\vdash”? Also “This asserts that the parameter, x has type nu” is maybe missing a backslash “\nu”. Also “calculuss”. “λ x y . y x)” is missing a “(“. “Ff” to If. But I liked the post and hope you keep going on this subject.

  2. John Armstrong

    Would it be more accurate to say that simply-typed Lambda calculus prevents a general fixed-point combinator? We should still be able to write

    Y_nu: ((nu -> nu) -> (nu -> nu)) -> (nu -> nu)

    so that Y_nu(factorial_maker) = factorial

    and maybe more gener(ic)ally

    Y_t: ((t -> t) -> (t -> t)) -> (t -> t)

  3. John Armstrong

    Wouldn’t it be the case that a -> a is always inhabited (by the identity function on a), regardless of whether a is itself inhabited or not? Or is that an example of my classical training leaking over into an intuitionistic setting where it doesn’t apply?

    1. markcc Post author

      No: a -> a is only inhabited if a is inhabited.

      In intuitionism, nothing is true unless there’s a proof for it. So even a -> a, you need to be able to show an example that proves it. If there are no a’s, then there is no proof.

      1. John Armstrong

        And “the function that sends (would send) every a to itself” doesn’t count?

        I’m rusty, but trying to work this out in a simple topos.. there’s obviously an arrow a->a but there may not be an arrow from the terminal object 1 to the exponential object a^a without there being an arrow from 1 to a…

        1. markcc Post author

          The function that sends every a to itself can only exist if there is an a for it to operate on.

          A way of making that clearer would be to say that the type a doesn’t exist unless it contains at least one value.

          1. dtm0

            This is confusing because the standard (set theory) way of viewing functions says that the empty set is an initial object in category Set; that is, that there is exactly one function from the empty set to any given set.

          2. markcc Post author

            Yes, but this isn’t set theory. That sounds like a trite response, but it’s a really important point. Set theory is built on a set of axioms that pre-supposes a lot of structure. Those axioms create results that the inventors of type theory thought were unacceptable.

            In the world where type theory exists, nothing exists unless it’s shown to exist. You don’t get to assume that things must exist because they should. You have to show that they exist.

            It’s not a problem, because you can always show that a function like that exists. As long as a type a is inhabited, you can show that a function that produces an a exists. But if a is uninhabited, according to type theory (or at least the version we’re talking about here), then you can’t have a function that returns a value of type a, because it can’t exist.

          3. dtm0

            Except that we weren’t talking about functions that _returned_ a value of type a. We were talking about functions that _accepted_ a value of type a. It isn’t clear to me that such a function doesn’t exist, even in type theory.

            After all, what does it mean to say that a function f exists from type a to type b? It means that for any value a_0 of a, you can come up with a value of b called f(a_0). If a has a finite number of elements, one permitted way to define f should be to have two parallel lists, one of all the elements of a and one of elements of b. Now, if the type a is uninhabited, then that first list is empty. But an empty list is still a list, so we still have a function: the one defined by two parallel lists, that are both empty. This is a function from the uninhabited type a to the (inhabited or not) type b. (And in fact, it is the only function from a to b)

          4. John Armstrong

            Also, if you want to bring categories into it you’ve got to bring in topoi. Saying a function (for example) from a to b exists is not saying that there is an arrow in the category from a to b; it’s saying that there is an arrow from the terminal object to the exponential b^a.

      2. Carl Witty

        I suppose it would be possible to define a type system that worked that way, but I don’t think it’s standard — and if you do that, you lose the programs-are-proofs interpretation.

        Suppose that F (“False”) is an empty type (that’s the way to deal with false in a programs-are-proofs setting). The statement F->a (where a is an arbitrary type/proposition, or a type/proposition variable) is constructively true, so you want there to be an inhabitant of the type F->a .

        You can think of this function as a promise: if you give me an element of F, I’ll give you an element of a. This is a safe promise to make, since there are no elements of F.

        People usually don’t work much with programs-are-proofs and simply-typed lambda calculus, because that only gives propositional logic, which is boring. But see section 3.3.1 of http://www.labri.fr/perso/casteran/RecTutorial.pdf for examples of how this works in coq, a proof verification system based on type theory.

        1. John Armstrong

          Sorry, Carl, I’m not sure which comment you’re replying to. In case it was mine, that’s not quite the situation I was describing.

          I’m suggesting something more like the “identity function” F -> F where F is an uninhabited type that sends “every (nonexistent) member” of F to itself. Does this function exist and inhabit the type F -> F? Mark is asserting (and I’m not really contesting here) that it doesn’t.

          1. Carl Witty

            tl;dr version: Yes, it exists.

            More complicated answer: I’m mostly a formalist in mathematics, so the question of whether a particular mathematical object exists depends on the logic you’re working in (the axioms you’re working from, etc.) There probably are logics with functions, function types, and empty types where there is no function of type F->F.

            In coq, though, there is a function of that type, and such a function exists in any system where you’re looking at programs-are-proofs and you want your logic to include at least constructive propositional logic.

            In fact, when I said above that F->a should be inhabited for arbitrary types a, I was explicitly thinking of the case where a is F, so F->F is inhabited as an instance of what I said before. I guess I should have said that explicitly.

          2. markcc Post author

            I think that the answer is, there are many different variants of STLC. Many systems work with extended versions.

            But my understanding of STLC remains: unless the type A is inhabited, the type A\rightarrow A cannot be inhabited. A function that maps an A to an A doesn’t really exist, unless there is something for it to operate on.

            If A is uninhabited, then to the STLC, A doesn’t actually exist. What does it mean, semantically, to say that there’s a function from A to A, but that A doesn’t really exist?

          3. Carl Witty

            “What does it mean, semantically, to say that there’s a function from $A$ to $A$, but that $A$ doesn’t really exist?”

            We can discuss a simple set-theoretic denotational semantics. Denotational semantics for STLC would map types to sets, and would map functions to set-theoretic functions (sets of ordered pairs).

            Then an uninhabited type $A$ would map to the empty set, and a function of type $A \rightarrow A$ would map to a set-theoretic function whose domain and codomain were both the empty set. There is exactly one such function, the empty set. And the function type $A \rightarrow A$ would map to the set of all such functions, so the type maps to the set containing the empty set.

            So there’s no difficulty in assigning a semantics for uninhabited types, function types involving uninhabited types, or functions whose domain is an uninhabited type. In fact, all these things just work automatically… you would have to go to extra effort to modify the semantics so that they didn’t exist.

          4. markcc Post author

            But we are not doing set theory. We’re doing intuitionistic type theory.

            In set theory, an empty function is fine: it’s just an empty set.

            In the simple intuitionistic type theory of the simply typed lambda calculus, it’s *not* fine. It’s prohibited by the semantics.

            You can certainly define an alternate typed lambda calculus with semantics where you don’t worry about type inhabitability – where it’s perfectly valid to have empty, uninhabited types, and uninhabitable function types. But it will be a very different semantics than what we’re looking at in the STLC, and those differences will have a lot of effects on the overall theory. And that’s not what we’re talking about here: this post is just trying to explain the basics of the STLC – not of a set-theoretic variant, not of a FOPL-based logical framework.

          5. Carl Witty

            First, I’m curious. If you don’t want a set-theoretic denotational semantics here, what sort of answer would you accept for “what does it mean, semantically”? In other words, suppose that $B$ is an inhabited type. What does it mean for there to be a function from $B$ to $B$? Why can’t it mean the same thing for a function from $A$ to $A$?

            Second, if you go back to your original post, you say: “If we have a type $\alpha \rightarrow \alpha$, then given a inhabited type $\alpha$, we know that $\alpha \rightarrow \alpha$ is inhabitable, because if $\alpha$ is a fact, then $\alpha \rightarrow \alpha$ is also a fact.”

            But $\alpha \rightarrow \alpha$ is a fact whether $\alpha$ is a fact or not. So $\alpha \rightarrow \alpha$ should be inhabited whether or not $\alpha$ is inhabited.

          6. markcc Post author

            I don’t actually want *any* answer.

            What I’m trying to get across is that we aren’t doing set theory. For most of us who are modern math students, that’s a really hard idea to wrap our heads around. Everything we’ve been taught, literally from elementary school on, is built on sets. Every answer is always sets.

            But we’re not doing set theory. Type theory is a fundamentally different approach to math. It’s a different foundation, with different concepts.

            In type theory, the question “What does an inhabited function type over an uninhabited argument type mean semantically?” is a nonsense question. It doesn’t make any sense: the type theoretic definition of what it means for a type to actually exist is that it’s inhabited.

            Trying to argue that a function with type A \rightarrow A is inhabited when A is uninhabited is exactly the same as arguing that a function with type A \rightarrow A exists when there’s no such thing as type A. It’s not just arguing that the function is a set of ordered pairs and therefore, since the domain set is empty, the function is empty. It’s talking nonsense.

            The closest I can get in terms of set theory would be saying that you’ve got a set {x | Foo(x)}, where Foo(x) is undefined.

            You can define the predicate Foo, and then that set is defined and meaningful and valid. But if Foo isn’t defined – if it’s not a predicate at all, then what does that set mean? It doesn’t mean anything, because it’s not a valid definition in set theory.

            A \rightarrow A, where the type A is uninhabited – that is, where the type A doesn’t exist, is the same kind of thing – in the domain of type theory, it’s not a valid definition – it’s something ill-defined and invalid.

          7. Carl Witty

            “… [T]he type theoretic definition of what it means for a type to actually exist is that it’s inhabited.”

            Not according to the type theories I’m familiar with. In theories with first-class types, there is a collection of universes, and a type “exists” if it is an element of some universe. In theories without first-class types, there is a judgment in the logic “$A$ is a Type” (abbreviated “$A$ Type”).

            A type theory where every type that “existed” was inhabited would be entirely boring from a programs-are-proofs point of view — it corresponds to a logic where every statement you can make is provable.

            I do realize that type theory is different than set theory. I’ve read multiple books on type theory (Implementing Mathematics with The Nuprl Proof Development System, Type Theory and Functional Programming, and large parts of the coq documentation), proved (minor) theorems in two computer implementations of type theory (MetaPRL and coq), and also contributed (minor) amounts of code and documentation to MetaPRL.

        2. Brian Slesinsky

          “You can think of this function as a promise: if you give me an element of F, I’ll give you an element of a. This is a safe promise to make, since there are no elements of F.”

          Thanks, that’s a very clear way to explain it. So if I understand correctly, a partial function isn’t a valid promise or proof since it doesn’t work in all cases. At best it can work as a proof for specific cases, provided those cases are checked by actually executing the function.

          If we know it’s a total function then we don’t need to execute it; we already know that we could construct the result (in principle; it may not be practical). If we can compile the function (that is, check the proof) and prove that an input exists, we’ve proved that its output can exist.

          But since there are few interesting total functions in practical programming languages and we’re usually interested in proving more than just that an output exists, the ability to compile a program is not enough; we must test our programs.

          1. Carl Witty

            Yes, that’s all basically correct.

            In the coq programming language, all functions are total (so the language is not Turing-complete). So in coq, it is true that you don’t need to execute the function, and so you usually don’t.

            I’d revise your last paragraph, though. Something more like:
            Since common programming languages allow non-total functions, and there’s no easy check to say that a certain function is total, you would need to execute the function to prove that a type is inhabited. Plus, the type systems of common languages are simple enough that proving a type is inhabited isn’t useful — in fact, for many languages, it’s trivial: all types are inhabited.

  4. Brian Slesinsky

    It seems rather odd that a function type that looks like a->b can’t exist in simply typed lambda calculus. In most programming languages you could write a function that takes an integer and returns a string, for example, and a function mapping numbers to strings seems fine mathematically. So what is this really saying? Are constants always part of the context?

    1. markcc Post author

      It’s not that it can’t exist. It’s that it doesn’t exist until there is at least one function that takes an a and returns a b.

      So even if the system didn’t include any functions of type a \rightarrow b, that wouldn’t matter, because as soon as you implement a function that takes an a and returns a b, the type is inhabited.

      Also, remember that while functional programming languages are based on the typed lambda calculus, that doesn’t mean that they strictly follow it. General recursion doesn’t exist in the simply typed lambda calculus, but nearly every programming language based on LC does. Most LC-based programming languages allow you to write things using types that don’t yet have exemplars.

  5. Brian Slesinsky

    So, perhaps this is like saying that you can’t write down a function from numbers to strings unless you have some legal way of generating or referring to a string in the body of the function? So a programming language that supports string literals is essentially saying that the existence of strings is an axiom in that language. We could say that a string constructor can be used to prove that a string exists provided that all its arguments exist, since we can then write an expression that returns a particular string. But it seems like then we need to either execute the expression to show it will terminate or use a language where we don’t have to worry about nontermination?

    1. markcc Post author

      That’s partly right.

      WRT the first point: A programming language that supports string literals has an inhabited string type, with string literals as a witness. The string literals aren’t an axiom: they’re a proof. The fact that string literals exist forms the proof that the string type is inhabited. (Remember: the program is the proof.)

      WRT the second point: you don’t need to execute the expression. In simply typed lambda calculus, all valid expressions terminate. Non-termination is not an issue, because STLC is *not* Turing complete. (Here, remember that we’re in the simplest piece of type-based stuff. The simply typed lambda calculus without any general fixed-point combinator is an extremely simplified, stripped down theoretical model. This isn’t what we really use to program.)

  6. Brian Slesinsky

    Yep, understood that this isn’t “real” programming, but I’m trying to understand the math using programming as a metaphor.

    It seems like this is using a strong assumption (programs can be statically guaranteed to terminate) to prove something weak (a type has at least one member). It’s not obvious why this might be an interesting thing to prove.

    Also, we could prove that a type has a member by evaluating the function and seeing what the answer is. If it terminates, that seems like it might be a good proof. So for type-checking to be useful to prove that an instance of a type exists, it has to be easier to type-check a program than to run it. Since a type-checker is just another program (and perhaps a complicated one) this seems a bit questionable. So perhaps requiring all functions to be total functions isn’t really needed?

    1. markcc Post author

      Yeah, I understand. The problem is that understanding this with programming as a metaphor is misleading. You’re trying to make things concrete using the program metaphor – but the process of making it concrete changes it.

      In STLC, every function terminates. So if you can write a function of type a \rightarrow b, and there’s at least one instance of a, then a \rightarrow b is inhabited. The point I’m trying to make here is that this statement is not that if you write a function of type a \rightarrow b and it terminates for some value of a, then a \rightarrow b is inhabited. If there exists any expression of type a \rightarrow b, and there exists any value of type a, then a \rightarrow b is inhabited.

      In order to mix the idea of termination and partiality into this, you need to extend the theoretical framework to include the ideas of termination and partiality. We haven’t done that. This framework just won’t work with those concepts.

      In the STLC, the assumption that all programs terminate isn’t a strong assumption. In fact, it’s a simple one. To a programmer, that’s ridiculous – but you’ve almost certainly never programmed in a language that’s as limited as STLC.

      But even in a real programming language, like Haskell, we don’t really play the game of inhabitability-by-termination. If you write a function of type a \rightarrow b, then we assume that a is inhabited, and therefore that a \rightarrow b is inhabited. Similarly, if you write an expression that takes a parameter of type a \rightarrow b, we assume that a \rightarrow b is inhabited.

      The “proof” that they really are inhabited is the program that you write using them.

