In the last post, we started looking at expressions. In this post, we’re going to continue doing that, and start looking at a simple form of expression typing called *arity*.

Before we can do that, we need to introduce a couple of new formalisms to complete our understanding of the elements that form expressions. The reason we need another formalism is because so far, we’ve defined the meaning of expressions in terms of function calls. But we’ve still got some ambiguity about our function calls – specifically, how do parameters work?

Suppose I’ve got a function, `f`. Can I call it as `f(x)`? Or `f(x, y)`? Both? Neither? It depends on exactly what function calls mean, and what it means to be a valid parameter to a function.

There are several approaches that you can take:

- You can say that a function application expression takes an arbitrary number of arguments. This is, roughly, what we do in dynamically typed programming languages like Python. In Python, you can write `f(x) + f(x, y) + f(x, y, z, a, b, c)`, and the language parser will accept it. (It’ll complain when you try to execute it, but at the level of expressions, it’s a perfectly valid expression.)
- You can say that every function takes exactly one argument, and that a multi-argument function call like `f(x, y)` is really shorthand for a curried call sequence `f(x)(y)` – and thus, the “application” operation really takes exactly two parameters: a function, and a single value to apply that function to. This is the approach that we usually take in lambda calculus.
- You can say that application takes two arguments, but the second one can be a *combination* of multiple objects – effectively a tuple. That’s what we do in a lot of versions of recursive function theory, and in programming languages like SML.
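The second and third approaches are easy to sketch in Python. Here’s a minimal illustration (the function names `add`, `add_x`, and `add_pair` are mine, just for illustration):

```python
# Curried style: every function takes exactly one argument, and a
# multi-argument call becomes a chain of single-argument applications.
def add(x):
    def add_x(y):
        return x + y
    return add_x

# f(x, y) in curried form is f(x)(y): each application consumes one value.
print(add(3)(4))  # prints 7

# Tuple-argument style: application takes a function and a single
# combined value (here, an ordinary Python tuple).
def add_pair(pair):
    x, y = pair
    return x + y

print(add_pair((3, 4)))  # prints 7
```

Both styles compute the same result; the difference is purely in how the application operation is structured.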

In the theory of expressions, Martin-Löf chose the third approach. A function takes a single parameter, which is a *combination* of a collection of other objects. If `x`, `y`, `z`, and `w` are expressions, then `(x, y, z, w)` is an expression called the *combination* of its four elements. This is very closely related to the idea of *cartesian products* in set theory, but it’s important to realize that it’s not the same thing. We’re not defining elements of a set here: we’re defining a syntactic construct of a pseudo-programming language, where one possible interpretation of it is cartesian products.

In addition to just multi-parameter functions, we’ll use combinations for other things. In type theory, we want to be able to talk about certain mathematical constructs as first-class objects. For example, we’d like to be able to talk about *orderings*, where an ordering is a collection of objects `S` combined with an operation `≤`. Using combinations, we can write that very naturally as `(S, ≤)`.
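Python tuples are a reasonable stand-in for combinations. Here’s a sketch of the ordering example, with the names `S`, `leq`, and `ordering` being my own choices for illustration:

```python
import operator

# An ordering as a combination: a collection of objects together
# with a comparison operation, bundled into a single first-class value.
S = {1, 2, 3}
leq = operator.le
ordering = (S, leq)

# Because the ordering is a single value, we can pass it around
# and take it apart wherever we need its pieces.
objects, op = ordering
print(op(1, 2))  # prints True
```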

For combinations to be useful, we need to be able to extract the component values from them. In set theory, we’d do that by having projection functions associated with the cross-product. In our theory of expressions, we have *selection expressions*. If `x` is a combination with four elements, then `x.1` is a selection expression which extracts the first element from `x`.

In programming language terms, combinations give us a way of writing tuple values. Guess what’s next? Record types! Or rather, combinations with named elements. We can write a combination with names, like `(x: 3, y: 7)`, and then write selection expressions using the names: if `p` is that combination, then `p.x` selects the element named `x`.
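Both kinds of selection map directly onto familiar Python constructs – positional selection is tuple indexing, and named selection is what `namedtuple` provides. A quick sketch (the names `combo`, `Point`, and `p` are mine):

```python
from collections import namedtuple

# A combination of four elements, as a plain tuple.
combo = (10, 20, 30, 40)

# Positional selection: the theory's x.1 corresponds to indexing
# (Python counts from 0 where the post counts from 1).
first = combo[0]  # selects 10

# A combination with named elements is essentially a record;
# namedtuple gives us selection by name.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=3, y=7)
print(first, p.x)  # prints 10 3
```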

Now we can start getting to the meat of things.

In combinatory logic, we’d just start with a collection of primitive constant values, and then build whatever we wanted with them using abstractions, applications, combinations, and selections. Combinatory logic is the parent of computation theory: why can’t we just stick with that foundation?

There are two answers to that. The first is familiar to programming language people: if you don’t put any restrictions on things, then you lose type safety. You’ll be able to write “valid” expressions that don’t mean anything – things like `1.1` (a selector applied to the value 1), even though “1” isn’t a combination, and so calling a selector on it makes no sense. Or you’d be able to call a function like `factorial(27, 13)`, even though the function only takes one parameter.
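This is exactly what happens in a dynamically typed language: both of these ill-formed expressions parse fine and only blow up when you run them. A small sketch:

```python
# Applying a selector to something that isn't a combination:
# parses fine, fails at run time.
try:
    x = 1
    x[0]  # "1" isn't a combination; selection makes no sense
except TypeError as e:
    print("selection failed:", e)

def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Calling a one-parameter function with two arguments:
# also parses fine, also fails at run time.
try:
    factorial(27, 13)
except TypeError as e:
    print("application failed:", e)
```

A system of arities catches both mistakes statically, before anything runs.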

The other answer is equality. One thing that we’d really like to be able to do in our theory of expressions is determine when two expressions are the same. For example, we’d really like to be able to say that `1 + 3` and `4` are the same. But without some form of control, we can’t really do that: the problem of determining whether or not two expressions are equal can become undecidable. (The canonical example of this problem involves the Y combinator: `Y = λf.(λx.f(x x))(λx.f(x x))`. If we wanted to try to work out whether an expression involving this was equivalent to another expression, we could wind up in an infinite loop of applications.)
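You can see the danger concretely in Python. The plain Y combinator written above would loop forever in an eagerly evaluated language, so this sketch uses the Z combinator, which wraps the self-application in an extra lambda to delay it:

```python
# Z combinator: the applicative-order variant of Y.
# Z = λf.(λx.f(λv.x x v))(λx.f(λv.x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Tying the recursive knot for factorial without any named recursion:
fact = Z(lambda rec: lambda n: 1 if n <= 1 else n * rec(n - 1))
print(fact(5))  # prints 120
```

The point isn’t that this particular expression misbehaves – it’s that once self-application like this is allowed unrestricted, deciding whether two arbitrary expressions are equal requires, in general, running unboundedly many reduction steps.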

The way that Martin-Löf worked around this is to associate an *arity* with an expression. Each *arity* of expressions is distinct, and there are rules about what operations can be applied to an expression depending on its arity. You can’t call .2 on “1+3”, because “1+3” is a *single* expression, and selectors can only be applied to *combined* expressions.

To most of us, arity sounds like it should be a numeric value. When we talk about the arity of a function in a program, what we mean is how many parameters it takes. In Martin-Löf expressions, arity is more expressive than that: it’s almost the same thing as types in the simply typed lambda calculus.

There are two dimensions to arity: single/combined and saturated/unsaturated.

*Single* expressions are atomic values, where you can’t extract other values from them by selection; *combined* expressions are combinations of multiple other expressions.

Saturated expressions are expressions that have no holes in them that can be filled by other expressions – that is, expressions with no free variables. Unsaturated expressions have open holes – which means that they can be *applied* to other expressions.

Saturated single expressions have arity 0. An expression of arity 0 can’t be applied, and it can’t be the target of a selection expression.

An unsaturated expression has an arity `α ↠ β`, where both `α` and `β` are arities. For example, the integer addition function has arity `(0 ⊗ 0) ↠ 0`. *(Erik Eidt pointed out that I made an error here. I originally wrote addition as `0 ↠ 0`, where it should have been `(0 ⊗ 0) ↠ 0`.)*

A combined expression has arity `(α₁ ⊗ α₂ ⊗ … ⊗ αₙ)`, where each of the `αᵢ`s is the arity of the expression in position `i` of the combination.
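To make the two dimensions concrete, here’s a minimal model of arities – my own encoding, not anything from the theory itself: `Zero` is the saturated single arity, `Arrow` is an unsaturated arity (the only kind you can apply), and `Combined` is a combined arity (the only kind you can select from).

```python
from dataclasses import dataclass

class Arity:
    pass

@dataclass(frozen=True)
class Zero(Arity):      # saturated, single: arity 0
    def __str__(self):
        return "0"

@dataclass(frozen=True)
class Arrow(Arity):     # unsaturated: alpha ->> beta
    arg: Arity
    result: Arity
    def __str__(self):
        return f"({self.arg} ->> {self.result})"

@dataclass(frozen=True)
class Combined(Arity):  # combined: alpha_1 x ... x alpha_n
    parts: tuple
    def __str__(self):
        return "(" + " x ".join(str(p) for p in self.parts) + ")"

def apply_arity(fun, arg):
    """Application is only legal on an unsaturated expression whose
    argument arity matches; the result has the arrow's result arity."""
    if not isinstance(fun, Arrow) or fun.arg != arg:
        raise TypeError(f"cannot apply {fun} to {arg}")
    return fun.result

def select_arity(combo, i):
    """Selection is only legal on a combined expression."""
    if not isinstance(combo, Combined):
        raise TypeError(f"cannot select from {combo}")
    return combo.parts[i - 1]  # the post counts elements from 1

# Addition has arity (0 x 0) ->> 0; applying it to a combined
# argument of arity (0 x 0) yields an expression of arity 0.
add = Arrow(Combined((Zero(), Zero())), Zero())
print(apply_arity(add, Combined((Zero(), Zero()))))  # prints 0
```

The two error cases from earlier both fall out of this: applying `add` to a bare `0` argument fails, and selecting from a `Zero` fails – which is exactly the kind of nonsense the arity discipline is there to rule out.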

Sadly, I’m out of time to work on this post, so we’ll have to stop here. Next time, we’ll look at the formal rules for arities, and how to use them to define equality of expressions.

**Erik Eidt:** The addition function consumes a composition of two operands, right? Wouldn’t an expression of the addition function alone, without application, be `(0 ⊗ 0) ↠ 0`?

P.S. Love your posts, thanks!

**markcc (post author):** You are correct. Thanks for pointing it out!