# Understanding Expressions

I’m going to be trying something a bit different with this blog.

What I’ve tried to do here on GM/BM is make each post as self-contained as possible. Obviously, many things take more than one post to explain, but I’ve done my best to take topics and divide them into pieces, so that each post focuses on one basic concept or process.

I’m finding that for this type theory stuff, I’m having a hard time doing that. Or rather, given my work schedule right now, when I’m trying to write about type theory, I’m finding it hard to find enough time to do that and still post regularly. (That sounds like a complaint, but it’s not meant to be. I started a new job at Dropbox about three months ago, and I absolutely love it. I’m spending a lot of time working because I’m having so much fun, not because some big mean boss guy is leaning over me and threatening me.)

Anyway, the point of this whole diversion is that I really want to get this blog back onto a regular posting schedule. But to do that, I’m going to have to change my posting style a bit, and make the individual posts shorter, and less self-contained. I’m definitely interested in what you, my readers, think of this – so please, as I roll into this, let me know if you think it’s working or not. Thanks!

In this post, we’re going to start looking at expressions. This might seem like it’s a diversion from the stuff I’ve been writing about type theory, but it really isn’t! Per Martin-Löf developed a theory of expressions which is used by type theorists and many others, and we’re going to be looking at that.

We’ve all seen arithmetic expressions written out since we were in first grade. We think we understand what they mean. But actually, most of us have never really stopped and thought precisely about what an expression actually means. Most of the time, that’s OK: we’ve got an intuitive sense of it that’s good enough. But for type theory, it’s not sufficient. In fact, even if we did have a really good, formal notion of expressions, it wouldn’t be right for type theory: in type theory, we’re rebuilding mathematics from a foundation of computability, and that’s not the basis of any theory of expressions that’s used in other mathematical fields.

Why is this a problem?

Let’s start by looking at a nice, simple expression:

$x^2 + 3x + 7$

What does that mean? Roughly speaking, it’s a function with one parameter: $f(x) = x^2 + 3x + 7$. But that doesn’t really tell us much: all we’ve really done is add a bit of notation. We still don’t know what it means.

Let’s take it a step further. It’s actually describing a computation that adds three elements: $+(x^2, 3x, 7)$. But that’s not quite right either, because we know addition is binary. That means that we need to decide how to divide that addition into two parts. From the associative property, we know that it doesn’t matter which way we divide it – but from a computational perspective, it might: doing it one way versus the other might take much longer!

We’ll pick left-associative, and say that it’s really $+(+(x^2, 3x), 7)$. We also need to rewrite the other parts of the expression in this functional form. If we follow it all out, we wind up with: $+(+(*(x,x), *(3, x)),7)$.

We’ve converted the expression into a collection of applications of primitive functions. Or in terms that are more familiar to geeks like me, we’ve taken the expression, and turned it into an abstract syntax tree (AST) that describes it as a computation.
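To make that concrete, here’s a minimal sketch of the idea in Python. The tuple encoding and the `evaluate` function are my own illustration, not notation from any type theory text: each AST node is a literal number, a variable name, or an `(operator, left, right)` application.

```python
# The AST for +(+(*(x, x), *(3, x)), 7), encoded as nested tuples.
ast = ("+", ("+", ("*", "x", "x"), ("*", 3, "x")), 7)

def evaluate(node, env):
    """Walk the tree bottom-up, looking up variables in env."""
    if isinstance(node, tuple):                # an application of + or *
        op, left, right = node
        l, r = evaluate(left, env), evaluate(right, env)
        return l + r if op == "+" else l * r
    if isinstance(node, str):                  # a variable
        return env[node]
    return node                                # a literal number

print(evaluate(ast, {"x": 2}))  # 4 + 6 + 7 = 17
```

The point isn’t the Python details; it’s that once the expression is a tree of primitive applications, “what it means” becomes a mechanical bottom-up computation.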

We’re still being pretty vague here. We haven’t really defined our notion of function or computation. But even with that vagueness, we’ve already started making the notion of what an expression is much more concrete. We’ve taken the abstract notion of expressions, and translated it to a much less abstract notion of a computation expressed as a sequence of computable functions.

This is a really useful thing to do. It’s useful enough that we don’t want to limit it to just “pure” expressions. In the type theoretic view of computation, everything is an expression. That’s important for multiple reasons – but to make it concrete, we’re going to eventually get around to showing how types work in expressions, what it means for an expression to be well-typed, how to infer types for expressions, and so on. We want all of that to work for any program – not just for something that looks like a formula.

Fortunately, this works. We can also take an “expression” like `for i in 1 .. 10 do f(i)`, and analyze it as a function: `for(i, 1, 10, f(i))`.
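Here’s one way to sketch that in Python (the name `for_` and the choice to return the list of results are my own assumptions for illustration). The key move is that the loop stops being a statement and becomes a function that yields a value:

```python
def for_(lo, hi, body):
    """Treat a for-loop as a function: apply body to each
    value in [lo, hi] and return the collected results."""
    return [body(i) for i in range(lo, hi + 1)]

def f(i):
    return i * i

print(for_(1, 10, f))  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

In a language built this way, *everything* evaluates to something, which is exactly what we need if type analysis is going to apply to whole programs and not just formulas.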

So, we’ve got a way of understanding expressions as functions. But even if we want to keep the computational side of things abstract and hand-wavy, that’s still not really enough. We’re closer to understanding expressions, but we’ve still got some huge, gaping holes.

Let’s jump back to that example expression: $x^2 + 3x + 7$. What does it mean? What we’ve seen so far is that we can understand it as a series of function calls: $+(+(*(x, x), *(3, x)), 7)$. But we’d like to be able to evaluate it – to execute the computation that it describes. We can’t do that: it’s got a gaping hole named $x$. What do we do with that?

We’re missing a really important notion: functional abstraction. Our expression isn’t just an expression: it’s really a function. We alluded to that before, but now we’re going to deal with it head-on. That expression doesn’t really define a computation: it defines a computational object that computes the function. When an expression has free variables – that is, variables that aren’t assigned a meaning within the expression – our expression represents something that’s been abstracted a level: instead of being a computation that produces a value, it’s an object that takes a parameter, and performs a computation on its parameter(s) in order to produce a value.

In our expression $x^2 + 3x + 7$, we’re looking at an expression in one free variable – which makes it a single-parameter function. In the notation of type theory, we’d write it as $(x)(+(+(*(x, x), *(3, x)), 7))$ – that is, the parameter variable in parens ahead of the expression that it parameterizes. (Yes, this notation stinks; but I’m sticking with the notation from the texts I’m using, and this is it.)
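If that notation is hard to read, it may help that most programming languages have the same idea under a friendlier name: lambda abstraction. A sketch in Python – binding the free variable $x$ turns the expression into a callable value:

```python
# (x)(+(+(*(x, x), *(3, x)), 7)) as a lambda abstraction,
# with the addition left-associated as in the text.
f = lambda x: ((x * x) + (3 * x)) + 7

print(f(2))  # 4 + 6 + 7 = 17
print(f(0))  # 7
```

Until we apply it to an argument, `f` isn’t a computation that produces a number; it’s an object that, given a parameter, performs one.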

This notion of parameters and function abstraction turns out to be more complex than you might expect. I’m going to stop this post here – but around Wednesday, I’ll post the next part, which will look at how to understand the arity of an expression.

## 10 thoughts on “Understanding Expressions”

1. Doug Fort

I like this format. When I sit down to read a book like ‘Good Math’ I like longer chapters. When I’m reading this in one of 20 open tabs in my browser, the short format is much easier to absorb in a single sitting (I may never find this tab again). I hope this eventually turns into a book which I can refer to repeatedly.

2. Janne

I think the shorter format really makes it easier for us readers to digest.

Also, are we perchance reinventing LISP/Scheme here?

1. markcc Post author

We’ll see this later, but lazy functional languages like Haskell are pretty close to being pure expressions of type theory.

1. markcc Post author

I prefer to think of it in reverse.

In pretty much all programming languages, we interpret text into abstract syntax trees. In Lisp, McCarthy decided to just make the ASTs completely explicit: s-expressions are just the ASTs for lisp.

Since Lisp is the main language that most people are familiar with where you really program in raw ASTs, ASTs naturally look lispish to us.

3. Michael Norrish

Nice.

minor typo: “funcional abstraction” rather than “functional abstraction”

4. Jason Gilbert (@JasonGilbert9)

I like the format. I think that shorter reads allow more thought on the subject. (Well, really the same amount of thought per post, but more posts, so more overall.) I also like the topic. My math education completely ignored the issue of expression, and I’ve always struggled with it. I’ve been able to do more now that I realize that expression can be just as important as computation, and it’s OK to stop and figure out the expression before you even think about the computation. I’m starting to think about math in a language-first, computation-second kind of way.

5. Sean Roark

This has reignited a fury that had lain dormant since CS 311 – Programming Languages. This post is better than the entire 2 weeks spent failing to teach Prolog.