What makes linear logic linear?

Sorry for the lack of posts this week. I’m traveling for work, and
I’m seriously jet-lagged, so I haven’t been able to find enough time
or energy to do the studying that I need to do to put together a solid
post.

Fortunately, someone sent me a question that I can answer
relatively easily, even in my jet-lagged state. (Feel free to ping me with more questions that can turn into easy but interesting posts. I appreciate it!)

The question was about linear logic: specifically, what makes
linear logic linear?

For those who haven’t ever seen it before, linear logic is based
on the idea of resource consumption. Where the normal propositional or
predicate logics that most of us are familiar with are focused on an
idea of truth, linear logic is centered on the idea of resource
possession and consumption. In standard propositional logic, if you’re given
the proposition “A”, that means that A is true, and you can use the truth
of A in as many inferences as you want. In linear logic, if you’re given
the proposition “A”, that means that you possess one instance of A,
and you can use it, once, in an inference.

Think of a conventional logic, like simple propositional
logic. Imagine that you had the following statements:

  1. A
  2. A ⇒ B
  3. A ⇒ C

If you’re working in normal propositional logic, then you can use
those statements to infer some more facts. Given “A”, and knowing “A
⇒ B”, you can infer B; and given “A”, and knowing “A ⇒ C”,
you can infer C. So given those three statements, we can infer that A
and B and C are all true.

In linear logic, if you start with the equivalent of those three
statements, you can infer either B or C, but not both: using
“A” in an inference consumes it. So once you use it, it’s
gone, and you can’t use it again.
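
If it helps to see that difference in running code, here’s a minimal sketch in Rust (my choice of language, not anything from the original post). Rust’s ownership discipline is technically affine rather than strictly linear (you’re allowed to drop a value without using it), but consumption behaves the same way: once a non-Copy value has been moved into a function, you can’t use it again.

```rust
// A resource, deliberately neither Copy nor Clone, so it behaves like a
// linear proposition: it can be consumed exactly once.
struct A;

// Consuming an A to produce a "B" (standing in for A ⇒ B).
fn infer_b(_a: A) -> &'static str {
    "B"
}

// Consuming an A to produce a "C" (standing in for A ⇒ C).
#[allow(dead_code)] // only used in the commented-out line below
fn infer_c(_a: A) -> &'static str {
    "C"
}

fn main() {
    let a = A;
    let b = infer_b(a); // `a` is moved (consumed) here
    println!("inferred {}", b);

    // let c = infer_c(a); // error[E0382]: use of moved value: `a`
    // To also infer C, you need a second A -- exactly the linear-logic behavior.
}
```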

I’m not going to go into detail about linear logic here; I’ve written
about it before, and if you’re interested, you can go look at that post. I’m just going
to quickly walk through one simple example of how it’s used, and then go back to
the original question.

Pretty much every introduction to linear logic that I’ve ever seen uses the same
first example: a vending machine. Suppose you want to describe how a vending machine
works. For example, if I put 6 quarters into a soda machine, I get a soda. The key thing
that I want to capture is that when I get the soda, the quarters are gone. I had
to give them up to get back the soda. In linear logic, I can write that as an
implication:

Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter -o Soda

“⊗” means simultaneous possession: “A ⊗ B” means “I have an A and a B at the same time”; “-o” is linear implication, “A -o B” means “I can give up an A to get a B”. So the expression above means that if I have six quarters, I can give them up and get a soda.
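
Here’s the same vending-machine implication sketched in Rust terms (again my own illustration, with made-up type names like Quarter and Soda, not anything from the post). Taking the quarters by value is the move-semantics analogue of the linear implication: after the call, the caller has a Soda and no longer has the quarters.

```rust
// Quarter and Soda are deliberately neither Copy nor Clone, so a quarter
// can only be spent once.
struct Quarter;
struct Soda;

// Plays the role of:
//   Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter ⊗ Quarter -o Soda
fn buy_soda(_quarters: [Quarter; 6]) -> Soda {
    // The six quarters are moved into the machine (and dropped);
    // all the caller gets back is the soda.
    Soda
}

fn main() {
    let quarters = [Quarter, Quarter, Quarter, Quarter, Quarter, Quarter];
    let _soda = buy_soda(quarters);
    // println!("{}", quarters.len()); // error: `quarters` was moved into `buy_soda`
}
```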

Try to think about what you’d need to do in conventional predicate
logic to say that. You’d need to introduce a notion of counting, and addition and subtraction,
and possession, and consumption. It would be quite difficult – and the result would be a lot
harder to analyze, because it would rely on at least some amount of arithmetic. Linear logic makes things like that really easy.

It should also be pretty obvious why computer scientists – and in
particular, programming language designers – are so interested in
linear logic. In implementing software, managing resources is always a
major issue. You can use linear logic in tools that analyze memory
usage, or management of resources like files. You can use linear
logic-based analysis and inference in a functional language compiler
to figure out when you can convert an allocation into an in-place
update. It’s incredibly useful.
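
As a rough illustration of that allocation-to-in-place-update point, here’s a sketch in Rust, where the uniqueness is stated explicitly in the source rather than inferred by the compiler as in the functional-language case. Because the function owns its argument outright, “building a new vector” can safely reuse the old one’s storage.

```rust
// Because `xs` is owned uniquely -- nobody else can still be looking at it --
// the "new" vector can be produced by mutating the old one in place, with no
// fresh allocation. A linearity analysis is what lets a compiler for a pure
// functional language make this same transformation automatically.
fn increment_all(mut xs: Vec<i64>) -> Vec<i64> {
    for x in xs.iter_mut() {
        *x += 1;
    }
    xs // same heap buffer handed back
}

fn main() {
    let v = vec![1, 2, 3];
    let v = increment_all(v); // conceptually a new vector, physically the same storage
    println!("{:?}", v);
}
```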

But getting back to the question: why is it linear?

There are a bunch of different ways to explain it. I’m going to use
one that’s informal, but easy to understand. Most logicians would hate
it as an explanation, but it gets the idea across.

In linear logic, simultaneous possession of resources is treated as a kind of
multiplication. If I have A and B at the same time, I can say that I have A*B.

Now, suppose that each distinct proposition that we can use in an
inference has a unique label, so that we can distinguish them. Then
for any sequence of inferences, we can describe the inference cost
(that is, the set of resources consumed by those inferences) via a
multiplicative expression. So for my example above with the vending
machine, I can say that the six quarters are named Quarter_a, Quarter_b,
Quarter_c, Quarter_d, Quarter_e, and Quarter_f.

Then, to get a soda, I need to spend the six quarters. The inference cost
will be Quarter_a * Quarter_b * Quarter_c * Quarter_d * Quarter_e * Quarter_f.

In linear logic, the inference cost is always linear: each labeled resource appears in the cost expression at most once, so no proposition is ever raised to a power greater than one.

Compare that to conventional propositional logic. In the example
at the beginning of this post, I used A twice to infer B and C. So the
inference cost there is A*A = A²: it’s quadratic. I can’t do that in
linear logic.
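
Writing the two costs out explicitly (my notation, not the post’s, abbreviating Quarter_a as Q_a and so on):

```latex
% Linear logic: every labelled resource appears exactly once, so the cost
% expression has degree one in each variable.
\mathrm{cost}_{\text{linear}} = Q_a \cdot Q_b \cdot Q_c \cdot Q_d \cdot Q_e \cdot Q_f

% Classical propositional logic: A gets used twice (once with A => B, once
% with A => C), so it appears squared; the cost is quadratic in A.
\mathrm{cost}_{\text{classical}} = A \cdot A = A^2
```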

7 thoughts on “What makes linear logic linear?”

  1. Tam

    I’m sorry for the off-topic post, but does anyone know what I need to do to my browser (IE 7) to make symbols display as something other than boxes? (I can’t change browsers; I’m on my work computer.)

  2. John Armstrong

    How about this: It’s linear in the same sense that quantum computation is linear, and consumption works the same as the no-cloning theorem.
    In classical logic we can use a premise as many times as we like, precisely because in classical computation we can copy bits. That is, make a copy of the premise (input) and then use the copy. You get the conclusion (the output of the computation) and still have the premise (input) hanging around. But the copying procedure is essentially quadratic. That is, a linear mechanism like linear logic or unitary quantum state evolution can’t copy. And so premises and inputs are used up in inferences and computations.

  3. David

    Tam,
The following instructions may not show all mathematics symbols, but they should show most of them.
    On your browser click Tools, Internet Options
    Now press the Fonts button at bottom of dialog box
    Under ‘Webpage Font’ select ‘Lucida Sans Unicode’
    Under ‘Plain Text Font’ select ‘Courier New’
    Click OK button
    This should turn most of the square boxes into math symbols. Because of the way CSS works it doesn’t always work on Mark’s site, but it does help a lot.

  4. Samuel A. Falvo II

I would say it’s “linear” because of the graphs it would produce, if you were to draw them. Resources have precisely _one_ edge leading away from them — a single, straight _line_. The only exception would be nodes that purposefully duplicate a proposition for use in other propositions in plurality. Another way of putting it: you can always draw a straight line from point A to point B on the graph, traversing the “flow” of information in the logic.

  5. John Armstrong

    Samuel, that really isn’t it. “Linear”, in the mathematical sense, means we’re talking about vector spaces over fields, and transformations which preserve the additive structure. Mark is sort of sweeping this part under the rug here.

  6. revere

John: It’s not the only use of the word linear, though. Linear is frequently (and mistakenly) used for straight-line relationships which could be affine relationships; and more pertinently, it is used in other contexts, as in order theory, a linear order (meaning a total order or a chain). I’m guessing there are other uses, too.

  7. T_U_T

I wonder whether it would be possible to introduce nonlinear consumption into this logic, like the law of diminishing returns. Otherwise it would be quite useless, methinks.

