Linear Logic

[Monday][yesterday], I said that I needed to introduce the sequent calculus, because it would be useful for describing things like linear logic. Today we’re going to take a quick look at linear logic – in particular, at *propositional* linear logic; you can extend it to predicate linear logic in much the same way that we went from propositional logic to first-order predicate logic.
So first: what the heck *is* linear logic?
The answer is, basically, logic where statements are treated as *resources*: using a statement in an inference step in linear logic *consumes* it. This is a strange notion if you’re coming from regular predicate logic. For example, in regular predicate logic, if we have the statements “A”, “A ⇒ B”, and “A ⇒ C”, we know that we can conclude “B ∧ C”. In linear logic, that’s not true: using either implication statement would *consume* the “A”. So we could infer “B”, or we could infer “C”, but we could *not* infer both.
When people talk about linear logic, and why it makes sense, they almost always use a vending machine analogy. Suppose I walk up to a vending machine, and I want to buy a soda and a candy bar. I’ve got 8 quarters in my pocket; the soda costs $1.50; the candy bar costs $.75.
In linear logic, I’d say something like the following (the syntax is wrong, but we’ll get to syntax later): (Q,Q,Q,Q,Q,Q,Q,Q), (Q,Q,Q,Q,Q,Q) ⇒ Soda, (Q,Q,Q) ⇒ Candy.
Using the rules, I can buy a soda by “spending” 6 of my Qs. I wind up with “(Q,Q) ∧ Soda”, and “(Q,Q,Q) ⇒ Candy”. I’ve consumed 6 Qs, and I’ve consumed the “(Q,Q,Q,Q,Q,Q) ⇒ Soda” implication. I can’t do anything else: I don’t have enough Qs left to buy the candy bar.
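To connect this to something executable, here’s a minimal sketch of the vending machine in Rust (my own illustration, with invented names; it isn’t from the post). Rust’s move semantics are only an approximation of linearity – Rust is affine, since it lets you silently discard a value – but “a moved value can’t be used again” is exactly the consumption behavior described above:

```rust
struct Quarter;
struct Soda;
struct Candy;

// "(Q,Q,Q,Q,Q,Q) ⇒ Soda": spending six quarters produces a soda.
// The quarters are moved into the function, so the caller can't reuse them.
fn buy_soda(quarters: [Quarter; 6]) -> Soda {
    drop(quarters); // the quarters are consumed
    Soda
}

// "(Q,Q,Q) ⇒ Candy": spending three quarters produces a candy bar.
fn buy_candy(quarters: [Quarter; 3]) -> Candy {
    drop(quarters);
    Candy
}

fn main() {
    // Eight quarters in my pocket.
    let (q1, q2, q3, q4, q5, q6, q7, q8) = (
        Quarter, Quarter, Quarter, Quarter,
        Quarter, Quarter, Quarter, Quarter,
    );
    let _soda = buy_soda([q1, q2, q3, q4, q5, q6]);
    // Spending q1 again would be a compile error: it was moved.
    // And with only q7 and q8 left, there's no way to call buy_candy.
    let _change = (q7, q8);
}
```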
The basic statements in linear logic, with their intuitive meanings, are listed below. (A rough programming analogy follows the list.)
1. A ⊗ B. This is called *multiplicative conjunction*, also known as *simultaneous occurrence*. This means that I definitely have both A and B. This has an *identity unit* called “1”, such that A ⊗ 1 ≡ 1 ⊗ A ≡ A. 1 represents the idea of the absence of any resource.
2. A & B : *additive conjunction*, aka *internal choice*. I can have either A *or* B, and I get to pick which one. The unit is ⊤, pronounced “top”, and represents a sort of “I don’t care” value.
3. A ⊕ B. This is called *additive disjunction*, also known as *external choice*. It means that I get either A or B, but I don’t get to pick which one. The unit here is 0, and represents the lack of an outcome.
4. A ⅋ B : *multiplicative disjunction*, aka *parallel occurrence*; I *need to have* both A and B at the same time. The unit for this is ⊥, pronounced “bottom”, and represents the absence of a goal. In the vending machine metaphor, think of it as the “cancel” button on the vending machine: I decided I don’t want any, so I’m not going to spend my resources.
5. A -o B : Linear implication. Consume resource A to *produce* resource B. The normal symbol for this looks like an arrow with a circle instead of an arrowhead; this operator is often called “lolly” because of what the normal symbol looks like. I’m stuck writing it as “-o”, because there’s no HTML entity for the symbol.
6. !A : Positive exponentiation, pronounced “Of course A”. This *produces* an arbitrary number of As. Equivalent to A ⊗ !A.
7. ?A : Negative exponentiation, pronounced “Why not A?”. This *consumes* As.
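Here’s the rough programming analogy promised above: some of these connectives line up with familiar type constructors, if you imagine every value as a resource that has to be used. This is my own gloss, with invented names, and the fit is loose – Rust won’t actually enforce the “use only one” reading of &, and ⅋ has no obvious counterpart here at all:

```rust
struct A;
struct B;

// A ⊗ B – multiplicative conjunction: a pair. You hold both
// resources, and using the pair yields both halves.
type Tensor = (A, B);

// A ⊕ B – additive disjunction: an enum. You get exactly one
// branch, and the producer (not you) decided which one.
enum Plus {
    Left(A),
    Right(B),
}

// A & B – additive conjunction: you may have either one, and *you*
// choose. Loosely modeled as a pair of one-shot producers, of which
// you're only supposed to invoke one.
struct With {
    left: Box<dyn FnOnce() -> A>,
    right: Box<dyn FnOnce() -> B>,
}

// !A – "of course A": freely duplicable, roughly what Clone buys you.
#[derive(Clone)]
struct OfCourseA;

fn main() {
    let both: Tensor = (A, B);
    let (_a, _b) = both; // using ⊗ gives you both halves

    let one = Plus::Left(A); // someone else picked the branch
    match one {
        Plus::Left(_a) => { /* we got an A */ }
        Plus::Right(_b) => { /* we got a B */ }
    }
}
```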
Ok. So, suppose I want to talk about buying lunch. I’ve got 10 dollars to buy lunch, and I’ll be full if I have a salad, a coke, and a tuna sandwich. Suppose I write “I’ve got a dollar” as “D”, “I have a salad” as “S”, “I have a coke” as “C”, “I have a tuna sandwich” as “T”, and finally “I’m full” as “F”. Then:
* I can write “I have 10 dollars” in linear logic as: “(D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D)”.
* I can write “tuna sandwich and salad and coke” – a group of things that I want to have all of – as: “T ⅋ S ⅋ C”.
* And I can say that having lunch makes me full: “T ⅋ S ⅋ C -o F”.
If I want to talk about buying lunch, I can describe the prices of the things I want using implication:
* A coke costs one dollar: “D -o C”; I can spend one dollar, and in return I get one coke.
* A salad costs 3 dollars: “(D ⊗ D ⊗ D) -o S”
* A tuna sandwich also costs three dollars: “(D ⊗ D ⊗ D) -o T”
Now, I can do some reasoning with these.
* By taking 1 of the dollars, I can get one C. That leaves me with “D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C”.
* By taking 3 D, I can get one S: “D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ D ⊗ C ⊗ S”.
* By taking 3 D, I can get one T: “D ⊗ D ⊗ D ⊗ C ⊗ S ⊗ T”.
* Now I’ve got my lunch. I can eat it and be full, with three dollars left: “D ⊗ D ⊗ D ⊗ F”.
Just from this trivial example, you should be able to see why linear logic is cool: the idea of being able to talk about *how* resources are used in an inference process or a computation is really valuable, and linear logic gives you the ability to really work with the concept of resource in a solid, formal way. If you think of it in terms of the Curry-Howard isomorphism (the [types-as-proofs concept from the simply typed lambda calculus][types-as-proofs]), you can imagine using linear logic for types of values that are *consumed* by a computation – i.e., they’re no longer available once they’ve been used.
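To make that concrete, here’s a tiny sketch of the idea (mine, not anyone’s official formulation; the Ticket/ride names are invented). A value whose type is treated as a consumable resource simply can’t be used after the computation that consumes it:

```rust
struct Ticket; // a resource: valid for exactly one ride

// A "linear-ish" function: the Ticket is moved in, so the caller
// cannot use it again after the call.
fn ride(ticket: Ticket) {
    drop(ticket); // the ticket is consumed here
}

fn main() {
    let t = Ticket;
    ride(t);
    // ride(t); // compile error: use of moved value `t` --
    //          // the resource is gone once it has been used.
}
```

A genuinely linear type system goes one step further than this: where Rust only forbids *reusing* the ticket, a linear discipline would also reject the program if the ticket were never used at all. Linear means “exactly once”, not just “at most once”.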
I’m going to adopt a slightly different format for the sequents for working in linear logic. The way that I produced the center bars in yesterday’s post was really painful to write, and didn’t even end up looking particularly good. So the way that I’m going to write the sequents in this post is to wrap the “top” and “bottom” of the sequent in curly braces, separated by a “⇒”, as in:

{GivenContext :- GivenEntailment } ⇒ { InferredContext :- InferredEntailment}

Now, let’s take a look at the sequent rules for propositional linear logic. I’m using the version of these rules from [Patrick Lincoln’s SIGACT ’92 paper][sigact92]. Yeah, I know that’s a bit of a big load there. Don’t worry about it too much; the important part is the concept described up above. The sequents are useful to look at when you have a hard time figuring out what some operator means in an inference – for example, you can see the difference between & and ⊕ (which I found confusing at first) by looking at their sequent rules to see exactly what they do. (There’s also a small worked example after the list.)
1. **Identity**: { } ⇒ { A :- A }
2. **Cut**: { Γ1 :- A, Σ1   Γ2, A :- Σ2 } ⇒ { Γ1, Γ2 :- Σ1, Σ2 }
3. **Exchange Left**: { Γ1, A, B, Γ2 :- Σ } ⇒ { Γ1, B, A, Γ2 :- Σ }
4. **Exchange Right**: { Γ :- Σ1, A, B, Σ2 } ⇒ { Γ :- Σ1, B, A, Σ2}
5. **⊗ Left**: {Γ, A, B :- Σ} ⇒ { Γ, A ⊗ B :- Σ }
6. **⊗ Right**: { Γ1 :- A, Σ1   Γ2 :- B, Σ2 } ⇒ { Γ1, Γ2 :- (A ⊗ B), Σ1, Σ2 }
7. **-o Left**: { Γ1 :- A, Σ1   Γ2, B :- Σ2 } ⇒ { Γ1, Γ2, (A -o B) :- Σ1, Σ2 }
8. **-o Right**: { Γ, A :- B, Σ} ⇒ { Γ :- A -o B, Σ}
9. **⅋ Left**: { Γ1, A :- Σ1   Γ2, B :- Σ2 } ⇒ { Γ1, Γ2, (A ⅋ B) :- Σ1, Σ2 }
10. **⅋ Right**: { Γ :- A, B, Σ} ⇒ { Γ :- A ⅋ B, Σ}
11. **& Left**: { Γ, A :- Σ } ⇒ { Γ, A & B :- Σ }  or  { Γ, B :- Σ } ⇒ { Γ, A & B :- Σ }
12. **& Right**: { Γ :- A, Σ   Γ :- B, Σ } ⇒ { Γ :- (A & B), Σ }
13. **⊕ Left**: { Γ, A :- Σ   Γ, B :- Σ } ⇒ { Γ, A ⊕ B :- Σ }
14. **⊕ Right**: { Γ :- A, Σ } ⇒ { Γ :- A ⊕ B, Σ }  or  { Γ :- B, Σ } ⇒ { Γ :- A ⊕ B, Σ }
15. **!W**: {Γ :- Σ} ⇒ {Γ,!A :- Σ}
16. **!C**: {Γ,!A,!A :- Σ} ⇒ { Γ,!A :- Σ }
17. **!D**: { Γ, A :- Σ} ⇒ { Γ,!A :- Σ}
18. **!S**: { !Γ :- A, ?Σ} ⇒ { !Γ :- !A, ?Σ}
19. **?W**: {Γ :- Σ} ⇒ {Γ:- ?A, Σ}
20. **?C**: {Γ :- ?A,?A, Σ} ⇒ { Γ :- ?A, Σ }
21. **?D**: { Γ :- A, Σ} ⇒ { Γ :- ?A, Σ}
22. **?S**: { !Γ, A :- ?Σ} ⇒ { !Γ,?A :- ?Σ}
23. **⊥ Exp Left** (linear negation): { Γ :- A, Σ } ⇒ { Γ, A⊥ :- Σ }
24. **⊥ Exp Right** (linear negation): { Γ, A :- Σ } ⇒ { Γ :- A⊥, Σ }
25. **0 Left**: { Γ, 0 :- Σ } *(an axiom: no premises are needed)*
26. **⊤ Right**: { Γ :- ⊤,Σ}
27. **⊥ Left**: { ⊥ :- }
28. **⊥ Right**: {Γ :- Σ} ⇒ {Γ :- ⊥,Σ}
29. **1 Left**: { Γ :- Σ} ⇒ { Γ,1 :- Σ}
30. **1 Right**: { :- 1}
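To see how the rules fit together, here’s a minimal worked derivation, in the notation above (my own example, using only rules 1 and 7), showing that a resource plus a matching implication yields the conclusion, with the resource consumed:

* Rule 1 (identity) gives us { A :- A }, and likewise { B :- B }.
* Rule 7 (-o Left), with Γ1 = A, Σ1 empty, Γ2 empty, and Σ2 = B, turns those two premises into: { A :- A   B :- B } ⇒ { A, (A -o B) :- B }.

So from A and A -o B we can conclude B, and the A is gone: it was spent by the implication, and no rule will give it back. It’s also worth putting rules 6 and 12 side by side: ⊗ Right splits its context between the two premises (you have to pay for both halves), while & Right uses the *same* context in both premises (you’ll only ever cash in one of the two options). That, in sequent form, is the difference between “simultaneous occurrence” and “internal choice”.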
This is long enough that I’m not going to get into how this hooks into category theory today, except to point out that if you look carefully at the multiplicatives and the exponentials, they might seem a tad familiar.
[types-as-proofs]: http://goodmath.blogspot.com/2006/06/finally-modeling-lambda-calculus.html
[sigact92]: http://www.csl.sri.com/~lincoln/papers/sigact92.ps
[yesterday]: http://scienceblogs.com/goodmath/2006/07/a_brief_diversion_sequent_calc.php

Comments on “Linear Logic”

  1. Fred

    Hello Mark,
    I enjoy your blog, though sometimes I don’t understand it. Speaking of Linear Logic, could you make some comments on the mathematics (is it Cartesianism?) used in this argument: http://lp-thoughts-philosophy.blogspot.com/
    Can you explain it better? Eli Goldratt, author of The Goal and It isn’t Luck, describes something similar called Theory of Constraints. No math is involved in his books but a similar argument. If the model were valid, how would you find the constraints? LP looks for crime, Goldratt looks for inventory.
    Thank You,
    Fred

  2. Jeff Nuttall

    I’m stuck writing it as “-o”, because there’s no HTML entity for the symbol.

    Well, technically you could take advantage of its Unicode code point to write it as “⊸”, but I’d imagine very few computers would display that correctly (mine doesn’t), so you’re probably still better off writing it as you did.
    (Check: “&#x22B8”–the character between the quotes is supposed to be a line with a circle on the right end, but on my computer (and probably on most others), it just comes out as a square…)

  3. Jeff Nuttall

    Whoops…I meant to say a question mark, not a square…but oddly enough, though it did show up as a question mark in the preview, once it’s posted it actually shows up correctly on my computer–much to my surprise! Huh. Just out of curiosity, can anyone else actually see the line and circle there, or does my computer have some unusual settings for some reason?

  4. Jeremy Henty

    Mark, there’s something I don’t get. In your example you exchange D ⊗ D ⊗ D for S by invoking an axiom ( D ⊗ D ⊗ D ) -o S . At the end you exchange C ⊗ S ⊗ T for F , so presumably you need an axiom ( C ⊗ S ⊗ T ) -o F to do that. But you don’t have that axiom. Instead you have T ⅋ S ⅋ C -o F , which is different. What gives?

  5. Mark C. Chu-Carroll

    Jeremy:
    Here’s the intuition:
    * “C ⊗ S ⊗ T” is the statement that “I *have* resources C,S, and T”.
    * “T ⅋ S ⅋ C” is the statement that “I *need* resources C, S, and T”.
    “⅋” is a join for connecting *goals*; “⊗” is a join for connecting *possessions*.

  6. Mark C. Chu-Carroll

    Jeff:
    On my browser (Mac powerbook, firefox), that looks like a horizontal line that’s got an extra pixel on the end – almost like a → if you cut off the bottom of the pointer, and left just the first pixel from the top of the pointer.
    That’s the symbol that I found on wikipedia when I was trying to find characters to use; it looks too much like a minus sign, which I thought would be confusing.

  7. Mark C. Chu-Carroll

    Fred:
    I finally got around to taking a look at that link.
    First – there’s no connection between linear programming (what he’s talking about there) and linear logic. Linear programming is a branch of constraint theory, where all of the constraints are linear equations or inequalities.
    I find myself wondering if I know the author of that post.. It reminds me of someone I knew in grad school.
    The problem with the approach there is that it’s really oversimplified. *Not* everything in reality can be modelled as a system of linear constraints. In fact, many things in the reality of human beings can’t really be argued mathematically at all. We can make interesting approximations of many systems, but we can’t account for human nature: people *don’t* always have perfect information; even when we *do* have perfect information, we don’t always act in our own best interests; and all sorts of impacts from social interaction and interpersonal group dynamics just don’t model well.
    (For example, a constraint-theory economic model can be used to show that slavery was a terrible idea economically. But look how hard people fought to keep it as an institution: their *social* structure depended on it, and they wouldn’t acknowledge the economic arguments that they’d be better off without it.)
