Basics: Going Meta

In math and computer science, we have a tendency to talk about “going meta”. It’s actually a
pretty simple idea, which tends to crop up in other places, as well. It’s also one of my favorite concepts – the idea of going meta is just plain cool. (Not to mention useful. There’s a running joke among computer scientists that the solution to any problem is to add a level of indirection – which is programmer-speak for going meta on constructs inside of a programming language. Object-orientation is, in some sense, just an example of how to go meta on procedures. Haskell type-classes are an example of going meta on types.)
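To make that last point a little more concrete, here’s a rough Haskell sketch (a toy class of my own, not anything from a standard library): a type class is, loosely, a statement about types rather than about values.

    -- A toy type class: "Sized a" doesn't talk about any particular value;
    -- it asserts something about the *type* a, namely that its values can be
    -- measured. That's one step of meta up from an ordinary function.
    class Sized a where
      size :: a -> Int

    -- Instances are the ground-level facts: these particular types satisfy
    -- the meta-level statement.
    instance Sized Bool where
      size _ = 1

    instance Sized [b] where
      size = length

An ordinary function makes a statement about values; a class like this makes a statement about which types support an operation – the same step back that “going meta” describes.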

Going meta basically means taking a step back, and instead of talking about some subject X, you talk about talking about X.

For example, we can talk about numbers. We can make statements about specific numbers – for example, “4<6”. We can make general statements about all numbers: “For all numbers x, there are an infinite number of numbers larger than x”. Those are both basic ground-level statements about
numbers. Going meta is shifting from talking about numbers to talking about talking about numbers. So most of this paragraph is actually meta-discussion about numbers: it’s talking about how we can talk about numbers. And we can go meta many times if we want – for example, the previous sentence went meta on the meta – it talked about how we were talking about talking about numbers; and this sentence went even more meta, by talking about the sentence where we were talking about talking about numbers.

In a slightly more formal mode, think about standard first-order predicate logic. In first-order logic, we have a set of objects that we can talk about. We make statements about those objects using predicates. We can make statements about specific objects by referring to them directly in a predicate – for example, “HasBigNose(MarkCC)”. We can make general statements by introducing variables using quantifiers, and making statements in terms of those variables: “∀ x: HasBigNose(x) ⇒ SneezesLoudly(x)”. But we can’t talk about predicates using first-order logic: predicates are fixed things, and there’s no way to make a statement about a predicate in first-order logic. We can’t say that the predicate “HasBigNose” is a member of the set of really silly predicates, because there’s no way to refer to the predicate itself that would let us say that.

But we can go meta – and jump to second-order predicate logic. In second-order predicate logic, first-order predicates can be treated as objects that can be reasoned about. So we can have second-order predicates that talk about first-order predicates: “IsSillyPredicate(HasBigNose)”.
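To see that jump concretely, here’s a minimal sketch in Lean (the encoding is my own illustration; the names are just the placeholder predicates from the examples above, declared as axioms). In a higher-order system, a predicate is itself a term, so other predicates can be applied to it:

    -- objects and first-order predicates over them
    axiom Person : Type
    axiom MarkCC : Person
    axiom HasBigNose : Person → Prop
    axiom SneezesLoudly : Person → Prop

    -- ordinary first-order statements
    example : Prop := HasBigNose MarkCC
    example : Prop := ∀ x : Person, HasBigNose x → SneezesLoudly x

    -- a second-order predicate: its argument is a first-order predicate, not an object
    axiom IsSillyPredicate : (Person → Prop) → Prop
    example : Prop := IsSillyPredicate HasBigNose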

The fundamental source of meta in most modern math is set theory. Early efforts to use
set theory as a unifying basis for mathematics encountered a serious problem: set theory
was discovered to include paradoxical constructs. The canonical example is Russell’s paradox, which is a statement about the set R={x | x∉x} – that is, the set of all sets that do not contain themselves as members. Is R an element of R?

The problem with Russell’s paradox is pretty obvious when you walk it through. Is R a member of itself? Suppose that R is a member of itself. Then by the definition of R, since R∈R, that means that it’s not a member of itself. Suppose it’s not a member of itself – then by the definition of R, it is a member of itself. So no matter what we do, we have a contradiction: if R is in itself, that leads to the inference that it’s not in itself, and vice versa.
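For the formally inclined, that walk-through boils down to just a few lines. Here’s a minimal Lean sketch (my own formalization of the argument, not tied to any particular set theory): any membership relation that admits a Russell-style set yields a contradiction.

    -- the logical core: a proposition equivalent to its own negation is absurd
    theorem self_negation_absurd (P : Prop) (h : P ↔ ¬P) : False :=
      have hnp : ¬P := fun hp => (h.mp hp) hp
      hnp (h.mpr hnp)

    -- applied to a naive universe of sets: if some R satisfies x ∈ R ↔ x ∉ x
    -- for every x, then instantiating at x := R gives the contradiction
    theorem russell (U : Type) (mem : U → U → Prop)
        (R : U) (hR : ∀ x, mem x R ↔ ¬ mem x x) : False :=
      self_negation_absurd (mem R R) (hR R)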

The attempts to solve that all work by creating a strict separation of levels – that is, by separating base from meta. For example, von Neumann’s version of the solution, called NBG (von Neumann-Bernays-Gödel) set theory, takes the idea of a collection of stuff, and separates it into two different kinds of things: classes and sets. A class is any collection of things which can be defined by some property held by all of its members. A set is a class which is a member of some other class. Thus, a class is a sort of meta-set: a set-like collection whose members are sets – but by making the separation between classes and sets, we’ve created a sort of fire-break that prevents things like Russell’s paradox from working. In this construct, Russell’s set becomes Russell’s class; there’s no consistent way for Russell’s class to be a member of any class, which means it can’t be a set; and so the paradox seems to disappear. So von Neumann and friends temporarily defeated Russell’s paradox by introducing one level of meta into set theory.
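A toy way to see the firebreak in action (purely illustrative, and not a real axiomatization of NBG): think of a “class” as any predicate C over a universe of sets, and call C a set only when some member of the universe has exactly C’s members. Then the Russell argument stops being a contradiction and instead becomes an ordinary theorem saying that the Russell class is proper – no set has exactly its members.

    -- The Russell class, C x = ¬ mem x x, is a perfectly good class, but no
    -- element a of the universe can have exactly its members. Note that this
    -- is not a contradiction *in* the system; it's just a theorem about it.
    theorem russell_class_is_proper (U : Type) (mem : U → U → Prop) :
        ¬ ∃ a : U, ∀ x, mem x a ↔ ¬ mem x x :=
      fun ⟨a, h⟩ =>
        have ha : mem a a ↔ ¬ mem a a := h a
        have hnot : ¬ mem a a := fun hm => (ha.mp hm) hm
        hnot (ha.mpr hnot)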

The multi-level meta-junk grew out of the great effort of early 20th century math – the Principia Mathematica of Russell and Whitehead. They, along with many others, tried to create a perfect formal basis for mathematics. In their construction of math, everything was part of a strict hierarchy, where the meta-levels were absolutely separated from each other, and you could never find any way to formulate a statement in an nth-order logic that could be interpreted as a statement about that same nth-order logic.

As we’ve mentioned before, Gödel blew that out of the water – it’s impossible to create that perfect, strict separation between the meta-layers. If you have a first-order logic that’s capable enough to express Peano arithmetic, then you have a logic that’s capable of being abused into formulating 2nd order statements by encoding statements into numbers, and writing statements (as numbers) about the numbers that encode statements – giving you a logic embedded in your first-order logic in which you can write any first-order statement, and also second-order statements – thus collapsing the meta-hierarchy of logic.
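To make the “encoding statements into numbers” step a bit more concrete, here’s a toy Haskell sketch of one classic numbering scheme (prime-power encoding – I’m not claiming this is exactly the scheme Gödel used, just the flavor of it):

    -- Give every symbol of the logic a positive numeric code; a formula, as a
    -- sequence of codes c1, c2, ..., cn, then becomes the single number
    -- 2^c1 * 3^c2 * ... * pn^cn.  Unique prime factorization means the
    -- original sequence (and hence the formula) can be recovered from it.
    primes :: [Integer]
    primes = sieve [2 ..]
      where
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    godelNumber :: [Integer] -> Integer
    godelNumber codes = product (zipWith (^) primes codes)

    -- ghci> godelNumber [1, 3, 2]   -- 2^1 * 3^3 * 5^2
    -- 1350

Once statements are numbers, statements about those numbers are, indirectly, statements about statements – which is exactly the self-reference Gödel exploited.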

16 thoughts on “Basics: Going Meta”

  1. Henrik

    I can provide some anecdotal evidence backing up “∀ x: HasBigNose(x) ⇒ SneezesLoudly(x)”. I have a big nose and the volume of my sneezing is above average. Not only that, it seems to increase as I age. This doesn’t bode well, seeing as how my father’s sneezes sounded like shotgun blasts...

  2. dileffante

    Now that you have moved to meta-things for the “basics” series (with this post and the one on axioms), may I suggest a post on “Theory” in math? Both J. Wilkins and Dr Freeride (if I remember correctly) had posts on “Theory” in sciences, but in math the word means something else, the criteria for validating propositions are different, etc…, and for lay persons it may be confusing (especially when a mathematical theory becomes hyped, like ‘catastrophe theory’, ‘chaos theory’, etc).

  3. Davis

    in math the word means something else…

    “Theory” in math is pretty much just a classification term — it’s a synonym for “area of study.” “Group theory” is the field which studies groups, “chaos theory” is the field which studies the mathematics of chaos, and so on.

  4. Shane

    Finally, after all of this time, I understand the difference between first and second-order logic. Thanks.

  5. Chad Groft

    As we’ve mentioned before, Gödel blew that out of the water – it’s impossible to create that perfect, strict separation between the meta-layers.

    This is not entirely accurate.

    If you have a first-order logic that’s capable enough to express Peano arithmetic, then you have a logic that’s capable of being abused into formulating 2nd order statements by encoding statements into numbers

    Encoding statements into numbers, yes, that is a necessary part of the theorem. You don’t actually need all of Peano arithmetic to make this work — specifically you can throw out the induction axioms, getting Robinson arithmetic — but whatever.

    and writing statements (as numbers) about the numbers that encode statements

    Hold on there! It’s true that you can encode a concept about numbers as a number, if you can express it as a formula. But you can’t express all concepts about numbers as formulas. You can usually express “x is an axiom of T”, for example if T is a decidable theory. If you can do that, you can express “x is provable from T”. But these are both highly nontrivial statements, and require some care to prove.
    One thing you can’t encode, given a consistent system T, is “x is a false statement”. If you could, you could have a sentence A which “said” “A is a false sentence”. Then A would be true iff it were false, a contradiction. This is due to Tarski.
    Another thing you definitely can’t do in set theory is take an encoded formula and form the set of all objects that satisfy the formula. Thus the defeat of the Russell paradox by von Neumann (equivalently that by Zermelo and Fraenkel) was not, contrary to your statement, temporary. Nor was that by Russell and Quine, for that matter. (As far as we know, that is!)
    If the sort of “complete” embedding of second-order concepts as first-order concepts that you imply is possible actually were possible, then all of the formal systems in common use, from Robinson arithmetic through Peano arithmetic on up to ZFC and its extensions, would be inconsistent, and thus useless as even partial descriptions of mathematics. This technically could be true, but I assure you it would be news to mathematicians. It certainly can’t be taken as demonstrated.

  6. dileffante

    “Theory” in math is pretty much just a classification term

    D’accord (though there is another more technical meaning in logic, but I was referring to this loose one). Yet, I think that something more can be said about the topic. Yes, they are defined by the mathematical objects they study, but how are they born? How do they relate to each other? How do they develop? Etc. But of course, on second thoughts, the post I’m wishing to read would not be one for the “basics” series, after all. Perhaps more like the kind of thing that David Corfield could post at the n-Category Café.

  7. Joseph j7uy5

    You mentioned that “going meta” tends to crop up in other places. Incidentally, one of the places it crops up is in psychotherapy. Often, especially with marital therapy, when there is an impasse of some sort, people need to back up and talk about talking. Surprisingly effective, if you can get people to do it right.
    Which doesn’t happen very often, but it is nice when it does.

  8. AnotherRoy

    As a “Basic Concepts of Science” topic, I wonder if ‘meta’ was dealt with sufficiently. My own concept, roughly following Mark’s (without the formality (thank you, by the way! – I needed that)) was almost abandoned in the last few weeks when I found most of my usual on-line sources tended to focus on ‘meta’ as ‘abstracting the results from many many previous studies’. Even if I squint REAL hard I cannot find a ‘similar concept, different terms’ relationship between those two ‘levels of analysis’ concepts.
    I take from this that the word ‘meta’ has not yet reached ‘prime’ status and that different disciplines can (and do) use it within their own concept structure to have different, if you will, meta-meta concepts which do not easily translate back to one common over-arching (now there’s a ‘basic concept’ concept) definition.

  9. Jonathan Vos Post

    This use of “meta” has accreted over some time. The original analogy was to Aristotle’s “Metaphysics”, which was the book of lecture notes jotted by his postdocs that came after the book called “Physics.” For the public at large, “Strange Loops” by Doug Hofstadter probably explains it best.
    “Metalogic” and “Imaginary Logic” and “Modal Logic” were both invented roughly a century ago by Vasiliev, a brilliant, mad, misunderstood Russian Poet/Philosopher/Psychologist, but that’s another story.

  10. Thony C.

    “Metalogic” and “Imaginary Logic” and “Modal Logic” were both invented roughly a century ago by Vasiliev”
    Modal logic was invented by the Scots mathematician and logician Hugh MacColl.

  11. Mark C. Chu-Carroll

    AnotherRoy:
    If I’m reading you correctly, you’re complaining about the use of meta in the term meta-analysis. While it’s not the clearest use, meta-analysis does fit squarely into the definition I’ve given for “meta”. What meta-analysis is doing is using an analysis of other analyses to combine the results of many studies.
    I’m not a fan of meta-analysis, but it does fit the term. (The reasons I dislike it are grist for a post sometime – the short version is that the entire concept is, in my opinion, ill-founded. It’s completely based on a major bit of selection bias: only the studies that get published get included in meta-analysis; and there are a lot of studies that don’t get published. In particular, studies with negative results are not publishable in many fields – and so anything with negative results gets excluded from the meta-analysis, making the result garbage.)

  12. Enigman

    Modal Logic was invented a century ago by C.I. Lewis. Incidentally, a brilliant book on the consequences of Russell’s paradox is Grattan-Guinness’s 2000 book, The Search for Mathematical Roots 1870-1940: Logics, Set Theories, and the Foundations of Mathematics from Cantor through Russell to Gödel.

  13. emk

    At MIT’s AI Lab, the most popular observation about “going meta” was:
    “You can solve any problem by adding an extra layer of indirection–unless your problem is performance or complexity.”
    The really fatal limit is actually the complexity. If your meta-layer is poorly designed, you may need to add *another* meta layer later to solve a different problem. And so on, to the point of total incomprehensibility.
    My current favorite version of “meta” in mathematics is category theory. It provides a vocabulary for all sorts of cool things, but keeping the layers of meta straight can get tricky: “Oh, so that’s a morphism from morphisms to other morphisms. But we can relate that first morphism to this other one, giving us… *brain dumps core*”

  14. Scott Simmons

    Jonathan Vos Post:
    “Metalogic” and “Imaginary Logic” and “Modal Logic” were both invented roughly a century ago by Vasiliev, a brilliant, mad, misunderstood Russian Poet/Philosopher/Psychologist, but that’s another story.
    Thony C:
    “Modal logic was invented by the Scots mathematician and logician Hugh MacColl.”
    Enigman:
    “Modal Logic was invented a century ago by C.I. Lewis.”
    Hmmm …
    (◊ Modal Logic was invented by Vasiliev) & (◊ Modal Logic was invented by MacColl) & (◊ Modal Logic was invented by Lewis)
    But:
    ~◊((Modal Logic was invented by Vasiliev) & (Modal Logic was invented by MacColl) & (Modal Logic was invented by Lewis))
    But most interestingly:
    ~◊(Metalogic, Imaginary Logic, and Modal Logic are only two different things) :: (They weren’t ‘both’ invented by anybody)
    Hence, Jonathan’s grammar is wrong in all possible worlds. 🙂

  15. Thony C.

    “Modal Logic was invented a century ago by C.I. Lewis.”
    MacColl started developing symbolic modal logic (modal logic was naturally first discussed by the ancient Greeks) in the 1870s, when he published the first ever symbolic propositional logic. He wrote numerous articles on his logics in many journals, including Mind and Nature. In 1906 he brought it all together in his book Symbolic Logic and its Applications. In his Survey of Symbolic Logic (1918) Lewis acknowledged MacColl’s priority; however, by 1931 Lewis had become famous for his work on modal logics, so his Symbolic Logic with Langford omits all mention of MacColl. In the Dover reprint of The Survey from 1960 he also removed all mention of MacColl. The Nordic Journal of Philosophical Logic from 1998 has a special issue devoted to MacColl and his work which is available on the net.

  16. Scott Simmons

    Just in reality, Jonathan. And I have yet to be convinced that reality is a possible world …

