Monthly Archives: March 2007

Building Things in π-Calculus: Storage Cells

As a refresher for me, and to help you guys get a feel for it, I’m going to go through a couple of examples of interesting things you can build with π-calculus. We’ll start with a simple way of building mutable storage.
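As a quick preview, here’s one standard construction (this is the textbook version – the full post may set things up differently), writing c⟨v⟩ for a send on channel c and c(x) for a receive: a cell holding a value v offers v on a “get” channel and accepts a replacement on a “set” channel:

  Cell(get, set, v) = get⟨v⟩.Cell(get, set, v) + set(x).Cell(get, set, x)

Reading the cell leaves its contents unchanged; writing it restarts it with the new value.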

Continue reading

Basics: Parallel, Concurrent, and Distributed

This came up in a question in the post where I started to talk about π-calculus, but I thought it was an interesting enough topic to promote it up to a top-level post. If you listen to anyone talking about computers or software, there are three words you’ll constantly hear: parallel, concurrent, and distributed. At first glance, they sound like they mean the same thing, but in fact, they’re three different things, and the differences are important.

Continue reading

An Experiment with π-calculus and Programming Language Design

I feel like a bit of a change of pace, and trying a bit of an experiment.

Re-reading Backus’s old FP work reminds me of what I was doing the last time I read it, which
was back in grad school. At the time, I was working on some stuff involving parallel computation,
and I discovered Robin Milner’s π-calculus as a tool for describing parallel computation. You
can think of π-calculus as being a sort of parallel (pun intended) to the λ-calculus: in
sequential, single-threaded computation, λ-calculus can be a great tool for describing what
things mean. But λ-calculus has absolutely no way of describing the concept of multiple
things happening at once. π-calculus is fundamentally built on the concept of multiple threads
which can only share information by passing messages.
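The heart of the calculus is a single communication step. In the standard notation (this is the textbook rule, nothing specific to my experiment), a send and a receive on the same channel synchronize:

  x̄⟨a⟩.P | x(y).Q → P | Q[a/y]

The left process sends the name a over channel x and continues as P; the right one receives it, binds it to y, and continues as Q with a substituted for y. The bar “|” is parallel composition; everything else in the calculus gets built out of this one rendezvous.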

There’s a reason that reading Backus made me think of this, beyond just the temporal coincidence of the fact that I was working with π-calculus the last time I read Backus’s FP paper. One of the major points that Backus made was how poor the von Neumann model is at describing many computations; that has become far more true in recent years. Even my laptop now has multiple processors; computation just isn’t single-threaded anymore. But most programming languages are basically deeply single-threaded. Even Haskell, for all of its functional
purity, isn’t particularly good at multi-threaded execution. But I’ve always thought it would be
a great idea to build a language around π-calculus the way that ML is built around λ-calculus.

So my experiment, such as it is, is to see if I can work through how to create an actual, useful, non-pathological programming language based on the π-calculus; and not just do that,
but do it publicly, here on the blog.

Continue reading

Backus’s Idea of Functional Programming

In my earlier post about John Backus, I promised to write something about his later work on
functional programming languages. While I was in a doctor’s office getting treated for an awful cough, I re-read his 1977 Turing Award Lecture. Even 30 years later, it remains a fascinating read, and far from being dated, it’s positively astonishing to see both how far-sighted Backus was, and how little progress we’ve actually made.

Continue reading

Sad News: John Backus has died.

I just heard that John Backus died last Saturday.

John Backus was one of the most influential people in the development of what we now know as software engineering. In the early days of his career, there was no such thing as a programming language: there was just the raw machine language of the hardware. Backus was the first person
to come up with the idea of designing a different language, one which was easier for
humans to read and write than machine code, and having the machine do the translation.

Continue reading

The Mapping of the E8 Lie Group (Minor Update)

I’ve been getting tons of mail from people in response to the announcement of the mapping of
the E8 Lie group, asking what a Lie group is, what E8 is, and why the mapping of E8 is such a big deal.

Continue reading

Pigheaded Egnorance, Antibiotic Resistance, and Tautologies

So the Discovery Institute’s most recent addition has chosen to reply to my post about
tautologies.
(Once again, I’m not linking to him; I will not willingly be a source of hits for the DI website when they’re promoting dangerous ignorance like this.) Typically, he manages to totally miss the point:

Darwinist blogger and computer scientist MarkCC (why don’t they use their real names?) called me a lot of names a couple of days ago. The most profane was that I am a ‘bastion of s***headed ignorance.’ Profanity seems to be a particular problem with the computer-math Darwinists. A dysfunctional clad, perhaps. They’re dysfunctional because, as Aristotle wrote, effective rhetoric has three characteristics: logos, ethos, and pathos. Effective rhetoric appeals to the best in reason, ethics, and emotion. When I’m called unprintable names merely for expressing my skepticism about the relevance of Darwin’s theory to the practice of medicine, I’ve already won the ‘ethos’ and ‘pathos’ skirmishes. I can concentrate on the logos.

Yes, Dr. Egnor. Let’s make sure that we focus on issues of style rather than substance. Because we both know that you have nothing to say in response to the substance of my criticism of your pigheaded ignorance.

Continue reading

Personal Tidbit: Jobs

One more bit of personal blogging, and then it’ll be back to the math. You may have noticed
that I haven’t been as active in the discussions on my posts for the last few weeks as I
would normally be. There are two reasons for that; one I’ve mentioned before – my father’s illness.
The other is actually something good.

As of today, I’m unemployed. Briefly.

After 11 years at IBM Research, I decided to change jobs. Today was my last day working for IBM. One week from Monday, I’ll be starting work for Google, as a Software Engineer at their New York lab. Nothing against IBM – it was just time for a change. Over the last few weeks, the process of interviewing, and then wrapping up my work at IBM, has been taking up a lot of time. Things should be nicely mellow for the next week, and then a bit crazy for a while as I learn the ropes at my new job.

Meta-Analysis Bogosity and the Power of Prayer

My fellow SBer Craig Hilberth at the Cheerful Oncologist writes about a meta-analysis that purports to show the positive effect of intercessory prayer. Neither Craig nor I have access to the full paper. But what we know is that the claim is that the meta-analysis shows a result of g=-0.171, p=0.015.

This really ticks me off. Why? Because g=-0.17 is not significant. Meta-analysis generally considers |g|=0.20 to be the minimum cutoff for statistical significance.

Briefly, what is meta-analysis? The idea is this: suppose you’ve got a bunch of studies of the same topic. Meta-analysis lets you take the data from all of the studies in the group and combine it: you can compute aggregate means and standard deviations, along with measures of the significance and reliability of those aggregate figures.

Meta-analysis is a useful technique, but it’s very prone to a number of errors. It’s very easy to manipulate a meta-analysis to make it say whatever you want; and even if you’re being
scrupulously honest, it’s prone to sampling bias. After all, since meta-analysis is based on
combining the results of multiple published studies, the sample is only drawn from the studies that were published. And one thing that we know is that in most fields, it’s much harder to publish negative results than positive ones. So the published data that’s used as input to meta-analysis tends to incorporate a positive bias. There are techniques to try to work around
that, but it’s hard to accurately correct for bias in data when you have no actual measurements
to tell you how biased your data is.

So getting back to the meta-analysis results that they cited, what’s g? g, also called “Hedges’ g”, is a measure of how much the combined data set of all the studies differs between its study and control groups. g is a measure of the significance of any aggregate result from combining the studies. The idea is, you’ve got a bunch of studies, each of which has a control group and a study group. You compute the aggregate mean for both the study and control groups, take the difference, and divide it by the aggregate standard deviation. That’s g. Along with g, you compute a p-value, which essentially measures the reliability of the g figure computed from the aggregate data.
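To make that concrete, here’s a minimal sketch of the textbook computation for a single study, written in Python. This is my own illustration – the paper’s actual aggregation across studies isn’t visible to us, and the sample numbers below are entirely hypothetical:

  import math

  def hedges_g(n1, mean1, sd1, n2, mean2, sd2):
      # Pooled standard deviation of the study and control groups.
      pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                            / (n1 + n2 - 2))
      d = (mean1 - mean2) / pooled_sd             # standardized mean difference
      correction = 1 - 3 / (4 * (n1 + n2) - 9)    # Hedges' small-sample correction
      return d * correction

  # Hypothetical groups of 50, means 4.8 vs 5.2, SDs 2.0 and 2.1:
  print(hedges_g(50, 4.8, 2.0, 50, 5.2, 2.1))     # about -0.19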

Assuming a fixed effects model – that is, that the studies are essentially compatible, all measuring the same basic effect – the minimum level at which g is considered significant is |g|=0.2, with a minimum p value of 0.05.

This meta-analysis has |g|=0.17, with a p-value of 0.015. So they’re well below the minimum level of statistical significance for a fixed effects model meta-analysis, and their p-value is less than one third of the 0.05 level at which a |g|=0.2 result would be considered significant.

So – what it comes down to is, they did a meta-analysis which produced no meaningful results, and they’re trying to spin it as a “small but statistically significant result”. In other words, they’re misrepresenting their results to try to claim that they say what they wanted them to say.

Simple Programming in Binary: Binary Combinatory Logic

For reasons that I’ll explain in another post, I don’t have a lot of time for writing a long pathological programming post, so I’m going to hit you with something short, sweet, and beautiful: binary combinatory logic.

I’ve written in the past about lambda calculus, and its equivalent variable-free form, the SKI combinator calculus. I’ve even written about other combinator-calculus-based languages, like Unlambda and Iota.

Binary combinatory logic, aka BCL, is a language based on SKI calculus – except that it encodes the entire thing into binary. Two characters, plus two rewrite rules, and that’s it – a complete
combinator calculus based programming language.

SKI combinator calculus is a simple variable-free calculus with three constructs: S, K, and I; and I isn’t really primitive, but can be defined in terms of S and K, as shown below.

  1. S=λx y z.x z (y z)
  2. K=λx.(λy.x)
  3. I=λx.x=SKK
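To see why I is redundant, apply S K K to an arbitrary term x and unfold the first two definitions:

  S K K x → K x (K x) → x

So S K K hands back whatever it’s given – exactly the identity.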

So, in BCL, S is written “01”; K is written “00”. And there are two rewrite rules, which basically define “1” without a zero prefix as a paren-like grouping construct (a small reducer sketch follows the rules):

  1. “1100xy”, where “x” and “y” are valid BCL terms (that is, complete syntactic units),
    gets rewritten to “x”. If you follow that through, “1100xy” is the encoding of ((Kx)y),
    which reduces to x.
  2. “11101xyz” gets rewritten to “11xz1yz”. Again, following it through, “11101xyz” is
    the encoding of (((Sx)y)z), which reduces to ((xz)(yz)).
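To make those rules concrete, here’s a tiny reducer sketch in Python. This is my own illustration, not something from the original post, and the names (term_end, subterm_starts, and so on) are mine. It treats a term as a bit string, finds where every subterm starts, and applies the first rule that matches:

  def term_end(s, i):
      # Index just past the subterm starting at position i: "00" (K) and
      # "01" (S) are two-character atoms; "1" prefixes an application of
      # the next two subterms.
      if s[i] == '0':
          return i + 2
      return term_end(s, term_end(s, i + 1))

  def subterm_starts(s):
      # All positions where a subterm begins, outermost and leftmost first.
      starts = []
      def walk(i):
          starts.append(i)
          if s[i] == '0':
              return i + 2
          return walk(walk(i + 1))
      walk(0)
      return starts

  def step(s):
      # Apply one rewrite at the leftmost matching subterm; return None
      # if no rule applies (the term is in normal form).
      for i in subterm_starts(s):
          if s.startswith('1100', i):            # ((K x) y) -> x
              xe = term_end(s, i + 4)
              ye = term_end(s, xe)
              return s[:i] + s[i + 4:xe] + s[ye:]
          if s.startswith('11101', i):           # (((S x) y) z) -> ((x z) (y z))
              xe = term_end(s, i + 5)
              ye = term_end(s, xe)
              ze = term_end(s, ye)
              x, y, z = s[i + 5:xe], s[xe:ye], s[ye:ze]
              return s[:i] + '11' + x + z + '1' + y + z + s[ze:]
      return None

  def normalize(s, limit=10000):
      # Rewrite until no rule applies. SKI reduction can diverge in
      # general, hence the step limit.
      for _ in range(limit):
          t = step(s)
          if t is None:
              return s
          s = t
      raise RuntimeError('no normal form within the step limit')

  # ((K K) S) encodes as "11000001"; the K rule throws away the S:
  print(normalize('11000001'))   # prints "00", i.e. K

One subtle point: the reducer only checks for “1100” and “11101” at positions where a subterm actually begins. The same bit patterns can appear mid-term (straddling token boundaries), where they aren’t redexes at all; at a genuine subterm start, though, the pattern forces the shape ((Kx)y) or (((Sx)y)z).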

So, following Unlambda’s method of handling I/O, “hello world” in BCL is:

010001101000010000010110000000000101101111
000010110111110011111111011110000010011010

[Image: bcl.gif – the BCL self-interpreter, rendered as a bitmap.]

And here’s the really neat thing. Write an interpreter for BCL in BCL. Take the bit string that results, and convert it to a bitmap. That’s what’s shown in the image here. So, for example, the first line of the bitmap is “1111100000111001”; keep going, and you’ll find the entire BCL interpreter.