Category Archives: Programming

The "C is Efficient" Language Fallacy

I came across an article yesterday about programming languages, which hit on one of my major peeves, so I can’t resist responding. The article is at greythumb.org, and it’s called Programmer’s rant: what should and should not be added to C/C++. It’s a variation on the extremely common belief that C and C++ are the best languages to use when you need code to run fast. They’re not. They’re good at things that need to get very close to the hardware – not in the efficiency sense, but in the sense of needing to be able to fairly directly munge the stack, address specific hardware registers, etc. But they are *dreadful* languages for writing real scientific and/or numerical code.

To quote the part of the article that set me off:

First of all, these fears are nonsense. C and C++ are never going to disappear. Why? Because there
are classes of programming problems that are still and will always be CPU bound and there is still
no language as fast as C or C++ for these problems. I highly doubt that there ever will be.

I’m talking about things like: scientific number crunching, game/simulation physics, raytracing,
real-time 3d graphics, audio processing, codec implementation, high-speed network packet routing,
evolutionary computation (my personal favorite :), and of course implementing all these high-level
languages’ runtimes. There are also problems like OS and hardware driver implementation where you
need something “close to the metal” that can interact closely with and even embed assembly language.
C is basically shorthand for assembler, which is why it’s the preferred language for that kind of
thing.

For these tasks, premature optimization at the level of language and framework choice is not evil.
In some cases it’s a requirement. I predict that at least some of these tasks will still be done in
C, C++, or some language with similar characteristics 50 years from now. To give you an idea of just
how much faster C can be for tasks like this, I have found that evolvable instruction set based
evolutionary computation is almost twice as fast when competently implemented in C than a similar
competent implementation in Java.

Here’s the problem. C and C++ suck rocks as languages for numerical computing. They are not the fastest, not by a long shot. In fact, the fundamental design of them makes it pretty much impossible to make really good, efficient code in C/C++. There’s a good reason that Fortran is still the language of choice for real, intense scientific applications that require the absolute best performance that can be drawn out of our machines – applications like computational fluid dynamics.

Making real applications run really fast is something that’s done with the help of a compiler. Modern architectures have reached the point where people can’t code effectively in assembler anymore – switching the order of two independent instructions can have a dramatic impact on performance in a modern machine, and the constraints that you need to optimize for are just more complicated than people can generally deal with.

So for modern systems, writing an efficient program is sort of a partnership. The human needs to carefully choose algorithms – the machine can’t possibly do that. And the machine needs to carefully compute instruction ordering, pipeline constraints, memory fetch delays, etc. The two together can build really fast systems. But the two parts aren’t independent: the human needs to express the algorithm in a way that allows the compiler to understand it well enough to be able to really optimize it.

And that’s where C and C++ fall down. C and C++ are strongly pointer-based languages. The real semantics of almost anything interesting end up involving pretty much unrestricted pointers. In C and C++, there’s no such thing as an array – there are just pointers, and subscripting is shorthand for pointer arithmetic and indirection (`x[n]` in C/C++ is the same thing as `*(x+n)`).

That pointer based nature means that in a C or C++ program, it’s very hard for a compiler to figure out what things are independent. It comes down to a problem called alias detection. Alias detection is identifying when two variables *might* be referencing the same location. Alias detection becomes a horrific mess in the presence of unrestricted pointers. Let me show you an example:

for (int i = 0; i < 20000; i++) {
  for (int j = 0; j < 20000; j++) {
    x[i][j] = y[i-2][j+1] * y[i+1][j-2];
  }
}

If you look at that loop, it can be parallelized or vectorized without any problem if and only if the array pointed to by `x` and the array pointed to by `y` are completely distinct with no overlap. But there’s no way to write code in C or C++ that guarantees that. If it were Fortran-77, you could easily check whether they were distinct. If it were Fortran 90, you could check whether `x` or `y` were declared as possible pointer targets, and the programmer could make it obvious that they didn’t overlap if they wanted to. But you can’t do that in C or C++. (And Fortran isn’t even the best – an experimental language called Sisal from Lawrence Livermore labs used to be able to beat Fortran by around 20% on typical code!)

That example involves parallelization of code, but alias-related problems aren't just an issue for parallelism; parallelism just makes for the easiest example. The aliasing issues in C and C++ have a very direct impact on real code. Let me tell you about a concrete example of this, and then I’ll stop ranting. About six years ago, I was working on a project where I needed to implement a rather messy algorithm to compute something called the "longest common subsequence" (LCS) of two arrays. The standard algorithm for computing LCS uses something called dynamic programming; it’s *O*(n²) time and *O*(n²) space. There’s an algorithm that was designed by people doing computational biology that can do it in the same time, but using on average *O*(n) space.
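Just for reference, here’s a rough sketch of the textbook LCS dynamic program in plain C – the full quadratic-table version, with made-up names like `lcs_length`, not the space-efficient comp-bio variant described above, and not the code from the benchmark below:

/* A sketch of the textbook LCS recurrence: table[i][j] holds the LCS length
 * of a[0..i-1] and b[0..j-1].  Quadratic time and space; purely illustrative. */
#include <stdlib.h>

static int max2(int a, int b) { return a > b ? a : b; }

int lcs_length(const int *a, int n, const int *b, int m) {
  int *table = calloc((size_t)(n + 1) * (m + 1), sizeof *table);
  int result;
  for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= m; j++) {
      if (a[i-1] == b[j-1])
        table[i*(m+1) + j] = table[(i-1)*(m+1) + (j-1)] + 1;
      else
        table[i*(m+1) + j] = max2(table[(i-1)*(m+1) + j], table[i*(m+1) + (j-1)]);
    }
  }
  result = table[n*(m+1) + m];
  free(table);
  return result;
}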

I didn’t know what language to use for this project, so I decided to do an experiment. I wrote the LCS algorithm in a bunch of different languages, to compare how complex the code was, and how fast it ran. I wrote the comp bio algorithm in C, C++, OCaml, Java, and Python, and recorded the results. What I got timing-wise for running the programs on arrays of 2000 elements each was:

  • C: 0.8 seconds.
  • C++: 2.3 seconds.
  • OCaml: 0.6 seconds interpreted, 0.3 seconds fully compiled.
  • Java: 1 minute 20 seconds.
  • Python: over 5 minutes.

About a year later, testing a new JIT for Java, the Java time was down to 0.7 seconds to run the code, plus about 1 second for the JVM to start up. (The startup times for C, C++, and OCaml weren’t really measurable – they were smaller than the margin of error for the measurements.) The Objective-Caml bytecode interpreter was faster than the carefully hand-optimized C program! Why? Because the OCaml compiler could recognize that the arrays were completely independent – it didn’t need to worry about one iteration of the loop stepping on the values used by another. The C compiler couldn’t apply a lot of useful optimizations, because it couldn’t be sure that they were valid.

And it’s not just non-assignment-based functional languages where you can see supposedly less-efficient high-level languages crushing the performance of C/C++. CMU Common Lisp can beat C/C++ on numeric code. There was a paper a few years back documenting it: using a Sun SPARC workstation, if you use the optional type declarations and write scientific/numeric code in Lisp, using vectors (Lisp arrays) and assignments to implement exactly the same algorithm as C, the CMU Common Lisp code will perform better than C code generated by either the Solaris C compiler or GCC with maximum optimization.

More Fractran: A Trivial Interpreter

For your amusement and edification, the following is a very simple interpreter for Fractran programs which, in addition to running the program to generate its result, also generates a trace to show you how the program executed.

;; A Trivial Fractran Interpreter
;;
;; Copyright 2006 Mark C. Chu-Carroll
;; http://scienceblogs.com/goodmath
;;
;; You’re welcome to do anything you want with this code as long
;; as you keep a copy of this header to identify the original source.
;;
;; This program runs a fractran program. A fractran program is a list
;; of fractions. The fractions are represented by a list of two integers:
;; the numerator, and the denominator. For example, the classic fractran
;; multiplication program could be written:
;; ((385 13) (13 21) (1 7) (3 11) (7 2) (1 3))
;; or:
;; (((* 7 11 5) 13) (13 (* 3 7)) (1 7) (3 11) (7 2) (1 3))
;;
;;
;; To run a program until it terminates, call (run-fractran program input).
;; This will return a list; the car of the list will be the result of
;; running the program, and the cdr will be a trace of the executions in the
;; form of a list of the fractions that ran at each step.
;;
;; To run a program for a specific maximum number of steps, call
;; (run-fractran-bounded program input maxsteps)
;;
(define (step-fractran fract int)
  (if (null? fract)
      int
      (let ((fr (car fract)))
        (if (= (remainder int (cadr fr)) 0)
            (cons (/ (* int (car fr)) (cadr fr))
                  (list fr))
            (step-fractran (cdr fract) int)))))

(define (run-fractran fract int)
  (let ((step-result (step-fractran fract int)))
    (if (list? step-result)
        (let ((new-int (car step-result))
              (last-step (cadr step-result)))
          (cons step-result (run-fractran fract new-int)))
        (list int))))

(define (run-fractran-bounded fract int bound)
  (if (> bound 0)
      (let ((step-result (step-fractran fract int)))
        (if (list? step-result)
            (let ((new-int (car step-result))
                  (last-step (cadr step-result)))
              (cons step-result (run-fractran-bounded fract new-int (- bound 1))))
            (list int)))
      (list int)))

;; The mult program.
(define mult '((385 13) (13 21) (1 7) (3 11) (7 2) (1 3)))
;;
;; (run-fractran mult 432)

;; The primes program
(define primes '((17 91) (78 85) (19 51) (23 38) (29 33) (77 29) (95 23)
                 (77 19) (1 17) (11 13) (13 11) (15 2) (1 7) (55 1)))
;; (run-fractran-bounded primes 2 1000)
----------
Commenter Pseudonym has kindly provided a Haskell version in the comments, which was mangled by MT’s comment formatting, so I’m adding a properly formatted version here. I think it’s a really interesting comparison to the Scheme code above. The Haskell code is very nice; cleaner than my rather slapdash Scheme version. But overall, I think it’s a pretty good comparison – it gives you a sense of what the same basic code looks like in the two languages. Personally, I think the Haskell is clearer than the Scheme, even though the Scheme is my own code.

module Fractran where

import Ratio
import Data.Maybe
import Control.Monad.Fix

type Program = [Rational]

runFractran :: [()] -> Program -> Integer -> [Integer]
runFractran bound prog l = step bound prog l
  where
    step _ [] l = []
    step [] (f:fs) l = []
    step (_:bound) (f:fs) l =
        let p = f * fromIntegral l
        in case denominator p of
             1 -> let pi = numerator p
                  in pi : step bound prog pi
             _ -> step bound fs l

fractran :: Program -> Integer -> [Integer]
fractran prog l = runFractran (fix (():)) prog l

fractranBounded :: Int -> Program -> Integer -> [Integer]
fractranBounded b prog l = runFractran (take b $ fix (():)) prog l

mult = [385%13, 13%21, 1%7, 3%11, 7%2, 1%3]

primes = [17%91, 78%85, 19%51, 23%38, 29%33, 77%29, 95%23,
          77%19, 1%17, 11%13, 13%11, 15%2, 1%7, 55%1]

-- fractran mult (2^4 * 3^3)
-- fractranBounded 1000 primes 2

Prime Number Pathology: Fractran

Today’s pathological language is based on a piece of work called Fractran by John Conway of game theory fame. It’s a really fascinating bugger; absolutely insanely difficult to program in, but based on one of the most bizarrely elegant concepts of computation that I’ve ever seen. It’s amazing that this is Turing complete. It’s not a real programming language in the sense of being able to write practical programs; it’s more of a simple theoretical computational model which has been implemented as a language.

It’s based on the idea of numbers as products of prime factors. As you should remember from elementary school, every integer greater than 1 can be represented by a collection of prime numbers that, multiplied together, produce the number. For a few examples:

  •  24 = 2×2×2×3, or 2³×3¹
  •  291 = 3×97
  •  1800 = 5×5×3×3×2×2×2 = 5²×3²×2³

Conway figured out that using something based on that concept, you can express any computable function using nothing but a list of positive fractions.
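To make the execution rule concrete: a Fractran program runs by scanning its list of fractions in order, taking the first fraction whose product with the current integer is itself an integer, and making that product the new current value; when no fraction applies, the program halts. Here’s a rough sketch of that loop in C – the names are made up, and plain machine integers overflow quickly, so a serious implementation would want bignums:

#include <stdio.h>

typedef struct { long long num, den; } fraction;

/* Run prog (len fractions) on input n for at most max_steps steps,
   printing each intermediate value; stop early when no fraction applies. */
void run_fractran(const fraction *prog, int len, long long n, int max_steps) {
  for (int step = 0; step < max_steps; step++) {
    int fired = 0;
    for (int i = 0; i < len; i++) {
      if (n % prog[i].den == 0) {            /* n * num/den is an integer */
        n = (n / prog[i].den) * prog[i].num;
        printf("%lld\n", n);
        fired = 1;
        break;
      }
    }
    if (!fired) return;                      /* no fraction applies: halt */
  }
}

int main(void) {
  /* Conway's multiplication program: starting from 2^a * 3^b, it halts on 5^(a*b). */
  fraction mult[] = {{385,13}, {13,21}, {1,7}, {3,11}, {7,2}, {1,3}};
  run_fractran(mult, 6, 432, 100000);        /* 432 = 2^4 * 3^3, halts at 5^12 */
  return 0;
}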

Continue reading

Haskell and Scheme: Which One and Why?

While I was waiting for stuff to install on my new machine, I was doing some browsing around the web, and came across an interesting article at a blog called “The Only Winning Move”, titled [Scheme Death Knell?](http://theonlywinningmove.blogspot.com/2006/10/scheme-death-knell.html). It’s not a bad article at all, but I disagree with its conclusions, and I thought it would be interesting to discuss why.
The article was brought on by *another* blog post, this one at [Lambda the Ultimate](http://lambda-the-ultimate.org/), suggesting that someone translate Douglas Hofstadter’s columns introducing Scheme to replace the Scheme with Haskell.
Josh over at TOWM was slightly irritated by this idea, and concludes that the only reason people are suggesting it is that Scheme is getting too old to be cool and trendy, and so they want to switch to something newer.
I disagree.
Haskell is a *very* different language from Scheme, and I think that there are some genuinely good
reasons for teaching Haskell instead of Scheme. I *also* think there are some genuinely good reasons for using Scheme instead of Haskell. It’s not really clear to me which is better, but it’s worth contrasting the two to help understand the tradeoff.

Continue reading

A Metalanguage for Pathological Programming: Cellular Automata in Alpaca

Today’s entry isn’t so much a pathological language itself as it is a delightful toy which can be used to *produce* pathological languages. I’m looking at Chris Pressey’s wonderful language [ALPACA](http://catseye.mine.nu:8080/projects/alpaca/), which is a meta-language for describing different kinds of cellular automata.
Frankly, I’m very jealous of Alpaca. I started writing a cellular automata language something like this on my own; I spent about two weeks of my free time working on it, and got it roughly 90% finished before I found out that he’d already done it, the bastard!
Alpaca definitely qualifies as a real language, in that it is capable of generating Turing-complete automata. We can show this in two different ways. First, Conway’s Life is a trivial program in Alpaca, and you can build a Turing machine simulator using Life. And second, Alpaca includes an implementation of a Brainfuck clone as a cellular automaton.
Alpaca’s a very simple language, but it lets you define pretty much any cellular automaton that works in a two-dimensional rectangular grid. And it comes with some very fun, and some very pathological examples.
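To give a sense of the kind of automaton Alpaca lets you describe in a handful of declarative rules, here’s a rough sketch – plain C, not Alpaca, with an arbitrary fixed-size grid – of one generation of Conway’s Life:

#define ROWS 32
#define COLS 32

/* Compute one Life generation from cur into next, treating cells
   outside the grid as dead. */
void life_step(unsigned char cur[ROWS][COLS], unsigned char next[ROWS][COLS]) {
  for (int r = 0; r < ROWS; r++) {
    for (int c = 0; c < COLS; c++) {
      int live = 0;
      for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++) {
          if (dr == 0 && dc == 0) continue;
          int nr = r + dr, nc = c + dc;
          if (nr >= 0 && nr < ROWS && nc >= 0 && nc < COLS)
            live += cur[nr][nc];
        }
      /* a live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3 */
      next[r][c] = cur[r][c] ? (live == 2 || live == 3) : (live == 3);
    }
  }
}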

Continue reading

Pathological Programming: Ignorance is Bliss, or at least control.

Today’s dose of pathology is another masterpiece from the mangled mind of Chris Pressey. It’s called “[Version](http://catseye.mine.nu:8080/projects/version/)”, for no particularly good reason.
It’s a wonderfully simple language: there’s only *one* actual kind of statement: the labelled assignment. Every statement is of the form: “*label*: *var* = *value*”. But like last week’s monstrosity, Smith, Version is a language that doesn’t really have any flow control. But instead of copying instructions the Smith way, in Version the program is toroidal: it executes all statements in sequence, and when it reaches the end, it goes back to the beginning.
The way you control your program is that one of the variables you can assign values to is special: it’s called IGNORE, and its value is the *ignorance space*. The value of the ignorance space is an *irregular* expression (basically, a weak wild-card subset of regexps); if a statement’s label fits the ignorance space, then the statement is ignored rather than executed. The program will keep executing as long as there are any statements that are not part of the ignorance space.

Continue reading

Programming without Control: Smith

Joy of joys, [Cat’s Eye Technologies](http://catseye.mine.nu:8080/projects/), the home of Chris Pressey, one of the most prolific designers of thoroughly bizarre languages, is back up! And so, today, we’re going to take a look at one of his masterpieces, the language *Smith*. This is one of my personal all-time favorite pathological languages; it’s positively brilliantly twisted.
Smith is, mostly, a pretty typical memory-oriented machine language. So what makes it pathological? It’s got *no* jumps, *no* branches, *no* control flow whatsoever. The program counter starts at zero, and every step, it increments by one *and only one*.
So, you might quite sensibly ask, how on earth could this beast do anything useful, much less be Turing complete? The answer is: self-modifying code; in particular, the program can *copy* its own instructions.

Continue reading

World’s Greatest Pathological Language: TECO

I’ve got a real treat for you pathological programming fans!
Today, we’re going to take a quick look at the world’s most *useful* pathological programming language: TECO.
TECO is one of the most influential pieces of software ever written. If, by chance, you’ve ever heard of a little editor called “emacs”; well, that was originally a set of editor macros for TECO (EMACS = Editor MACroS).
As a language, it’s both wonderful and awful. On the good side, the central concept of the language is wonderful: it’s a powerful language for processing text, which works by basically repeatedly finding text that matches some kind of pattern, taking some kind of action when it finds it, and then selecting the next pattern to look for. That’s a very natural, easy-to-understand way of writing programs to do text processing. On the bad side, it’s got the most god-awful hideous syntax ever imagined.

Continue reading

Should we miss BASIC?

Over in my post accepting my victory as the biggest geek on ScienceBlogs, an interesting discussion about beginners learning to program got started in the comments. It was triggered by someone mentioning David Brin’s article in Salon about how terrible it is that computers no longer come with BASIC. The discussion was interesting; I think it’s interesting enough to justify a top-level post.
A few days ago in Salon, David Brin published an article (no link, because Salon is a pay-site), lamenting the fact that computers no longer come with BASIC interpreters, and how that was going to wreck the next generation of up-and-coming programmers:
>Only, quietly and without fanfare, or even any comment or notice by software
>pundits, we have drifted into a situation where almost none of the millions of
>personal computers in America offers a line-programming language simple enough
>for kids to pick up fast. Not even the one that was a software lingua franca on
>nearly all machines, only a decade or so ago. And that is not only a problem
>for Ben and me; it is a problem for our nation and civilization.
Yes indeedy, the lack of built-in BASIC interpreters is a problem for our very civilization!
There are two contradictory threads running through the article. One is the idea that “back in the good old days”, everyone had the same programming language, a “lingua franca” which everyone spoke. The other is that the “line-oriented” BASIC was a better tool for beginners learning to program than anything that we have now.
I think both of these are utter nonsense.

More Minimal Masochism: Pathological Programming in OISC

Expanding on last week’s theme of minimalism, this week we’ve got OISC: the One Instruction Set Computer. This is a machine-code-level primitive language with exactly one instruction. There are actually a few variants on this, but my favorite is the “subleq” OISC, which I’ll describe beneath the fold.
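The flavor of it, roughly sketched in C below: each instruction is three addresses a, b, c; the machine computes mem[b] = mem[b] - mem[a], and if the result is less than or equal to zero it jumps to c, otherwise it falls through to the next instruction. (The halt convention – here, a negative jump target – is just one common choice; subleq dialects vary.)

/* A rough sketch of a subleq machine: one instruction, three operands per step. */
void run_subleq(long mem[], long pc) {
  while (pc >= 0) {
    long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
    mem[b] -= mem[a];                        /* subtract... */
    pc = (mem[b] <= 0) ? c : pc + 3;         /* ...and branch if <= 0 */
  }
}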

Continue reading