Monthly Archives: July 2007

L-System Fractals


In the post about Koch curves, I talked about how a grammar-rewrite system could be used to describe fractals. There’s a bit more to the grammar idea than I originally suggested. There’s something called an L-system (short for Lindenmayer system, after Aristid Lindenmayer, who invented it for describing the growth patterns of plants), a variant of the Thue grammar, which is extremely useful for generating a wide range of interesting fractals, and for describing plant growth, turbulence patterns, and lots of other things.
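
To make the rewriting idea concrete, here's a minimal sketch (my own illustration, not code from the post) of the parallel string-rewriting step that an L-system performs. The rules shown are Lindenmayer's classic algae example.

```python
# A minimal L-system string rewriter (my own sketch, not code from the post).
# The rules below are Lindenmayer's original "algae" example: A -> AB, B -> A.

def rewrite(axiom, rules, steps):
    """Apply the production rules to every symbol, in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

algae_rules = {"A": "AB", "B": "A"}
for n in range(5):
    print(n, rewrite("A", algae_rules, n))
# 0 A
# 1 AB
# 2 ABA
# 3 ABAAB
# 4 ABAABABA
```

Every symbol gets rewritten simultaneously at each step; that parallelism is the main thing that distinguishes L-systems from ordinary sequential grammars.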

Continue reading

Fractal Dust and Noise


While reading Mandelbrot’s text on fractals, I found something that surprised me: a relationship between Shannon’s information theory and fractals. Thinking about it a bit, it’s not really that surprising; in fact, it’s more surprising that I’ve managed to read so much about information theory without encountering the fractal nature of noise in more than a cursory way. But noise in a communication channel is fractal – and it relates to one of the earliest pathological fractal sets: Cantor’s set, which Mandelbrot elegantly terms “Cantor’s dust”. Since I find that a wonderful, almost poetic way of describing it, I’ll adopt Mandelbrot’s terminology.

Cantor’s dust is a pathological set. It’s caused no small amount of consternation among mathematicians and physicists who find it too strange, too bizarre to accept as anything more than an extreme artifact of logic in the realm of pure math. But it’s a very simple thing. To make it, you start with a line segment. Cut it into three identical parts. Erase the middle one. Then repeat the cutting into thirds and erasing the middle on each of the two remaining segments – and then on the segments remaining from that, and so on. The diagram below shows a few steps in the construction of the Cantor dust.

[Figure: a few steps in the construction of the Cantor dust (cantor-dust.png)]

Why should it be called a dust? Because geometrically, in the limit, it’s got to be a set of completely disconnected points – a scattering of dust across the original line-segment.
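
If it helps to see the construction rather than just read it, here is a rough sketch in code (mine, not from the post) of the cut-into-thirds-and-erase-the-middle step:

```python
# A rough sketch (my illustration, not from the post) of the Cantor construction:
# repeatedly cut each segment into thirds and erase the middle third.

def cantor_step(segments):
    """One step: replace each segment (a, b) by its two outer thirds."""
    out = []
    for a, b in segments:
        third = (b - a) / 3.0
        out.append((a, a + third))
        out.append((b - third, b))
    return out

segments = [(0.0, 1.0)]
for level in range(4):
    print(level, segments)
    segments = cantor_step(segments)
# Each step doubles the number of segments while shrinking the total length
# by a factor of 2/3, so in the limit the total length is zero.
```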

It’s so simple – what is pathological about it? Well, in the pure sense it’s clearly 0-dimensional – as I said above, it’s just a collection of points. Topologically, it’s totally disconnected: it contains no intervals at all, and no two of its points are joined by anything inside the set. And yet – look at it. It’s clearly not zero-dimensional. It’s got a 1-dimensional geometric structure. But naive topology insists that it doesn’t. And there’s worse. It’s a similar problem to what we saw in the space-filling curves. Clearly, the dust disappears into nothingness. Every part of it has zero length – it seems like it must converge to something very close to the empty set. And yet it doesn’t: the set of points in the Cantor dust has the same cardinality as the set of points in the original line segment. (Each point of the dust corresponds to a ternary expansion that uses only the digits 0 and 2; map those digits to 0 and 1, and you get the binary expansion of every point of the original segment.)
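
One standard way to put a number on that in-between-ness (this is ordinary fractal-geometry bookkeeping, not something spelled out in the paragraph above) is the similarity dimension. The dust is built from N = 2 copies of itself, each scaled down by a ratio r = 1/3, so:

```latex
% Similarity dimension: N self-similar copies, each scaled down by a ratio r.
% For the Cantor dust, N = 2 and r = 1/3.
D = \frac{\log N}{\log(1/r)} = \frac{\log 2}{\log 3} \approx 0.6309
```

That value sits strictly between 0 and 1: more than a scattering of isolated points, less than a line.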

For those of us who came of age as math geeks in the late 20th century, this doesn’t really seem that strange at all. But you’ve got to remember: the 20th century was a time of great turmoil in math. At the beginning of the century, the great project was to complete mathematics: to turn all of math into a glorious, perfect, clean, rational, elegant edifice. The common belief of the time was that math was beautiful and perfect – that while the real world might be ugly, might have all sorts of imperfections and irrationalities, those real-world flaws could never touch the realm of pure math: math was, in the words of one famous mathematician, “the perfect mind of God”. And then came the crash: the ramifications of Cantor’s set theory, Gödel’s incompleteness, Church and Turing’s uncomputability, fractals, Chaitin’s strange numbers… The edifice collapsed; math was as flawed, imperfect, and incomplete as anything else in the world. It was hugely traumatic, and there was (and in some circles still is) a great deal of resistance to the idea that so much irregularity, or even irrationality, is part of the world of math – that the part of math we can really grasp and use is just an infinitesimal part of the monstrous world of what really exists in our abstractions.

But getting back to the point at hand: what does Cantor’s dust have to do with information and noise?

Imagine that you’re listening to sound through a telephone wire with incredibly precise recording equipment. You’re sending a perfectly clear sine-wave over the line, in order to see how much noise there is.

You start pretty high – you only want to record noises that exceed, say, 20% of the amplitude of the basic sine-wave. You wind up with a pattern of bursts of noise, scattered around in time. Now, mark off every period longer than 5 minutes where there is no noise. Those are gaps in the noise – the largest gaps that you’re going to look at. Now look within the bursts of noise – that is, the periods of time where there was no gap longer than 5 minutes – for periods of 1 minute where there wasn’t any noise. In between the 5 minute gaps, you’ll get a collection of smaller 1 minute gaps, separated by smaller bursts of noise. Then look into the bursts between those 1 minute gaps for 10 second periods with no noise – and you’ll break the bursts up further, into stretches of noise separated by gaps longer than 10 seconds but shorter than a minute. Keep doing that, and eventually you’ll run out of noise. But turn down your threshold so that you can hear noise of a smaller amplitude, and you’ll find more noise, and more gaps breaking up the bursts.

If you look at the distribution of the noise, one thing you’ll notice is that the levels are independent: the length of the longest gaps has no relation to the frequency of the smaller gaps between them. The other thing you’ll notice is that the distribution of gaps is self-similar: the distribution of long gaps across long sections of the recording is the same as the distribution of short gaps across shorter sections. The noise distribution is fractal! In fact, it’s pretty much a slightly randomized version of Cantor’s dust.
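
Here's a toy sketch (my own cartoon of the idea, not anything from the post) of that randomized Cantor-like structure: recursively silence a roughly-middle chunk of every noisy stretch, and then look at the gaps that result.

```python
import random

# A toy model (my own cartoon of the idea, not anything from the post):
# recursively silence a roughly-middle chunk of every noisy stretch,
# producing a randomized Cantor-dust-like pattern of bursts and gaps.

def noisy_intervals(start, end, depth):
    """Return the stretches that still contain noise after `depth` rounds of erasing."""
    if depth == 0:
        return [(start, end)]
    length = end - start
    cut1 = start + length * random.uniform(0.28, 0.38)   # end of the left burst
    cut2 = end - length * random.uniform(0.28, 0.38)     # start of the right burst
    return (noisy_intervals(start, cut1, depth - 1) +
            noisy_intervals(cut2, end, depth - 1))

random.seed(1)
bursts = noisy_intervals(0.0, 60.0, 5)     # sixty "minutes" of line time
gaps = [b[0] - a[1] for a, b in zip(bursts, bursts[1:])]
print(len(bursts), "bursts of noise")
print("longest gap: %.2f min, shortest gap: %.3f min" % (max(gaps), min(gaps)))
```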

Understanding the structure of noise isn’t just interesting in the abstract: it’s knowledge that communication engineers use regularly to determine the properties a communication channel needs in order to ensure proper transmission and storage of information. Recognizing the fractal nature of noise makes it possible to better predict the properties of that channel, to determine how much information we can safely pump through it, and to decide how much redundancy we need to add to the information to prevent data loss.

Fractal Pathology: Peano’s Space Filling Curve


One of the strangest things in fractals, at least to me, is the idea of space filling curves. A space filling curve is a curve constructed using a Koch-like replacement method, but instead of being self-avoiding, it eventually contacts itself at every point.

What’s so strange about these things is that each starts out as a non-self-contacting curve. Through further steps in the construction process, it gets closer and closer to contacting itself, without ever touching. But in the limit, when the construction process is complete, you have a filled square.

Why is that so odd? Because you’ve basically taken a one-dimensional thing – a line, with no width at all – and by bending it enough times, you’ve wound up with a two-dimensional figure. This isn’t just odd to me – it was considered a crisis by many mathematicians, because it seems to break some of the fundamental assumptions of geometry: how did we get width from something that has no width at all? It’s nonsensical!
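
For the curious, here's a sketch (my own illustration, not the post's construction) of the standard recursion for the Hilbert curve, a classic space-filling curve closely related to Peano's. Each square is divided into four quadrants, and the recursion rotates and reflects the coordinate frame so the path through the quadrants stays connected.

```python
# A sketch of the standard recursive construction of the Hilbert curve
# (my own illustration, not code from the post). Each call owns one square:
# (x0, y0) is a corner, (xi, xj) and (yi, yj) are the square's two side vectors.
# At the bottom of the recursion we emit the center of the current square.

def hilbert(x0, y0, xi, xj, yi, yj, n, points):
    if n <= 0:
        points.append((x0 + (xi + yi) / 2, y0 + (xj + yj) / 2))
    else:
        hilbert(x0,               y0,               yi/2,  yj/2,  xi/2,  xj/2, n - 1, points)
        hilbert(x0 + xi/2,        y0 + xj/2,        xi/2,  xj/2,  yi/2,  yj/2, n - 1, points)
        hilbert(x0 + xi/2 + yi/2, y0 + xj/2 + yj/2, xi/2,  xj/2,  yi/2,  yj/2, n - 1, points)
        hilbert(x0 + xi/2 + yi,   y0 + xj/2 + yj,  -yi/2, -yj/2, -xi/2, -xj/2, n - 1, points)

points = []
hilbert(0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 4, points)   # 4 levels of subdivision
print(len(points), "points")   # 256 points, consecutive points always 1/16 apart
```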

Continue reading

Powers and Products of Graphs


Often, we use graphs as a structured representation of some kind of information. Many interesting problems involving the things we’re representing can then be described in terms of operations over graphs. Today, I’m going to talk about some of the basic operations that we can do on graphs, and in later posts, we’ll see how those operations are used to solve graph-based problems.
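
I don't know exactly which operations the full post covers, but as a taste of the flavor, here's a small sketch of one common graph product, the Cartesian product, using plain adjacency sets.

```python
# A sketch of the Cartesian product of two graphs (my own illustration; the
# post may define its products differently). Graphs are dicts mapping each
# vertex to the set of its neighbors.

def cartesian_product(g, h):
    prod = {(u, v): set() for u in g for v in h}
    for u in g:
        for v in h:
            for u2 in g[u]:                # move along an edge of g, stay put in h
                prod[(u, v)].add((u2, v))
            for v2 in h[v]:                # stay put in g, move along an edge of h
                prod[(u, v)].add((u, v2))
    return prod

# The product of two single edges (K2) is the 4-cycle: a square.
k2 = {0: {1}, 1: {0}}
square = cartesian_product(k2, k2)
for vertex in sorted(square):
    print(vertex, sorted(square[vertex]))
```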

Continue reading

A Book Review: "Lifecode: From egg to embryo by self-organization"

After seeing PZ’s comments on Stuart Pivar’s new version of his book, titled “Lifecode: From egg to embryo by self-organization”, I thought I would try taking a look. I’ve long thought that much of what I’ve read in biology is missing something when it comes to math. Looking at things, it often seems like there are mathematical ideas that might have important applications, but because biology programs rarely (if ever) require students to study any advanced math, biologists don’t recognize the ways that math could help them. So, hearing about Pivar’s book, which claims to propose a theory of structural development based on the math describing structural distortions of an expanding figure in a constrained space – well, naturally, I was interested.

So I wrote to the publisher of his book, to see if I could get a review copy. I wanted to try writing a review from the perspective of a mathematician. To my immense surprise, a courier arrived at my door two hours later with a copy of the book! It’s a lucky thing I was working from home that day! So I started reading it Monday afternoon. I didn’t have a lot of time to read this week, so I didn’t finish the main text until Thursday, despite the fact that it’s really quite short.

Continue reading

A Recipe Meme

I got hit by a mutant meme; I don’t remember who tagged me. I’m not terribly into these
meme things, but I don’t pass up excuses to post recipes. So below the fold are four recipes that I’ve created: seared duck breast with ancho chile sauce; saffron fish stew; smoked salmon hash; and
spicy collard greens.

Continue reading

Fractal Curves and Coastlines


I just finally got my copy of Mandelbrot’s book on fractals. In his discussion of curve fractals (that is, fractals formed from an unbroken line, topologically equivalent to the interval (0,1)), he describes them in terms of shorelines rather than borders. I’ve got to admit that his metaphor is better than mine, and I’ll adopt it for this post.

In my last post, I discussed the idea of how a border (or, better, a shoreline) has
a kind of fractal structure. It’s jagged, and the jags themselves have jagged edges, and *those* jags have jagged edges, and so on. Today, I’m going to show a bit of how to
generate curve fractals with that kind of structure.
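
As a preview of the replacement idea, here's a quick sketch (mine, not the post's code) of the classic Koch step: every segment of the shoreline gets replaced by four shorter segments with a triangular jag in the middle.

```python
import cmath, math

# A quick sketch (mine, not the post's code) of the Koch replacement step:
# every segment becomes four segments, with a triangular jag in the middle.

def koch_step(points):
    """points: vertices of the shoreline, as complex numbers."""
    out = [points[0]]
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        p1 = a + d
        p2 = a + 2 * d
        peak = p1 + d * cmath.exp(1j * math.pi / 3)   # apex of the new jag
        out.extend([p1, peak, p2, b])
    return out

curve = [0 + 0j, 1 + 0j]        # start with a straight "shoreline"
for _ in range(4):
    curve = koch_step(curve)
print(len(curve), "vertices")    # 4**4 segments plus one endpoint: 257 vertices
```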

Continue reading

Order From Chaos Using Graphs: Ramsey Theory

One application of graph theory that’s particularly interesting to me is called Ramsey theory. It’s especially interesting to someone who debunks a lot of creationist nonsense, because Ramsey theory stands in direct opposition to some of the basic ideas used by bozos to purportedly refute evolution. What Ramsey theory studies is when some kind of ordered structure *must* appear, even in a pathologically chaotic process. Ramsey theory is focused largely on *structures* that exhibit particular properties, and those structures are usually represented as graphs.
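
The simplest concrete Ramsey fact is R(3,3) = 6: no matter how you 2-color the edges of the complete graph on six vertices, some triangle comes out all one color. It's small enough to verify by brute force, as in this little sketch (my own illustration):

```python
from itertools import combinations

# A brute-force check (my own illustration) of the simplest Ramsey fact,
# R(3,3) = 6: every 2-coloring of the edges of K6 contains a one-color triangle.

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j to 0 or 1."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def order_is_forced(n):
    """True if every 2-coloring of K_n's edges has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    for bits in range(2 ** len(edges)):
        coloring = {e: (bits >> k) & 1 for k, e in enumerate(edges)}
        if not has_mono_triangle(n, coloring):
            return False
    return True

print(order_is_forced(5))   # False: some colorings of K5 avoid any one-color triangle
print(order_is_forced(6))   # True: with six vertices, the ordered structure is unavoidable
```

Five vertices aren't enough; the same search finds colorings of K5 with no one-color triangle, so the forced order really does kick in only at six.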

Continue reading

Cliques, Subgraphs, and a bit of biology

Today, I’m going to talk a bit about two closely related problems in graph theory: the maximal clique detection problem and the maximal common subgraph problem. The two problems are interesting both on their own, as easy-to-understand but hard-to-compute problems, and because they have so many applications. In particular, the two are used extensively in bioinformatics and pharmacology for the analysis of complex molecular structure.
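
To make "easy to understand, hard to compute" concrete, here's a brute-force sketch of maximum-clique finding (my own illustration; the post itself presumably discusses better approaches). The problem statement fits in two short functions, but the search space grows exponentially with the number of vertices.

```python
from itertools import combinations

# A brute-force sketch of maximum clique (my own illustration; it runs in
# exponential time, which is exactly what makes the problem hard).

def is_clique(adj, vertices):
    """True if every pair of the given vertices is connected by an edge."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))

def maximum_clique(adj):
    """adj maps each vertex to the set of its neighbors."""
    for size in range(len(adj), 0, -1):          # try the largest sizes first
        for vertices in combinations(adj, size):
            if is_clique(adj, vertices):
                return vertices
    return ()

# Toy graph: a triangle {a, b, c} with a pendant vertex d hanging off c.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(maximum_clique(adj))   # ('a', 'b', 'c')
```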

Continue reading

A Laughable Laffer Curve from the WSJ

Yesterday’s Wall Street Journal has a *spectacular* example of really bad math.
[Figure: the chart the WSJ presents as a Laffer curve (ED-AG112_1corpt_20070712182433.gif)]
The WSJ is, in general, an excellent paper with really high-quality coverage of economic issues. But its editorial page has long been a haven for some of the most idiotic reactionary conservative nonsense this side of Fox News. And this latest piece takes the cake. They claim that this figure is an accurately derived Laffer curve describing the relationship between tax rates and tax revenues for different countries, and that the US has the highest corporate tax rates in the world.

Continue reading