Category Archives: Fractals

Fractals without a Computer!

This is really remarkably clever:

Since I can’t stand to just post a video without any explanation:

A fractal is a figure with a self-similar pattern. What that means is that there is some way of looking at it where a piece of it looks almost the same as the whole thing. In this video, what they’ve done is set up three screens in a triangular pattern, and set them to display the input from a camera. When you point the camera at the screens, what you get is whatever the camera is seeing, repeated three times in a triangular pattern – and since what’s on the screens is what’s being seen by the camera, and what’s seen by the camera is (after a slight delay) what’s on the screens, you get a self-similar system. If you watch, they’re able to manipulate it to get Julia fractals, Sierpinski triangles, and several other really famous fractals.

It’s very cool – partly because it looks neat, but also partly because it shows you something important about fractals. We tend to think of fractals in computational terms, because in general we generate fractal images using digital computers. But you don’t need to. Fractals are actually fascinatingly ubiquitous, and you can produce them in lots of different ways – not just digitally.

Chaos: Bifurcation and Predictable Unpredictability

[Image: bifurcation diagram of the logistic map, from Wikipedia]

Let’s look at one of the classic chaos examples, which demonstrates just how simple a chaotic system can be. It really doesn’t take much at all to push a system from being nice and smoothly predictable to being completely crazy.

This example comes from mathematical biology, and it generates a graph commonly known as the logistic map. The question behind the graph is: how can I predict what the stable population of a particular species will be over time?

If there were an unlimited amount of food, and there were no predators, then it would be pretty easy. You’d have a straightforward exponential growth curve. You’d have a constant, R, which is the growth rate. R would be determined by two factors: the rate of reproduction, and the rate of death from old age. With that number, you could put together a simple exponential curve – and presto, you’d have an accurate description of the population over time.

But reality isn’t that simple. There’s a finite amount of resources – that is, a finite amount of food for your population to consume. So there’s a maximum number of individuals that could possibly survive – if you get more than that, some will die until the population shrinks below that maximum threshold. Plus, there are factors like predators and disease, which reduce the available population of reproducing individuals. The growth rate only considers “How many children will be generated per member of the population?”; predators cull the population, which effectively reduces the growth rate. But it’s not a straightforward relationship: the number of individuals that will be consumed by predators and disease depends on the size of the population!

Modeling this reasonably well turns out to be really simple. You take the maximum population that the resources can support, Pmax. You then describe the population at any given point in time as a population ratio: a fraction of Pmax. So if your environment could sustain one million individuals, and the actual population is 500,000, then you’d describe the population ratio as 1/2.

Now, you can describe the population at time T with a recurrence relation:

P(t+1)= R × P(t) × (1-P(t))

That simple equation isn’t perfect, but its results are impressively close to accurate. It’s good enough to be very useful for studying population growth.
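
To get a feel for how that recurrence behaves, here’s a minimal Python sketch (my own illustration; the values in the comments are the well-known fixed point and two-cycle for those growth rates):

    def logistic_step(r, p):
        # One step of the recurrence: P(t+1) = R * P(t) * (1 - P(t))
        return r * p * (1 - p)

    def settle(r, p0=0.5, steps=200, keep=4):
        # Iterate the map for a while and return the last few population ratios.
        p = p0
        history = []
        for _ in range(steps):
            p = logistic_step(r, p)
            history.append(p)
        return [round(v, 4) for v in history[-keep:]]

    print(settle(2.8))   # settles onto a single value, ~0.6429
    print(settle(3.2))   # bounces between two values, ~0.5130 and ~0.7995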

So, what happens when you look at the behavior of that function as you vary R? You find that below a certain threshold value, the population falls to zero. Cross that threshold, and you get a nice increasing curve, which is roughly what you’d expect. Up until you hit R=3. Then it splits, and you get an oscillation between two different values. If you keep increasing R, it will split again – your population will oscillate between four different values. A bit farther, and it will split again, to eight values. And then things start getting really wacky – because the curves converge on one another, and even start to overlap: you’ve reached chaos territory. On a graph of the function, at that point, the graph becomes a black blur, and things become almost completely unpredictable. It looks like the beautiful diagram at the top of this post, which I copied from Wikipedia (it’s much more detailed than anything I could create on my own).
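
If you’d like to reproduce a rough version of that diagram yourself, here’s a quick-and-dirty sketch (assuming you have matplotlib installed): sweep R, throw away a transient, and plot whatever values the population keeps visiting.

    import matplotlib.pyplot as plt

    rs, ps = [], []
    r = 2.5
    while r <= 4.0:
        p = 0.5
        for _ in range(500):              # let the transient die out
            p = r * p * (1 - p)
        for _ in range(100):              # record the values the system keeps visiting
            p = r * p * (1 - p)
            rs.append(r)
            ps.append(p)
        r += 0.001

    plt.plot(rs, ps, ',k')                # one tiny dot per sample
    plt.xlabel('R')
    plt.ylabel('population ratio')
    plt.show()

Below R=3 each vertical slice is a single dot; past R=3 it splits into two, then four, then eight, and then smears out into the chaotic blur.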

But here’s where it gets really amazing.

Take a look at that graph. You can see that it looks fractal. With a graph like that, we can look for something called a self-similarity scaling factor. The idea of a SS-scaling factor is that we’ve got a system with strong self-similarity. If we scale the graph up or down, what’s the scaling factor at which a scaled version of the graph will exactly overlap with the un-scaled graph?

For this population curve, the SSSF turns out to be about 4.669.

What’s the SSSF for the Mandelbrot set? 4.669.

In fact, the SSSF for essentially every bifurcating system of this kind that we see, and for their related fractals, is the same: approximately 4.669. There’s a basic structure which underlies all systems of this sort.

What sort is that? Basically, a dynamical system with a quadratic maximum. In other words, if you look at the recurrence relation for the dynamical system, it’s got a quadratic factor, and it’s got a maximum value. The equation for our population system can be written P(t+1) = R×P(t) − R×P(t)², which is obviously quadratic, and it always produces a value between zero and one, so it’s got a fixed maximum. Pick any chaotic dynamical system with a quadratic maximum, and you’ll find this constant in it: any dynamical system with those properties will have a period-doubling structure with a scaling factor of about 4.669.

That number, 4.669, is called the Feigenbaum constant, after Mitchell Feigenbaum, who first discovered it. Most people believe that it’s a transcendental number, but no one is sure! We’re not entirely sure where the number comes from, which makes it difficult to prove whether or not it’s really transcendental!

But it’s damned useful. By knowing that a system’s bifurcations are spaced at a rate determined by Feigenbaum’s constant, we can work out exactly when that system will become chaotic. We don’t need to continue to observe it as R scales up to see when the system will go chaotic – we can predict exactly when it will happen just by virtue of the structure of the system. Feigenbaum’s constant predictably tells us when a system will become unpredictable.
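
Here’s a rough illustration of both points. The bifurcation values below are approximate, well-known figures for the logistic map (taken from the standard literature, not computed here); the ratios of successive gaps close in on 4.669, and the constant lets you extrapolate where the doubling cascade piles up and chaos begins.

    # Approximate growth rates where the logistic map's period doubles:
    # period 2 at r[0], period 4 at r[1], period 8 at r[2], period 16 at r[3].
    r = [3.0, 3.449490, 3.544090, 3.564407]

    for i in range(1, len(r) - 1):
        # Ratio of successive bifurcation intervals -> Feigenbaum's constant
        print((r[i] - r[i - 1]) / (r[i + 1] - r[i]))   # 4.75..., then 4.65...

    # Use the constant to predict the onset of chaos (sum of a geometric series):
    delta = 4.669
    print(r[-1] + (r[-1] - r[-2]) / (delta - 1))       # ~3.5699, the chaotic threshold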

Strange Attractors and the Structure of Chaos

[Image: the Lorenz attractor, generated with the Sage program linked at the end of this post]

Sorry for the slowness of the blog; I fell behind in writing my book, which is on a rather strict schedule, and until I got close to catching up, I didn’t have time to do the research necessary to write the next chaos article. (And no one has sent me any particularly interesting bad math, so I haven’t had anything to use for a quick rip.)

Anyway… Where we left off last was talking about attractors. The natural question is, why do we really care about attractors when we’re talking about chaos? That’s a question which has two different answers.

First, attractors provide an interesting way of looking at chaos. If you look at a chaotic system with an attractor, it gives you a way of understanding the chaos. If you start with a point in the attractor basin of the system, and then plot it over time, you’ll get a trace that shows you the shape of the attractor – and by doing that, you get a nice view of the structure of the system.

Second, chaotic attractors are strange. In fact, that’s their name: strange attractors. A strange attractor is an attractor whose structure has a fractal dimension, and most chaotic systems have fractal-dimension attractors.

Let’s go back to the first answer, to look at it in a bit more depth. Why do we want to look in the basin in order to find the structure of the chaotic system?

If you pick a point in the attractor itself, there’s no guarantee of what it’s going to do. It might jump around inside the attractor randomly; it might be a fixed point which just sits in one place and never moves. But there’s no straightforward way of figuring out what the attractor looks like starting from a point inside of it. To return to (and strain horribly) the metaphor I used in the last post, the attractor is the part of the black hole past the event horizon: nothing inside of it can tell you anything about what it looks like from the outside. What happens inside of a black hole? Are the things that were dragged into it moving around relative to one another at all? We can’t really tell from the outside.

But the basin is a different matter. If you start at a point in the attractor basin, you’ve got something that’s basically orbital. You know that every path starting from a point in the basin will, over time, get arbitrarily close to the attractor. It will circle and cycle around. It’s never going to escape from that area around the attractor – it’s doomed to approach it. So if you start at a point in the basin around a strange attractor, you’ll get a path that tells you something about the attractor.

Attractors can also vividly demonstrate something else about chaotic systems: they’re not necessarily chaotic everywhere. Lots of systems have the potential for chaos: that is, they’ve got sub-regions of their phase-space where they behave chaotically, but they also have regions where they don’t. Gravitational dynamics is a pretty good example of that: there are plenty of N-body systems that are pretty much stable. We can computationally roll back the history of the major bodies in our solar system for hundreds of millions of years, and still have extremely accurate descriptions of where things were. But there are regions of the phase space of an N-body system where it’s chaotic. And those regions are the attractors and attractor basins of strange attractors in the phase space.

A beautiful example of this is the first well-studied strange attractor. The guy who invented chaos theory as we know it was named Edward Lorenz. He was a meteorologist who was studying weather using computational fluid flow. He’d implemented a simulation, and by accident – while trying to reproduce a computation, he re-entered the starting conditions with less precision – he got dramatically different results. Puzzling out why, he laid the foundations of chaos theory. In the course of studying it, he took the particular equations that he was using in the original simulation, and tried to simplify them to get the simplest system he could that still showed the non-linear behavior.

The result is one of the most well-known images of modern math: the Lorenz attractor. It’s sort of a bent figure-eight. Its dimensionality isn’t (to my knowledge) known precisely – but it’s a hair above two (the best estimate I could find in a quick search was in the 2.08 range). It’s not a particularly complex system – but it’s fascinating. If you look at the paths in the Lorenz attractor, you’ll see that things follow an orbital path – but there’s no good way to tell when two paths that are very close together will suddenly diverge, and one will pass on the far inside of the attractor basin, and the other will fly to the outer edge. You can’t watch a simulation for long without seeing that happen.

While searching for information about this kind of stuff, I came across a wonderful demo, which relates to something else that I promised to write about. There’s a fantastic open-source mathematical software system called Sage. Sage is sort of like Mathematica, but open-source and based on Python. It’s a wonderful system, which I really will write about at some point. On the Sage blog, they posted a simple Sage program for drawing the Lorenz attractor. Follow that link, and you can see the code, and experiment with different parameters. It’s a wonderful way to get a real sense of it. The image at the top of this post was generated by that Sage program, with tweaked parameters.
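
The Sage program at that link is the real thing; if you just want a taste in plain Python, here’s a minimal sketch of the same idea (assuming numpy and matplotlib are available; the parameters are Lorenz’s classic choices, and the simple step-by-step integration is crude, but good enough to see the shape and the sudden divergence of nearby paths):

    import numpy as np
    import matplotlib.pyplot as plt

    def lorenz_path(x, y, z, steps=10000, dt=0.01,
                    sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Crude Euler integration of the Lorenz system.
        xs, ys, zs = [], [], []
        for _ in range(steps):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
            xs.append(x); ys.append(y); zs.append(z)
        return np.array(xs), np.array(ys), np.array(zs)

    # Two starting points that differ by one part in a million:
    a = lorenz_path(1.0, 1.0, 1.0)
    b = lorenz_path(1.000001, 1.0, 1.0)

    plt.plot(a[0], a[2], lw=0.3)          # the classic butterfly, seen edge-on (x vs z)
    plt.show()

    # The tiny difference between the starting points eventually blows up:
    print(np.abs(a[0] - b[0])[::2000])    # separation in x, sampled over time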

Fast Arithmetic and Fractals

As pointed out by a commenter, there are some really surprising places where fractal patterns can
appear. For example, there was a recent post on the Wolfram Mathematica blog by the engineer who writes
the unlimited precision integer arithmetic code.

Continue reading

Fractal Applications: Logistic Maps and Chaos

In the course of the series of posts I’ve been writing on fractals, several people have either emailed or commented, saying something along the lines of “Yeah, that fractal stuff is cool – but what is it good for? Does it do anything other than make pretty pictures?”

That’s a very good question. So today, I’m going to show you an example of a real fractal that
has meaningful applications as a model of real phenomena. It’s called the logistic map.

Continue reading

Fractal Mountains

When you mention fractals, one of the things that immediately comes to mind for most people
is fractal landscapes. We’ve all seen amazing images of mountain ranges, planets, lakes, and things
of that sort that were generated by fractals.

[Image: a fractally-generated mountain, found via a Google image search]

Seeing a fractal image of a mountain, like the one in this image (which I found
here via a Google image search for “fractal mountain”), I expected to find that
it was based on an extremely complicated fractal. But the amazing thing about fractals is how
complexity emerges from simplicity. The basic process for generating a fractal mountain – and many other elements of fractal landscapes – is astonishingly simple.
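
I won’t spoil the details of the post here, but to give a flavor of just how simple this kind of process can be, here’s a sketch of one classic trick (not necessarily the exact construction the post describes): one-dimensional midpoint displacement. Take a line segment, push its midpoint up or down by a random amount, and recurse on both halves with a smaller displacement; the result is a convincingly jagged ridge line.

    import random

    def midpoint_displace(left, right, roughness=0.5, depth=8):
        # left and right are (x, height) endpoints of a segment.
        # Returns a list of (x, height) points forming a jagged ridge line.
        if depth == 0:
            return [left, right]
        mid_x = (left[0] + right[0]) / 2
        mid_h = (left[1] + right[1]) / 2 + random.uniform(-1, 1) * roughness
        first = midpoint_displace(left, (mid_x, mid_h), roughness / 2, depth - 1)
        second = midpoint_displace((mid_x, mid_h), right, roughness / 2, depth - 1)
        return first + second[1:]          # don't duplicate the shared midpoint

    ridge = midpoint_displace((0.0, 0.0), (1.0, 0.0))
    print(len(ridge))                      # 2**8 + 1 = 257 points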

Continue reading

The Julia Set Fractals

[Image: a Julia set fractal]

Aside from the Mandelbrot set, the most famous fractals are the Julia sets. You’ve almost definitely seen images of the Julias (like the ones scattered through this post), but what you might not have realized is just how closely related the Julia sets are to the Mandelbrot set.
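
To give a concrete hint of that connection before you click through: both the Mandelbrot set and the Julia sets come from iterating z → z² + c. For a Julia set, you fix c and ask which starting points z stay bounded under the iteration; for the Mandelbrot set, you always start at z = 0 and ask which values of c stay bounded. A minimal sketch of that relationship (my own illustration, not code from the post):

    def escape_time(z0, c, max_iter=100):
        # Iterate z -> z*z + c, counting steps until |z| escapes past 2.
        z = z0
        for n in range(max_iter):
            if abs(z) > 2:
                return n
            z = z * z + c
        return max_iter

    # Julia set for a fixed c: vary the starting point z0.
    def in_julia(z0, c, max_iter=100):
        return escape_time(z0, c, max_iter) == max_iter

    # Mandelbrot set: the same iteration, but z0 is always 0 and c is what varies.
    def in_mandelbrot(c, max_iter=100):
        return escape_time(0, c, max_iter) == max_iter

    print(in_mandelbrot(complex(-1, 0)))              # True: -1 is in the Mandelbrot set
    print(in_julia(complex(0, 0), complex(-1, 0)))    # True: 0 stays bounded in that Julia set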

Continue reading

Fractal Dimension

[Image: a carpet fractal]

One of the most fundamental properties of fractals that we’ve mostly avoided so far is the idea of dimension. I mentioned that one of the basic properties of fractals is that their Hausdorff dimension is
larger than their simple topological dimension. But so far, I haven’t explained how to figure out the
Hausdorff dimension of a fractal.

When we’re talking about fractals, the notion of dimension is tricky. There are a variety of different
ways of defining the dimension of a fractal: there’s the Hausdorff dimension; the box-counting dimension; the correlation dimension; and a variety of others. I’m going to talk about the fractal dimension, which is
a simplification of the Hausdorff dimension. If you want to see the full technical definition of
the Hausdorff dimension, I wrote about it in one of my topology posts.
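
As a small taste of the flavor of the calculation (a sketch of my own, covering only the simplest case): for a figure that is exactly self-similar – built out of N copies of itself, each shrunk by a linear factor of s – the fractal dimension works out to log(N)/log(s).

    from math import log

    def similarity_dimension(copies, scale):
        # Dimension of a figure built from `copies` copies of itself,
        # each shrunk by a linear factor of `scale`.
        return log(copies) / log(scale)

    print(similarity_dimension(3, 2))   # Sierpinski triangle: ~1.585
    print(similarity_dimension(8, 3))   # Sierpinski carpet:   ~1.893
    print(similarity_dimension(4, 2))   # an ordinary filled square: exactly 2.0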

Continue reading

The Sierpinski Gasket by Affine Transformation

[Image: a sheared Sierpinski gasket]

So, in my last post, I promised to explain how the chaos game produces the Sierpinski triangle as its attractor. It’s actually pretty simple. First, though, we’ll introduce the idea of an affine transformation. Affine transformations aren’t strictly necessary for understanding the chaos game, but understanding the chaos game in terms of affine transformations makes it easier to understand other attractors.
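
Without giving away the affine-transformation story, the chaos game itself is only a few lines: pick a random corner of a triangle, move halfway toward it from wherever you currently are, plot the point, and repeat. The points pile up on the Sierpinski triangle. A minimal sketch (mine, not the post’s code):

    import random

    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]   # vertices of a triangle
    x, y = 0.3, 0.3                                     # any starting point works

    points = []
    for i in range(20000):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2               # jump halfway toward a random corner
        if i > 20:                                      # skip the first few transient points
            points.append((x, y))

    # `points` now traces out the Sierpinski triangle; plot it with whatever you like.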

Continue reading

Iterated Function Systems and Attractors

Most of the fractals that I’ve written about so far – including all of the L-system fractals – are
examples of something called iterated function systems. Speaking informally, an iterated function
system is one where you have a transformation function which you apply repeatedly. Most iterated
function systems work in a contracting mode, where the function is repeatedly applied to smaller
and smaller pieces of the original set.

There’s another very interesting way of describing these fractals, and I find it very surprising that it’s equivalent. It’s the idea of an attractor. An attractor is a shape that a dynamical system will always evolve towards, no matter what its starting point. Even if you perturb the dynamical system, up to a point, the perturbation will fade away over time, and the system will continue to evolve toward the same target.
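
To make the “contracting” description concrete, here’s a small sketch (my own, using the Sierpinski maps as the simplest example) of the deterministic version of an iterated function system: start with any set of points at all, apply every one of the contraction maps to every point, and repeat. After a few rounds the starting set no longer matters – what’s left is the attractor, in this case the Sierpinski gasket again.

    # Three affine contractions; each shrinks the plane by half toward one
    # corner of a triangle. Together they form an iterated function system.
    maps = [
        lambda p: (p[0] / 2,        p[1] / 2),
        lambda p: (p[0] / 2 + 0.5,  p[1] / 2),
        lambda p: (p[0] / 2 + 0.25, p[1] / 2 + 0.433),
    ]

    # Start from an arbitrary set of points -- a coarse grid, say.
    points = [(x / 4, y / 4) for x in range(5) for y in range(5)]

    for _ in range(6):
        # Apply every map to every point; the set contracts onto the attractor.
        points = [f(p) for p in points for f in maps]

    print(len(points))   # 25 * 3**6 = 18225 points approximating the gasket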

Continue reading