
One mathematical topic that I find fascinating, but which I’ve never had a chance to study formally, is chaos. I’ve been rather unmotivated about blog-writing lately due to so many demands on my time, which has left me feeling somewhat guilty towards those of you who follow this blog. So I decided to take this topic about which I know very little, and use the blog as an excuse to learn something about it. That gives you something interesting to read, and it gives me something to motivate me to write.
I’ll start off with a non-mathematical reason for why it interests me. Chaos is a very simple idea with very complex implications. The simplicity of the concept makes it incredibly ripe for idiots like Michael Crichton, who believe they understand it even though they don’t have a clue. There’s an astonishingly huge quantity of totally bogus rubbish out there, written by clueless folks who sincerely believe that their stuff is based on chaos theory – because they’ve heard the basic idea, and believed that they understood it. It’s a wonderful example of my old mantra: the worst math is no math. If you take a simple mathematical concept, render it into informal non-mathematical words, and then try to reason from the informal version, what you get is garbage.
So, speaking mathematically, what is chaos?
To paraphrase something my book-editor recently mentioned: in math, the vast majority of everything is bad. Most functions are not continuous. Most continuous functions are not differentiable. Most numbers are irrational. Most irrational numbers are indescribable. And most complex systems are completely unstable.
Modern math students have, to a great degree, internalized this basic idea. We pretty much expect badness, so the implications of badness don’t surprise us. We’ve grown up mathematically knowing that there are many, many interesting things that we would really like to be able to do, but that can’t be done. That realization, that there are things that we can’t even hope to do, is a huge change from the historical attitudes of mathematicians and scientists – and it’s a very recent one. A hundred years ago, people believed that we could work out simple, predictable, closed-form solutions to all interesting mathematical problems. They expected that it might be very difficult to find a solution; and they expected that it might take a very long time, but they believed that it was always possible.
For one example that had a major influence on the study of chaos: John von Neumann believed that he could build a nearly perfect weather-prediction computer: it was just a matter of collecting enough data and figuring out the right equations. In fact, he expected to be able to do more than that: he expected to be able to control the weather. He thought that the weather was likely to be a system with unstable points, and that by introducing small changes at those unstable points, weather managers would be able to push the weather in a desired direction.
Of course, von Neumann knew that you could never gather enough data to predict the weather perfectly. But most systems that people had studied could be approximated. If you could get measurements that were correct to within, say, 0.1%, you could use them to make predictions that would be extremely close to correct – off by no more than some multiple of the precision of the underlying measurements. Small measurement errors would mean small errors in the prediction. So using reasonably precise, but far from exact or complete, measurements, you could make very accurate predictions.
For example, people studied the motion of the planets. Using the kinds of measurements that we can make with fairly crude instruments, people have been able to predict solar eclipses with great precision for hundreds of years. With better precision, measuring only the positions of the sun, the moon, and the eight planets, we can predict all of the eclipses and alignments for the next thousand years – even though the computations leave out the effects of everything else in the universe.
Mathematicians largely assumed that most real systems would be similar: once you worked out what was involved, what equations described the system you wanted to study, you could predict that system with arbitrary precision, provided you could collect enough data.
Unfortunately, reality isn’t anywhere near that simple.
Our universe is effectively finite – so many of the places where things break seem like they shouldn’t affect us. There are no irrational numbers in real experience. Nothing that we can observe has a property whose value is an indescribable number. But even simple things break down.
Many complex systems have the property that they’re easy to describe, but small changes to them have large effects. That’s the basic idea of chaos theory: in a complex dynamical system, a minute change to the initial conditions will produce huge, dramatic changes after a relatively short period of time.
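To see just how fast that divergence happens, here’s a minimal sketch in Python. It uses the logistic map xn+1 = 4xn(1−xn) – a standard toy example of a chaotic system, not something I’ve introduced above – and follows two starting points that differ by one part in ten billion:

```python
# A minimal sketch of sensitivity to initial conditions, using the
# logistic map x -> 4x(1 - x), a standard toy chaotic system.
def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10  # two starting points differing by 10^-10
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")
```

The difference roughly doubles with every step: by around step 40, the two trajectories have nothing to do with each other, even though they started out identical to ten decimal places.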
For example, we compute weather predictions with the Navier-Stokes equations. N-S is a relatively simple set of equations that describes how fluids move and interact. We don’t have a closed-form solution to the N-S equations – meaning that given a particular point in a system, we can’t compute the way fluid will flow around it without also computing how fluid flows around the points close to it, and we can’t compute those without computing the points around them, and so on.
So when we make weather predictions, we create a massive grid of points, and use the N-S equations to predict flows at every single point. Then we use the aggregate of that to make weather predictions. Using this, short-term predictions are generally pretty good towards the center of the grid.
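To give a feel for what that looks like in code, here’s a toy sketch. It’s nothing like a real weather model – I’m using a trivial one-dimensional flow equation instead of actual Navier-Stokes – but it shows the essential structure: the next value at each grid point depends on the current values at its neighbors, so you have to advance the whole grid together.

```python
# Toy grid simulation (1D advection, *not* real Navier-Stokes): each
# cell's next value depends on its neighbor, so no point can be
# advanced in isolation - the whole grid steps forward together.
def step(u, c=0.5):
    n = len(u)
    # upwind update: new u[i] = u[i] - c * (u[i] - u[i-1]), wrapping
    # around at the edges
    return [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]

grid = [0.0] * 100
grid[50] = 1.0  # a single "gust" in the middle of the grid
for _ in range(30):
    grid = step(grid)
print(max(range(100), key=lambda i: grid[i]))  # the gust has drifted right
```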
But if you try to extend the predictions out in time, what you find is that they become unstable. Make a tiny, tiny change – alter the flow at one point by 1% – and suddenly, the prediction for the weather a week later is dramatically different. A difference of one percent in one out of a million cells can, over the space of a month, be responsible for the difference between a beautiful calm summer day and a cat-5 hurricane.
That basic property is called sensitivity to initial conditions, and it’s a big part of what defines chaos – but it’s only a part. And that’s where the crackpots go wrong. Just understanding the sensitivity and divergence isn’t enough to define chaos – but to people like Crichton, understanding that piece is understanding the whole thing.
To really get the full picture, you need to dive into topology. Chaotic systems have a property called topological mixing. Topological mixing is an idea that isn’t too complex informally, but which can take a lot of effort to describe and explain formally. The basic idea is that no matter where you start in a space, given enough time, you can wind up anywhere at all.
To state that notion formally, you need to look at the phase space of the system. You can define a dynamical system using a topological space called its phase space. Speaking very loosely, the phase space P of a system is a topological space of points where each point p∈P corresponds to one possible state of the system, and the topological neighborhoods of p are the system states reachable along some path from p.
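For a concrete (and decidedly non-chaotic) example of a phase space: the state of an ideal pendulum is completely captured by two numbers, its angle and its angular velocity, so its phase space is the plane of (angle, velocity) pairs, and running the system traces a path through that plane. A quick sketch:

```python
import math

# The phase space of an ideal pendulum: each state is a point
# (theta, omega), and the evolving system traces a path through
# the plane of such points.
theta, omega = 1.0, 0.0   # initial angle (radians) and angular velocity
dt, g_over_l = 0.01, 9.8  # time step and gravity/length ratio
path = []
for _ in range(500):
    omega -= g_over_l * math.sin(theta) * dt
    theta += omega * dt
    path.append((theta, omega))
print(path[0], path[250], path[-1])  # three points along the phase-space path
```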
So – imagine that you have a neighborhood of points G in a phase space. From each point in G, you traverse all possible forward paths through the phase space. At any given moment t, G will have evolved to form a new neighborhood of points, Gt. For the phase space to be chaotic, it has to have the property that for any arbitrary pair of neighborhoods G and H in the space, no matter how small they are, no matter how far apart they are, there will be a time t such that Gt and Ht overlap.
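Here’s a rough numerical illustration of that, again using the logistic map as a stand-in for a real phase space: take two tiny intervals that start far apart, push every point in each of them forward, and watch for a time at which their images overlap.

```python
# Rough illustration of topological mixing with the logistic map:
# the images of two tiny, far-apart neighborhoods G and H stretch
# out until they overlap.
def logistic(x):
    return 4.0 * x * (1.0 - x)

G = [0.10 + i * 1e-6 for i in range(100)]  # tiny interval near 0.1
H = [0.60 + i * 1e-6 for i in range(100)]  # tiny interval near 0.6
for t in range(1, 50):
    G = [logistic(x) for x in G]
    H = [logistic(x) for x in H]
    # since the map is continuous, each image is an interval; check
    # whether the two evolved intervals overlap
    if max(min(G), min(H)) < min(max(G), max(H)):
        print(f"the images of G and H overlap at t = {t}")
        break
```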
But sensitivity to initial conditions and topological mixing together still aren’t sufficient to define chaos.
Chaotic systems must also have a property called dense periodic orbits. What that means is that chaotic systems are approximately cyclic in a particular way. The phase space has the property that if the system passes through a point p in some neighborhood N, then after some finite period of time, the system will pass through another point in N. That’s not to say that it will repeat exactly: if it did, you’d have a simply periodic system, which would not be chaotic! But it will come arbitrarily close to repeating. And that almost-repetition has to have a specific property: the set of all of those almost-cyclic paths must come arbitrarily close to every point of the phase space. (We say, again speaking very loosely, that the set of almost-cycles is dense in the phase space.)
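Strictly speaking, the formal definition is about genuinely periodic orbits being dense in the phase space; the near-returns I just described are the observable symptom of that density. Here’s a rough sketch of that symptom, once more with the logistic map: follow a trajectory from a starting point, and track how close it comes back to where it began.

```python
# Rough illustration of near-recurrence with the logistic map: the
# trajectory from x0 keeps coming back arbitrarily close to x0.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x0 = 0.123
x, best_dist, best_step = x0, 1.0, 0
for t in range(1, 100_000):
    x = logistic(x)
    if abs(x - x0) < best_dist:
        best_dist, best_step = abs(x - x0), t
print(f"closest return: distance {best_dist:.2e} at step {best_step}")
```

Run it longer and the closest return keeps shrinking: the trajectory never exactly repeats, but it never stops almost repeating, either.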
That’s complicated stuff. Don’t worry if you don’t understand it yet. It’ll take a lot of posts to even get close to making it comprehensible. But that’s what chaos really is: a dynamical system with all three properties: sensitivity to initial conditions, topological mixing, and dense periodic orbits.
In subsequent posts, I’ll spend some more time talking about each of the three key properties, and showing you examples of interesting chaotic systems.