One of the bad arguments that I’ve frequently seen from creationists
is the argument that some biological system is *too good* to be a possible result of an evolutionary process. On its face, this seems like it’s not a mathematical argument. But it actually is, and math is key to showing what the argument really is, and what’s wrong with it.
Let’s look at an example of this argument. Last week, on the “ID the Future” blog, Cornelius Hunter posted an article titled [“Design Science”][hunter] using exactly this argument:
>Darwinists say that evolution created the many biological marvels such as the
>bat’s biosonar and the fly’s vision system. They say that an undirected and
>haphazard process just happened to outwit the best scientists and engineers in
>the world–time and time again. According to Darwinists biological structures
>with unknown function are useless and an obvious sign of an inept, undirected
>process. But biological structures with awesome designs are, on the other hand,
>also supposed to be the product of undirected biological change, such as
>
>Claiming that the bat’s biosonar or the fly’s vision system is the result of
>evolution is more speculation than explanation. In fact, that is putting it
>nicely. How silly it would be to unequivocally claim that the most advanced,
>complex designs must have arisen as a consequence of undirected biological change.
>A sequence of mutations just happened to produce the most accurate sonar system
>known to humanity.
>
>This is so silly, in fact, that Darwinists usually refrain from saying this. It
>is their theory, but more often than not Darwinists use the more
>plausible-sounding Lamarckian language. The designs, they say, arose as a
>consequence of selection pressure. This explanation violates their own
>principle that biological change must not be initiated or crafted in response
>to need. According to evolution, biological change must be undirected.
>Selection must play a role only after the biological change occurs, not before.
>
>Nor is gradualism a remedy to the problem. Construction of biosonar and
>advanced image processing, one undirected mutation at a time, is no better than
>all at once. In both cases the undirected biological change must hit upon the
>same phenomenal design. Gradualism, however, has the added burden that there
>must exist a very long sequence of finely graded useful intermediates, leading
>to the final design. We know of no such sequence, but we must believe it
>exists. All very amazing for such a lousy process.
There are two problems with this kind of argument. One is really mathematical,
and one is psychological. Let me get the psych one out of the way first. In our
observations of the universe, so far, we are the smartest living beings that we
know about. We tend to be very impressed by our own intelligence, and the things
that we have accomplished as a result. A lot of people, such as our friend Corny
in the quoted article, believe that this means that intelligence can find the
best solution to every problem, and that since we are the most intelligent, that
means that *we* can/will find the best solution to every problem. If something
in nature exceeds our ability to imagine/design a solution, then it
*necessarily* must have been created by something *more* intelligent than us.

That’s a very inflated view of intelligence – and in particular, an incredibly inflated and egotistical view of humanity. It’s basically an assertion that *human understanding is a fundamental limit of natural systems*. If we can’t understand it, it can’t exist. If we didn’t think it up first, then it couldn’t have just happened naturally. I find it ridiculous to claim that somehow nothing in nature, no natural process, can ever produce a better result than a human brain. However, this is ultimately a subjective argument; Cornelius and people like him believe that we belong on some kind of pedestal, and people like me don’t.
The mathematical argument, however, is not subjective. Evolution can be viewed, mathematically, as an *optimization problem*. An optimization problem is, essentially, a search; we have some problem that has multiple solutions, and we want to find *the best* solution by some metric.
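To put that in the usual textbook terms: an optimization problem gives you a set S of candidate solutions and a scoring function f that assigns each candidate a number, and it asks for some candidate x* in S whose score f(x*) is at least as good as f(x) for every other x in S. In evolutionary terms, S is the space of possible genotypes, and f is reproductive fitness in a particular environment; so "the best" never means best in some absolute sense, only best according to that metric, over that space, at that time.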
Optimization is a major topic in applied mathematics, and there are quite literally thousands of books published on it. There are numerous techniques that can be used to solve optimization problems. One of them is the evolutionary approach. Evolutionary approaches work in cases where the fitness landscape is basically smooth and continuous – that is, landscapes without gaps or breaks.
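To make the evolutionary approach concrete, here's a minimal sketch of the basic mutate-evaluate-select loop in Python. Everything here is made up for illustration (the toy fitness function, the population size, the mutation scale); real evolutionary computation packages are far more sophisticated, but the core loop looks like this:

```python
import random

def evolve(fitness, start, pop_size=50, mutation_scale=0.1, generations=200):
    """Minimize `fitness` by the basic mutate-evaluate-select loop."""
    population = [list(start) for _ in range(pop_size)]
    for _ in range(generations):
        # Mutation: each child is a copy of its parent plus small random noise.
        children = [[x + random.gauss(0, mutation_scale) for x in parent]
                    for parent in population]
        # Selection: keep the fittest half of the combined parents and children.
        population = sorted(population + children, key=fitness)[:pop_size]
    return population[0]

# Toy problem: find the lowest point of a simple quadratic bowl.
best = evolve(lambda v: sum(x * x for x in v), start=[5.0, -3.0])
print(best)   # ends up very close to [0.0, 0.0]
```

Nothing in that loop knows anything about the problem being solved; all of the "design" comes from repeatedly keeping whatever happens to score better.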
If the landscape is suitable, evolutionary approaches produce amazing results. For example, in a [timetabling optimization problem][timetable], evolutionary approaches have produced tremendous results – the evolutionary systems produce *better* timetables than the ones hand-optimized by human experts. We simply *do not* perform as well at solving this type of problem as the evolutionary computation approach does.
Evolutionary approaches to optimization problems do tend to suffer from one problem: they’ll often settle for a local minimum instead of the global one. For example, in the curve below, if the optimization problem is getting the red ball to the lowest position, an evolutionary approach will generally have no problem getting the ball to point 2, which is a good local minimum; but it will have trouble getting over the hump after point 2 to find the global minimum at point 3.
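Here's a toy illustration of that failure mode, on a made-up one-dimensional landscape with two valleys. A pure "only accept improvements" search that starts on the shallow side settles into the shallow valley and can never cross the hump to reach the deeper one:

```python
import random

# A made-up one-dimensional landscape with two valleys: a shallow one near
# x = 2.1 and a deeper (global) one near x = -2.3, separated by a hump at x = 0.25.
def landscape(x):
    return 0.1 * x**4 - x**2 + 0.5 * x

def greedy_descent(f, x, step=0.05, iterations=2000):
    """Propose small random moves, but only ever accept improvements."""
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if f(candidate) < f(x):
            x = candidate
    return x

print(greedy_descent(landscape, x=3.0))
# Settles near the shallow valley at x ≈ 2.1; it can never climb the hump
# near x ≈ 0.25 to reach the deeper valley at x ≈ -2.3.
```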
There *is* a technique that solves that problem most of the time, called *simulated annealing*. Simulated annealing introduces *noise* into the computation whenever the search seems to be settling into a minimum; the idea is that if you find yourself coming to a minimum point, you add a bunch of extra energy (mutation) to see if there’s a local maximum you can push yourself over. If we look at nature, we find that [something very like simulated annealing occurs during natural evolution!][rate] When the environment suddenly changes in a way that is hostile, the mutation rate of bacteria *increases* beyond what would normally be healthy; this gives them the ability to “find” solutions to the environmental change that put them in danger. The usual, relatively slow rate at which mutations occur isn’t enough to produce the change they need; so they switch to a faster, more error-prone copying mechanism in reproduction, which pushes them over the local peaks that would otherwise block paths to a change that could allow them to survive.
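Here's what the annealing trick looks like in code, on the same made-up two-valley landscape as the previous sketch. The temperature schedule and parameters are arbitrary choices for illustration; the only essential idea is that worse candidates are sometimes accepted, with a probability that shrinks as the "temperature" cools:

```python
import math
import random

# The same made-up two-valley landscape as in the previous sketch.
def landscape(x):
    return 0.1 * x**4 - x**2 + 0.5 * x

def anneal(f, x, step=0.2, start_temp=2.0, cooling=0.9995, iterations=20000):
    """Like greedy descent, but occasionally accept a *worse* candidate.

    Uphill moves are accepted with probability exp(-delta / temperature); early
    on, when the temperature is high, that happens often enough to climb out of
    shallow valleys, and as the temperature decays the search behaves more and
    more like pure descent.
    """
    temp = start_temp
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

print(anneal(landscape, x=3.0))
# With a slow enough cooling schedule, this typically escapes the shallow
# valley and ends up near the deeper one at x ≈ -2.3, which greedy descent
# never finds from the same starting point.
```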
What’s interesting about simulated annealing is that the technique was developed in the early 1980s by a brilliant researcher named Scott Kirkpatrick and his colleagues. It turns out that the work of these brilliant people was *preceded* by the random process of nature. Bacteria have been doing this since long before Kirkpatrick thought of the idea. The brilliance of a human was beaten by the randomness of a natural process.
So, why would a natural evolutionary process *not* find optimal results? Well, there are a few arguments, all of which end up falling down:
* There’s the Dembski “no free lunch” nonsense, which I’ve [discussed before on this blog][nfl]. Basically, it claims that evolution can’t succeed, because no search strategy does better than random guessing when averaged over *all possible* fitness landscapes. Yippee. Evolution doesn’t have to work on all possible landscapes, just the one it actually lives on. And if NFL really ruled out evolutionary search, evolutionary computation wouldn’t work either. It does.
* There’s the Berlinski argument that there’s one path to success, and it’s pretty much impossible for a random process like evolution to find it. No good either; evolution doesn’t follow a single path, nor does it search for a fixed solution. Bats happened to evolve sonar, which makes them very well suited to the nocturnal insect-hunting role they evolved into. Mice and rats, meanwhile, evolved in a *different* direction. From any point, an evolutionary process can follow *many* different paths, and if *any* of them survive, then the evolutionary process has found *a* solution. The ancestors of bats were small mammals that followed *one path*; today’s house-mice are small mammals whose ancestors followed a different one. Both are equally valid “solutions” in evolutionary terms.
* There’s the local minimum problem: evolutionary processes are good at finding “downhill” paths to a minimum (optimal) result, but they don’t climb hills well. So, the argument goes, they can only find solutions that are pretty much all downhill – a clear downward path from a starting point to an optimum with no bumps along the way. If you think about curves on a 2D graph, most curves have bumps. But again, it’s not a real problem: evolution isn’t two dimensional. The “fitness landscape” has many dimensions – not one or two, but *hundreds*. The more dimensions you add, the *less* likely you are to wind up at a point where no motion in any direction, in any dimension, is downhill. You may wind up following a *much* longer path to get to a minimum, and you may wind up going in unexpected directions and arriving at a surprising minimum; but it’s very rare to find a point in a high-dimensional landscape that’s a true local minimum in every dimension (there’s a crude numerical sketch of this after the list). And even in the rare cases where a population does get trapped at one, approaches like simulated annealing provide an escape route.
* There’s the gradualism problem. If you can only make tiny changes, how can you wind up with something so sophisticated? See the previous bullet: in a complex landscape, paths can be quite surprising. Keep changing, and you can find yourself in amazing places – exactly as we see in nature. Mix in the fact that the biological evolutionary process follows many paths at once, and it becomes pretty much *inevitable* that, sometimes, that random process will wind up with an amazing result.
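On the dimensionality point from the local-minimum bullet above, here's a deliberately crude back-of-the-envelope sketch. Suppose that at a random non-optimal point, each direction independently has some probability p of offering no downhill move; you're only truly stuck if *every* direction is blocked. The independence assumption and the value p = 0.5 are made up for illustration (real fitness landscapes have correlated dimensions), but the exponential collapse is the point:

```python
# Crude illustration: chance that *every* direction is uphill at a random point,
# assuming each of d independent directions is blocked with probability p.
p = 0.5   # assumed chance that a single direction offers no downhill move
for d in (1, 2, 10, 100):
    print(f"{d:>3} dimensions: chance of being stuck in every direction ~ {p ** d:.3g}")
```

With one dimension you're stuck half the time; with a hundred dimensions, the chance under this toy model is about 10⁻³⁰. Real landscapes aren't this tidy, but the qualitative lesson holds: more dimensions mean more ways out.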
Ultimately, the problem with Corny’s argument is illustrated by his very last sentence: “All very amazing for such a lousy process.” Evolution *isn’t* a lousy process; it’s a brilliant one. It’s an example of a natural optimization process that does an excellent job of traversing the biological fitness landscape. In fact, it’s hard to imagine a *better* optimization process for a constantly changing fitness landscape.