Debunking "A Mathematician's View of Evolution"

This weekend, I came across Granville Sewell’s article “[A Mathematician’s View of Evolution][sewell]”. My goodness, but what a wretched piece of dreck! I thought I’d take a moment to point out just how bad it is. This article, as described by the [Discovery Institute][diref], purportedly shows:
>… that Michael Behe’s arguments against neo-Darwinism from irreducible
>complexity are supported by mathematics and the quantitative sciences,
>especially when applied to the problem of the origin of new genetic
>information.
I have, in the past, commented that the *worst* math is no math. This article contains *no math*. It’s supposedly arguing that mathematics supports the idea of irreducible complexity. Only there’s no math – none!
The article claims that there are *two* arguments from mathematics that disprove evolution. Both are cheap rehashes of old creationist canards, so I won’t go into much depth. But it’s particularly appalling to see someone using trash like this with the claim that it’s a valid *mathematical* argument.
The First Argument: You can’t make big changes by adding up small ones.
————————————————————————-
Sewell:
>The cornerstone of Darwinism is the idea that major (complex) improvements can
>be built up through many minor improvements; that the new organs and new
>systems of organs which gave rise to new orders, classes and phyla developed
>gradually, through many very minor improvements.
This is only the first sentence of the argument, but it’s a good summary of what follows. There are, of course, several problems with this, but the biggest one – especially coming from a mathematician – is that it asserts that it’s impossible to cover a large finite distance by taking a large number of small finite steps. This is allegedly a mathematician making the argument, yet that’s exactly what he’s claiming: that no large change can occur as the result of a large number of small changes.
It also incorrectly assumes a *directionality* to evolution. This is one of the requirements of Behe’s idea: that evolution can only *add*. So if we see a complex system, the only way it could have been produced by an evolutionary process is by *adding* parts to an earlier system. That’s obviously not true – and it’s not even consistent with the other creationist arguments that he uses. And again, as a mathematician, he *should* be able to see the problem with that quite easily. In mathematical terms, this is the assertion that evolution is monotonically increasing in complexity over time. But neither he nor Behe makes any argument for *why* evolution would be monotonically increasing with respect to complexity.
So there’s the first basic claim, and my summary of what’s wrong with it. How does he support this claim?
Quite badly:
>Behe’s book is primarily a challenge to this cornerstone of Darwinism at the
>microscopic level. Although we may not be familiar with the complex biochemical
>systems discussed in this book, I believe mathematicians are well qualified to
>appreciate the general ideas involved. And although an analogy is only an
>analogy, perhaps the best way to understand Behe’s argument is by comparing the
>development of the genetic code of life with the development of a computer
>program. Suppose an engineer attempts to design a structural analysis computer
>program, writing it in a machine language that is totally unknown to him. He
>simply types out random characters at his keyboard, and periodically runs tests
>on the program to recognize and select out chance improvements when they occur.
>The improvements are permanently incorporated into the program while the other
>changes are discarded. If our engineer continues this process of random changes
>and testing for a long enough time, could he eventually develop a sophisticated
>structural analysis program? (Of course, when intelligent humans decide what
>constitutes an “improvement”, this is really artificial selection, so the
>analogy is far too generous.)
Same old nonsense. This is a *bad* analogy. A *very* bad analogy.
First of all, in evolution, *we start with a self-reproducing system*. We don’t start with completely non-functional noise. Second of all, evolution *does not have a specific goal*. The only “goal” is continued reproduction.
But most importantly for an argument coming from a supposed mathematician: he deliberately discards what is arguably *the* most important property of evolution. In computer science terms (since he’s using a programming argument, it seems reasonable to use a programming-based response): parallelism.
In evolution, you don’t try *one* change, test it to see if it’s good and keep it if it is, then go on and try another change. In evolution, you have millions of individuals *all reproducing at the same time*. You’re trying *millions* of paths at the same time.
In real evolutionary algorithms, we start with some kind of working program. We then copy it – *many* times, as many as we can given the computational resources available to us. While copying, we randomly “mutate” each of the copies. Then we run them all, and see which do best. The best ones, we keep for the next generation.
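To make that loop concrete, here’s a minimal sketch of the kind of thing an evolutionary algorithm does. This isn’t Breve’s code or anyone else’s; the genome, fitness function, population size, and mutation rate are all toy placeholders made up for illustration:

```python
import random

POP_SIZE = 1000       # how many mutated copies we evaluate, in parallel, each generation
MUTATION_RATE = 0.01  # chance of flipping any given bit while copying

def fitness(genome):
    # Placeholder "test": just count 1-bits. A real system would actually run
    # the candidate (e.g. flex a simulated creature) and measure how well it does.
    return sum(genome)

def mutate(genome):
    # Copy the parent, flipping each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def evolve(generations=50, genome_length=100):
    # Start from some minimal working candidate (here, trivially, all zeros).
    parent = [0] * genome_length
    for _ in range(generations):
        # Make many mutated copies at once -- this is the parallelism.
        offspring = [mutate(parent) for _ in range(POP_SIZE)]
        # Keep the best performer as the parent of the next generation.
        parent = max(offspring, key=fitness)
    return parent

if __name__ == "__main__":
    best = evolve()
    print("best fitness after 50 generations:", fitness(best))
```

The thing to notice is that every generation evaluates a whole population of variants at once; Sewell’s lone engineer gets to evaluate exactly one change at a time.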
What kind of impact does parallelism have?
As an experiment, I grabbed a rather nifty piece of software for my Mac called [Breve Creatures][breve]. Breve is an evolutionary algorithms toolkit; BC uses it to build moving machines. The way it works is that it produces a set of random assemblies of blocks, interconnected by hinges, based on an internal “genetic code”. For each one, it flexes the hinges. Each generation, it picks the assembly that managed to move the farthest, and mutates it 20 times. Then it tries each of those. And so on. So Breve gives us just 20 paths per generation.
Often, in the first generation, you see virtually no motion. The assemblies are just random noise; one or two just happen to wiggle in a way that makes them fall over, which gives them a tiny bit of distance.
Typically within 20 generations, you get something that moves well; within 50, you get something that looks amazingly close to the way that some living creature moves. Just playing with this a little bit, I’ve watched it evolve things that move like inchworms, like snakes, like tripeds (two legs in front, one pusher leg in back), and quadrupeds (moving like a running dog).
In 20 generations of Breve, we’ve basically picked a path to successful motion from a tree of 20^20 possible paths. Each generation, we’ve pruned off the ones that weren’t likely to lead us to faster motion, and focused on the subtrees that showed potential in the tests.
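Just to put rough numbers on that pruning (back-of-the-envelope arithmetic, not anything measured from Breve itself):

```python
generations = 20
branching = 20   # mutants tried per generation

paths_in_full_tree = branching ** generations        # every possible 20-step path
evaluations_actually_run = branching * generations   # what actually gets tested

print(f"{paths_in_full_tree:.3e}")   # ~1.049e+26 possible paths
print(evaluations_actually_run)      # 400 evaluations
```

Four hundred evaluations effectively stand in for a tree of roughly 10^26 paths, because each selection step throws away every subtree rooted at a loser.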
Breve isn’t a perfect analogy for biological evolution either; but it’s better than Sewell’s. There are two important things to take from this Breve example:
1. Evolution doesn’t have a specific goal. In the case of Breve Creatures, we didn’t say “I want to evolve something that walks like a dog.” The selection criterion was nothing more than “the ones that moved the farthest”. Different runs of BC create very different results; similarly, if you were to take a given species, and put two isolated populations of it in similar conditions, you’d likely see them evolve in *different* ways.
2. Evolution is a process that is massively parallel. If you want to model it as a search, it’s a massively parallel search that prunes the search space as it goes. Each selection step doesn’t just select one “outcome”; it prunes off huge areas of the search space.
So comparing the process to *one* guy randomly typing, trying *each* change to see how it works – it’s a totally ridiculous analogy. It deliberately omits the property of the process that allows it to work.
The Second Argument: Thermodynamics
————————————-
>The other point is very simple, but also seems to be appreciated only by more
>mathematically-oriented people. It is that to attribute the development of life
>on Earth to natural selection is to assign to it–and to it alone, of all known
>natural “forces”–the ability to violate the second law of thermodynamics and
>to cause order to arise from disorder.
Yes, it’s the old argument from thermodynamics.
I want to focus on one aspect of this which I think has been very under-discussed in refutations of the thermodynamic argument. Mostly, we tend to focus on the closed-system aspect: that is, the second law of thermodynamics says that in a *closed system*, entropy increases monotonically. Since the earth is manifestly *not* a closed system, there’s nothing about seeing a local decrease in entropy that would be a problem from a thermodynamic point of view.
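To state that in symbols (this is just the standard textbook form of the point, not anything from Sewell’s article): for the earth treated as an open subsystem exchanging energy with its surroundings, the second law only constrains the total,

```latex
\Delta S_{\mathrm{total}} = \Delta S_{\mathrm{earth}} + \Delta S_{\mathrm{surroundings}} \geq 0 ,
```

so a negative ΔS_earth is perfectly permissible as long as the surroundings pick up the difference.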
But there’s another very important point. Entropy is *not* chaos. A system that seems ordered is *not* necessarily lower entropy than a system that seems chaotic. With respect to thermodynamics, the real question about biology is: do the chemical processes of life result in a net increase in entropy? The answer? *I don’t know*. But neither does Sewell, nor do the other creationists who make this argument. Certainly, watching the action of life – the quantity of energy we consume, and the quantity of waste we produce – it doesn’t seem *at all* obvious that, overall, life represents a net decrease in entropy. Sewell and the folks like him who make the argument from thermodynamics *never even try* to actually *do the math* and figure out whether the overall effect of any biological system represents a net increase or decrease in entropy.
For someone purportedly writing a *mathematician’s* critique of evolution, to argue about thermodynamic entropy *without bothering to do the math necessary to make the argument* is a disgrace.
[sewell]: http://www.math.utep.edu/Faculty/sewell/articles/mathint.html
[diref]: http://www.discovery.org/scripts/viewDB/index.php?command=view&id=3640&program=CSC%20-%20Scientific%20Research%20and%20Scholarship%20-%20Science
[breve]: http://www.spiderland.org/breve/

75 thoughts on “Debunking ‘A Mathematician’s View of Evolution’”

  1. Corkscrew

    2LoT: “the amount of work you can get out of a closed system always decreases”. NOT “systems tend towards disorder”. Why can they not comprehend this simple concept?

  2. Left_Wing_Fox

    With respect to thermodynamics, the real question about biology is: do the chemical processes of life result in a net increase in entropy?
    That seems self-evident: Yes, life results in a net increase in entropy. Plants use the energy from an external source to create a more organized form of matter (carbohydrates from water and carbon dioxide), but that process is not 100% efficient, therefore there is a net increase in entropy. Similarly, animals that eat the carbohydrates do not use 100% of the available energy from the carbohydrates or other organized molecules created through these processes.
    If life ran counter to the second law of thermodynamics, we wouldn’t see those ecological pyramids, with massive amounts of plant life, a smaller mass of herbivores, then increasingly smaller amounts of apex predators; all of which require the external energy source (prey, the sun, geothermal vents… etc.)
    The thing is, this generation of more ordered material isn’t unique to life. Basic high school chemistry shows a wide range of materials that can be oxidized (generating electrons/energy) or reduced through the addition of energy. None of this defies entropy, as none of these reactions are 100% efficient. But many of these occur as abiotic reactions; heat and pressure generate highly ordered structures like diamond, oil, and coal. As we view other planets in our galaxy, we see a range of hydrocarbon reactions from naturally occurring methane, happening completely independently from life.
    The use of the 2LoT argument pretty much shows that math was about the only “science” course this guy took in high school. All this was pretty much covered in the chemistry, biology and physics courses I took back then.

  3. Ahcuah

    . . . violate the second law of thermodynamics and to cause order to arise from disorder.

    The question I always want to ask these morons is, “So, does God then build every snowflake one by one?”
    And, if snow really did violate the 2nd law, don’t you think we might have noticed it by now?

  4. Mark C. Chu-Carroll

    LWF:
    That’s pretty much what I think: looking at the quantity of energy living things consume, and the quantity of waste they produce, it’s hard for me to imagine that the net effect of biological processes isn’t a net increase in entropy. Without doing the calculations, which I lack the knowledge to be able to do properly, I’m not quite willing to come out and say that I’m sure that that’s the case. But the blind assertion of the creationists that life *clearly* decreases entropy, without ever showing the calculations to support it, seems remarkably foolish.

  5. Johnny Vector

    But the blind assertion of the creationists that life *clearly* decreases entropy, without ever showing the calculations to support it, seems remarkably foolish.

    But it’s worse than that! They don’t claim that life violates the second law, only that development of complex life does. A seed grows into an “obviously” more complex tree, no problem. But a lobe-finned fish evolving into an early quadruped? Ooh, that’s creation of “information” from nothing, must be the work of God!

  6. Xanthir

    Corkscrew:
    There are many ways to talk about the 2LoT. Entropy is one of them. Sort of like how Godel’s Incompleteness Theorem and the Halting Problem are simply two ways of saying the same thing.
    Seriously, though, this sort of dreck barely even deserves a response. Blegh. And this man has the gall to call himself a mathematician.

  7. Mark C. Chu-Carroll

    Xanthir:

    Seriously, though, this sort of dreck barely even deserves a response. Blegh. And this man has the gall to call himself a mathematician.

    Normally, I wouldn’t waste time debunking an article like this that’s just a pathetic rehash of the same old stupid arguments. But the fact that this is prominently featured as a *mathematician’s* critique is what ticked me off enough to write a post about it. Anyone who’d write crap like that has *no* right to call himself a mathematician.

  8. Blake Stacey

    Xanthir wrote:

    Sort of like how Godel’s Incompleteness Theorem and the Halting Problem are simply two ways of saying the same thing.

    Bearing in mind, of course, that different approaches are useful in different contexts, and that one of these mathematically equivalent statements might be easier to explain to an uninitiated audience than the other.

  9. PaulC

    You can’t make big changes by adding up small ones.

    I believe this is a reference to Peano’s lost axiom, from which one derives the surprising fact that all numbers are itsy bitsy little things.

  10. MikeB

    Thanks for the parallel programs analogy. I had never seen it expressed like that and a little light went on in my head.
    Thanks also for the Breve link. Fascinating.
    As long as you provide insight like this in your critiques, I really don’t mind how horrible the critiqued piece is.

  11. PaulC

    With respect to thermodynamics, the real question about biology is: do the chemical processes of life result in a net increase in entropy? The answer? I don’t know.

    I think it’s clear that they do result in increased entropy. In fact, low entropy systems are not very conducive to life.
    If you took the atoms of a living system and reorganized them to reduce entropy as much as possible, you’d wind up with crystalline matter as close to absolute zero as you could get it. If you warmed that same matter to the surface temperature of the earth, you’d immediately increase the entropy and probably make it more conducive to life.
    For a mundane example, consider two systems: one is a sterile, dry terrarium with crystalline mineral nutrients arranged in a regular pattern, a separate partition with water, and a small capsule containing soil bacteria, plant seeds, and maybe even some insect eggs. The second is what happens when you break open the capsule and partition, and add some sunlight. If the materials are chosen well, the result will be a tiny ecosystem that is at the same time more interesting and more entropic (more below) than the starting point.
    Note that the hypothetical starting point is far from minimal entropy, but it is also not a great example of what a person would think of as “organized complexity” either. It’s mostly inorganic chemicals. In this case (unlike the early earth) you need some living material to jumpstart the system to work within a reasonable time frame. However, the terrarium can be arbitrarily large with respect to the seed material, so its contribution to the initial entropy can be as small as you want. What we mean by “self-organizing complexity” is often the transformation from a lower entropy state to a higher (but not maximal) entropy state.
    My claim that the living system has higher entropy could be shown by counting microstates. While the atoms in both the sterile terrarium and the living one can be organized in many possible ways (e.g. the molecules of liquid water are constantly changing microstate), you still get many more possibilities when you begin to combine carbon, oxygen, nitrogen, etc. into organic molecules and these molecules change orientation and conformation.
    (I thought about an extended example of counting microstates by subdividing the space into nanometer cubes and taking a census of elements in each; without going into great detail, it is not hard to show that the living system has many more possible microstates.)
    Another thing about both states is that neither looks like maximum entropy either. If you started with a chamber full of heated atoms in a gaseous state, then it’s true that the 2nd law of thermodynamics would tell you that unless you had some outside source of energy, you’d be stuck with that high entropy state. (This is a simple consequence of reversibility. Any system in which it is possible to go from more to fewer microstates cannot have reversible laws.) It will not organize itself into anything other than heated gas. But this looks nothing like the initial conditions on earth.

  12. Jane Shevtsov

    You wrote, “Entropy is not chaos. A system that seems ordered is not necessarily lower entropy than a system that seems chaotic.” What do you mean? Is there a physical example of a system that looks more ordered in a higher entropy state?
    And yes, biochemical reactions do, on the whole, increase entropy. Have you read Into the Cool?

  13. Joe Zeitler

    The fundamental problem with using entropy to argue for intelligent design is that entropy is a state property – the change in entropy when going from one state to another is a function of the states themselves, not of the path taken between them. Intelligent design and natural selection describe paths between states, and entropy has nothing to do with that.

  14. FhnuZoag

    Is there a physical example of a system that looks more ordered in a higher entropy state?
    Sure. The heat death of the universe, where entropy approaches maximum, is characterised by things reaching a state of maximum uniformity, where all the disordered jumble of galaxies disappear.
    To put it simply: everything increases entropy. Every reaction that happens, taken overall, increases entropy. When your computer switches on and information is displayed, entropy is increased. When your industrial robot constructs a line of cars, entropy has increased.
    The fact that biological reactions, like all reactions, increase entropy, is irrelevant here.

  15. PaulC

    Jane Shevtsov:

    Is there a physical example of a system that looks more ordered in a higher entropy state?

    I think that “ordered” might be the wrong word in this case. There are two issues.
    One is the fact that “looks more” to a human observer does not mean “is more” in some precise mathematical sense. A tomato plant “looks” simpler to a human being than a nuclear power plant, but the functional behavior of the latter is far easier to characterize.
    The other is that a mere increase in “order” is not what we find interesting about life and other complex systems. E.g., a deck of cards sorted by suit and rank is more orderly, but far less interesting or complex than a game of bridge, even one carried out automatically by computer players.
    The most interesting kinds of structures seem to lie in an intermediate range of entropy. I don’t have a clear idea of how you’d begin to quantify what makes an “emergent structure” more interesting than whatever it emerged from, but it is definitely not tied in some obvious way to entropy.

  16. Mark C. Chu-Carroll

    I received a question by email from a non-native English speaker who’s too shy to post here herself. I’ve got her permission to post her question and answer it here. I’ve corrected the English of the question a bit; I hope I got the intended meaning of the question right. (If I didn’t, you can email me and tell me what you meant, and I’ll correct this comment.)
    Her question was:

    Perhaps I don’t understand, but I got confused by your
    first argument.
    Why would “working” with one copy for a long period of time be different from “working” with many copies at the same time? (or how it is decided which of “changes” or “copies” to go to next generations? If it is by “Breve Creatures” software itself, does it matter that the software does already exist in the first place.)

    The reason that the parallelism makes such a difference has nothing to do with Breve; Breve just provides a convenient example.
    What happens in real evolution is that there are many individuals, reproducing *at the same time*. Each of those reproducing individuals is basically traversing one possible path through the searchable landscape.
    Now, here’s the key thing: only some of the individuals will survive. The reason that that is such a key fact is because *it prunes the search space*.
    Let’s say that the first generation, we give each individual a letter. Second generation, each individual is named by the letter of its parent, and a new letter for itself.
    So, generation one is: A, B, C, D, E, F.
    Generation two, if we consider all possibilities: AA, AB, AC, AD, AE, AF, BA, BB, BC, BD, BE, BF, … 36 possibilities in all.
    Generation three, all possibilities, 216 individuals.
    So, if we’re doing stuff the good old sequential way – try one change, see if it works, try the next change, see if it works, etc. – covering the equivalent of three generations will take a total of 258 tests.
    The evolutionary version prunes paths… So the first generation, try all six. Pick the two best; discard the other four. Now try six variations on each of the two best: 12 tests. Pick the two best of those, try six variations on each: another 12 tests. At the end of the third generation, we’ve done *30* tests.
    In just three generations, we’ve cut the amount of testing by a factor of more than eight!
    So: for this little example, the evolutionary process runs each generation’s tests in parallel, so in the time it would take a single individual making a sequence of random changes to do *three* tests, the evolutionary process has done 30 – an entire generation of tests per step, for each of three generations. *And* it pruned the search space by a huge factor. Even if we didn’t prune in the fourth generation, we’d wind up with 72 possibilities; without the pruning from the first three generations, we’d have to look at 1296 possibilities. The parallelism and pruning of evolution over generations is effectively a huge optimization of the search. The end result is that you *don’t* wind up with *the* guaranteed optimal solution that you could get with an exhaustive manual search. But you *do* wind up with *a* damn good solution.
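    For anyone who wants to check the arithmetic, here’s a quick sketch of the counting above; the branching factor of six and the keep-the-best-two rule are just the toy parameters of this example, not a model of any real population:

```python
def exhaustive_tests(branching=6, generations=3):
    # Test every node of the tree: 6 + 36 + 216 = 258 tests over three generations.
    return sum(branching ** g for g in range(1, generations + 1))

def pruned_tests(branching=6, keep=2, generations=3):
    # Generation 1 tests all 6; after that, only the `keep` survivors each get
    # `branching` new variations: 6 + 12 + 12 = 30 tests.
    total = branching
    for _ in range(generations - 1):
        total += keep * branching
    return total

print(exhaustive_tests())                   # 258
print(pruned_tests())                       # 30
print(exhaustive_tests() / pruned_tests())  # 8.6
```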

  17. PaulC

    FhnuZoag:

    Sure. The heat death of the universe, where entropy approaches maximum, is characterised by things reaching a state of maximum uniformity, where all the disordered jumble of galaxies disappear.

    I think this is one of the counterintuitive things about entropy. Whether we think of something as ordered has everything to do with how we choose to lump microstates together. You don’t need to invoke anything as exotic as the heat death of the universe. An ideal gas can seem “ordered” in that it follows predictable laws, and is treated as a uniform substance. But it’s actually very far from uniform in that there are endless combinations of positions and velocities of individual molecules that would look to us like the same ideal gas at the same temperature.
    Or, to use a really mundane example, consider mixing paints. Suppose you have a can of red paint into which you pour some yellow paint. If you start mixing slowly, you’ll get some appealing swirly patterns, which seem more complex than what you started with. But if you keep mixing, you’ll eventually get a uniform orange color. Some might say that this final state is “more ordered” than the intermediate swirly one. However, in a thermodynamic sense, it is higher entropy.

  18. FhnuZoag

    To clarify a bit…
    The creationist 2nd law argument goes like this:
    1. Evolution increases order.
    2. Order reduces entropy.
    3. The second law of thermodynamics says that entropy can only increase.
    4. Therefore, evolution is impossible.
    They then focus on step 3. How can one argue against the second law? In truth, there are errors in that step, but a more important problem can be seen – what’s the point of step 3 at all? Why involve entropy, and stretch out the argument by adding additional assertions and requirements?
    So, we can collapse the argument down:
    1. Evolution increases order.
    2. Order can never increase.
    3. Therefore, evolution is impossible.
    And now, we must ask, what is the significance of the concept of order here, if it can never increase? Does the concept of ‘order’ actually have any rigorously defined properties, other than the fact that it can never increase?
    The thing is, the idea of ‘order’ here is only used to disprove evolution. It isn’t a concept in science that has any meaningful purpose, but merely a descriptor that is tailor made to fit evolution and nothing else. Order-increasing-process is something that has been defined here not to exist. The entire weight of the argument rests on proving that evolution is something defined specifically not to exist. And of course, the use of ‘order’ here is rather useless because we have nothing to compare it with.
    Which brings us back to step one, because we haven’t actually proven that evolution is something that is defined not to exist. The ‘argument’ isn’t an argument at all, but just a way of drawing out the creationist core assertion.
    Instead of having invisible pink unicorns, we just use invisible pink unicorns riding motorcycles instead. Of course motorcycles exist. But that isn’t the point.

  19. trrll

    The chemical processes of life must result in a net increase in entropy; otherwise, they would not happen. The “entropy is bad” notion is perhaps one of the most profound misconceptions of creationists. They tend to think of entropy as a purely destructive force, when in fact entropy is the very essence of life. Entropy is the only thing that gives any directionality to chemistry. All biological or chemical change is driven by the tendency of entropy to increase (Perhaps all change–the only thing that I am not sure about is cosmological change such as the expansion of the universe). In the absence of a change in entropy, you have equilibrium–a constant, unchanging state, also known as death.

  20. Jane Shevtsov

    PaulC,
    I don’t know if there’s a rigorous definition of “order” (other than “low entropy”) and, if so, how well that corresponds to what we intuitively see as ordered, but uniformity certainly doesn’t fit the bill. Structure and pattern do.

  21. Torbjörn Larsson

    This is as good and useful work as when you ripped Albert Voie’s unfortunately published paper apart ( http://scientopia.org/blogs/goodmath/2006/07/peer-reviewed-bad-id-math ). I’m quite sure that creationists won’t be as eager to get these erroneous publications revoked as they were on Haeckel’s oversimplified sketches in biology textbooks ( http://en.wikipedia.org/wiki/Embryo_drawings ).
    As Johnny remarks, creationists overlook the observation that life during organism development and repair also seems to violate 2LOT, since they also make the two unfounded claims that the organism’s information content is constant and contained solely in the genes, contradicting all we know of gene mechanisms and evo-devo. Thus they can split out species development by special appeal.
    It seems Sewell uses the false dichotomy and erroneous information-constancy reasoning for computers too. He imports computers containing preformed information to the moon, conveniently forgetting that computers also need to increase entropy to work.
    John Baez describes this:
    “We want to keep our macrostates from getting hopelessly smeared out. It’s a bit like herding a bunch of sheep that are drifting apart, getting them back into a tightly packed flock. Unfortunately, Liouville’s theorem says you can’t really “squeeze down” a flock of states! Volume in phase space is conserved….
    So, the trick is to squeeze our flock of states in some directions while letting them spread out in other, irrelevant directions.
    The relevant directions say whether some bit in memory is a zero or one – or more generally, anything that affects our computation. The irrelevant ones say how the molecules in our computer are wiggling
    around… or the molecules of air *around* the computer – or anything that doesn’t affect our computation.
    So, for our computer to keep acting digital, it should pump out *heat*!”
    In other words, a computer needs to increase disorder to be able to erase and rewrite memory. Baez goes on to give Landauer’s expression for the heat created. ( http://groups.google.dm/group/sci.math/browse_thread/thread/198281a63d9f3b7f/db5a595d819eacb9?hl=en ) As Scott Aaronson remarks: “he gives a lucid explanation of how, while it’s generally impossible to keep information from leaking out of a computer, it is possible to arrange things so that the information that does leak is irrelevant to the computation.” ( http://www.scottaaronson.com/blog/2006/07/two-john-related-announcements.html )
    Landauer’s expression is tied in with “why Maxwell’s demon can’t get you something for nothing”. ( http://www.lepp.cornell.edu/spr/2006-07/msg0074867.html ) I would like to see a creationist account for information in computing – they would be fighting demons!

  22. Torbjörn Larsson

    Examples of structures and patterns that exist between randomness and uniformity are biological systems, networks and chaos.
    “More generally, any system of elements arranged at random (e.g. gas molecules) or in a completely regular or homogeneous way (molecules in a crystal lattice) is not complex. By contrast, the arrangement and interactions of neurons in a brain or of molecules in a cell is obviously extremely complex (see Fig.).” “Thus, a system that appears highly complex or random might turn out to be considerably simpler once the organizing principles are understood. Low-dimensional chaotic systems, for example, might appear random, yet their behavior can be fully determined by as few as three equations.” ( http://www.striz.org/docs/tononi-complexity.pdf )
    That paper goes on to suggest mutual information between subsets as a complexity measure for neural networks. It maxes out between completely random and completely regular systems. “complexity provides a measure for the amount of information that is integrated within a neural system”.

  23. Sounder

    correct me if I’m wrong, but doesn’t the 2nd law say that entropy increases in CLOSED systems? And isn’t evolution an open system? Pardon the ignorant question.

  24. Fred

    If you took the atoms of a living system and reorganized them to reduce entropy as much as possible, you’d wind up with crystalline matter as close to absolute zero as you could get it. My steam tables (Thermodynamic Properties of Steam, 1937) say the entropy of 250 degree F steam is 1.6998 Btu/lbm-degree Rankine and at 500 degrees F it is 1.4325 Btu/lbm-degree Rankine. What am I doing wrong? With steam it seems that heating it up reduces entropy.

  25. Mark C. Chu-Carroll

    Sounder:
    The 2nd law says that entropy always increases; it’s true that in an open system, entropy can decrease locally (inside the open system) at the expense of a larger increase outside the open system.
    That’s the argument that’s usually used to refute the creationists’ 2nd law gibberish. Personally, I think that responding that way is conceding too much to the creationists: it’s admitting that somehow, life and evolution *do* represent a net *decrease* in entropy. I think that that’s a *very* strong claim; and it’s a claim that the creationists have utterly failed to demonstrate in any way other than waving their hands in the air.

  26. Xanthir

    Sounder: Yes, you are correct. A bit of a clarification: Evolution isn’t an open system, but the world is. Evolution’s a process.
    So, yeah, in a closed system entropy is guaranteed to increase over time. In an open system there are no guarantees, because an open system specifically allows for anything. If you just kick all the high-entropy stuff out of the system (which is allowed), then yeah, it looks like the system has decreased in entropy. That’s why you have humans arising as bastions of order. It’s because you don’t look at all our heat and crap, both of which are much more highly entropic than the stuff they started from.

  27. Sounder

    I don’t know, the fact that the 2nd law doesn’t even apply to evolution (in the way creationists wish it did) is refutation enough for me.

  28. PaulC

    Sounder:

    I don’t know, the fact that the 2nd law doesn’t even apply to evolution (in the way creationists wish it did) is refutation enough for me.

    If your only goal is to shut up a creationist as fast as possible, then you’re probably right. If the point is to begin to understand what, if anything, entropy has to do with structure and complexity, then the fact that the earth is an open system sidesteps some interesting questions. The conclusion I draw is that entropy has very little to do with the notion of “order” or “structure” as we understand it intuitively. It would still be pretty interesting to have quantitative measures of the latter, in my opinion, though ID/creationism is not a fruitful line of inquiry.

  29. PaulC

    Fred: I don’t know how to interpret steam tables. However, it is true that entropy is lowest at absolute zero. I’m not basing this on any complex calculation; as soon as molecules start moving around, you get a lot more microstates.
    Actually, this steam entropy calculator shows entropy increasing with higher temperatures:
    http://www.higgins.ucdavis.edu/webMathematica/MSP/Examples/SteamTable
    This one appears to have two columns labeled entropy. One, the “Sat. liquid” goes up with temperature. The other, “Sat. vapor” goes down.
    http://www.nuc.berkeley.edu/courses/classes/E-115/Reader/steam_table_A-1.pdf
    I don’t know what this means. Maybe somebody with more specialized knowledge can help.

  30. John Marley

    There really is nothing in the title that requires this article to have any actual math in it.
    Sewell is a mathematician, and this is his view on evolution.
    It isn’t his fault if anyone expects a mathematician’s opinion to be mathematically sound.
    As for 2LoT, “order” and “chaos,” as used by evolution deniers, are entirely subjective terms. 2LoT has nothing to do with “decreasing order.”
    Here’s a fun example:
    You’ll need a bar magnet, a piece of stiff paper (a 6×9 index card works well), and some iron filings.
    Lay the magnet on a table
    Cover it with the index card
    Sprinkle some iron filings on the card.
    Hold the card steady with one hand, flick it gently with the other.
    Tell me that isn’t “increasing order.”
    2LoT says that, left alone, the energy of a system will seek the most even distribution. That is why your coffee gets cold and your milk gets warm.
    I am not a scientist. I do not even have any sort of degree. If I can understand this, anyone can.

  31. Fred

    Hello Paul,
    You are absolutely correct. I have re-examined my steam tables and if you start with atmospheric vapor, as you compress it adiabatically the entropy goes down; at 3200 psi the vapor becomes indistinguishable from the liquid. At that point, if you cool the liquid, the entropy continues to decline to zero at 32 degrees F. You learn more on GMBM by accident than you do on most blogs on purpose.
    Fred

  32. dorkafork

    PaulC, I think I see what’s happening. If you increase the temperature, the entropy seems to increase… but I’m assuming you’re not changing the pressure. If you were to increase the temperature of steam with a pressure of 3 MPa from 300 K to 500 K, by heating it up you’re changing either the pressure or the quantity. So to keep the pressure constant, the quantity of steam would have to change, and while the entropy per kilogram is changing, the amount of kilograms is also changing, so the entropy of the system as a whole should work correctly.
    I am not a chemist, so if someone remembers Boyle’s Law better than me, feel free to point out if I’m screwing it up completely.

  33. Mark

    The entropy of a liquid is largely independent of pressure, so it will rise as temperature rises.
    The entropy of a gas, however, decreases with increasing pressure, so the increase in vapor pressure is enough to counteract the entropy increase caused by the increase in temperature.
    One thing this makes apparent is that the difference in entropy between the phases must decrease with increasing pressure. When the system is saturated, it is at maximum entropy, meaning an infinitesimal amount of water vapor evaporating or condensing will result in no net entropy change.
    Considering the evaporation of water: this removes heat from the system, causing a decrease in entropy. This must be balanced by an increase in the entropy of the state of the matter (more high-entropy steam, less low-entropy water). This can be written as Q/T = s_vap - s_liq.
    Q is the heat of vaporization, which decreases as T increases, causing the entropy difference to decrease and eventually reach zero at the critical temperature and pressure.

  34. Rasmus Persson

    The “We’re not dealing with an isolated system…” reply (already mentioned a couple of times in the thread), to the 2LOT objection, isn’t valid. By the First Law, the Universe is an isolated system. Thus, Life appearing in the Universe, must mean an increase in entropy.

  35. Shygetz

    Thus, Life appearing in the Universe, must mean an increase in entropy.

    Yes, in the universe. The creationists who make this objection always fail to include the entropic changes in our sun, much less the rest of the universe, and always confine their argument to local, open systems.

  36. Mark C. Chu-Carroll

    Rasmus:
    You’re wrong.
    Remember that you can have a *local* decrease in entropy at the expense of a larger *increase* in entropy elsewhere. So it *would* be possible for something like life on earth to be a net *decreaser* of entropy, because it’s driven by energy from the sun, which is a *huge* producer of entropy; if the entropy produced by the sun generating the amount of power that reaches the earth outweighs the reduction of entropy by life on earth, then thermodynamics would permit life to be a net decreaser of entropy.
    I don’t believe that that’s the case: to me, it looks like life is an entropy producer. But it’s not *impossible* for it to be a reducer. Of course, if someone like the creationists want to argue that it *is* an entropy reducer, then they need to really make the case for that, which I’ve never seen anyone even seriously try to do.

  37. Mark C. Chu-Carroll

    I received the following comment via email, because the poster was having trouble getting the system to post it.
    I was just discussing the frustrations of ID modeling and math issues used as a rationale against evolutionary theory this past week with a geochemistry colleague of mine who studies the influences of microbial systems on the redox chemistry in caves, mine shafts, lake sediments, etc. (a geomicrobiologist). Thank you so much for dealing with the strengths and weaknesses of math, modeling, and coding without context and expertise in this post, and in your previous post with your “two cents” ( http://scienceblogs.com/goodmath/2006/07/mathematicians_and_evolution_m.php ).
    As a trained geologist, all of my colleague’s studies are framed within the context of a geologic time-scale: billions of years of chemical self-assembly and evolutionary development, not just ‘millions’ of years (or thousands, yet). He poses questions for what microorganisms are doing right now, but also extends these questions into the past environments with respect to what similar microorganisms could have done in a similar setting in the distant past. The time element is something that the mind is not really built to deal with directly (hence, all people find large time scales essentially impossible to conceive). Geologists are presented with a relatively unique tool: a learned and developed skill in spatio-temporal organization. They are trained to view time indirectly, by mapping the temporal construct onto a physical (yet abstracted) representation of time, via the rock record. Hence, we can look at an outcrop on the side of a road (perform dating experiments to verify age) and directly point to vertical layers of sediment separated by millions of years of deposition within a few meters of rock. In simply looking at these rock formations, one frequently finds physical remains (shells, burrows, imprints), organic fingerprints, and even genetic remnants of prior life (within limits). By selectively sampling these remains at a distinct layer, one has access to any number of morphological, geochemical, climatological, and radioactive isotopic signatures that help to estimate a setting of that time period. That’s a lot of information in just a few centimeters of rock that may represent 10,000 years!
    By convolving the expansive time element (collapsed into a few centimeters) with the amount of information that can be extracted regarding the environmental setting, and given the proxy of the immense density of bacterial life in the scoop of soil/sediment mentioned previously ( http://scienceblogs.com/goodmath/2006/07/mathematicians_and_evolution_m.php ), you begin to open up a window of understanding into the diverse parameters that can guide mathematical modeling and the parallel computation/replication involved with billions of organisms through time. From this abundance of time and biological diversity, one can also see that the evolutionary system is inherently information-rich and ripe for models of self organization.

  38. Bronze Dog

    (This comment most likely to apply only to Americans.)
    Entropy is like the national debt: The total amount can never decrease. But that doesn’t mean there can’t be some small local government experiencing a budget surplus.

  39. PaulC

    I want to add as a caveat to my previous comments that living processes often reduce some entropy locally. This is easiest to see by looking at entropy in terms of combinatorial states rather than thermodynamic examples. For instance, living things create very pure, simple chemical compounds from others scattered more or less randomly in the environment (recycling waste). Bees produce regular honeycombs. Squirrels find nuts scattered randomly around a tree and concentrate them close to one location. All of these processes require energy. With solar energy hitting the earth at something around a kilowatt/square meter, it is not paradoxical that they are able to do this. Note that the process of turning the stored nuts into little squirrels probably increases the local entropy. It’s really not very useful to try to characterize living processes in terms of entropy.
    I suspect that the overall entropy of the earth has varied only very slowly over its lifetime and that living things have affected it only indirectly. It’s not impossible to imagine a situation in which living things could affect overall entropy, but I think it would have to be an atmospheric effect – changing the extent to which solar energy is absorbed or reflected, or the extent to which heat is retained or radiated into space.
    This isn’t enough for me to guess whether or not the presence of living things has increased or decreased the total entropy of the earth relative to its entropy before life existed. Because the earth is not a closed system, the 2nd law of thermodynamics gives absolutely no guidance on what to expect.

  40. Dave S.

    A couple points, some no doubt already covered.
    1. The 2nd law of thermodynamics (SLOT) says that left to itself, energy will tend to spontaneously disperse and spread out. The SLOT holds for all systems, although it has different implications for different systems. The “tend” part is very important, and means not every system will have its energy spread out immediately. The process could take a while, perhaps a long while.
    2. The SLOT is about energy. The field of statistical mechanics (the molecular mechanistic explanation of the SLOT) makes statements about order on the molecular level (entropy). But our macroscopic mental concepts of what it means for something to be ‘ordered’ (like a neat vs untidy room or an ordered deck of cards) have little to do with order on the microscopic level and work only as a crude analogy.
    3. Life on Earth doesn’t violate the SLOT.
    The total entropy of the entire system (meaning the Earth, energy transferred to Earth from the Sun, and energy emitted from Earth) must increase – however entropy can certainly decrease locally and temporarily (but bear in mind “temporary” can be a good long time). Otherwise, how could ice form for instance?
    The Earth absorbs energy from the Sun at a temperature of about 6000 K (increasing the entropy on Earth). But the temperature of the Earth on average remains the same, and it emits radiation at a longer and less energetic wavelength at about 285 K into space. But since 6000 K is much larger than 285 K, the net result is still an increase of entropy.
    Even if life does decrease entropy, in order to violate SLOT, it must decrease it by an amount larger than the increase mentioned above. To violate SLOT under these conditions, biomass would have to undergo a transformation from high temperature gas to crystalline solid in a matter of weeks, according to one calculation I found.
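    A rough sketch of that bookkeeping, for the curious. The 6000 K and 285 K figures are the ones from the comment above; the figure for the total solar power absorbed by the Earth is a ballpark assumption added here for illustration, so treat the result as an order-of-magnitude estimate only:

```python
T_sun_radiation = 6000.0   # K, effective temperature of incoming sunlight (from the comment)
T_earth = 285.0            # K, temperature at which the Earth re-radiates (from the comment)
P_absorbed = 1.2e17        # W, rough ballpark for solar power absorbed by Earth (assumption)

# Each joule flowing through removes Q/T_hot of entropy from the radiation field
# and dumps Q/T_cold back out, for a net gain of (1/T_cold - 1/T_hot) per joule.
entropy_per_joule = 1.0 / T_earth - 1.0 / T_sun_radiation   # ~0.0033 J/K per joule
entropy_rate = P_absorbed * entropy_per_joule                # ~4e14 W/K

print(f"net entropy production: ~{entropy_rate:.1e} J/K per second")
```

    Any entropy decrease attributed to life would have to beat that enormous ongoing increase before the second law was in any trouble.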

  41. Rasmus Persson

    My previous comment was perhaps a little too laconic, as it engendered some misunderstanding (though the pronouncements themselves were correct). I’m not arguing as a Creationist, and I am well aware that the total entropy of the Universe might well increase, in spite of any local decrease.
    However, all else being equal, a priori we should posit that Life increases entropy based on the fact that it happened spontaneously, in a closed system (the universe), with everything else being essentially equal before and after the event (apart from unicellular organisms, of negligible mass, floating around in the oceans of the Earth, the rest of the universe is unaffected; I would not posit that the introduction of Life had extragalactic effects).
    Entropy being a state function, we see that if the only change to the universe before and after the formation of Life was essentially the introduction of Life itself, then it must be responsible for the necessary increase of the entropy of the universe.
    Is this not correct?

  42. Dave S.

    Rasmus –
    But life did not *poof* into existence from nothingness. The atoms making up the molecules in the first living cell, for instance, had until then been performing other functions (perhaps they were in a life-like but not quite living entity) with their own energetic relationships. You can’t just set the entropy change in the surroundings at zero by definition. That’s cooking the books. One problem is that Life is not a thing, but merely a descriptive term for a particular suite of relationships between molecules.

  43. Rasmus Persson

    Dave,
    I’m not setting the surrounding’s entropy change to zero, I am approximating it as nonsignificant during the time frame involved (from the cooling of the Earth, up to an arbitrary point of Life, preferably early on (say procaryotic), so as to make the time frame small) and their immense size relative to the system. Then I draw my harmless conclusion: We should expect, a priori, that the creation of life meant an entropy increase. It’s not a very bold statement.

  44. Rasmus Persson

    Erratum:
    This,
    and their immense size relative to the system,
    should be, e.g.: “in light of the surroundings’ immense size relative to the Earth”
    otherwise, it doesn’t make a whole lot of sense.

  45. Rasmus Persson

    What am I talking about? I just realized my grasp of thermodynamics is very poor. Misconception at the heart of my posts. I beg everyone’s pardon.

  46. Dave S.

    Rasmus –
    Possibly I’m not understanding your point, and if so, I apologize as that’s not my intent.
    But it seems to me you’re arguing as follows. Say Bill Gates gives me $50,000. Now that’s an insignificant amount to Bill and can be ignored. But it’s very significant to me. So in conclusion, we have a net increase in wealth, because I have gained and Bill has not lost.
    One needs to consider not just the organisms themselves, but the environment used to create them and their effects on the environment as they grow.

  47. Poison P'il

    What happens if we don’t have a working program (particularly a self-replicating one) to start with? This is a question that still nags me. For response purposes, I would classify myself as a slightly above-average mathematician (B.A. in math, Master’s in biostatistics, 17 years’ teaching background, and I actually think about probability and stats in my free time).
    Thanks

  48. Mark C. Chu-Carroll

    PP:
    If we’re talking about evolutionary systems, then we *have* to have a reproducing system. If not, then there’s no evolution: evolution without reproduction is meaningless.
    WRT the arguments of creationists, creationists frequently try to blur the line between abiogenesis and evolution. Abiogenesis is the process by which the first self-replicating molecule came into being. Evolution is what happened *after* it started replicating. How, or even if, abiogenesis happened is a different question.
    Computational models of evolution always start with *at least* something that reproduces; either by self-replication, or by automatic reproduction by the environment in which the program is run; and usually, we start with a minimal working program of some sort.

  49. Fred

    I have been reading my Thermo book (Thermodynamics by Lay) and entropy only has meaning in ordinary chemical systems. It is the method by which we can calculate how much we lose by converting from one form of chemical or mechanical energy to another (second law). Since we are essentially powered by the fusion reactions of the sun, the idea that we violate the second law in everyday life is no surprise.

  50. PaulC

    Fred, while thermodynamics is typically studied in classical systems, I don’t think there is anything there that cannot be extended to ones involving fusion or fission energy. The main difference is that you have some new energy sources that could be applied to reduce entropy in parts of your system, although in practice they usually produce a lot of waste heat that would increase it elsewhere.
    Actually, entropy is a very general concept that I think is easiest to understand in terms of discrete dynamic systems in which you can count the number of microstates and which change in discrete time steps (such as cellular automata). Entropy is just a measure of the number of states that a system can be in (and can vary depending on how you want to define them and group them).
    In this case, some of the arguments are pretty elementary. If a discrete system is reversible and deterministic, and you know that it was in one of k states at time t, then it must be in one of k states at time t+1. I.e., no change in entropy. If it could be in more than k states, then it would not be deterministic (one state would have multiple successors) and if it was in less than k possible states, then it would not be reversible (one state would have multiple predecessors).
    Classical physics turns out to be reversible, though this is quite counterintuitive (you can unshatter a vase if you can exactly reverse the velocity of all the particles involved). Thus, you cannot have a net decrease in entropy. Actually, I don’t think you can have a net increase if the system is deterministic, but we often treat such systems as non-deterministic (e.g. classical coin flips) even if they could be deterministic, and quantum events appear to be truly non-deterministic.
    To bring this back to territory I actually understand, Conway’s Game of Life is an example of an irreversible deterministic CA. In this case, it is indeed possible for the number of possible states at t+1 to be less than the number at t. E.g., there are patterns that eventually result in all dead cells. In general, high entropy patterns in Life almost always result in lower entropy successors. By contrast, CAs based on the Margolus neighborhood http://psoup.math.wisc.edu/mcell/rullex_marg.html can readily be defined to be deterministic and reversible. This makes them better discrete models of thermodynamic systems. And indeed if you were to fill a finite torus with a uniformly randomly chosen pattern and run such a rule on it, you would not see a reduction in entropy. I believe you can observe something like self-organization if you start with a low entropy starting state. There are certainly some interesting patterns (gliders and oscillators) that function in these rules.
    I believe that because we have an expanding universe, the potential size of the state space is actually increasing, so it is not a closed system in that sense (on shaky ground here; any comments?). Moreover, the initial state was not high entropy. All the energy used to be concentrated and as time progresses, it disperses. Thus, phenomena like galaxy clustering, planet formation, and ultimately evolution are entirely compatible with the 2nd law of thermodynamics even if extended to non-classical systems.
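    Here’s a tiny, purely illustrative sketch of that counting argument – not physics, just a deterministic map on an eight-element state space, once as a reversible permutation and once as a many-to-one (irreversible) map:

```python
def reversible_step(s):
    # A permutation of the states 0..7: every state has exactly one predecessor,
    # so the map is invertible and no information is lost.
    return (5 * s + 3) % 8

def irreversible_step(s):
    # Many-to-one: distinct states can share a successor (like Conway's Life).
    return s // 2

possible_now = {2, 3, 6, 7}  # "we know the system is in one of these k = 4 states"

print(len({reversible_step(s) for s in possible_now}))    # 4 -> state count preserved
print(len({irreversible_step(s) for s in possible_now}))  # 2 -> possibilities collapse
```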

  51. Pi Guy

    John Marley:
    The flicking of the card adds energy to the system so it isn’t closed in the adiabatic sense. The increase in order results from the increase in energy added by the flicker.
    Also, one of the problems with trying to state a physical law in words that are actually defined by mathematical equations is that they are often difficult to translate in a one-to-one relationship manner. That’s why there are so many literal versions with words like “order” in them. (And chaos is not the same as disorder.) The Second Law, in its most general form, is usually written as
    dS>=0.
    Mathematically, the change in entropy is greater than or equal to zero. In other words, entropy can’t decrease but it can remain constant – smearing out isn’t required, it just tends to happen more frequently. If the card remains unflicked – remains a closed system – forever, nothing in the 2nd Law implies that any evening out must occur at all. The coffee-cream system is, on the other hand, not a closed system. Heat energy is lost through the liquid/air interface as well as through the walls of the cup.
    A simpler literal translation of the First and Second Laws:
    FLoT: You lose.
    SLoT: Even if you don’t lose, you still can’t win.
    Ultimately, the problem with invoking the Second Law is that it is a mathematical statement and, if you choose to apply it, sometimes you have to do the math.

  52. Torbjörn Larsson

    Rasmus:
    You make several mistakes. I’m leaving out the biology for now.
    – When you discuss a system you must choose an appropriate boundary. Earth works well for studying entropy and life on Earth.
    – Why should the universe be closed? It is decidedly unclosed in the time direction. Lambda-CDM cosmology (the best choice after the latest WMAP observations) tells us so. And as PaulC comments, spacetime is expanding, which makes the system size larger. Lambda-CDM is absolutely flat after inflation, which makes an infinite spacetime the natural choice. Cosmologists usually choose to discuss the observable Hubble volume, however, and they therefore consider the universe open. (No boundary.)
    PaulC:
    Excellent comment, as so often. Regarding the universe you seem to be correct; see above. Cosmologists work with an open universe, AFAIK. And the holographic hypothesis of string theorists (or Planck or classical volume counting) makes the expanding universe an entropically (state-wise) larger system, AFAIK.
    But I think you got the cosmology wrong. Inflation explains the low-entropy, evenly distributed start (I think), but the initial galaxy seeding was blown-up quantum randomness combined with gravitational assembly. And as Steven remarked above, gravitation is counterintuitive, and galaxy and star gas assembly or (evaporating, so temporary) black holes are entirely compatible with entropy increase.
    Poison:
    I’m not an abiogenesis expert, but as Mark says it is likely all the first systems needed were chemical (re)production sources, geothermal vents say. Most of the light products were probably alike. A natural selection of sorts could have worked by environmental and production rate differences – robust chemistry likely lived longer, faster chemistry likely drowned out slower – but this isn’t very effective.
    But when some sources’ neighbourhoods started to contain systems that (at first badly) reproduced chemicals from other chemicals, with the help of clays say, it looks a lot more like the evolution we are used to. Membranes, cell metabolism (by cell fusion at first) and genetic material could all be later developments.
    Interesting ideas about parts of the bootstrap process of life are metabolism-first, cell metabolism by fusion, RNA worlds and quasispecies ( http://en.wikipedia.org/wiki/Origin_of_life , http://en.wikipedia.org/wiki/Hypercycle ). The latter allows for “mutational ‘clouds’ of closely related sequences” converging to the first independently reproducing evolutionary systems.

  53. Torbjörn Larsson

    Rasmus:
    Oops. I erased the sentence that said that we are discussing closure in space for classical thermodynamic systems, but that the infinite spacetime of Lambda-CDM flat space implies infinite space too.

  54. PaulC

    I just wanted to add one link that I found today. Norm Margolus has a presentation that covers, among other things, the contrast between Conway’s Game of Life and the reversible “critters” rule. http://people.csail.mit.edu/nhm/physical_worlds.pdf
    The “critters” CA is a very interesting one in that it can be used to demonstrate self-organization in a system that cannot reduce entropy (because it is reversible). You can watch critters using the MCell program ( http://www.mirekw.com/ca/index.html ). I’m sure there is other software that supports it, but that is all I know offhand.
    As a result, if you were to fill a toroidal critters CA with 0/1 cell states chosen uniformly at random and run the rule, you would not see any self-organization. You started with high entropy, and reversible rules cannot reduce the entropy.
    By contrast, you get very different results if you start with a lower entropy initial pattern. For instance, take a 200×200 torus of all 0s and place a 25×25 square of uniform random 0/1 cell states in the middle of the display. Now what you see are “gliders” (moving objects) traveling away from the center and sometimes colliding with interesting results (usually reflections of some kind). The amazing thing about critters is that you can run the rule in reverse and get back to the initial state (similar to unshattering a vase).
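    To make the “run the rule in reverse” claim concrete, here is a minimal sketch of an alternating-partition (Margolus-style) block CA. The block table below is just an arbitrary bijection on the 16 possible 2×2 block states, not Margolus’s actual critters table; the point is only that any bijective block rule applied to non-overlapping blocks can be stepped backwards to recover the starting pattern.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    FORWARD = rng.permutation(16)        # an arbitrary bijection on the 16 block states
    BACKWARD = np.argsort(FORWARD)       # its inverse permutation

    def step(grid, offset, table):
        """Apply the block rule to every 2x2 block of a toroidal grid.
        `offset` (0 or 1) selects which of the two alternating block partitions is used."""
        g = np.roll(grid, (-offset, -offset), axis=(0, 1))
        h, w = g.shape
        blocks = g.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
        # encode each 2x2 block as an integer 0..15, look it up in the table, decode again
        codes = (blocks.reshape(-1, 4) * np.array([8, 4, 2, 1])).sum(axis=1)
        bits = (table[codes][:, None] >> np.array([3, 2, 1, 0])) & 1
        out = bits.reshape(h // 2, w // 2, 2, 2).transpose(0, 2, 1, 3).reshape(h, w)
        return np.roll(out, (offset, offset), axis=(0, 1))

    # Low-entropy start: a 200x200 torus of zeros with a random 25x25 patch in the middle.
    grid = np.zeros((200, 200), dtype=np.uint8)
    grid[88:113, 88:113] = rng.integers(0, 2, (25, 25))
    start = grid.copy()

    for t in range(100):                 # run forward, alternating partitions
        grid = step(grid, t % 2, FORWARD)
    for t in reversed(range(100)):       # run backward with the inverse block table
        grid = step(grid, t % 2, BACKWARD)

    print(np.array_equal(grid, start))   # True: the dynamics can be exactly undone
    ```
    The backward loop reconstructs the starting patch exactly, which is the discrete analogue of unshattering the vase.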
    The take-away is that self-organization can proceed even in systems that cannot reduce entropy, at least provided you start with lower than maximum entropy. The state of being “organized” is not equivalent to being “low entropy”, and a system may appear more organized even as it conserves or increases entropy. This is not a problem for evolution or other well-established self-organizing phenomena, unless you assume the universe began in a state of maximum entropy.
    BTW, the business about gravitational clustering actually increasing non-uniformity and entropy simultaneously is quite interesting, but it’s a bit hard for me to wrap my brain around since I am more familiar with self-organization in discrete systems.

  55. Gaurav

    Mark,
    Could you do a post on why some (or all, if that is the case) models of evolution treat it as a search? Since there is no “goal” or “objective”, it seems a little weird to think about evolution as a search.

  56. Davis

    PaulC, thanks for your excellent comments on entropy. I never really understood it in my undergrad thermo class, and your explanation is exactly what my little mathematician brain needed.
    Now I think I can see deterministic and reversible behavior in terms of function-like behavior, which is much clearer to me (especially since I’m teaching my students about the Inverse Function Theorem right now).

  57. Torbjörn Larsson

    PaulC:
    Gravitation is truly different. For example, the theories of fundamental interactions we have today are still mere effective theories. The other interaction theories are renormalizable QFTs, but GR is nonrenormalizable and non-quantum. String theorists seem to expect that a theory of quantum gravity must contain the other interactions to work.
    John Baez’s site has texts on why, when a “gas cloud shrinks, its entropy goes down”, and where the entropy goes to save the 2LOT ( http://math.ucr.edu/home/baez/entropy.html ).
    He also mentions why the organisation is temporary:
    “you can show in the really long run, any isolated system consisting of sufficiently many point particles interacting gravitationally – even an apparently “gravitationally bound” system – will “boil off” as individual particles randomly happen to acquire enough kinetic energy to reach escape velocity”
    And of course “all black holes will eventually shrink away and disappear” due to Hawking evaporation.
    “But the overall picture seems to lean heavily towards a far future where everything consists of isolated stable particles: electrons, neutrinos, and protons (unless protons decay). If the scenario I’m describing is correct, the density of these particles will go to zero, and eventually each one will be cut off from all the rest by a cosmological horizon, making them unable to interact.” ( http://math.ucr.edu/home/baez/end.html ).
    So 2LOT beats gravity’s self-organisation in the end. To use Pi’s characterisation, interactions may make things interesting for a while but ultimately thermodynamics wins.

  58. lytefoot

    “Why should the universe be closed? It is decidedly unclosed in the time direction. Lambda-CDM cosmology (the best choice after the latest WMAP observations) tells us so. And as PaulC comments, spacetime is expanding, which makes the system size larger. Lambda-CDM is absolutely flat after inflation, which makes an infinite spacetime the natural choice. Cosmologists usually choose to discuss the observable Hubble volume, however, and they therefore consider the universe open. (No boundary.)”
    “Closed” in this case appears to mean closed in the thermodynamic sense, the sense implied by the First Law. That is, matter and energy can be neither created nor destroyed. It isn’t used in a topological sense.
    In the thermodynamic sense, we can define “universe” to mean “the smallest sufficiently large closed system which includes the Earth”. Such a system must exist, by the First Law. There is a unique smallest one (i.e. one which is a subsystem of all the others), since the intersection of closed systems must itself be closed. This may or may not correspond precisely to the cosmological universe as defined by the Hubble volume.

  59. John Marley

    Pi Guy:
    “The flicking of the card adds energy to the system so it isn’t closed in the adiabatic sense. The increase in order results from the increase in energy added by the flicker.”
    I understand that. I am also aware that I oversimplified my explanation of 2LoT.
    However, the card is not meant to be a closed system. Think of the card as the Earth, and the flicker as the Sun, providing energy to it.
    The example is meant to show that an apparent increase in ‘order’ is not a violation of 2LoT, and that evolution deniers’ claims are a steaming load.

  60. Torbjörn Larsson

    lytefoot,
    I’m discussing the problem of closure. Thermodynamically, you need to know the volume of the enclosed system: dU = dQ - dW = dQ - P dV + … . I’m sure the cosmological state expression is quite involved, but volume or number of states must be in there, due to extensive variables such as U and V.
    Your topological argument is based on the claim that the first law implies that the universe is closed. But that is exactly what we are trying to explore. And there is nothing in that law that implies anything about the size or closure of a system; it is the definition of the system that determines its size and closure (or not).
    As I said, I don’t know the GR equation for the state of the universe, but I’m quite sure cosmologists don’t assume closure but use an arbitrary volume. The Hubble volume is most definitely not closed, due to the expansion of the universe. If you instead follow the expanding volume, the number of possible states increases. The easiest way to see that is to start counting Planck volumes. Each Planck volume’s entropy maxes out when it becomes a black hole as its mass-energy increases. So each volume means a certain maximum entropy, which is a certain volume of phase space, which is what defines the thermodynamic size of the system (see above).
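    For reference, the “maxes out when it becomes a black hole” bound is the standard Bekenstein-Hawking entropy (a textbook formula, not something derived in this thread): the maximum entropy of a region scales with its horizon area measured in Planck units,
    $$ S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P^2 = \frac{G\hbar}{c^3}. $$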

  61. Rasmus Persson

    Torbjörn,
    This is definitely confusing and, at the peril of pushing the thread off topic, I would like to try to get this cleared up.
    As I see it (having nothing but a very basic grasp of thermodynamics, since a long time has passed since I studied it), the argument lytefoot presents is sound. The first law of classical thermodynamics states that in an isolated system, neither mass nor energy is created or destroyed (today, we might phrase this – I suppose – as “mass-energy is conserved”). Simply put, dQ = dW = 0 (the two means of introducing, or removing, energy in a non-isolated system). Consequently, dV = 0 (by your formula).
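    Written out explicitly (assuming the only work term is P dV, with P ≠ 0):
    $$ dU = \delta Q - \delta W = \delta Q - P\,dV, \qquad \delta Q = \delta W = 0 \;\Rightarrow\; P\,dV = 0 \;\Rightarrow\; dV = 0. $$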
    I don’t see where this simple argument is flawed?

  62. Torbjörn Larsson

    Well, your argument works too. [A technical detail is that the differentials dQ and dW are path-dependent, not exact (perfect) differentials.] So we can have an unchanging volume if the pressure is constant: dU = dQ = 0 -> P dV = 0 -> int(P dV) = C -> with C = 0 chosen: P1*V1 = P2*V2 -> V1 = V2 iff P1 = P2. But in the Hubble volume the pressure (imagine it filled with a gas) is changing due to the expansion.
    So I find two problems with lytefoot’s comment: the definition of a closed system, and the application to the Hubble volume.
    It is only if you can achieve isolation (closure) that mass-energy is conserved. And in an expanding universe that isn’t possible, I think, except by pushing the boundary to infinity.
    Our everyday small-scale applications are really good approximations, though.

  63. Torbjörn Larsson

    “except by pushing the boundary to infinity” Now I’m doing it too, imagining boundaries that don’t exist. That is wrong; it won’t help, since one can’t push the boundary beyond the expansion. As far as I can see, the universe isn’t closed, and I believe that is how cosmologists see it.

  64. Rasmus Persson

    Torbjörn,
    I experience some difficulty in comprehending your argument. I am therefore going to condense my own into its simplest form. Could you please tell me why this is principally wrong?
    1. Define the Universe as that instantaneous volume which contains all the mass-energy in existence. (Defined this way, the volume of the Universe may be subject to instantaneous change.)
    2. The First Law states that mass-energy can never be destroyed or created in an isolated system.
    3. The Universe is an isolated system, since there is no outside mass-energy (by definition, it contains all in existence).
    Conclusion: The mass-energy content of the Universe is constant, regardless of its volume.

  65. Torbjörn Larsson

    Rasmus:
    I’m sorry if I’m being obtuse. It seems like a good idea to discuss this on your terms, especially since we seem to be confusing the question of closure. And I can well be wrong. (As I was in my last two comments on the closure of mass-energy. That was wrong, and it doesn’t affect the argument either.)
    The problem isn’t that the mass-energy isn’t constant and closed in that sense, regardless of actual size.
    The problem is that the system isn’t closed in the sense of size, due to the expansion.
    Total mass-energy in the universe is likely constant. Total volume is not.
    This is going back to your and lytefoot’s comments about the universe as a closed system and how that affects entropy. The increase in volume means a change in entropy, besides other sources including life.
    More rigorous treatment:
    If the universe really is infinite in extension, the mass-energy is infinite and one has to be careful. We can define constant mass-energy rigorously, for all cases, by looking at a smaller volume such as the Hubble volume.
    Then the system isn’t closed in the sense of size. (One can see that by, for example, following the expanding mass-energy outwards.) Increasing the assumed volume to infinity won’t change that, due to the continuing expansion.
    PS
    BTW, I think the expansion is why the 2LOT works without conflict with cosmology and QM. Otherwise the entropy would max out sooner or later; see my discussion of Planck volumes. Again, I can well be wrong.
    DS

  66. Philip Dorrell

    A rigorous analysis of the thermodynamics of evolution by natural selection can be found at http://www.1729.com/evolution/2ndlaw.html (and see also “Evolution of Biological Complexity” by Adami et al.; in particular, Adami shows that fitter and more “complex” does imply lower entropy in the genome).
    To prove that evolution by natural selection cannot possibly break the 2nd law, you have to do a thought experiment, in particular considering what would happen if only thermodynamically reversible processes were involved. The main consequence of this assumption is that fitter organisms would reproduce more successfully, but if the process of reproduction was reversible, then the fitter organisms would also un-reproduce more successfully, and this would cancel out their advantage.
    The other thing is that an increase in fitness by a random mutation is equivalent to a small spontaneous decrease in entropy in a closed system. Unfortunately a large number of small decreases cannot be accumulated into a macroscopically large decrease, for the same reason that Maxwell’s Demon doesn’t work. In fact evolution by natural selection is a natural example of Maxwell’s Demon (one of those “nothing new under the sun” things).

  67. Jonathan Rogers

    I do think that “A Mathematician’s View of Evolution” is full of poor logic. However, the fact that it doesn’t seem to contain any math doesn’t necessarily mean the author isn’t a decent mathematician, just that he isn’t applying logic to this issue. I think the key statement is:

    “I know a good many mathematicians, physicists and computer scientists who, like me, are appalled that Darwin’s explanation for the development of life is so widely accepted in the life sciences.”

    I think that because he was so appalled, he allowed his logic to leave him regarding this particular issue.
    Like many in evangelical American communities, I was taught from a young age that Darwinian evolution was inherently incompatible with the Biblical account of creation. However, I gradually realized that creationist arguments, such as the one from the 2nd Law of Thermodynamics, weren’t entirely convincing or logical. I now believe that it’s entirely possible that the mechanism God used to create the life we see today can be described by evolution. I’m not sure why there’s long been such a strong rejection of this idea in American churches, but I think it’s partly because Darwinian evolution has also been used to justify secular humanist philosophical arguments, which are inherently incompatible with Biblical teaching. Strong emotions can easily cloud judgment even in the most logical scientists and mathematicians, and there are few areas of human thought more emotional than morality and religion.
