Monthly Archives: June 2006

Dembski and No Free Lunch with Competitive Agents (updated repost from blogger)

(Continuing in my series of updates of the GM/BM posts about the bad math of the IDists, I’m posting an update of my original critique of Dembski and his No Free Lunch nonsense. This post has more substantial changes than my last repost; based on the large numbers of comments I’ve gotten in the months since then, I’m addressing a bit more of the basics of how Dembski abuses NFL.)

It’s time to take a look at one of the most obnoxious, duplicitous promoters of Bad Math, William Dembski. I have a deep revulsion for this character, because he’s actually a decent mathematician, but he’s devoted his skills to creating convincing mathematical arguments based on invalid premises. But he’s careful: he does his meticulous best to hide his assumptions under a flurry of mathematical jargon.

One of Dembski’s favorite arguments is based on the no free lunch theorems. In simple language, the NFL theorems say “Averaged over all fitness landscapes, no search function can perform better than a random walk”.

Let’s take a moment to consider what Dembski says NFL means when applied to evolution.

In Dembski’s framework, evolution is treated as a search algorithm. The search space is a graph. (This is a graph in the discrete-mathematics sense: a set of discrete nodes, with a finite number of edges to other nodes.) The nodes of the graph in this search space are outcomes of the search process at particular points in time; the edges exiting a node correspond to the possible changes that could be made to that node to produce a different outcome. To model the quality of a node’s outcome, we apply a fitness function, which produces a numeric value describing the fitness (quality) of the node.

The evolutionary search starts at some arbitrary node. It proceeds by looking at the edges exiting that node, and computes the fitness of their targets. Whichever edge produces the best result is selected, and the search algorithm progresses to that node, and then repeats the process.
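
To make the setup concrete, here’s a minimal sketch in Python (my own toy illustration, not anything from Dembski’s paper) of that kind of greedy search, alongside the blind random walk it gets compared against. The 5-bit search space and the bit-counting fitness function are arbitrary choices, just to have something runnable:

    import random

    def greedy_search(neighbors, fitness, start, max_steps):
        # The process described above: look at the edges leaving the current
        # node and move to whichever neighbor scores best; stop early if no
        # neighbor improves on where we already are.
        node = start
        for _ in range(max_steps):
            best = max(neighbors(node), key=fitness, default=node)
            if fitness(best) <= fitness(node):
                break
            node = best
        return node

    def blind_search(neighbors, start, max_steps):
        # The "random walk" alternative: at each step, pick an outgoing edge at random.
        node = start
        for _ in range(max_steps):
            choices = neighbors(node)
            if not choices:
                break
            node = random.choice(choices)
        return node

    # Toy search space: 5-bit strings; each node's neighbors differ from it in
    # exactly one bit, and fitness counts the 1-bits.
    neighbors = lambda n: [n ^ (1 << i) for i in range(5)]
    fitness = lambda n: bin(n).count("1")
    print(fitness(greedy_search(neighbors, fitness, start=0, max_steps=10)))  # 5, the best possible
    print(fitness(blind_search(neighbors, start=0, max_steps=10)))            # usually lower: it wanders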

How do you test how well a search process works? You select a fitness function which describes the desired outcome, and see how well the search process matches your assigned fitness. The quality of your search process is defined by the limit of the following:

  1. For all possible starting points in the graph:
    1. Run your search using your fitness metric for maxlength steps to reach an end point.
    2. Using the desired-outcome fitness, compute the fitness of the end point.

    3. Compute the ratio of that fitness to the maximum value of the desired-outcome fitness. This is the quality of your search for this starting point and length.
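
Here’s one way to code up that quality measure – again a toy sketch under my own simplifying assumptions (a finite set of nodes, a fixed step limit, and the same greedy search as in the sketch above, re-inlined so this snippet stands alone):

    def search_quality(nodes, neighbors, search_fitness, desired_fitness, maxlength):
        # For every possible starting point, run the greedy search for up to
        # maxlength steps, score the end point with the *desired* fitness, and
        # take the ratio to the best value the desired fitness can produce.
        best_possible = max(desired_fitness(n) for n in nodes)
        total = 0.0
        for start in nodes:
            node = start
            for _ in range(maxlength):
                best = max(neighbors(node), key=search_fitness, default=node)
                if search_fitness(best) <= search_fitness(node):
                    break
                node = best
            total += desired_fitness(node) / best_possible
        return total / len(nodes)  # average quality over all starting points

    # On the 5-bit toy space, with the search fitness equal to the desired
    # fitness, the greedy search always reaches the best node: quality 1.0.
    nodes = range(32)
    neighbors = lambda n: [n ^ (1 << i) for i in range(5)]
    popcount = lambda n: bin(n).count("1")
    print(search_quality(nodes, neighbors, popcount, popcount, maxlength=5))  # 1.0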

So – what does NFL really say?

“Averaged over all fitness functions”: take every possible assignment of fitness values to nodes. For each one, compute the quality of its result. Take the average of all of those qualities. This is the quality of the directed, or evolutionary, search.

“blind search”: blind search means instead of using a fitness function, at each step just pick an edge to traverse randomly.

So – NFL says that if you consider every possible assignment of fitness functions, you get the same result as if you didn’t use a fitness function at all.

At heart, this is a fancy tautology. The key is that “averaged over all fitness functions” bit. If you average over all fitness functions, then every node has the same average fitness. So, in other words, if you consider a search in which you can’t tell the difference between different nodes, and a search in which you don’t look at the difference between different nodes, then you’ll get equivalently bad results.
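
You can check that tautology exhaustively on a toy space. In the sketch below (my own construction, not Wolpert and Macready’s formal setup), “all fitness functions” means every assignment of the values 1–4 to four nodes; one searcher uses what it observes to pick its next sample, the other ignores it, and each is scored by the best fitness value it has seen after two samples:

    from itertools import permutations
    from statistics import mean

    landscapes = list(permutations([1, 2, 3, 4]))   # landscape[i] = fitness of node i

    def guided(f):
        # "Directed" search: sample node 0, then use what it saw to pick the next node.
        first = f[0]
        second = f[1] if first >= 3 else f[2]
        return max(first, second)

    def blind(f):
        # "Blind" search: sample node 0, then node 1, ignoring what it saw.
        return max(f[0], f[1])

    print(mean(guided(f) for f in landscapes))   # 3.333...
    print(mean(blind(f) for f in landscapes))    # 3.333... -- exactly the same average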

Ok. So, let’s look at how Dembski responds to critiques of his NFL work. I’m going to focus on his paper Fitness Among Competitive Agents.

Now, in this paper, he’s responding to the idea that if you limit yourself to competitive fitness functions (loosely defined: fitness functions where, most of the time, when you compare two edges from a node, the target you select is the one that is better according to the desired fitness function), then the result of running the search will, on average, be better than a random traversal.

Dembski’s response to this is to go into a long discussion of pairwise competitive functions. His focus is on the fact that a pairwise fitness function is not necessarily transitive. In his words (from page 2 of the PDF):

From the symmetry properties of this matrix, it is evident that just because one item happens to be pairwise superior to another does not mean that it is globally superior to the other. But that’s precisely the challenge of assigning fitness of competitive agents inasmuch as fitness is a global measure of adaptedness to an environment.

To provide such a global measure of adaptedness and thereby to overcome the intransitivities inherent in pairwise comparisons, fitness in competitive environments needs therefore to factor in average performance of agents as they compete across the board with other agents.

To translate that out of Dembski-speak: in pairwise competition, if A is better than B, and B is better than C, that doesn’t mean A is better than C. So, to measure competitive fitness, you need to average the performance of your competitive agents over all possible competitions.

The example he uses for this is a chess tournament: if you create a fitness function for chess players from the results of a series of tournaments, you can wind up with results like player A can consistently beat player B; B can consistently beat C; and C can consistently beat A.
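
Here’s a toy version of that situation, with entirely made-up win probabilities, showing both the intransitive cycle and the “across the board” averaging that Dembski describes:

    players = ["A", "B", "C"]
    p_win = {                                 # hypothetical numbers: p_win[(x, y)] is
        ("A", "B"): 0.7, ("B", "A"): 0.3,     # the probability that x beats y in one game
        ("B", "C"): 0.7, ("C", "B"): 0.3,
        ("C", "A"): 0.7, ("A", "C"): 0.3,     # A usually beats B, B beats C, C beats A
    }

    # Pairwise comparison is intransitive, but the "global" fitness Dembski wants
    # -- average performance against everyone else -- is still perfectly well defined.
    for x in players:
        avg = sum(p_win[(x, y)] for y in players if y != x) / (len(players) - 1)
        print(x, avg)   # each player averages 0.5 in this perfectly cyclic case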

That’s true. Competitive fitness functions can have that property. But it doesn’t actually matter: because that’s not what’s happening in an evolutionary process. He’s pulling the same old trick that he played in the non-competitive case: he’s averaging out the differences. In a given situation, a competitor does not have to beat every other possible competitor. It does not have to be the best possible competitor in every possible situation. It just has to be good enough.

And to make matters worse for Dembski, in an evolutionary process, you aren’t limited to picking one “best” path. Evolution allows you to explore many paths at once, and the ones that meet the “good enough” criteria will survive. That’s what speciation is. In one situation, A is better, so it “wins”. Starting from the same point, but in a slightly different environment, B is better, so it wins. Both A and B win.
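
A crude sketch of that difference (the genome encoding, fitness function, and threshold are arbitrary choices of mine, just to make the idea runnable): instead of tracking one “best” path, keep every variant that clears a “good enough” bar.

    import random

    def evolve_population(population, mutate, fitness, good_enough, generations):
        # Instead of following a single "best" path, keep every variant that
        # clears a "good enough" threshold; distinct lineages survive side by side.
        for _ in range(generations):
            offspring = {mutate(x) for x in population}
            population = {x for x in population | offspring if fitness(x) >= good_enough}
        return population

    # Toy run on 8-bit genomes: fitness counts 1-bits, and anything with at
    # least four 1s is "good enough" to stick around.
    mutate = lambda n: n ^ (1 << random.randrange(8))
    fitness = lambda n: bin(n).count("1")
    survivors = evolve_population({0b11110000, 0b00001111}, mutate, fitness, 4, 20)
    print(len(survivors))   # typically dozens of distinct survivors, not one single "winner"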

You’re still selecting a better result. The fact that you can’t always select one as best doesn’t matter. And it doesn’t change the fundamental outcome, which Dembski doesn’t really address: that in an evolutionary landscape, competitive fitness functions do produce a better result than random walks.

In my taxonomy of statistical errors, this is basically modifying the search space: he’s essentially arguing for properties of the search space that eliminate any advantage that can be gained by the nature of the evolutionary search algorithm. But his arguments for making those modifications have nothing to do with evolution: he’s carefully picking search spaces that have the properties he wants, even though they have fundamentally different properties from evolution.

It’s all hidden behind a lot of low-budget equations which are used to obfuscate things. (In “A Brief History of Time”, Stephen Hawking said that his publisher told him that each equation in the book would cut the readership in half. Dembski appears to have taken that idea to heart, and throws in equations even when they aren’t needed, in order to try to prevent people from actually reading through the details of the paper where this error is hidden.)

The Problem with Irreducible Complexity (revised post from blogger)

As I mentioned yesterday, I’m going to repost a few of my critiques of the bad math of the IDists, so that they’ll be here at ScienceBlogs. Here’s the first: Behe and irreducible complexity. This isn’t quite the original blogger post; I’ve made a few clarifications and formatting fixes, but the content remains essentially the same. You can find the original post in my blogger information theory index. The original publication date was March 13, 2006.
Today, I thought I’d take on another of the intelligent design sacred cows: irreducible complexity. This is the cornerstone of some of the really bad arguments used by people like Michael Behe.
To quote Behe himself:

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution. Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have any thing to act on.

Now, to be clear and honest upfront: Behe does not claim that this is a mathematical argument. But that doesn’t mean that I don’t get to use math to shred it.
There are a ton of problems with the whole IC argument, but I’m going to take a different tack, and say that even if those other flaws weren’t there, it’s still a meaningless argument. Because from a mathematical point of view, there’s a critical, fundamental problem with the entire idea of irreducible complexity: you can’t prove that something is irreducibly complex.
This is a result of some work done by Greg Chaitin in Algorithmic Complexity Theory. A fairly nifty version of this can be found on Greg’s page.
The fundamental result is: given a system S, you cannot in general show that there is no smaller/simpler system that performs the same task as S.
As usual for algorithmic information theory, the proof is in terms of computer programs, but it works beyond that; you can think of the programs as the instructions to build and/or operate an arbitrary device.
First, suppose that we have a computing system φ, which we’ll treat as a function. So φ(x) = the result of running program x on φ. x is both a program and its input data coded into a single string, so x=(c,d), where c is code, and d is data.
Now, suppose we have a formal axiomatic system, which describes the basic rules that φ operates under. We can call this FAS.
If it’s possible to tell whether you have a minimal program using the axiomatic system, then you can write a program that examines other programs, and determines if they’re minimal. Even better: you can write a program that will generate a list of every possible minimal program, sorted by size.


Let’s jump aside for just a second to show how you can generate a list of every possible minimum program. Here’s a sketch of the program:

  1. First, write a program which generates every possible string of one character, then every possible string of two characters, etc., and outputs them in sequence.
  2. Connect the output of that program to another program, which checks each string that it receives as input to see if it’s a syntactically valid program for φ. If it is, it outputs it. If it isn’t, it just discards it.
  3. At this point, we’ve got a program which is generating every possible program for φ. Now, remember that we said that using FAS, we can write a program that tests an input program to determine if it’s minimal. So, we use that program to test our inputs, to see if they’re minimal. If they are, we output them; if they aren’t, we discard them.
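
Here’s roughly what the plumbing of that sketch looks like in Python (my own rendering). Steps 1 and 2 are perfectly writable; step 3 is deliberately left as a stub, because the whole point of the argument below is that no such test can actually be written:

    from itertools import count, product

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789(),=+ \n"   # toy character set

    def all_strings():
        # Step 1: every string of length 1, then every string of length 2, and so on.
        for n in count(1):
            for chars in product(ALPHABET, repeat=n):
                yield "".join(chars)

    def is_valid_program(s):
        # Step 2: keep only the syntactically valid programs (here, valid Python).
        try:
            compile(s, "<candidate>", "exec")
            return True
        except SyntaxError:
            return False

    def provably_minimal(s):
        # Step 3: the part that cannot exist. The argument below shows that no
        # formal system can supply this test beyond a fixed size, which is
        # exactly why this is a stub rather than an implementation.
        raise NotImplementedError("no general test for minimality can exist")

    def minimal_programs():
        for s in all_strings():
            if is_valid_program(s) and provably_minimal(s):
                yield s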

Now, let’s take a second, and write out the program in mathematical terms:
Remember that φ is a function modeling our computing system, FAS is the formal axiomatic system. We can describe φ as a function from a combination of program and data to an output: φ(c,d)=result.
In this case, c is the program above; d is FAS. So φ(c,FAS)=a list of minimal programs.


Now, back to the main track.
Using the program that we sketched above, given any particular length, we can easily generate programs larger than that length.
Take our program, c, and our formal axiomatic system, FAS, and compute their length. Call that l(c,FAS). If we know l(c,FAS), we can run φ(c,FAS) until it generates a string longer than l(c,FAS).
Ok. Now, write a program c’ for φ that runs φ(c,FAS) until it finds a program K whose length is larger than l(c,FAS) + length(c’). c’ then outputs the same thing as φ(K).
This is the tricky part. What does this program do? It runs a program which generates a sequence of provably minimal programs. It keeps generating provably minimal programs until it finds one that is longer than c’ itself plus all of its data. Then it runs that program and emits its output.
So – c’ outputs the same result as a supposedly minimal program K, where K is larger than c’ and its data. But since c’ is a program which emits the same result as K, but is smaller, then K cannot be minimal.
No matter what you do – no matter what kind of formal system you’ve developed for showing that something is minimal, you’re screwed. Gödel just came in and threw a wrench into the works. Beyond a certain size, there is absolutely no way that you can show that a system is minimal – the idea of a general test for minimality is intrinsically contradictory.
Evil, huh?
But the point of it is quite deep. It’s not just a mathematical game. We can’t tell when something complicated is minimal. Even if we knew every relevant fact about all of the chemistry and physics that affects things, even if the world were perfectly deterministic, we can’t tell when something is as simple as it can possibly be.
So irreducible complexity is useless in an argument, because we can’t know when something is irreducibly complex.

Creationists Respond to Debunking Dembski

While perusing my sitemeter stats for the page, I noticed that I’d been linked to in a discussion at creationtalk.com. Expecting amusement, I wandered on over to see who was linking to me.
Someone linked to my index of articles debunking Dembski and Berlinski. The moderator of the creationtalk forum responded to my series of articles on information theory and Dembski with:

No offense to you or him, but his arguments kind of suck. I looked at his response to Behe on IC, and Dembski on Specified Complexity , to Behe’s he didn’t refute it, and to Dembski’s his only arguement was basically summed up to “I don’t know the definition of specified complexity oh mercy”.

For readers who remember, my critique of Behe was that the entire concept of “irreducible complexity” is mathematically meaningless. It’s true that I didn’t refute Behe, in the sense that I didn’t waste any time arguing about whether or not irreducible complexity is indicative of design: there’s no point arguing about the implications of an irreducibly complex system if, in fact, we can never recognize whether a system is irreducibly complex. Sort of like arguing about how many steps it takes to square a circle, after you’ve seen the proof that it can’t be done in a finite number of steps.
But the Dembski line is the one that’s particularly funny. Because, you see, my critique of “specified complexity” was that you can’t mathematically refute specified complexity because Dembski never defines it. In paper after paper, he uses obfuscatory presentations of information theory to define complexity, and then handwaves his way past “specification”. The reason for this is that “specification” is a meaningless term. He can’t define it: because if he did, the vacuity of the entire concept becomes obvious.
A complex system is one which contains a lot of information; which, in information theory, means a system which can’t be described with a brief description. But specification, intuitively, means “can be described concisely”. So you wind up with two possibilities:

  1. “Specification” has a mathematical meaning, which is the opposite of “complexity”, and so “specified complexity” is a contradiction; or

  2. “Specification” is mathematically meaningless, in which case “specified complexity” is a meaningless concept in information theory.
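
To see the tension concretely, here’s a rough illustration that uses compressed size as a crude stand-in for description length. (This is my illustration of the information-theoretic intuition, not Dembski’s definition – he never gives one.)

    import os, zlib

    def description_length(data):
        # Crude stand-in for "description length": the size of the compressed data.
        return len(zlib.compress(data, 9))

    patterned = b"ab" * 500          # concisely describable: '"ab", repeated 500 times'
    random_ish = os.urandom(1000)    # "complex": no description much shorter than itself

    print(description_length(patterned))    # a few dozen bytes at most
    print(description_length(random_ish))   # roughly 1000 bytes -- incompressible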

The problem isn’t that “I don’t know the definition of specified complexity”. It’s not even that there is no definition of specified complexity. It’s that there cannot be a definition of specified complexity.
I’ll probably drag out my original Dembski and Berlinski tomorrow, polish them up a bit, and repost them here at ScienceBlogs.

Nutty Numerology and Stonehenge

As readers of GM/BM at the old site know, one of the things that I love to shred is trashy numerology. I also have a great dislike for the tendency of many modern pseudo-researchers to insist that ancient people were hopelessly naive or stupid.
I’ve found a delightful little article about Stonehenge that combines these two themes.
Stonehenge is quite an interesting construct. To the best of our knowledge, it was a sort of observatory: the patterns of stones and their arrangement are structured to allow a number of astronomical observations and predictions of astronomical events. This is a very cool thing.
Altie pseudo-researchers consistently insist that the people who lived around Stonehenge when it was constructed could not possibly have built it. This is built on that naivety assumption that I mentioned above. For example, from the linked article:

Research has revealed that before the Sarsen Circle of upright stones was erected, a 285 foot diameter circle of 56 chalk holes, 3 feet in diameter, was created. (These are called the Aubrey Holes, in honor of John Aubrey).
A CBS TV program in the 1960’s ran a computer analysis of the Aubrey circle. They declared that Stonehenge’s location–latitude 51 degrees 11 minutes, was a very special location for eclipses of the moon. This location produces moon eclipses in the repeating sequence of 19 years, 19 years, and 18 years.
Adding 19+19+18=56. Thus if the white 3 foot diameter chalk holes were covered by a black stone, that was moved around the circle in synch with the passage of moon cycles, the black stone would arrive at the heal stone position, on the exact day when a moon eclipse would occur. (Eclipse computer.) (S.I.D.)
How could this stone computer have been created without the precise knowledge of the celestial mechanics of this unique geographic location? Certainly this was not the work of the early tribes that lived on this Salisbury Plain, thousands of years ago.

And why could it not have been the work of the people who lived there? Why, because it would have required careful observation of the skies by primitive people, and the recognition of simple arithmetical patterns in the repetitions of astronomical events. And obviously, things like repeated observations and arithmetic were clearly beyond the abilities of “early tribes”.
But there’s also something else quite interesting in the quote above – something that demonstrates the fundamental cluelessness of the writer. The author is correct that the geographic location of Stonehenge is important. If you move it 100 miles north, it doesn’t work so well anymore as an instrument of observation or prediction.
There are two ways that a location of an artifact like Stonehenge could have been selected. One is the approach that the author of the linked article takes: we believe that someone wanted an artifact that could observe and predict astronomical events; and so therefore, they computed the perfect position for making those observations. Doing that computation to select the location where the artifact should be placed would require a lot of knowledge, and some reasonably complicated math.
But there’s another way that the location could have been chosen. Suppose you have a large number of people living on a relatively small island. Astronomical events like eclipses are very important to them. There is a location on the island where the pattern of events is clearest. Go north; things become less regular. Go south, things become less regular. But at the right location, you get the strongest pattern.
Now: add in a tradition where the people who do the astronomical observations/predictions are travellers.
Will the observers notice the pattern of astronomical events? Will they notice that in a certain location, the pattern becomes most regular?
If you don’t assume that people a thousand or two years ago were stupid, of course they will!

This S.I.D. (Stored Information Device) clearly displays enormous information about planet Earth’s celestial relationships with the Sun, the Moon and the rotational speed of our planet.

Yes, that’s true. Does that necessarily imply that the people who built it knew about the real structure of the solar system, and the sizes, distances, and velocities of the bodies in our solar system? No. It means that they were aware of relationships. As they point out, lunar eclipses occur in a regular pattern at this location. This fact is an implication of the relationships of the positions and motions of celestial bodies. But you don’t need to know the positions and velocities of the bodies: you need to know the observable relationships between their motions. And that is something that is easily observable.
To give a simple example of this kind of thing: there’s a right triangle whose sides have lengths 1, 2, and the square root of three. To draw a 1, 2, sqrt(3) right triangle, you could start with a horizontal line 1 inch long, and then draw a vertical line whose height is sqrt(3) inches, and then draw the hypotenuse. To do this, you need to be able to compute the square root of three, which is not the easiest thing to do. You clearly need to be able to do something beyond simple arithmetic to be able to compute and measure the square root of three without using a geometric relationship. On the other hand, you could draw a horizontal line of length 1; then draw a long vertical line up from one of its endpoints; and then take a ruler, and rotate it around the other endpoint until the distance from that endpoint to its intersection with the vertical line was 2 inches. The second way doesn’t require you to be able to compute roots.
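
For what it’s worth, here’s a quick numerical check of the arithmetic behind that example:

    import math

    # The 1, 2, sqrt(3) right triangle really does satisfy 1^2 + sqrt(3)^2 = 2^2.
    print(1**2 + math.sqrt(3)**2)      # 3.9999999999999996, i.e. 4 up to rounding

    # The ruler construction: the vertical line stands at distance 1 from the
    # pivot end of the base, so a length-2 ruler meets it at height
    # sqrt(2^2 - 1^2) = sqrt(3) -- with no root-taking needed by the builder.
    print(math.sqrt(2**2 - 1**2))      # 1.7320508075688772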

Following the stone computer, came the erection of the 30 upright stones that formed the Sarsen Circle, 100 feet in diameter. (My question was why 30? I divided 360 degrees by 30 and discovered the number 12. The number 12 is one of the most important numbers in the Anunnaki civilization…their Pantheon consisted of the Twelve Great Anunnaki gods, they declared 12 months in one year-2 twelve hour parts of each day, they created the 12 signs of the Zodiac. These Sarsen uprights are harder than granite and weigh 25 tons each. They were quarried at Marlborough Downs using tools not locally available at that time, and then transported these huge stones over 20 miles to this site.

And now we get the trashy numerology.
Why are there twelve months in a year in pretty much every calendar we know of? Why is the number of months equal to the number of signs of the zodiac? Could it be, perhaps, that the moon goes around the earth pretty darned close to twelve times a year? You know, the moon? That thing that’s the most obvious thing in the night sky? That thing that’s perfectly correlated with the tides?
No. Not that. Couldn’t be. Must be aliens.

More Category Theory: Getting into Functors

Let’s talk a bit about functors. Functors are fun!

What’s a functor? I already gave the short definition: a structure-preserving mapping between categories. Let’s be a bit more formal. What does the structure-preserving property mean?

A functor F from category C to category D is a mapping from C to D that:

  • Maps each object o in Obj(C) to an object F(o) in Obj(D).
  • Maps each arrow a : x → y in Mor(C) to an arrow F(a) : F(x) → F(y), where:
    • ∀ o ∈ Obj(C): F(1_o) = 1_F(o). (Identity is preserved by the functor mapping of morphisms.)
    • ∀ m,n ∈ Mor(C): F(n º m) = F(n) º F(m). (Composition is preserved by the functor mapping of morphisms.)

That’s the standard textbook gunk for defining a functor. But if you look back at the original definition of a category, you should notice that this looks familiar. In fact, it’s almost identical to the definition of the necessary properties of arrows!

We can make functors much easier to understand by talking about them in the language of categories themselves. Functors are arrows in a category whose objects are categories.

There’s a kind of category, called a small category. (I happen to dislike the term “small” category; I’d prefer something like “well-behaved”.) A small category is a category C where Obj(C) and Mor(C) are sets, not proper classes. Alas, more definitions: in set theory, a class is a collection of sets that can be defined by a non-paradoxical property that all of its members share. Some classes are themselves sets; some classes are not sets – they lack some of the required properties of sets – but still, the class is a collection with a well-defined, non-paradoxical, unambiguous property. A class that is not a set is called a proper class.

Any category whose collections of objects and arrows are sets, not proper classes, is called a small category. Small categories are, basically, categories that are well-behaved – meaning that their collections of objects and arrows don’t have any of the obnoxious properties that would prevent them from being sets.

The small categories are, quite beautifully, the objects of a category called Cat. (For some reason, category theorists like three-letter labels.) The arrows of Cat are the functors: functors are morphisms between small categories. Once you wrap your head around that, then the meaning of a functor, and the meaning of a structure-preserving transformation, become extremely easy to understand.

Functors come up over and over again, all over mathematics. They’re an amazingly useful notion. I was looking for a list of examples of things that you can describe using functors, and found a really wonderful list on Wikipedia. I highly recommend following that link and taking a look at the list. I’ll just mention one particularly interesting example: groups and group actions.

If you’ve been reading GM/BM long enough, you’ll remember my posts on group theory, and how long it took for me to work up to the point where I could really define what symmetry meant, and how every symmetric transformation was actually a group action. Category theory makes that much easier.

Every group can be represented as a category with a single object. A functor from the category of a group to the category of Sets is a group action on the set that is the target of the functor. Poof! Symmetry. (This paragraph was modified to correct some ambiguous wording; the ambiguity was pointed out by commenter “Cat lover”.)
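
Here’s a tiny sketch of that idea in Python (my own toy example): the cyclic group Z/3 as a one-object category, and a functor into Set that turns out to be exactly a group action on a six-element set.

    from itertools import product

    # Z/3 as a one-object category: the arrows are the group elements 0, 1, 2,
    # and composition of arrows is addition mod 3.
    arrows = [0, 1, 2]
    compose = lambda g, h: (g + h) % 3
    identity = 0

    # A functor F into Set sends the single object to a set X and each arrow g
    # to a function F(g) : X -> X -- here, rotation of a 6-element set by 2*g
    # positions. That assignment is precisely a group action of Z/3 on X.
    X = list(range(6))
    F = {g: (lambda x, g=g: (x + 2 * g) % 6) for g in arrows}

    # The functor laws are exactly the group-action laws:
    print(all(F[identity](x) == x for x in X))                  # identities line up
    print(all(F[compose(g, h)](x) == F[g](F[h](x))
              for g, h in product(arrows, arrows) for x in X))  # composition lines up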

Since symmetry means a structure-preserving transformation, and a functor is a structure-preserving transformation – well, they’re almost the same thing. The functor is an even more general abstraction of that concept: group symmetry is just one particular case of a functor transformation. Once you get functors, understanding symmetry is easy. And so are lots of other things.

And of course, you can always carry these things further. There is a category of functors themselves; and notions which can be most easily understood in terms of functors operating on the category of functors!

By now, it should be sort of clear why category theory is affectionately known as abstract nonsense. Category theory operates at a level of abstraction where almost anything can be wrapped up in it; and once you’ve wrapped something up in a category, almost anything you can do with it can itself be wrapped up as a category – levels upon levels, categories of categories, categories of functors on categories of functors on categories, ad infinitum. And yet, it makes sense. It captures a useful, comprehensible notion. All that abstraction, to the point where it seems like nothing could possibly come out of it. And then out pops a piece of beautiful crystal.

A Mathematical Meme from Janet

Janet over at Adventures in Ethics and Science has tagged all of us newbies with a Pi meme. As the new math-geek-in-residence here, I’m obligated to take on anything dealing with Pi.

  • 3 reasons you blog about science
    1. Because I genuinely enjoy teaching, and the one thing that I regret
      about being in industry instead of academia is that I don’t get to teach.
      Blogging gives me an opportunity to do something sort-of like teaching,
      but on my own terms and my own schedule.

    2. Because I’m obsessed with this stuff, and I love it, and I want to try
      to show other people why they should love it too.

    3. Because I’m a thoroughly nasty person who enjoys mocking idiots.
  • Point at which you would stop blogging: This is an easy one. If it were to stop being fun.
  • 1 thing you frequently blog besides science: I could cheat here, and say math. But that would be cheating, and we know math geeks never cheat, right? So that leaves music. I come from a family of musicians; my older brother was a professional french horn player and composer (before he went nutso and became a fundie ultra-orthodox rabbi); my younger sister is a music teacher. As a techie, I’m the black sheep of the family :-).
  • 4 words that describe your blogging style: I’ll pick words that I’ve gotten in real feedback from readers. (1) informative, (2) engaging, (3) obnoxious, and (4) arrogant. (Guess which ones were feedback from people who were targets of bad-math critiques?)
  • 1 aspect of blogging you find difficult: dealing with rude commenters.
  • 5 SB blogs that are new to you.
    1. Afarensis. No, it’s not new to SB, but it’s new to me.
    2. A Blog Around the Clock. I used to read one of Coturnix’s blogs back at the old home; now he’s merged three blogs into one here. Good stuff.
    3. Chaotic Utopia. One of my fellow newbies who’s got an obsession with fractals.
    4. Framing Science. The intersection of science and politics.
    5. Terra Sigillata. Taking on alt-woo medicine.
  • 9 non-SB blogs: In no particular order:
    1. Big Dumb Chimp
    2. Rockstar Ramblings
    3. Pandagon
    4. Feministe
    5. World O’ Crap
    6. Making Light
    7. Orcinus
    8. Milieu
    9. Eschaton
  • 2 important features of your blogging environment: this one is actually hard, because I don’t really have a single blogging environment. Best I can come up with is my iPod, and a network connection. (I constantly look at various online sources like Wikipedia, MathWorld, and various people’s webpages to check what I’m writing.)
  • 6 items you would bring to a meet-up with the other ScienceBloggers:
    1. My powerbook. (Or MacBook if I ever get around to upgrading.)
    2. Geeky t-shirt.
    3. Sunglasses (to mask the glare of PZ’s fame 🙂 )
    4. A bottle of good rum. (inside joke)
    5. A Zagats guide. (What’s the point of getting together with fun people, and not going out for good food?)
    6. My lovely wife. She’s also a hopeless geek (computational linguistics), and a true expert at finding the very best food wherever she goes. Plus she can read maps, which I can’t. (I’m actually learning disabled – maps mean absolutely nothing to me. I have no idea how people use them to get places. Really.)
  • 5 conversations you would have before the end of that meet-up: I’m going to cheat a bit… A couple of convos that Janet wants to have would involve me, so I’ll just join in.
    1. With both Abel Pharmboy and Janet about being from NJ.
    2. With Janet about math jokes.
    3. With Orac about the kinds of goofy sciffy we both seem to like.
    4. With Tara about Findlay, Ohio. I spent four years of my childhood outside of New Jersey, and that was in Findlay, Ohio. In another of these memes that circulate around the geeks of the blogosphere, Tara mentioned that she grew up in that miserable little town. I’m curious to find out if it changed after my family left.
    5. With Abel Pharmboy, about alt-woo medicine. I’ve been meaning to take on some of the stupid mathematical arguments used by alt-med types, but I haven’t had the patience to sit through the gunk of their sites to track down the stuff where I can offer something new.

This week’s SB question: What else would I do with my life?

As usual, once a week, the Seed folks send all of us a question from one of the SB readers:

Assuming that time and money were not obstacles, what area of scientific research, outside of your own discipline, would you most like to explore? Why?

I’ve actually got two answers to that question.

First up: theoretical physics. I’m fascinated by the work that’s trying to unify quantum mechanics and relativity: string theory, the shape of extended dimensions, etc. The problem is, I think that this answer is probably cheating, even though it’s my second choice after what I’m doing now. Because what attracted me to what I’m doing is the math: computer science is a science of applied math with a deep theoretical side; and what attracts me to physics is also the beautiful deep math. In fact, the particular parts of physics that most interest me are the parts that are closest to pure math – the shape of dimensions in string theory, the strange topologies that Lisa Randall has been suggesting, etc.

If that’s cheating, and I really have to get away from the math, then I’d have to say evolutionary development, aka evo-devo. Around holiday time last year, PZ posted a list of books for science geeks, and one was by a guy named Sean Carroll (alas, no relation) on evolutionary development. I grabbed the book on his recommendation – and the ways that gene expression drives the development of living things, the way you can recognize the relationships between species by watching how they form; the way you can use the relationships between species to explore how features evolved – it’s just unbelievably cool.

Earth as the center of the universe? Only if you use bad math.

One of my favorite places on the net to find really goofy bad math is Answers in Genesis. When I’m trying to avoid doing real work, I like to wander over there and look at the crazy stuff that people will actually take seriously in order to justify their religion.

In my latest swing by over there, I came across something which is a bizarre argument, but which is actually interesting mathematically. It’s an argument that the Earth (or at least the Milky Way) must be at the center of the universe, because when we look at the redshifts of other objects in the universe, they appear to be quantized.

Here’s the short version of the argument, in their own words:

Over the last few decades, new evidence has surfaced that restores man to a central place in God’s universe. Astronomers have confirmed that numerical values of galaxy redshifts are ‘quantized’, tending to fall into distinct groups. According to Hubble’s law, redshifts are proportional to the distances of the galaxies from us. Then it would be the distances themselves that fall into groups. That would mean the galaxies tend to be grouped into (conceptual) spherical shells concentric around our home galaxy, the Milky Way. The shells turn out to be on the order of a million light years apart. The groups of redshifts would be distinct from each other only if our viewing location is less than a million light years from the centre. The odds for the Earth having such a unique position in the cosmos by accident are less than one in a trillion. Since big bang theorists presuppose the cosmos has naturalistic origins and cannot have a unique centre, they have sought other explanations, without notable success so far. Thus, redshift quantization is evidence (1) against the big bang theory, and (2) for a galactocentric cosmology, such as one by Robert Gentry or the one in my book, Starlight and Time.

This argument is actually an interesting combination of mathematical cluelessness and mathematical depth.

If you make the assumption that the universe is the inside of a giant sphere, expanding from a center point, then quantized redshift would be pretty surprising. Not just if you weren’t at the center of the universe – if the universe is an essentially flat shape, then a quantized redshift is very hard to explain. That’s because a quantized redshift in a “flat” universe would imply that things were expanding in a sequence of discrete shells, which would be quite a strange discovery.

But: if for some reason it were quantized, then no matter where you are, you will continue to see some degree of quantization in the motion of other objects. What you’d see is different redshifts – but they’d appear in a sort of stepped form: looking in any particular direction, you’d see a series of quantized shifts; looking in a different direction, you’d see a different series of shifts. The only place you’d get a perfectly uniform set of quantized shifts would be at the geometric center.
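
You can see that last point in a toy simulation (entirely mine, with arbitrary units): put matter on concentric shells around a center, and measure how sharply the observed distances – standing in for redshifts, via Hubble’s law – cluster at the shell spacing, first for an observer at the center and then for one that’s offset.

    import math, random

    random.seed(1)

    # Matter arranged on concentric shells of radius 1, 2, ..., 10 around a center.
    points = []
    for r in range(1, 11):
        for _ in range(2000):
            z = random.uniform(-1.0, 1.0)              # random direction on the shell
            theta = random.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - z * z)
            points.append((r * s * math.cos(theta), r * s * math.sin(theta), r * z))

    def smear(observer):
        # Average distance of each observation from the nearest whole-number
        # shell radius: 0 means a perfectly stepped, "quantized" pattern.
        dists = [math.dist(observer, p) for p in points]
        return sum(abs(d - round(d)) for d in dists) / len(dists)

    print(smear((0.0, 0.0, 0.0)))   # ~0: at the center, every shell lines up exactly
    print(smear((3.0, 0.0, 0.0)))   # ~0.25: off-center, the single global pattern washes out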

What the AiG guys ignore is that in a flat geometry, quantized redshifts are incredibly hard to explain no matter what. Even if our galaxy were dead center in a uniform flat universe, quantized redshifts – which imply discrete shells of matter in an expanding universe – are incredibly difficult to account for.

The flat geometry is a fundamental assumption of the AiG guys. If the universe is not flat, then the whole requirement for us to be at the center of things goes right out the window. For example, if our universe is the 3-dimensional surface of a four-dimensional sphere – then if you see a quantized shift anywhere, you’ll see a quantized redshift everywhere. In fact, there are a lot of geometries that are much more likely to present a quantized redshift, and none of them except the flat one require any strange assumptions like “we’re in the dead center of the entire universe”. So do they make any argument to justify the assumption of a flat universe? No. Of course not. In fact, they just simply mock it:

They picture the galaxies like grains of dust all over the surface of the balloon. (No galaxies would be inside the balloon.) As the expansion proceeds, the rubber (representing the ‘fabric’ of space itself) stretches outward. This spreads the dust apart. From the viewpoint of each grain, the others move away from it, but no grain can claim to be the unique centre of the expansion. On the surface of the balloon, there is no centre. The true centre of the expansion would be in the air inside the balloon, which represents ‘hyperspace’, beyond the perception of creatures confined to the 3-D ‘surface’.

That’s the most substantive part of the section where they “address” the geometry of the universe. It’s not handled at all as an issue that needs to be seriously considered – but just as some ridiculously bizarre and impossible notion dreamed up by a bunch of eggheads looking for excuses to deny god. Even though it explains exactly what they’re trying to say can’t be explained.

But hey, let’s ignore that. Even if we do assume something ridiculous like a flat universe with us at the center, the quantized redshift is surprising. They specifically quote one of the discoverers of the quantization of redshift making this point; only they don’t understand what he’s saying:

‘The redshift has imprinted on it a pattern that appears to have its origin in microscopic quantum physics, yet it carries this imprint across cosmological boundaries.’ 39

Thus secular astronomers have avoided the simple explanation, most not even mentioning it as a possibility. Instead, they have grasped at a straw they would normally disdain, by invoking mysterious unknown physics. I suggest that they are avoiding the obvious because galactocentricity brings into question their deepest worldviews. This issue cuts right to the heart of the big bang theory–its naturalistic evolutionist presuppositions.

This is a really amazing miscomprehension here. They’re so desperate to discredit scientific explanation that they quote things that mean the dead opposite of what they say they mean. They want it to say that the quantized redshift is unexplainable unless you believe that our galaxy is at the center of the universe. But that’s not what it says. What it says is that quantized redshift is a surprising thing, period. It doesn’t matter where in the universe we are; if we’re at an absolute center (if there is such a thing), or if we’re on an edge, or if we’re in a random location in a geometry without an edge: it’s surprising.

But it is explainable by the very theories that they’re disdaining as they quote him; and in fact, his quote explains it. The best theory for the quantization of the redshift is that in the very earliest moments of the universe, quantum fluctuations created a non-uniformity in the distribution of what became matter in the universe. As the universe expanded, that tiny quantum effect eventually ended up producing galaxies, galactic structures, and the quantized distribution of objects in the observable universe.

The quotation about the redshift pattern isn’t attempting to explain away some observation that suggests that we’re at the center of the universe. It’s trying to explain something far deeper than that. Whatever the shape of the universe, whatever our location in the universe, whether or not the phrase “the center of the universe” has any meaning at all, the quantized redshift is an amazing, surprising thing. And what’s particularly exciting about it is that this very large-scale phenomenon is a directly observable result of some of the smallest-scale phenomena in our universe.

Aside from that, they engage in what I call obfuscatory mathematics. There are a bunch of equations scattered through the article. None of the equations are particularly enlightening; none of them actually add any real information to the article or strengthen their arguments in any way. They’re just there to add the gloss of credibility that you get from having equations in your article: “Oh, look, they must know what they’re talking about, they used math!”.

Some Basic Examples of Categories

For me, the frustrating thing about learning category theory was that
it seemed to be full of definitions, but that I couldn’t see why I should care.
What were these category things, and what could I really talk about using this
strange new mathematical language of categories?

To avoid that in my presentation, I’m going to show you a couple of examples up front of things we can talk about using the language of category theory: sets, partially ordered sets, and groups.

Sets as a Category

We can talk about sets using category theory. The objects in the category of sets are, obviously, sets. Arrows in the category of sets are total functions between sets.

Let’s see how these satisfy our definition of categories:

  • Given a function f from set A to set B, it’s represented by an arrow f : A → B.
  • º is function composition. It meets the properties of a categorical º:
    • Associativity: function composition over total functions is associative; we know that from set theory.
    • Identity: for any set S, 1_S is the identity function: (∀ i ∈ S) 1_S(i) = i. It should be pretty obvious that for any f : S → T, f º 1_S = f, and 1_T º f = f.
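
If it helps, here’s that bookkeeping spelled out in Python (a toy example of mine): arrows are ordinary functions between finite sets, and º is function composition.

    S = {1, 2, 3}
    T = {"a", "b", "c"}

    f = lambda i: "abc"[i - 1]                    # an arrow f : S -> T
    id_S = lambda i: i                            # the identity arrow 1_S
    id_T = lambda c: c                            # the identity arrow 1_T
    compose = lambda g, h: (lambda x: g(h(x)))    # (g o h)(x) = g(h(x))

    # The identity laws: f o 1_S = f and 1_T o f = f.
    print(all(compose(f, id_S)(i) == f(i) for i in S))   # True
    print(all(compose(id_T, f)(i) == f(i) for i in S))   # True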

Partially Ordered Sets

Partially ordered sets (that is, sets that have a "<=" operator) can be described as a category, usually called PoSet. The objects are the partially ordered sets; the arrows are monotonic functions (a function f is monotonic if (∀ x,y ∈ domain(f)) x <= y ⇒ f(x) <= f(y)). As with sets, º is function composition.

It’s pretty easy to show the associativity and identity properties; it’s basically the same as for sets, except that we need to show that º preserves the monotonicity property. And that’s not very hard:

  • Suppose we have arrows f : A → B, g : B → C. We know that f and g are monotonic
    functions.

  • Now, for any pair of x and y in the domain of f, we know that if x <= y, then f(x) <= f(y).
  • Likewise, for any pair s,t in the domain of g, we know that if s <= t, then g(s) <= g(t).
  • Put those together: if x <= y, then f(x) <= f(y). f(x) and f(y) are in the domain of g, so if f(x) <= f(y), then we know g(f(x)) <= g(f(y)). In other words, g º f is monotonic.
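
Here’s that argument checked on a concrete pair of monotonic functions (my own arbitrary picks):

    f = lambda x: 2 * x          # monotonic: x <= y implies f(x) <= f(y)
    g = lambda x: x * x * x      # cubing also preserves order, even for negatives

    def is_monotonic(h, domain):
        return all(h(x) <= h(y) for x in domain for y in domain if x <= y)

    domain = range(-5, 6)
    print(is_monotonic(f, domain), is_monotonic(g, domain))   # True True
    print(is_monotonic(lambda x: g(f(x)), domain))            # True: the composite is monotonic too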

Groups as a Category

There is a category Grp where the objects are groups; group homomorphisms are arrows. Homomorphisms are structure-preserving functions between groups; so function composition of those structure-preserving functions is the composition operator º. The proof that function composition preserves structure is pretty much the same as the proof we just ran through for partially ordered sets.

Once you have groups as a category, then you can do something very cool. If groups are a category, then functors from groups (viewed as one-object categories) to sets are symmetric transformations – group actions. Walk it through, and you’ll see that it fits. What took me a week of writing to be able to explain when I was talking about group theory can be stated in one sentence using the language of category theory. That’s a perfect example of why cat theory is useful: it lets you say some very important, very complicated things in very simple ways.

Miscellaneous Comments

There’ve been a couple of questions from category theory skeptics in the comments. Please don’t think I’m ignoring you. This stuff is confusing enough for most people (me included) that I want to take it slowly, just a little bit at a time, to give readers an opportunity to digest each bit before going on to the next. I promise that I’ll answer your questions eventually!