Creationists Respond to Debunking Dembski

While perusing my sitemeter stats for the page, I noticed that I’d been linked to in a discussion at creationtalk.com. Expecting amusement, I wandered on over to see who was linking to me.
Someone linked to my index of articles debunking Dembski and Berlinski. The moderator of the creationtalk forum responded to my series of articles on information theory and Dembski with:

No offense to you or him, but his arguments kind of suck. I looked at his response to Behe on IC, and Dembski on Specified Complexity , to Behe’s he didn’t refute it, and to Dembski’s his only arguement was basically summed up to “I don’t know the definition of specified complexity oh mercy”.

For readers who remember, my critique of Behe was that the entire concept of “irreducible complexity” is mathematically meaningless. It’s true that I didn’t refute Behe, in the sense that I didn’t waste any time arguing about whether or not irreducible complexity is indicative of design: there’s no point arguing about the implications of an irreducibly complex system if, in fact, we can never recognize whether a system is irreducibly complex. Sort of like arguing about how many steps it takes to square a circle, after you’ve seen the proof that it can’t be done in a finite number of steps.
But the Dembski line is the one that’s particularly funny. Because, you see, my critique of “specified complexity” was that you can’t mathematically refute specified complexity, because Dembski never defines it. In paper after paper, he uses obfuscatory presentations of information theory to define complexity, and then handwaves his way past “specification”. The reason for this is that “specification” is a meaningless term. He can’t define it, because if he did, the vacuity of the entire concept would become obvious.
A complex system is one that contains a lot of information, which, in information theory, means a system that has no brief description. But specification, intuitively, means “can be described concisely”. So you wind up with two possibilities:

  1. “Specification” has a mathematical meaning, which is the opposite of “complexity”, in which case “specified complexity” is a contradiction in terms; or

  2. “Specification” is mathematically meaningless, in which case “specified complexity” is a meaningless concept in information theory.

The problem isn’t that “I don’t know the definition of specified complexity”. It’s not even that there is no definition of specified complexity. It’s that there cannot be a definition of specified complexity.
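To make the information-theory intuition concrete, here’s a tiny Haskell illustration. It’s my own toy example, not anything from Dembski’s papers, of what “can’t be described briefly” means: a string of a million repeated characters is completely specified by a 21-character expression, so in the Kolmogorov sense it carries almost no information; a typical random string of the same length has no description much shorter than the string itself. (Real Kolmogorov complexity is uncomputable; this is strictly an intuition pump.)

    -- A toy illustration (mine) of "complexity" as description length.
    -- The expression on the right-hand side below is itself a complete,
    -- 21-character description of a million-character string: low complexity.
    ordered :: String
    ordered = replicate 1000000 '1'

    main :: IO ()
    main = do
      putStrLn $ "length of the string itself: " ++ show (length ordered)
      putStrLn $ "length of its description:   " ++ show (length "replicate 1000000 '1'")
      -- A typical random string of the same length has no description much
      -- shorter than the string itself; that is what "complex" means here.
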
I’ll probably drag out my original Dembski and Berlinski articles tomorrow, polish them up a bit, and repost them here at ScienceBlogs.

Nutty Numerology and Stonehenge

As readers of GM/BM at the old site know, one of the things that I love to shred is trashy numerology. I also have a great dislike for the tendency of many modern pseudo-researchers to insist that ancient people were hopelessly naive or stupid.
I’ve found a delightful little article about Stonehenge that combines these two themes.
Stonehenge is quite an interesting construct. To the best of our knowledge, it was a sort of observatory: the patterns of stones and their arrangement are structured to allow a number of astronomical observations and predictions of astronomical events. This is a very cool thing.
Altie pseudo-researchers consistently insist that the people who lived around Stonehenge when it was constructed could not possibly have built it. This claim rests on the assumption of naivety that I mentioned above. For example, from the linked article:

Research has revealed that before the Sarsen Circle of upright stones was erected, a 285 foot diameter circle of 56 chalk holes, 3 feet in diameter, was created. (These are called the Aubrey Holes, in honor of John Aubrey).
A CBS TV program in the 1960’s ran a computer analysis of the Aubrey circle. They declared that Stonehenge’s location–latitude 51 degrees 11 minutes, was a very special location for eclipses of the moon. This location produces moon eclipses in the repeating sequence of 19 years, 19 years, and 18 years.
Adding 19+19+18=56. Thus if the white 3 foot diameter chalk holes were covered by a black stone, that was moved around the circle in synch with the passage of moon cycles, the black stone would arrive at the heal stone position, on the exact day when a moon eclipse would occur. (Eclipse computer.) (S.I.D.)
How could this stone computer have been created without the precise knowledge of the celestial mechanics of this unique geographic location? Certainly this was not the work of the early tribes that lived on this Salisbury Plain, thousands of years ago.

And why could it not have been the work of the people who lived there? Why, because it would have required careful observation of the skies by primitive people, and the recognition of simple arithmetical patterns in the repetitions of astronomical events. And obviously, things like repeated observations and arithmetic were clearly beyond the abilities of “early tribes”.
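Just to make that arithmetic concrete, here’s a toy Haskell sketch of the counting scheme the quoted article describes. The 19/19/18-year eclipse intervals come straight from the quote; the “advance one marker one hole per year” rule is my own simplification for illustration. The point is only that, because 19 + 19 + 18 = 56, a 56-hole ring used as a modulo-56 counter flags every eclipse year with the same three holes, forever. That’s nothing more than repeated observation plus very simple arithmetic.

    -- Toy model of the 56-hole "eclipse computer" arithmetic from the quoted article.
    -- Assumption (mine, for illustration): one marker, advanced one hole per year.
    -- The quoted claim: eclipses at this latitude recur at 19-, 19-, 18-year intervals.

    holes :: Int
    holes = 56

    -- Years (counted from an eclipse in year 0) in which eclipses fall,
    -- following the quoted 19/19/18 pattern.
    eclipseYears :: [Int]
    eclipseYears = scanl (+) 0 (cycle [19, 19, 18])

    -- The hole the marker sits in during a given year.
    markerHole :: Int -> Int
    markerHole year = year `mod` holes

    main :: IO ()
    main = do
      -- Because 19 + 19 + 18 = 56, eclipse years always land on the same three holes:
      print (take 9 (map markerHole eclipseYears))
      -- prints [0,19,38,0,19,38,0,19,38]
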
But there’s also something else quite interesting in the quote above – something that demonstrates the fundamental cluelessness of the writer. The author is correct that the geographic location of Stonehenge is important. If you move it 100 miles north, it doesn’t work nearly as well as an instrument of observation or prediction.
There are two ways that the location of an artifact like Stonehenge could have been selected. One is the approach that the author of the linked article takes: we believe that someone wanted an artifact that could observe and predict astronomical events, and therefore computed the perfect position for making those observations. Doing that computation to select the location where the artifact should be placed would require a lot of knowledge, and some reasonably complicated math.
But there’s another way that the location could have been chosen. Suppose you have a large number of people living on a relatively small island. Astronomical events like eclipses are very important to them. There is a location on the island where the pattern of events is clearest. Go north; things become less regular. Go south, things become less regular. But at the right location, you get the strongest pattern.
Now: add in a tradition where the people who do the astronomical observations/predictions are travellers.
Will the observers notice the pattern of astronomical events? Will they notice that in a certain location, the pattern becomes most regular?
If you don’t assume that people a few thousand years ago were stupid, then of course they will!

This S.I.D. (Stored Information Device) clearly displays enormous information about planet Earth’s celestial relationships with the Sun, the Moon and the rotational speed of our planet.

Yes, that’s true. Does that necessarily imply that the people who built it knew about the real structure of the solar system, and the sizes, distances, and velocities of the bodies in it? No. It means that they were aware of relationships. As the author points out, lunar eclipses occur in a regular pattern at this location. That pattern is a consequence of the positions and motions of celestial bodies; but you don’t need to know those positions and velocities. You only need to know the observable relationships between their motions, and those can be seen directly by anyone who watches the sky carefully for long enough.
To give a simple example of this kind of thing: there’s a right triangle whose sides have lengths 1, 2, and the square root of three. One way to draw it is to start with a horizontal line 1 inch long, draw a vertical line from its endpoint whose height is sqrt(3) inches, and then draw the hypotenuse. To do that, you need to be able to compute and measure the square root of three, which requires something beyond simple arithmetic. But there’s another way: draw a horizontal line of length 1; draw a long vertical line up from one of its endpoints; then take a ruler anchored at the other endpoint, and rotate it until the distance from that endpoint to its intersection with the vertical line is exactly 2 inches. The second way gives you the same triangle without ever computing a root.
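Here’s the second construction as a numeric sketch in Haskell (the function names are mine). We slide the intersection point up the vertical line until the ruler from the far endpoint reads exactly 2. The search only ever compares lengths (squared, so it’s nothing but multiplication and comparison), yet the height it settles on is the square root of three.

    -- The "rotate the ruler" construction, as a numeric sketch.
    -- A = (0,0), B = (1,0); slide the point C = (1, h) up the vertical line
    -- through B until the ruler from A to C reads exactly 2.

    rulerReadsAtLeast2 :: Double -> Bool
    rulerReadsAtLeast2 h = 1*1 + h*h >= 2*2   -- compare |AC|^2 with 2^2: pure arithmetic

    -- Binary search on the height h: the numeric analogue of rotating the ruler.
    findHeight :: Double -> Double -> Double
    findHeight lo hi
      | hi - lo < 1e-12        = mid
      | rulerReadsAtLeast2 mid = findHeight lo mid
      | otherwise              = findHeight mid hi
      where mid = (lo + hi) / 2

    main :: IO ()
    main = do
      print (findHeight 0 10)   -- ~1.7320508...
      print (sqrt 3)            -- 1.7320508075688772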

Following the stone computer, came the erection of the 30 upright stones that formed the Sarsen Circle, 100 feet in diameter. (My question was why 30? I divided 360 degrees by 30 and discovered the number 12. The number 12 is one of the most important numbers in the Anunnaki civilization…their Pantheon consisted of the Twelve Great Anunnaki gods, they declared 12 months in one year-2 twelve hour parts of each day, they created the 12 signs of the Zodiac. These Sarsen uprights are harder than granite and weigh 25 tons each. They were quarried at Marlborough Downs using tools not locally available at that time, and then transported these huge stones over 20 miles to this site.

And now we get the trashy numerology.
Why are there twelve months in a year in pretty much every calendar we know of? Why is the number of months equal to the number of signs of the zodiac? Could it be, perhaps, that the moon goes around the earth pretty darned close to twelve times a year? You know, the moon? That thing that’s the most obvious thing in the night sky? That thing that’s perfectly correlated with the tides?
No. Not that. Couldn’t be. Must be aliens.

More Category Theory: Getting into Functors

Let’s talk a bit about functors. Functors are fun!

What’s a functor? I already gave the short definition: a structure-preserving mapping between categories. Let’s be a bit more formal. What does the structure-preserving property mean?

A functor F from category C to category D is a mapping from C to D that:

  • Maps each object o in Obj(C) to an object F(o) in Obj(D).
  • Maps each arrow a : x → y in Mor(C) to an arrow F(a) : F(x) → F(y), where:
    • ∀ o ∈ Obj(C): F(1_o) = 1_{F(o)}. (Identity is preserved by the functor mapping of morphisms.)
    • ∀ a : x → y, b : y → z in Mor(C): F(b º a) = F(b) º F(a). (Composition is preserved by the functor mapping of morphisms.)

That’s the standard textbook gunk for defining a functor. But if you look back at the original definition of a category, you should notice that this looks familiar. In fact, it’s almost identical to the definition of the necessary properties of arrows!
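
If you program in Haskell, you’ve already met a restricted version of this definition: the Functor type class. (Haskell Functors are endofunctors on the category of Haskell types, so this is a special case rather than the general definition.) fmap sends each arrow g : a → b to an arrow fmap g : f a → f b, and the two “functor laws” that instances are expected to satisfy are exactly the identity and composition conditions above. A quick spot check with the list functor (a few test cases, not a proof):

    -- The functor laws, checked on a few sample inputs for the list functor:
    --   fmap id      == id                 (identity is preserved)
    --   fmap (g . h) == fmap g . fmap h    (composition is preserved)

    main :: IO ()
    main = do
      let xs = [1, 2, 3] :: [Int]
          g  = (* 2)
          h  = (+ 1)
      print (fmap id xs == id xs)                      -- True
      print (fmap (g . h) xs == (fmap g . fmap h) xs)  -- True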

We can make functors much easier to understand by talking about them in the language of categories themselves. Functors are the arrows of a category of categories.

There’s a kind of category, called a small category. (I happen to dislike the term “small” category; I’d prefer something like “well-behaved”.) A small category is a category C where Obj(C) and Mor(C) are sets, not proper classes. Alas, more definitions: in set theory, a class is a collection of sets that can be defined by a non-paradoxical property shared by all of its members. Some classes are themselves sets; others are not – they lack some of the properties required of sets – but each is still a collection picked out by a well-defined, non-paradoxical, unambiguous property. A class that is not a set is called a proper class.

Any category whose collections of objects and arrows are sets, not proper classes, is called a small category. Small categories are, basically, categories that are well-behaved – meaning that their collections of objects and arrows don’t have any of the obnoxious properties that would prevent them from being sets.

The small categories are, quite beautifully, the objects of a category called Cat. (For some reason, category theorists like three-letter labels.) The arrows of Cat are the functors: functors are morphisms between small categories. Once you wrap your head around that, the meaning of a functor, and the meaning of a structure-preserving transformation, become extremely easy to understand.

Functors come up over and over again, all over mathematics. They’re an amazingly useful notion. I was looking for a list of examples of things that you can describe using functors, and found a really wonderful list on Wikipedia. I highly recommend following that link and taking a look at the list. I’ll just mention one particularly interesting example: groups and group actions.

If you’ve been reading GM/BM long enough, you’ll remember my posts on group theory, and how long it took for me to work up to the point where I could really define what symmetry meant, and how every symmetric transformation was actually a group action. Category theory makes that much easier.

Every group can be represented as a category with a single object. A functor from the category of a group to the category of Sets is a group action on the set that is the target of the functor. Poof! Symmetry. (This paragraph was modified to correct some ambiguous wording; the ambiguity was pointed out by commenter “Cat lover”.)

Since symmetry means structure-preserving transformation, and a functor is a structure-preserving transformation, they’re almost the same thing. The functor is an even more general abstraction of that concept: group symmetry is just one particular case of a functor transformation. Once you get functors, understanding symmetry is easy. And so are lots of other things.

And of course, you can always carry these things further. There is a category of functors themselves; and there are notions which are most easily understood in terms of functors operating on the category of functors!

By now, it should be sort of clear why category theory is affectionately known as abstract nonsense. Category theory operates at a level of abstraction where almost anything can be wrapped up in it; and once you’ve wrapped something up in a category, almost anything you can do with it can itself be wrapped up as a category – levels upon levels, categories of categories, categories of functors on categories of functors on categories, ad infinitum. And yet, it makes sense. It captures a useful, comprehensible notion. All that abstraction, to the point where it seems like nothing could possibly come out of it. And then out pops a piece of beautiful crystal.

A Mathematical Meme from Janet

Janet over at Adventures in Ethics and Science has tagged all of us newbies with a Pi meme. As the new math-geek-in-residence here, I’m obligated to take on anything dealing with Pi.

  • 3 reasons you blog about science
    1. Because I genuinely enjoy teaching, and the one thing that I regret
      about being in industry instead of academia is that I don’t get to teach.
      Blogging gives me an opportunity to do something sort-of like teaching,
      but on my own terms and my own schedule.

    2. Because I’m obsessed with this stuff, and I love it, and I want to try
      to show other people why they should love it too.

    3. Because I’m a thoroughly nasty person who enjoys mocking idiots.
  • Point at which you would stop blogging: This is an easy one. If it were to stop being fun.
  • 1 thing you frequently blog besides science: I could cheat here, and say math. But that would be cheating, and we know math geeks never cheat, right? So that leaves music. I come from a family of musicians; my older brother was a professional french horn player and composer (before he went nutso and became a fundie ultra-orthodox rabbi); my younger sister is a music teacher. As a techie, I’m the black sheep of the family :-).
  • 4 words that describe your blogging style: I’ll pick words that I’ve gotten in real feedback from readers. (1) informative, (2) engaging, (3) obnoxious, and (4) arrogant. (Guess which ones were feedback from people who were targets of bad-math critiques?)
  • 1 aspect of blogging you find difficult: dealing with rude commenters.
  • 5 SB blogs that are new to you.
    1. Afarensis. No, it’s not new to SB, but it’s new to me.
    2. A Blog Around the Clock. I used to read one of Coturnix’s blogs back at the old home; now he’s merged three blogs into one here. Good stuff.
    3. Chaotic Utopia. One of my fellow newbies who’s got an obsession with fractals.
    4. Framing Science. The intersection of science and politics.
    5. Terra Sigillata. Taking on alt-woo medicine.
  • 9 non-SB blogs: In no particular order:
    1. Big Dumb Chimp
    2. Rockstar Ramblings
    3. Pandagon
    4. Feministe
    5. World O’ Crap
    6. Making Light
    7. Orcinus
    8. Milieu
    9. Eschaton
  • 2 important features of your blogging environment: this one is actually hard, because I don’t really have a single blogging environment. The best I can come up with is my iPod, and a network connection. (I constantly look at various online sources like Wikipedia, MathWorld, and various people’s webpages to check what I’m writing.)
  • 6 items you would bring to a meet-up with the other ScienceBloggers:
    1. My powerbook. (Or MacBook if I ever get around to upgrading.)
    2. Geeky t-shirt.
    3. Sunglasses (to mask the glare of PZ’s fame 🙂 )
    4. A bottle of good rum. (inside joke)
    5. A Zagats guide. (What’s the point of getting together with fun people, and not going out for good food?)
    6. My lovely wife. She’s also a hopeless geek (computational linguistics), and a true expert at finding the very best food wherever she goes. Plus she can read maps, which I can’t. (I’m actually learning disabled – maps mean absolutely nothing to me. I have no idea how people use them to get places. Really.)
  • 5 conversations you would have before the end of that meet-up: I’m going to cheat a bit… A couple of convos that Janet wants to have would involve me, so I’ll just join in.
    1. With both Abel Pharmboy and Janet about being from NJ.
    2. With Janet about math jokes.
    3. With Orac about the kinds of goofy sciffy we both seem to like.
    4. With Tara about Findlay, Ohio. I spent four years of my childhood outside of New Jersey, and that was in Findlay, Ohio. In another of these memes that circulate around the geeks of the blogosphere, Tara mentioned that she grew up in that miserable little town. I’m curious to find out if it changed after my family left.
    5. With Abel Pharmboy, about alt-woo medicine. I’ve been meaning to take on some of the stupid mathematical arguments used by alt-med types, but I haven’t had the patience to sit through the gunk of their sites to track down the stuff where I can offer something new.

This week’s SB question: What else would I do with my life?

As usual, once a week, the Seed folks send all of us a question from one of the SB readers:

Assuming that time and money were not obstacles, what area of scientific research, outside of your own discipline, would you most like to explore? Why?

I’ve actually got two answers to that question.

First up: theoretical physics. I’m fascinated by the work that’s trying to unify quantum mechanics and relativity: string theory, the shape of extended dimensions, etc. The problem is, I think that this answer is probably cheating, even though it’s my second choice after what I’m doing now. Because what attracted me to what I’m doing is the math: computer science is a science of applied math with a deep theoretical side; and what attracts me to physics is also the beautiful deep math. In fact, the particular parts of physics that most interest me are the parts that are closest to pure math – the shape of dimensions in string theory, the strange topologies that Lisa Randall has been suggesting, etc.

If that’s cheating, and I really have to get away from the math, then I’d have to say evolutionary developmental biology, aka evo-devo. Around holiday time last year, PZ posted a list of books for science geeks, and one was by a guy named Sean Carroll (alas, no relation) on evo-devo. I grabbed the book on his recommendation – and the way that gene expression drives the development of living things, the way you can recognize the relationships between species by watching how they form, the way you can use those relationships to explore how features evolved – it’s just unbelievably cool.

Earth as the center of the universe? Only if you use bad math.

One of my favorite places on the net to find really goofy bad math is Answers in Genesis. When I’m trying to avoid doing real work, I like to wander over there and look at the crazy stuff that people will actually take seriously in order to justify their religion.

In my latest swing by over there, I came across something which is a bizarre argument, but which is actually interesting mathematically. It’s an argument that the earth (or at least the Milky Way) must be at the center of the universe, because when we look at the redshifts of other objects in the universe, they appear to be quantized.

Here’s the short version of the argument, in their own words:

Over the last few decades, new evidence has surfaced that restores man to a central place in God’s universe. Astronomers have confirmed that numerical values of galaxy redshifts are ‘quantized’, tending to fall into distinct groups. According to Hubble’s law, redshifts are proportional to the distances of the galaxies from us. Then it would be the distances themselves that fall into groups. That would mean the galaxies tend to be grouped into (conceptual) spherical shells concentric around our home galaxy, the Milky Way. The shells turn out to be on the order of a million light years apart. The groups of redshifts would be distinct from each other only if our viewing location is less than a million light years from the centre. The odds for the Earth having such a unique position in the cosmos by accident are less than one in a trillion. Since big bang theorists presuppose the cosmos has naturalistic origins and cannot have a unique centre, they have sought other explanations, without notable success so far. Thus, redshift quantization is evidence (1) against the big bang theory, and (2) for a galactocentric cosmology, such as one by Robert Gentry or the one in my book, Starlight and Time.

This argument is actually an interesting combination of mathematical cluelessness and mathematical depth.

If you make the assumption that the universe is the inside of a giant sphere, expanding from a center point, then quantized redshift would be pretty surprising, and not just if you weren’t at the center of the universe. If the universe has an essentially flat shape, then a quantized redshift is very hard to explain at all, because it would imply that things were expanding in a sequence of discrete shells, which would be quite a strange discovery.

But if, for some reason, it were quantized, then no matter where you were, you would still see some degree of quantization in the motions of other objects. What you’d see is different redshifts, but they’d appear in a sort of stepped form: looking in any particular direction, you’d see one series of quantized shifts; looking in a different direction, you’d see a different series. The only place you’d get a perfectly uniform set of quantized shifts would be at the geometric center.
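
To see why, here’s a toy numerical sketch in Haskell, with made-up numbers: put “galaxies” on concentric shells spaced a distance d apart around some center, and put the observer a little way off that center. Along any one line of sight, the distances to successive shells still step by roughly d, but where the steps fall depends on the direction you look; only an observer at the exact center sees the same comb of distances in every direction.

    -- Toy sketch (mine) of the off-center observer point. Shells of "galaxies"
    -- sit at radii k*d around a center; the observer sits a distance `off`
    -- from that center.

    shellSpacing, off :: Double
    shellSpacing = 1.0   -- shell spacing d, in arbitrary units
    off          = 0.3   -- observer's distance from the center

    -- Distance from the observer to the k-th shell along viewing direction phi.
    distToShell :: Double -> Int -> Double
    distToShell phi k =
      let r = shellSpacing * fromIntegral k
      in  negate (off * cos phi) + sqrt (r*r - (off * sin phi)^2)

    main :: IO ()
    main = do
      let shells = [1 .. 6]
      putStrLn "looking one way (phi = 0):"
      print (map (distToShell 0) shells)
      putStrLn "looking the opposite way (phi = pi):"
      print (map (distToShell pi) shells)
      -- Both lists step by ~1.0 (the shell spacing), but they are shifted
      -- relative to each other; only when off = 0 do all directions agree.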

What the AiG guys ignore is that in a flat geometry, quantized redshifts are incredibly hard to explain, period. Even if our galaxy were dead center in a uniform flat universe, quantized redshifts – which imply discrete shells of matter in an expanding universe – would still be incredibly difficult to explain.

The flat geometry is a fundamental assumption of the AiG guys. If the universe is not flat, then the whole requirement for us to be at the center of things goes right out the window. For example, if our universe is the 3-dimensional surface of a four-dimensional sphere, then if you see a quantized redshift anywhere, you’ll see a quantized redshift everywhere. In fact, there are a lot of geometries that are much more likely to present a quantized redshift, and none of them except the flat one require any strange assumptions like “we’re in the dead center of the entire universe”. So do they make any argument to justify the assumption of a flat universe? No. Of course not. In fact, they simply mock it:

They picture the galaxies like grains of dust all over the surface of the balloon. (No galaxies would be inside the balloon.) As the expansion proceeds, the rubber (representing the ‘fabric’ of space itself) stretches outward. This spreads the dust apart. From the viewpoint of each grain, the others move away from it, but no grain can claim to be the unique centre of the expansion. On the surface of the balloon, there is no centre. The true centre of the expansion would be in the air inside the balloon, which represents ‘hyperspace’, beyond the perception of creatures confined to the 3-D ‘surface’.

That’s the most substantive part of the section where they “address” the geometry of the universe. It’s not handled as an issue that needs to be seriously considered, but just as some ridiculously bizarre and impossible notion dreamed up by a bunch of eggheads looking for excuses to deny god. Even though it explains exactly the thing they claim can’t be explained.

But hey, let’s ignore that. Even if we do assume something ridiculous like a flat universe with us at the center, the quantized redshift is still surprising. They specifically quote one of the discoverers of redshift quantization making this point; they just don’t understand what he’s saying:

‘The redshift has imprinted on it a pattern that appears to have its origin in microscopic quantum physics, yet it carries this imprint across cosmological boundaries.’ 39

Thus secular astronomers have avoided the simple explanation, most not even mentioning it as a possibility. Instead, they have grasped at a straw they would normally disdain, by invoking mysterious unknown physics. I suggest that they are avoiding the obvious because galactocentricity brings into question their deepest worldviews. This issue cuts right to the heart of the big bang theory–its naturalistic evolutionist presuppositions.

This is a really amazing miscomprehension. They’re so desperate to discredit scientific explanations that they quote things that mean the dead opposite of what they claim they mean. They want the quote to say that the quantized redshift is unexplainable unless you believe that our galaxy is at the center of the universe. But that’s not what it says. What it says is that quantized redshift is a surprising thing at all. It doesn’t matter where in the universe we are: at an absolute center (if there is such a thing), on an edge, or at a random location in a geometry without an edge, it’s surprising.

But it is explainable by the very theories that they’re disdaining as they quote him; and in fact, his quote explains it. The best theory for the quantization of the redshift is that in the very earliest moments of the universe, quantum fluctuations created a non-uniformity in the distribution of what became the matter in the universe. As the universe expanded, that tiny quantum effect eventually ended up producing galaxies, galactic structures, and the quantized distribution of objects in the observable universe.

The quotation about the redshift pattern isn’t attempting to explain away some observation that suggests that we’re at the center of the universe. It’s trying to explain something far deeper than that. Whatever the shape of the universe, whatever our location in the universe, whether or not the phrase “the center of the universe” has any meaning at all, the quantized redshift is an amazing, surprising thing. And what’s particularly exciting about it is that this very large-scale phenomenon is a directly observable result of some of the smallest-scale phenomena in our universe.

Aside from that, they engage in what I call obfuscatory mathematics. There are a bunch of equations scattered through the article. None of the equations are particularly enlightening; none of them actually add any real information to the article or strengthen their arguments in any way. They’re just there to add the gloss of credibility that you get from having equations in your article: “Oh, look, they must know what they’re talking about, they used math!”

Some Basic Examples of Categories

For me, the frustrating thing about learning category theory was that
it seemed to be full of definitions, but that I couldn’t see why I should care.
What were these category things, and what could I really talk about using this
strange new mathematical language of categories?

To avoid that in my presentation, I’m going to show you a couple of examples up front of things we can talk about using the language of category theory: sets, partially ordered sets, and groups.

Sets as a Category

We can talk about sets using category theory. The objects in the category of sets are, obviously, sets. Arrows in the category of sets are total functions between sets.

Let’s see how these satisfy our definition of categories:

  • Given a function f from set A to set B, it’s represented by an arrow f : A → B.
  • º is function composition. It meets the properties of a categorical º:
    • Associativity: function composition over total functions is associative; we know that from set theory.
    • Identity: for any set S, 1_S is the identity function: (∀ i ∈ S) 1_S(i) = i. It should be pretty obvious that for any f : S → T, f º 1_S = f, and 1_T º f = f.
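
Since we’ll be leaning on this example, here’s the same thing in Haskell terms (a sketch, with types standing in for sets): arrows are ordinary total functions, º is (.), and id plays the role of 1_S. The check below just evaluates the two laws on a few sample inputs; it isn’t a proof.

    -- Set-like category in Haskell terms: objects are types, arrows are
    -- functions, (.) is composition, id is the identity arrow.

    f :: Int -> Int
    f = (+ 1)

    g :: Int -> Bool
    g = even

    h :: Bool -> String
    h b = if b then "even" else "odd"

    main :: IO ()
    main = do
      let xs = [0, 1, 2, 3] :: [Int]
      -- identity: f . id == f, and id . f == f
      print (map (f . id) xs == map f xs)
      print (map (id . f) xs == map f xs)
      -- associativity: h . (g . f) == (h . g) . f
      print (map (h . (g . f)) xs == map ((h . g) . f) xs)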

Partially Ordered Sets

Partially ordered sets (that is, sets that have a “<=” operator) can be described as a category, usually called PoSet. The objects are the partially ordered sets; the arrows are monotonic functions (a function f is monotonic if (∀ x,y ∈ domain(f)) x <= y ⇒ f(x) <= f(y)). Like regular sets, º is function composition.

It’s pretty easy to show the associativity and identity properties; it’s basically the same as for sets, except that we need to show that º preserves the monotonicity property. And that’s not very hard:

  • Suppose we have arrows f : A → B, g : B → C. We know that f and g are monotonic
    functions.

  • Now, for any pair of x and y in the domain of f, we know that if x <= y, then f(x) <= f(y).
  • Likewise, for any pair s,t in the domain of g, we know that if s <= t, then g(s) <= g(t).
  • Put those together: if x <= y, then f(x) <= f(y). f(x) and f(y) are in the domain of g, so if f(x) <= f(y), then we know g(f(x)) <= g(f(y)). In other words, x <= y implies (g º f)(x) <= (g º f)(y): the composition g º f is monotonic.
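
That argument can be spot-checked mechanically. Here’s a small Haskell sketch (the particular functions are just ones I picked): two monotonic functions on the integers 0..9, composed, and tested on every pair with x <= y.

    -- Exhaustive check on a small domain that composing two monotonic
    -- functions yields a monotonic function. f and g are arbitrary examples.

    f :: Int -> Int
    f x = 2 * x          -- monotonic

    g :: Int -> Int
    g x = x * x + 1      -- monotonic on the non-negative integers

    monotoneOn :: [Int] -> (Int -> Int) -> Bool
    monotoneOn dom fn = and [ fn x <= fn y | x <- dom, y <- dom, x <= y ]

    main :: IO ()
    main = do
      let dom = [0 .. 9]
      print (monotoneOn dom f)        -- True
      print (monotoneOn dom g)        -- True
      print (monotoneOn dom (g . f))  -- True: the composition is monotonic too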

Groups as a Category

There is a category Grp where the objects are groups, and group homomorphisms are the arrows. Homomorphisms are structure-preserving functions between groups; so composition of those structure-preserving functions serves as the composition operator º. The proof that function composition preserves structure is pretty much the same as the proof we just ran through for partially ordered sets.

Once you have groups as a category, then you can do something very cool. If groups are a category, then functors over groups are symmetric transformations. Walk it through, and you’ll see that it fits. What took me a week of writing to be able to explain when I was talking about group theory can be stated in one sentence using the language of category theory. That’s a perfect example of why cat theory is useful: it lets you say some very important, very complicated things in very simple ways.
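
To walk it through concretely, here’s a small Haskell sketch (all the names are mine, and the group is just Z/3): the group’s elements are the arrows of a one-object category, the group operation is arrow composition, and a functor into the category of sets assigns the single object a set and each arrow a function on that set, preserving identity and composition. That’s precisely a group action.

    -- Z/3 as a one-object category, and a functor into Set as a group action.

    data Z3 = E | R1 | R2 deriving (Eq, Show, Enum, Bounded)

    toInt :: Z3 -> Int
    toInt = fromEnum

    fromInt :: Int -> Z3
    fromInt n = toEnum (n `mod` 3)

    -- Composition of arrows = the group operation (addition mod 3).
    (<>.) :: Z3 -> Z3 -> Z3
    a <>. b = fromInt (toInt a + toInt b)

    -- The functor sends the single object to the set {0,1,2}, and each
    -- arrow to a function on that set: rotation by the corresponding amount.
    act :: Z3 -> Int -> Int
    act a x = (x + toInt a) `mod` 3

    main :: IO ()
    main = do
      let pts = [0, 1, 2] :: [Int]
          els = [minBound .. maxBound] :: [Z3]
      -- the identity arrow maps to the identity function:
      print (map (act E) pts == pts)
      -- composition of arrows maps to composition of functions:
      print (and [ map (act (a <>. b)) pts == map (act a . act b) pts
                 | a <- els, b <- els ])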

Miscellaneous Comments

There’ve been a couple of questions from category theory skeptics in the comments. Please don’t think I’m ignoring you. This stuff is confusing enough for most people (me included) that I want to take it slowly, just a little bit at a time, to give readers an opportunity to digest each bit before going on to the next. I promise that I’ll answer your questions eventually!

Friday Random Ten

What kind of music does a math geek listen to?

  1. Capercaillie: Who will raise their voice? Traditional Celtic folk music. Very beautiful song.
  2. Seamus Egan: Weep Not for the Memories. Mostly traditional Irish music, by a bizarrely talented multi-instrumentalist. Seamus Egan is one of the best Irish flutists in the world; but he also manages to play great tenor banjo, tenor guitar, six-string guitar, electric guitar, bodhrán, and keyboards.
  3. Gentle Giant: Experience. Gentle Giant is 70s progressive stuff, with heavy influence from early madrigal singing. Weird, but incredibly cool.
  4. Tony Trischka Band: Steam/Foam of the Ancient Lake. Tony is my former banjo teacher. He’s also the guy who taught Bela Fleck to play jazz. I have a very hard time deciding who I like better: Tony or Bela. They both do things with the banjo that knock my socks off. I think Bela gets a bit too much credit: not that he isn’t spectacularly talented and creative, but he often gets credit for single-handedly redefining the banjo as an instrument, when Tony deserves a big share of the credit. This is a track off of the first album by Tony’s latest band. It’s great – I highly recommend the TTB to anyone.
  5. Trout Fishing in America: Lullaby. TFiA is an incredibly great two-man folk band. They do both adult music and music oriented towards children. Both are brilliant. Lullaby is, quite simply, one of the most beautifully perfect lullabies that I’ve ever heard. One of the two guys in TFiA, Ezra Idlet, is also somewhat famous for building a treehouse – not a kid’s treehouse, literally a treehouse: running water, electricity, central heating, etc. His house is a treehouse.
  6. King Crimson: B’Boom. A Bruford track off of one of Crimson’s recent albums. What more needs to be said?
  7. Dirty Three: Amy. The Dirty Three are something that they call a “post-rock ensemble”. All I can say is, it’s brilliant, amazing, fantastic music that I don’t know how to describe.
  8. Broadside Electric: Pastures of Plenty. Broadside is a Philadelphia-based band that plays electrified folk. This is their take on an old folk song.
  9. Marillion: Ocean Cloud. Marillion is one of my favorite bands. They’re a neo-progressive group that started out as a Genesis cover band. Ocean Cloud is a long track off of their most recent album. It’s an amazing piece of work.
  10. Martin Hayes: Lucy Farr’s. Martin is a very traditional Irish fiddler. One of the really great things about him is that he’s really traditional. He doesn’t push the music to be ultrafast or showy; he takes it at the speed it was traditionally played at, a speed you could dance to. It’s wonderful to hear the traditional tunes played right, without being over-adorned, over-accelerated, or otherwise mangled in the name of commercialism and ego.

Interesting mix today, all great stuff.

Diagrams in Category Theory

One of the things that I find niftiest about category theory is category diagrams. A lot of things that normally turn into complex equations or long-winded logical statements can be expressed in diagrams: you capture the things that you’re talking about in a category, and then use category diagrams to express the idea that you want to get across.

A category diagram is a directed graph, where the nodes are objects from a category, and the edges are morphisms. Category theorists say that a diagram commutes if, for any two paths through the arrows of the diagram from node A to node B, the composition of all edges along the first path is equal to the composition of all edges along the second path.

As usual, an example will make that clearer.
cat-assoc.jpg

This diagram is a way of expressing the associativity property of morphisms: f º (g º h) = (f º g) º h. The way the diagram illustrates this is: (g º h) is the arrow from A to C; composing that with f takes us from A to D. Alternatively, (f º g) is the arrow from B to D; composing that with h also takes us from A to D. The two paths, f º (g º h) and (f º g) º h, both lead from A to D; therefore, if the diagram commutes, they must be equal.

Let’s look at one more diagram, which we’ll use to define an interesting concept: the principal morphism between two objects. The principal morphism is a single arrow from A to B such that any composition of morphisms that goes from A to B ends up being equivalent to it.

In diagram form, a morphism m is principal if (∀ x : A → A) (∀ y : A → B), the following diagram commutes.
cat-principal.jpg

In words, this says that m is a principal morphism if for every endomorphic arrow x : A → A, and for every arrow y : A → B, m is the result of composing x and y. There’s also something interesting about this diagram that you should notice: A appears twice in the diagram! It’s the same object; we just draw it in two places to make the commutation pattern easier to see. A single object can appear in a diagram as many times as you want, to make the pattern of commutation easy to see. When you’re looking at a diagram, you need to be a bit careful to read the labels to make sure you know what it means. (This paragraph was corrected after a commenter pointed out a really silly error; I originally said “any identity arrow”, not “any endomorphic arrow”.)

One more definition by diagram: x and y are a retraction pair, and A is a retract of B (written A < B) if the following diagram commutes:
cat-retract.jpg

That is, x : A → B and y : B → A are a retraction pair if y º x = 1_A.
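
As a last concrete sketch (Haskell again; the types and names are mine): x embeds a value of A into a bigger type B, y projects it back out, and y º x is the identity on A, so the retraction diagram commutes. Nothing requires x º y to be the identity on B.

    -- A retraction pair: y . x is the identity on A (here, Int).

    x :: Int -> (Int, String)
    x n = (n, "extra")        -- x : A -> B (embed)

    y :: (Int, String) -> Int
    y (n, _) = n              -- y : B -> A (project)

    main :: IO ()
    main = do
      let as = [1, 2, 3] :: [Int]
      print (map (y . x) as == map id as)          -- True: y . x = 1_A
      print ((x . y) (7, "lost") == (7, "lost"))   -- False: x . y need not be 1_B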