Monthly Archives: July 2006

Comments, Typekey, Etc.

Just so folks know:
ScienceBlogs is experimenting with some new anti-spam stuff, which should do away with the need
for typekey. I’ve disabled typekey for Goodmath/Badmath, and we’ll see how it goes. If you’ve got cookies or cached data for the site, you might have a bit of trouble with comments for a day or two; if you do, please drop me an email (see the contact tab), and I’ll see what I can do.
I’m also trying to figure out the right settings for the spam filter on the blog; if you post a comment and it doesn’t appear immediately, it’s probably because I don’t have the settings right. Don’t worry; it just means your message is sitting in the moderation queue until I get around to releasing it.

An Update on the Bible Code Bozos

About 10 days ago, I wrote a post about a group of bozos who believe they’ve found a secret code in the bible, which according to them predicted a nuclear attack on the UN building in NYC by terrorists. This was their fourth attempt to predict a date based on their oh-so-marvelous code.
Well, obviously, they were wrong again. But, do they let that stop them? Of course not! That’s the beauty of using really bad math for your code: you can always change the result when it doesn’t work. If you get the result you want, then you can say your code was right; if you don’t get things right, it’s never the fault of the code: it’s just that you misinterpreted. I thought it would be amusing to show their excuse:

We made another mistake. The monthly Sabbath of 2006Tammuz is not 30 individual daily Sabbaths, but is one month long Sabbath. Our new predicted date for a Nuclear Attack on the UN in New York City launched from the Sea or a great River is Sundown Tuesday July 25th – Sundown Thursday July 27th.

Yeah, they got days and months mixed up. That’s the ticket! So now it’s another three weeks off. But they’re still right! They still know the truth, and want to save us!

Who would have guessed? Dick Cheney can do the math

Or at least his financial advisers can.
Kiplinger’s, via MSN Money, is [reporting that Dick Cheney is betting that the economy is going to tank][cheney-invest]. When you take a look at the numbers: the deficit, the state of the dollar, the price of energy, stagnant wages, and the way that the economy is only being propped up by consumer spending, it’s hard to be optimistic about the economy. And apparently, despite what he says, Cheney’s not betting his own money on the success of his and George’s economic policies.
[cheney-invest]: http://articles.moneycentral.msn.com/Investing/Extra/CheneysBettingonBadNews.aspx
He’s put *at least* 10 million in a municipal bonds fund that will only do really well if interest rates keep rising; at least another mil in a money market fund that also depends on rising interest rates; and at least 2 million in “inflation protected” securities. Inflation protected securities are basically bonds and bond-like securities that pay a low interest rate, but that are structured to ensure that the principal grows with inflation. They’re really only a good investment if you believe that inflation is on the rise and the dollar is going to sink.
Overall, our vice president has somewhere between 13 and 40 million dollars invested in things whose performance is based on interest rates and inflation rising, and the dollar tanking.
According to the same public disclosure documents from which this information was originally taken, his net worth is somewhere between 30 and 100 million. What that means is that it looks like the majority of his liquid money is solidly bet against the success of the policies of the government he is a part of.
Not pretty. But what did you really expect from a corrupt, power-hungry
asshole who considers the government to be a great big racket for rewarding
his buddies?
(See also [Attu sees All][attu]’s take on this.)
[attu]: http://attu.blogspot.com/2006/07/cheneys-betting-on-bad-news.html

Friday Random Ten, July 7

It’s Friday again, so it’s time for a random ten. So out comes my iPod, and the results are:
1. **Bela Fleck and the Flecktones, “Latitude”**: mediocre tune off of the latest Flecktones album. This album was a 3-CD set. Unfortunately, it really should have been a single CD; they just didn’t bother to separate the good stuff from the not-so-good stuff. Very disappointing – they’re an amazing group of guys (well, except for Jeff…), and this just isn’t up to the quality they should be able to produce.
2. **Marillion, “Man of a Thousand Faces”**: a really fantastic Marillion tune. It ends with a very Yes-like layering buildup.
3. **Tony Trischka Band, “Sky is Sleeping”**: a track off of the TTB’s first album. Tony doesn’t disappoint: brilliant playing, great chemistry between the band members. Features some truly amazing back-and-forth between banjo and sax.
4. **Sonic Youth, “Helen Lundeberg”**: something from Sonic Youth’s latest. I love this album.
5. **Peter Hammill, “Our Oyster”**: live Hammill, wonderful, strange, dark, depressing. It’s a tune about Tiananmen Square.
6. **Flower Kings, “Fast Lane”**: typical FK – aka amazing neo-progrock.
7. **Broadside Electric, “Sheath and Knife”**: a modern rendition of a very gruesome old medieval ballad about incest.
8. **Stuart Duncan, “Thai Clips”**: a nice little bluegrass tune by one of the best bluegrass fiddlers around. Don’t ask why it’s called “Thai Clips”; nothing about it sounds remotely Thai.
9. **Dirty Three, “Ember”**: how many times do I need to rave about how much I love the Dirty Three?
10. **Lunasa, “Spoil the Dance”**: nice flute-heavy traditional Irish by Lunasa. For once, it’s not played so insanely fast. I’d guess around 130bpm, rather than the usual 170 to 180 of Lunasa. Lunasa’s a great band, and I love all their recordings; but Irish music like this is supposed to be *dance* music; you can’t dance at 180bpm.

The Site Banner

As you can see, there’s a new site banner.
I got about a dozen submissions this time. They were all terrific, but something about this one just really grabbed me; it was absolutely exactly what I wanted. It was designed by Josh Gemmel. So Josh gets immortalized in the “about” tab of the blog.
Any of you folks who submitted a banner, if there’s some topic you want me to write about, drop me a note. I’ll try to do articles for all of you.
Thanks everyone for your time and effort!

Arrow Equality and Pullbacks

We’re almost at the end of this run of category definitions. We need to get to the point of talking about something called a *pullback*. A pullback is a way of describing a kind of equivalence of arrows, which gets used a lot in things like interesting natural transformations. But, before we get to pullbacks, it helps to understand the equalizer of a pair of morphisms, which is a weaker notion of arrow equivalence.
We’ll start with sets and functions again to get an intuition; and then we’ll work our way back to categories and categorical equalizers.
Suppose we have two functions mapping from members of set A to members of set B.

f, g : A → B

Suppose that they have a non-empty intersection: that is, that there is some set of values x ∈ A for which f(x) = g(x). The set of values C from A on which f and g return the same result (*agree*) is called the *equalizer* of f and g. Obviously, C is a subset of A.
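To make the set-level version concrete, here’s a quick Python sketch (the particular functions are just invented examples) that computes the equalizer of two functions over a finite piece of the integers:

```python
# Equalizer of f and g at the level of sets: the subset of A on
# which the two functions agree. The functions are made up purely
# for illustration.
A = range(-5, 6)

def f(x):
    return x * x          # f : A -> B

def g(x):
    return x + 6          # g : A -> B

equalizer = {x for x in A if f(x) == g(x)}
print(sorted(equalizer))  # [-2, 3]: exactly where x^2 == x + 6
```

The inclusion of `equalizer` back into `A` is what plays the role of the arrow i in the categorical version.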
Now, let’s look at the category theoretic version of that. We have *objects* A and B.
We have two arrows f, g : A → B. This is the category analogue of the setup of sets and functions from above. To get to the equalizer, we need to add an object C which is a *subobject* of A (which corresponds to the subset of A on which f and g agree in the set model).
The equalizer of f and g is the pair of the object C and an arrow i : C → A. (That is, the object and arrow that define C as a subobject of A.) This object and arrow must satisfy the following conditions:
1. f º i = g º i
2. (∀ j : D → A) f º j = g º j ⇒ (∃ 1 k : D → C) i º k = j.
That second one is the mouthful. What it says is: if I have any arrow j from some other object D to A such that f and g agree when composed with j (that is, f º j = g º j), then there is exactly *one* arrow k from D to C which composes with i to give j. In other words, (C, i) is a *selector* for the arrows on which f and g agree: you can only compose an arrow to A in a way that composes equivalently with f and g to B if you factor through (C, i). Or in diagram form, k in the following diagram is necessarily unique:

(diagram: equalizer.jpg)

There are a couple of interesting properties of equalizers that are worth mentioning. The morphism in an equalizer is *always* a monic arrow (monomorphism); and if it’s epic (an epimorphism), then it must also be iso (an isomorphism).
The pullback is very nearly the same construction as the equalizer we just looked at; except it’s abstracting one step further.
Suppose we have two arrows pointing to the same target, f : B → A and g : C → A. Then the pullback of f and g is the triple of an object and two arrows (B×AC, p : B×AC → B, q : B×AC → C). The elements of this triple must meet the following requirements:
1. f º p = g º q
2. (f º p) : B×AC → A
3. For every triple (D, h : D → B, k : D → C) with f º h = g º k, there is exactly one arrow u : D → B×AC where p º u = h, and q º u = k.
As happens so frequently in category theory, this is clearer using a diagram.

(diagram: pullback.jpg)

If you look at this, you should definitely be able to see how this corresponds to the categorical equalizer. If you’re careful and clever, you can also see the resemblance to categorical product (which is why we use the ×A syntax). It’s a general construction that says that f and g are equivalent with respect to the product-like object B×AC.
Here’s the neat thing. Work backwards through this abstraction process to figure out what this construction means if objects are sets and arrows are functions: what’s the pullback of the sets B and C?

{ (x,y) ∈ B × C : f(x) = g(y) }

Right back where we started, almost. The pullback generalizes the equalizer; working it backwards shows that.
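If you want to play with this, here’s the same backwards-working as a Python sketch, with invented finite sets and functions f : B → A and g : C → A into a common target:

```python
# Set-level pullback: the pairs (x, y) from B x C on which f and g
# agree. The sets and functions are made up for illustration.
B = range(4)
C = range(4)

def f(x):
    return x % 2      # f : B -> A, with A = {0, 1}

def g(y):
    return y // 2     # g : C -> A

pullback = {(x, y) for x in B for y in C if f(x) == g(y)}
# The projections p(x, y) = x and q(x, y) = y then satisfy f.p = g.q.
print(sorted(pullback))
```

The projections out of the pair set are exactly the arrows p and q from the categorical definition.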

Huh? How'd that happen?

As lots of folks around SB have been commenting today, Nature magazine has come up with a list of the top 50 science blogs, based on technorati ratings. According to them, GM/BM is the number 45 science blog in the world. Even if it is a screwy way of figuring out what science blogs are most widely read, it’s still just astounding that by any measure, this blog is ranked that high.
I’ve only been doing this blogging thing since March. And when I started, I really expected that I’d be lucky to get a dozen readers a day, if that. I thought I’d probably wind up giving up and folding within the first month.
Instead, it’s been four months, and there are somewhere around a thousand people reading this blog each weekday. (Or a hell of a lot more than that on a day like today, when I’ve been linked by DarkSyde on DailyKos and by USA Today. Thanks to both of you!)
Thanks folks. I’m really amazed at how well this blog has been received; and I’m happier than I can really express to find out that people are interested in the crazy stuff I write about.
Also, while I’m chattering away: the GM/BM DonorsChoose challenge raised $1400 towards supporting math education. Those of you who donated, thank you! SB as a whole raised over $30,000 towards math and science education. That’s going to make a real difference to a lot of kids.

Peer Reviewed Bad ID Math

In comments to [my recent post about Gilder’s article][gilder], a couple of readers asked me to take a look at a [DI promoted][dipromote] paper by
Albert Voie, called [Biological function and the genetic code are interdependent][voie]. This paper was actually peer reviewed and accepted by a journal called “Chaos, Solitons, and Fractals”. I’m not familiar with the journal, but it is published by Elsevier, a respectable publisher.
Overall, it’s a rather dreadful paper. It’s one of those wretched attempts to take Gödel’s theorem and try to apply it to something other than formal axiomatic systems.
Let’s take a look at the abstract: it’s pretty representative of the style of the paper.
>Life never ceases to astonish scientists as its secrets are more and more
>revealed. In particular the origin of life remains a mystery. One wonders how
>the scientific community could unravel a one-time past-tense event with such
>low probability. This paper shows that there are logical reasons for this
>problem. Life expresses both function and sign systems. This parallels the
>logically necessary symbolic self-referring structure in self-reproducing
>systems. Due to the abstract realm of function and sign systems, life is not a
>subsystem of natural laws. This suggests that our reason is limited in respect
>to solve the problem of the origin of life and that we are left taking life as
>an axiom.
We get a good idea of what we’re in for with that second sentence: there’s no particular reason to throw in an assertion about the probability of life; but he’s signaling his intended audience by throwing in that old canard without any support.
The babble about “function” and “sign” systems is the real focus of the paper. He creates this distinction between a “function” system (which is a mechanism that performs some function), and a “sign” system (which is information describing a system), and then tries to use a Gödel-based argument to claim that life is a self-referencing system that produces the classic problematical statements of incompleteness.
Gödel formulas are subsystems of the mind
———————————————–
So. Let’s dive in and hit the meat of the paper. Section one is titled “Gödel formulas are subsystems of the mind”. The basic argument of the section is that the paradoxical statements that Gödel showed are unavoidable are strictly products of intelligence.
He starts off by providing a summary of the incompleteness theorem. He uses a quote from Wikipedia. The interesting thing is that he *misquotes* wikipedia; my guess is that it’s deliberate.
His quotation:
>In any consistent formalization of mathematics that is sufficiently strong to
>axiomatize the natural numbers — that is, sufficiently strong to define the
>operations that collectively define the natural numbers — one can construct a
>true (!) statement that can be neither proved nor disproved within that system
>itself.
In the [wikipedia article][wiki-incompleteness] that the quote comes from, where he places the “!”, there’s actually a footnote explaining that “true” is used in the disquotational sense, meaning (to quote the wikipedia article on disquotationalism): “that ‘truth’ is a mere word that is conventional to use in certain contexts of discourse but not a word that points to anything in reality”. (As an interesting sidenote, he provides a bibliographic citation saying that the quote comes from wikipedia; but he *doesn’t* identify the article that it came from. I had to go searching for those words.) Two paragraphs later, he includes another quotation of a summary of Gödel, which ends mid-sentence with an ellipsis. I don’t have a copy of the quoted text, but let’s just say that I have my doubts about the honesty of the statement.
The reason that I believe this removal of the footnote is deliberate is because he immediately starts to build on the “truth” of the self-referential statement. For example, the very first statement after the misquote:
>Gödel’s statement says: “I am unprovable in this formal system.” This turns out
>to be a difficult statement for a formal system to deal with since whether the
>statement is true or not the formal system will end up contradicting itself.
>However, we then know something that the formal system doesn’t: that the
>statement is really true.
The catch of course is that the statement is *not* really true. Incompleteness statements are neither true *nor* false. They are paradoxical.
And now we start to get to his real point:
>What might confuse the readers are the words *”there are true mathematical
>statements”*. It sounds like they have some sort of pre-existence in a Platonic
>realm. A more down to earth formulation is that it is always possible to
>**construct** or **design** such statements.
See, he’s trying to use the fact that we can devise the Gödel type circular statements as an “out” to demand design. He wants to argue that *any* self-referential statement is in the family of things that fall under the rubric of incompleteness; and that incompleteness means that no mechanical system can *produce* a self-referential statement. So the only way to create these self-referencing statements is by the intervention of an intelligent mind. And finally, he asserts that a self-replicating *device* is the same as a self-referencing *statement*; and therefore a self-replicating device is impossible except as a product of an intelligent mind.
There are lots of problems with that notion. The two key ones:
1. There are plenty of self-referential statements that *don’t* trigger
incompleteness. For example, in set theory, I *can* talk about “the set of
all sets that contain themselves”. I can prove that there are two
sets that meet that description: one contains itself, the other doesn’t.
There’s no paradox there; there’s no incompleteness issue.
2. Unintelligent mechanical systems can produce self-referential statements
that do fall under incompleteness. It’s actually not difficult: it’s
a *mechanical* process to generate canonical incompleteness statements.
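To drive that second point home, here’s the classic quine trick in Python: a two-line program, built by nothing but mechanical string substitution (essentially the diagonalization move at the heart of Gödel’s proof), whose output is its own source text:

```python
# A self-referential program produced by pure mechanical substitution:
# the template string is formatted with itself, yielding the exact
# source text of the program. No mind required at runtime.
s = 's = %r\nprint(s %% s)'
print(s % s)   # prints the program's own two-line source
```

Nothing about running this requires intelligent intervention; the substitution is purely mechanical, and the same template trick works for any wrapper program.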
Computer programs and machines are subsystems of the mind
———————————————————-
So now we’re on to section two. Voie wants to get to the point of being able to
“prove” that life is a kind of a machine that has an incompleteness property.
He starts by saying a formal system is “abstract and non-physical”, and as such “it is really easy to see that they are subsystems of the human mind”, and that they “belong to another category of phenomena than subsystems of the laws of nature”.
On one level, it’s true; a formal system is an abstract set of rules, with no physical form. It does *not* follow that they are “subsystems of the human mind”. In fact, I’d argue that the statement “X is a subsystem of the human mind” is a totally meaningless statement. Given that we don’t understand quite what the mind is or how it works, what does it mean to say that something is a “subsystem” of it?
There’s a clear undercurrent of mind/body dualism here; but he doesn’t bother to argue the point. He simply asserts the distinction as an implicit part of his argument.
From this point, he starts to try to define “function” in an abstract sense. He quotes wikipedia again (he doesn’t have much of a taste for citations in the primary literature!), leading to the statement (his statement, not a wikipedia quotation):
>The non-physical part of a machine fit into the same category of phenomena as
>formal systems. This is also reflected by the fact that an algorithm and an
>analogue computer share the same function.
Quoting wikipedia again, he moves on to: “A machine, for example, cannot be explained in terms of physics and chemistry.” Yeah, that old thing again. I’m sure the folks at Intel will be absolutely *shocked* to discover that they can’t explain a computer in terms of physics and chemistry. This is just degenerating into silliness.
>As the logician can manipulate a formal system to create true statements that
>are not formally derivable from the system, the engineer can manipulate
>inanimate matter to create the structure of the machine, which harnesses the
>laws of physics and chemistry for the purposes the machine is designed to
>serve. The cause to a machine’s functionality is found in the mind of the
>engineer and nowhere else.
Again: dualism. According to Voie, the “purpose” or “function” of the machine is described as a formal system; the machine itself is a physical system; and those are *two distinctly different things*: one exists only in the mind of the creator; one exists in the physical world.
The interdependency of biological function and sign systems
————————————————————-
And now, section three.
He insists on the existence of a “sign system”. A sign system, as near as I can figure it out (he never defines it clearly) is a language for describing and/or building function systems. He asserts:
>Only an abstract sign based language can store the abstract information
>necessary to build functional biomolecules.
This is just a naked assertion, completely unsupported. Why does a biomolecule *require* an abstract sign-based language? Because he says so. That’s all.
Now, here’s where the train *really* goes off the tracks:
>An important implication of Gödel’s incompleteness theorem is that it is not
>possible to have a finite description with itself as the proper part. In other
>words, it is not possible to read yourself or process yourself as process. We
>will investigate how this parallels the necessary coexistence of biological
>function and biological information.
This is the real key point of this section; and it is total nonsense. Gödel’s theorem says no such thing. In fact, what it does is demonstrate exactly *how* you can represent a formal system with itself as a part. There’s no problem there at all.
What’s a universal turing machine? It’s a turing machine that takes a description of a turing machine as an input. And there *is* a universal turing machine implementation of a universal turing machine: a formal system which has itself as a part.
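That’s easy to illustrate with a toy sketch in Python, using `exec` to stand in for a universal machine (all of the names here are invented for illustration):

```python
# A toy "universal machine": it takes the *description* (source text)
# of a machine plus an input, builds the described machine, and runs it.
def universal(program_source, arg):
    env = {}
    exec(program_source, env)   # construct the described machine
    return env["step"](arg)     # run it on the given input

doubler = "def step(x):\n    return 2 * x\n"
print(universal(doubler, 21))   # 42

# The universal machine can even be handed a description of itself:
# a formal system processing a part that describes the whole.
universal_src = (
    "def step(pair):\n"
    "    env = {}\n"
    "    exec(pair[0], env)\n"
    "    return env['step'](pair[1])\n"
)
print(universal(universal_src, (doubler, 21)))   # still 42
```

The second call is exactly the kind of thing Voie claims is impossible: a description of the interpreter, processed by the interpreter, with no paradox anywhere.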
Life is not a subsystem of the laws of nature
———————————————-
It gets worse.
Now he’s going to try to put things together: he’s claimed that a formal system can’t include itself; he’s argued that biomolecules are the result of a formal sign system; so now, he’s going to try to combine those to say that life is a self-referential thing that requires the kind of self-reference that can only be the product of an intelligent mind:
>Life is fundamentally dependent upon symbolic representation in order to
>realize biological function. A system based on autocatalysis, like the
>hypothesized RNA-world, can’t really express biological function since it is a
>pure dynamical process. Life is autonomous with something we could call
>”closure of operations” or a cluster of functional parts relating to a whole
>(see [15] for a wider discussion of these terms). Functional parts are only
>meaningful under a whole, in other words it is the whole that gives meaning to
>its parts. Further, in order to define a sign (which can be a symbol, an index,
>or an icon) a whole cluster of self-referring concepts seems to be presupposed,
>that is, the definition cannot be given on a priori grounds, without implicitly
>referring to this cluster of conceptual agents [16]. This recursive dependency
>really seals off the system from a deterministic bottom up causation. The top
>down causation constitutes an irreducible structure.
Got it? Life is dependent on symbolic representation. But biochemical processes can’t possibly express biological function, because biological function is dependent on symbolic representations, which are outside of the domain of physical processes. He asserts the symbolic nature of biochemicals; then he asserts that symbolic stuff is a distinct domain separate from the physical; and therefore physical stuff can’t represent it. Poof! An irreducible structure!
And now, the crowning stupidity, at least when it comes to the math:
>In algorithmic information theory there is another concept of irreducible
>structures. If some phenomena X (such as life) follows from laws there should
>be a compression algorithm H(X) with much less information content in bits than
>X [17].
Nonsense, bullshit, pure gibberish. There is absolutely no such statement anywhere in information theory. He tries to build up more argument based on this
statement: but of course, it makes no more sense than the statement it’s built on.
But you know where he’s going: it’s exactly what he’s been building all along. The idea is what I’ve been mocking all along: Life is a self-referential system with two parts: a symbolic one, and a functional one. A functional system cannot represent the symbolic part of the biological systems. A symbolic system can’t perform any function without an intelligence to realize it in a functional system. And the two can’t work together without being assembled by an intelligent mind, because when the two are combined, you have a self-referential
system, which is impossible.
Conclusion
————
So… To summarize the points of the argument:
1. Dualism: there is a distinction between the physical realm of objects and machines, and the ideal realm of symbols and functions; if something exists in the symbolic realm, it can’t be represented in the physical realm except by the intervention of an intelligent mind.
2. Gödel’s theorem says that self-referential systems are impossible, except by intervention of an intelligent mind. (wrong)
3. Gödel’s theorem says that incompleteness statements are *true*.(wrong)
4. Biological systems are a combination of functional and symbolic parts which form a self-referential system.
5. Therefore, biological systems can only exist as the result of the deliberate actions of an intelligent being.
This stinker actually got *peer-reviewed* and *accepted* by a journal. It just goes to show that peer review can *really* screw up badly at times. Given that the journal is apparently supposed to be about fractals and such, I’d guess the reviewers weren’t particularly familiar with Gödel and information theory; anyone with a clue about either would have sent this to the trashbin where it belongs.
[wiki-incompleteness]: http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem
[gilder]: http://scienceblogs.com/goodmath/2006/07/the_bad_math_of_gilders_new_sc.php
[dipromote]: http://www.uncommondescent.com/index.php/archives/722
[voie]: http://home.online.no/~albvoie/index.cfm

The Bad Math of Gilder's New Screed

As several [other][panda] [folks][pz] have mentioned, George Gilder has written a [new anti-evolution article][gilder-article] which was published in the National Review.
[panda]: http://www.pandasthumb.org/archives/2006/07/the_technogeek.html
[pz]: http://scienceblogs.com/pharyngula/2006/07/if_it_werent_for_those_feminis.php
[gilder-article]: http://www.discovery.org/scripts/viewDB/index.php?command=view&id=3631
There’s a lot to hate in this article. It’s a poorly written screed, which manages to mix together all of Gilder’s bogeymen: feminists, liberals, anti-supply-siders, peer reviewers, academics, and whoever else dares to disagree with him about, well, anything.
Plenty of folks are writing about the problems in this article; as usual, I’m going to ignore most of it, and focus specifically on the mathematical parts of it. Given that his argument is mathematical at root, those errors are fatal to the argument of the article as a whole.
We start with a really strange characterization of Shannon information theory:
>After Wealth & Poverty, my work focused on the subject of human creativity as
>epitomized by science and technology and embodied in computers and
>communications. At the forefront of this field is a discipline called
>information theory. Largely invented in 1948 by Claude Shannon of MIT, it
>rigorously explained digital computation and transmission by zero-one, or
>off-on, codes called “bits.” Shannon defined information as unexpected bits, or
>”news,” and calculated its passage over a “channel” by elaborate logarithmic
>rules. That channel could be a wire or another path across a distance of
>space, or it could be a transfer of information across a span of time, as in
>evolution.
What’s weird about this characterization is that there’s a very strange shift in it. He starts off OK: “the channel could be a wire or another path across a distance of space”. Where he gets strange is when he *drops the channel* as he transitions from talking about transmitting information across space to transmitting information across time. Space versus time is not something that we talk about in Shannon’s information theory. Information is something abstract; it can be transferred over a channel. What “transferred” means is that the information originated at entity A; and after communication, that information has been seen by entity B. Space, time – they don’t make a difference. Gilder doesn’t get that.
>Crucial in information theory was the separation of content from conduit —
>information from the vehicle that transports it. It takes a low-entropy
>(predictable) carrier to bear high-entropy (unpredictable) messages. A blank
>sheet of paper is a better vessel for a new message than one already covered
>with writing. In my book Telecosm (2000), I showed that the most predictable
>available information carriers were the regular waves of the electromagnetic
>spectrum and prophesied that all digital information would ultimately flow over
>it in some way. Whether across time (evolution) or across space
>(communication), information could not be borne by chemical processes alone,
>because these processes merged or blended the medium and the message, leaving
>the data illegible at the other end.
There’s a technical term for this kind of writing. We call it “bullshit”. He’s trying to handwave his way past the facts that disagree with him.
If you want to talk about information carried by a medium, that’s fine. But his arguments about “information can not be borne by chemical processes alone?” Gibberish.
DNA is a chemical that makes a rather nice communication channel. It’s got a common stable substrate on which you can superimpose any message you want – any information, any length. It’s an absolutely *wonderful* example of a medium for carrying information. But he can’t admit that; he can’t even really discuss it in detail, because it would blow his argument out of the water. Thus the handwaving “chemical processes can’t do it”, with absolutely no real argument for *why* a chemical process “merges the medium and the message”.
For another example of how this argument fails: consider a magneto-optical disc drive in a computer. The medium is a piece of plastic with magnetic materials in it. The message is patterns of polarization of those materials. To “record” information on it, you heat it up, and you *modify the medium itself* by changing the polarization of the particles at a point.
Or best of all: take electromagnetic waves, his example of the “very best” communication medium. It’s a waveform, where we superimpose our signal on the wave – the wave isn’t like a piece of paper where we’ve stuck ink to its surface: we force it to carry information *by changing the wave itself*. The basic frequency of the wave, the carrier, is not modified, but the wave amplitudes *are* modified – it’s not just a simple wave anymore, we’ve combined the signal and the medium into something different.
What’s the difference between that and DNA? You can look at DNA as a long chain of sockets. Each socket must be filled with one of 4 different letters. When we “write” information onto DNA, we’re filling those sockets. We’ve changed the DNA by filling the sockets; but just like the case of radio waves, there’s a basic carrier (the underlying chain/carrier wave), and a signal coded onto it (the letters/wave amplitudes).
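To see how cleanly the carrier/signal split works for a chain of sockets, here’s an illustrative Python sketch (an invented 2-bits-per-base encoding, nothing to do with the real genetic code) that writes an arbitrary byte string onto a strand of the four letters and reads it back unchanged:

```python
# Treating a strand of four-letter sockets as a channel: each base
# carries 2 bits, so any byte string can be superimposed on the
# carrier and recovered. Purely illustrative; not real genetics.
BASES = "ACGT"
INDEX = {b: i for i, b in enumerate(BASES)}

def encode(data: bytes) -> str:
    strand = []
    for byte in data:
        for shift in (6, 4, 2, 0):             # high bits first
            strand.append(BASES[(byte >> shift) & 0b11])
    return "".join(strand)

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | INDEX[base]
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))                    # CAGACGGC
assert decode(encode(b"Hi")) == b"Hi"   # message recovered intact
```

The strand is modified by the message, exactly as a radio wave is modified by its signal; the medium and the message remain perfectly separable at the other end.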
From this, he tries to go further, and start mixing in some computation theory, building on his lack of comprehension of information theory.
>I came to see that the computer offers an insuperable obstacle to Darwinian
>materialism. In a computer, as information theory shows, the content is
>manifestly independent of its material substrate. No possible knowledge of the
>computer’s materials can yield any information whatsoever about the actual
>content of its computations.
This is manifestly not true. In fact, there was a fascinating piece of work a few years ago where people were able to decode the cryptographic system used by a smartcard by using a combination of knowledge of its physical structure, and monitoring its power consumption. From these two things, they were able to work backwards to determine exactly what it was doing, and to recover a supposedly inaccessible password.
>The failure of purely physical theories to describe or explain information
>reflects Shannon’s concept of entropy and his measure of “news.” Information is
>defined by its independence from physical determination: If it is determined,
>it is predictable and thus by definition not information. Yet Darwinian science
>seemed to be reducing all nature to material causes.
Again, gibberish, on many levels.
Shannon’s theory does *not* define information by its “independence from physical determination”. In fact, the best “information generators” that we know about are purely physical: radioactive decay and various quantum phenomena are the very best sources we’ve discovered so far for generating high-entropy information.
And even the most predictable, deterministic process produces information. It may be *a small amount* of information – deterministic processes are generally low-entropy with respect to information – but they do generate information.
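A quick sketch makes the entropy point concrete (the symbol streams here are made up for illustration): Shannon entropy is a statistical property of a source’s output, computable with no appeal whatsoever to “independence from physical determination”, and a deterministic source simply scores low.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy in bits per symbol, estimated from the
    observed frequency of each symbol."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

flat = "A" * 64        # a perfectly deterministic source: zero surprise
uniform = "ACGT" * 16  # four symbols, equally likely: two bits per symbol
```

The deterministic stream has entropy 0; the uniform four-symbol stream has entropy 2 bits per symbol. Nothing in the definition cares whether the source is physical.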
And then, he proceeds to shoot himself in the foot. He’s insisted that chemical processes can’t be information carriers. But now he asserts that DNA is an information carrier in his sense:
>Biologists commonly blur the information into the slippery synecdoche of DNA, a
>material molecule, and imply that life is biochemistry rather than information
>processing. But even here, the deoxyribonucleic acid that bears the word is not
>itself the word. Like a sheet of paper or a computer memory chip, DNA bears
>messages but its chemistry is irrelevant to its content. The alphabet’s
>nucleotide “bases” form “words” without help from their bonds with the helical
>sugar-phosphate backbone that frames them. The genetic words are no more
>dictated by the chemistry of their frame than the words in Scrabble are
>determined by the chemistry of their wooden racks or by the force of gravity
>that holds them.
Yup. He said earlier that “information could not be borne by chemical processes alone, because these processes merged or blended the medium and the message, leaving the data illegible at the other end.” And here he describes how DNA carries information using nothing but a chemical process. Ooops.
And he keeps on babbling. Next he moves on to “irreducible complexity”, and even tries to use Chaitin as a support:
>Mathematician Gregory Chaitin, however, has shown that biology is irreducibly
>complex in a more fundamental way: Physical and chemical laws contain hugely
>less information than biological phenomena. Chaitin’s algorithmic information
>theory demonstrates not that particular biological devices are irreducibly
>complex but that all biology as a field is irreducibly complex. It is above
>physics and chemistry on the epistemological ladder and cannot be subsumed
>under chemical and physical rules. It harnesses chemistry and physics to its
>own purposes. As chemist Arthur Robinson, for 15 years a Linus Pauling
>collaborator, puts it: “Using physics and chemistry to model biology is like
>using lego blocks to model the World Trade Center.” The instrument is simply
>too crude.
This is, again, what’s technically known as “talking out your ass”. Chaitin’s theory demonstrates no such thing. Chaitin’s theory doesn’t even come close to discussing anything that could be interpreted as saying anything about biology or chemistry. Chaitin’s theory talks about two things: what computing devices are capable of doing; and what the fundamental limits of mathematical reasoning are.
One of the most amazing things about Chaitin’s theory is that it shows how *any* computing device – even something as simple as a [Turing machine][turing] – can do all of the computations necessary to demonstrate the fundamental limits of any mathematical process. It doesn’t say “chemistry can’t explain biology”; in fact, it *can’t* say “chemistry can’t explain biology”.
[turing]: http://goodmath.blogspot.com/2006/03/playing-with-mathematical-machines.html
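For readers who haven’t met one, here’s a bare-bones Turing machine simulator – a toy of my own, not Chaitin’s construction – just to show how little machinery “any computing device” actually needs: a tape, a head, and a rule table.

```python
def run_tm(tape, rules, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine: rules maps (state, symbol) to
    (new_state, symbol_to_write, head_move)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(head, blank)
        state, write, move = rules[(state, sym)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules for a machine that walks right, inverting each bit, then halts.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Devices this simple are what Chaitin’s results are actually about – computation and the limits of formal mathematics, not biology.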
In fact, in this entire section, he never actually supports anything he says. It’s just empty babble. Biology is irreducibly complex. Berlinski is a genius who demonstrates IC in mathematics and biology. Chaitin supports the IC nature of biology. Blah, blah, blah. But in all of this, where he’s allegedly talking about how mathematical theories support his claim, he never actually *does any math*, or even talks about *how the theories he’s discussing apply to his subject*.

Site Banner

As you may have noticed, there’s a site banner up there now.
I only received one submission back when I requested banners, and it just didn’t quite work for me. (Bit too dark, and I didn’t like the hint of a blurring effect on the letters.) Since no one else sent me anything, I finally broke down and threw something together myself. It’s OK, but I’m not wild about it. So I’m repeating my request:
Someone with artistic talent, *please* make me a banner. The requirements:

  1. The size should be roughly 760×90.
  2. Subdued colors; not glaringly bright. No hot pink. I tend to like blues and violets, but I’ll be happy with anything that doesn’t hurt my eyes.
  3. Easy-to-read text, including the name of the blog and the subtitle that are currently there. I’d rather not have funny fonts mixed into the title.
  4. Something in the background that suggests the kind of math I do. Since my approach to math is much more focused on discrete math topics like structures and logic, I’d prefer to see something like graphs, category diagrams, topologies, or knots than equations.

The rewards for the person whose banner I use:

  1. You’ll be eternally credited in the “about” link on the blog.
  2. You can pick a topic for me to write a blog entry or series of entries about.
  3. If I ever collect the blog entries into a book, you’ll get a free signed copy.