While taking a break from some puzzling debugging, I decided to hit one of my favorite comedy sites, Answers in Genesis. I can pretty much always find something sufficiently stupid to amuse me on their site. Today, I came across a gem called [“Information, science and biology”][gitt], by the all too appropriately named “Werner Gitt”. It’s yet another attempt by a creationist twit to find some way to use information theory to prove that life must have been created by god.
It looks like the Gitt hasn’t actually *read* any real information theory, but has rather just read Dembski’s wretched mischaracterizations, and then regurgitated and expanded upon them. Dembski was bad enough; building on an incomplete understanding of Dembski’s misrepresentations and errors is just astonishing.
Anyway, after butchering an introduction to Shannon theory, he moves onward.
>The highest information density known to us is that of the DNA
>(deoxyribonucleic acid) molecules of living cells. This chemical storage medium
>is 2 nm in diameter and has a 3.4 nm helix pitch (see Figure 1). This results
>in a volume of 10.68 x 10^-21 cm^3 per spiral. Each spiral contains ten chemical
>letters (nucleotides), resulting in a volumetric information density of 0.94 x
>10^21 letters/cm^3. In the genetic alphabet, the DNA molecules contain only the
>four nucleotide bases, that is, adenine, thymine, guanine and cytosine. The
>information content of such a letter is 2 bits/nucleotide. Thus, the
>statistical information density is 1.88 x 10^21 bits/cm^3.
This is, of course, utter gibberish. DNA is *not* the “highest information density known”. In fact, the concept of *information density* is not well-defined *at all*. How do you compare the “information density” of a DNA molecule with the information density of an electromagnetic wave emitted by a pulsar? It’s meaningless to compare. But even if we restrict ourselves to physically encoded information, consider the information density of a crystal, like a diamond. A diamond is an incredibly compact crystal of carbon atoms. There are no perfect diamonds: all crystals contain irregularities and impurities. Consider how dense the information of that crystal is: the position of every flaw, every impurity, the positions of the subset of carbon atoms in the crystal that are carbon-14 as opposed to carbon-12. Considerably denser than DNA, huh?
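For what it’s worth, the arithmetic in Gitt’s quoted paragraph does multiply out; the problem is the concept, not the multiplication. Here’s a quick sketch checking his own numbers (a cylinder of 2 nm diameter and 3.4 nm pitch, ten nucleotides per turn):

```python
import math

# Reproduce Gitt's "statistical" arithmetic as a sanity check. The
# numbers are internally consistent; it's the notion of a universal
# "information density" that is ill-defined.
radius_nm, pitch_nm = 1.0, 3.4
volume_cm3 = math.pi * radius_nm ** 2 * pitch_nm * 1e-21  # 1 nm^3 = 1e-21 cm^3

letters_per_cm3 = 10 / volume_cm3       # ten letters per helical turn
bits_per_cm3 = 2 * letters_per_cm3      # 2 bits/nucleotide (4-letter alphabet)

print(f"{volume_cm3 / 1e-21:.2f} x 10^-21 cm^3")  # 10.68, matching Gitt
print(f"{letters_per_cm3:.2e} letters/cm^3")      # just under 0.94 x 10^21
print(f"{bits_per_cm3:.2e} bits/cm^3")            # just under 1.88 x 10^21
```

So the multiplication is fine; the meaninglessness is in comparing this figure to anything that isn’t a DNA helix.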
After this is where it *really* starts to get silly. Our Gitt claims that Shannon theory is incomplete, because after all, it’s got a strictly *quantitative* measure of information: it doesn’t care about what the message *means*. So he sets out to “fix” that problem. He proposes five levels of information: statistics, syntax, semantics, pragmatics, and apobetics. He claims that Shannon theory (and in fact information theory *as a whole*) only concerns itself with the first; because it doesn’t differentiate between syntactically valid and invalid information.
Let’s take a quick run through the five, before I start mocking them.
1. Statistics. This is what information theory refers to as information content, expressed in terms of an event sequence (as I said, he’s following Dembski); so we’re looking at a series of events, each of which is receiving a character of a message, and the information added by each event is how surprising that event was. That’s why he calls it statistical.
2. Syntax. The structure of the language encoded by the message. At this level, it is assumed that every message is written in a *code*; you can distinguish between “valid” and “invalid” messages by checking whether they are valid strings of characters for the given code.
3. Semantics. What the message *means*.
4. Pragmatics. The *primitive intention* of the transmitter of the message; the specific events/actions that the transmitter wanted to occur as a result of sending the message.
5. Apobetics: The *purpose* of the message.
According to him, level 5 is the most important one.
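His “statistics” level is nothing more than Shannon’s self-information, which is perfectly concrete. A minimal sketch of the standard definition (this is textbook Shannon, not anything from Gitt’s paper):

```python
import math

def surprisal(p: float) -> float:
    """Shannon self-information of an event with probability p, in bits."""
    return -math.log2(p)

# A nucleotide drawn uniformly from a 4-letter alphabet carries 2 bits;
# this is where the "2 bits/nucleotide" figure in Gitt's quote comes from.
print(surprisal(1 / 4))     # 2.0

# Rarer (more surprising) events carry more information.
print(surprisal(1 / 1024))  # 10.0
```

Note that nothing in this definition cares what the symbols *mean*; that’s by design, not an oversight.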
Throughout the article, he constantly writes “theorems”. He clearly doesn’t understand what the word “theorem” means, because these things are just statements that he would *like* to be true, but which are unproven, and often unprovable. A few examples?
For example, if we look at the section about “syntax”, we find the following as theorems:
>Theorem 4: A code is an absolutely necessary condition for the representation
>of information.
>Theorem 5: The assignment of the symbol set is based on convention and
>constitutes a mental process.
>Theorem 6: Once the code has been freely defined by convention, this definition
>must be strictly observed thereafter.
>Theorem 7: The code used must be known both to the transmitter and receiver if
>the information is to be understood.
>Theorem 8: Only those structures that are based on a code can represent
>information (because of Theorem 4). This is a necessary, but still inadequate,
>condition for the existence of information.
>These theorems already allow fundamental statements to be made at the level of
>the code. If, for example, a basic code is found in any system, it can be
>concluded that the system originates from a mental concept.
How do we conclude that a code is a necessary condition for the representation of information? We just assert it. Worse, how do we conclude that *only* things that are based on a code represent information? Again, just an assertion – but an *incredibly* strong one. He is asserting that *nothing* without a structured encoding is information. And this is also the absolute crux of his argument: information only exists as part of a code *designed by an intelligent process*.
Despite the fact that he claims to be completing Shannon theory, there is *nothing* to do with math in the rest of this article. It’s all words. Theorems like the ones quoted above, but becoming progressively more outrageous and unjustified.
For example, his theorem 11:
>The apobetic aspect of information is the most important, because it embraces
>the objective of the transmitter. The entire effort involved in the four lower
>levels is necessary only as a means to an end in order to achieve this
After this, we get to his conclusion, which is quite a prize.
>On the basis of Shannon’s information theory, which can now be regarded as
>being mathematically complete, we have extended the concept of information as
>far as the fifth level. The most important empirical principles relating to the
>concept of information have been defined in the form of theorems.
See, to him, a theorem is nothing but a “form”: a syntactic structure. And this whole article, to him, is mathematically complete.
>The Bible has long made it clear that the creation of the original groups of
>fully operational living creatures, programmed to transmit their information to
>their descendants, was the deliberate act of the mind and the will of the
>Creator, the great Logos Jesus Christ.
>We have already shown that life is overwhelmingly loaded with information; it
>should be clear that a rigorous application of the science of information is
>devastating to materialistic philosophy in the guise of evolution, and strongly
>supportive of Genesis creation.
That’s where he wanted to go all through this train-wreck. DNA is the highest-possible density information source. It’s a message originated by god, and transmitted by each generation to its children.
And as usual for the twits (or Gitts) that write this stuff, they’re pretending to put together logical/scientific/mathematical arguments for god; but they can only do it by specifically including the necessity of god as a premise. In this case, he asserts that DNA is a message, and that a message must have an intelligent agent creating it. Since living things cannot be the original creators of the message (the DNA had to exist before us), there must be a god.
Same old shit.
Bad, bad, bad math! AiG and Information Theory
And the information inside God came from where, exactly?
My mind boggles with the sheer lack of content these purportedly “scientific” papers display. AiG nonsense cannot be generated by any process akin to reason. . . the favourite tactic appears to be “proof by blatant assertion”. . . logical fallacies tumble upon one another in such rapid succession the reader cannot disentangle them. . . Ooh, my head hurts.
AiG is working on a Creation Museum which is scheduled to open next May.
You know, this has a striking similarity to all the disproofs of special relativity that one can find floating around (I tend to see those more readily because my background is in physics).
They take the words without really understanding them (like “time dilation”) and mush them around without using any math. Or the math that they do use simply does not connect to anything that they are saying, or is outright wrong.
In the end, crackpottery is crackpottery, whether it is creationism or relativity-“improvement”.
Clearly, Gitt’s paper violates his own Theorem 6. He’s not strictly observing the conventionally defined meaning of the code-word “theorem” (and probably several others, but that’s the most obvious).
Therefore, by his own logic, his paper contains no information.
Theorem 1: Chocolate crackle ice cream will spontaneously appear in front of me.
Theorem 2: Please?
Theorem 3: Fine, be that way, God! Sheesh, all I wanted was some ice cream — you’d think an omnipotent being could manage that!
Syntax, semantics, and pragmatics are terms of linguistics, which mean pretty much what he described.
Oh man. I read this blog daily, and every post is great and entertaining, but reading those “theorems” was the first time I ever laughed out loud while reading GB/BM! Thanks for finding that gem, Mark!
That was weird. I was sitting here typing up a response to the article when suddenly some chocolate ice cream appeared in front of me. I sat there stunned and some more appeared! I have no idea what’s going on, but the ice cream tastes nice.
Yes, the style of ‘theorems’ is used by some cranks, but here I think it is used to lend an air of ‘science’ to the ‘creation science’ of AiG. It *is* amazing that someone with a “doctorate in Engineering” should be so naive about what theorems are – or he is pretending; either way, cue Inigo Montoya.
I especially enjoyed that his 4th axiom contradicts the first ones on Shannon information, by requiring a code for representation of information and changing the definition completely. It is also the point where he sneaks in the design/intelligence dualism.
BTW, his information discussion on DNA also forgets that the organism contains functional information that genes don’t immediately express, such as physical constraints at work in the cellular machinery, during development and as an adult. And besides DNA info, the animal egg contains the cellular machinery for growth and cell division, and is packaged with maternal hormones that define the basic orientation and much of the early development.
Just to mock the author further, the “precise sequence” of the genetic code is rather sloppy due to mutations (which creationists detest talking about, of course), and the “carefully defined” proteogenetic code has several nonstandard implementations (which show evolutionary relationships between animals and between plants, again uncomfortable for a creationist: http://www.web-books.com/MoBio/Free/Ch3E1.htm ).
For those who are interested, the following messageboard thread features a discussion of Gitt’s information arguments. I link to it because one of the protagonists, a chap by the name of Jorge Fernandez, is apparently involved with Gitt and was invited to a recent discussion meeting on the subject (though didn’t attend).
Note that Jorge is rather notorious for being abusive and evasive and generally fairly clueless, despite his protestations otherwise. Still, the thread may be of interest:
That sounds suspiciously like something I read in book form a few years ago. It was presented to me as philosophy, not science, and the first part managed to be at least moderately interesting when read from that perspective, but as soon as he started talking about the existence of God he seemed to forget that he had started out with developing a framework to build an argument on and just went ahead and blatantly asserted his conclusions. I was Not Impressed.
(If you want to get an idea ignored by as many intelligent people as possible, AiG are the people to go to. Even if you can’t get them to actually endorse it, you can learn a lot from their methods.)
This kind of thing reinforces, to me, a revelation I had the other day. The **less intelligent** of those people who are convinced of the truth of some religion (the bold is to point out that I certainly don’t think all religious folks are like this), and who want to convince others as well, fundamentally aren’t concerned with whether what they say makes sense or not.
By this I mean that to them, common conventions of what constitutes a logically valid argument are immaterial – any argument is valid or invalid according as its conclusion is or is not the one they have dedicated their lives to. To take evangelical Christianity as an example, one finds people desperate to convince others of the correctness of evangelical Christianity who do not care what they say in order to do it. All that matters is that they assert to you, or argue in however poor a way, the correctness of their beliefs.
The worst cases cannot even comprehend that what they are saying is not logically valid. If the conclusion reached is that Jesus exists and is saviour and the Bible is infallible, then the argument is valid and correct – and if not, then it is not. All they should need to do is assert that Jesus is the son of God, and any reasonable person would concur, so obviously is it true! The people they discuss it with in Church all agree, so why would others not? If their opponents in debate do not agree then all that can have gone wrong is that they did not express it plainly enough or in quite the right way, so all that is required is to re-state it in a more suitable form.
There is also the theory that some, perhaps the more intelligent ones, realise that they are lying but believe that it is okay in order to ‘save souls’. Blogs discussing creationists call it “lying for Jesus”.
I believe this phenomenon depends partly on which side of a possible cognitive dissonance one is on. If fundies are successfully practising one, they aren’t lying as such. Still, those on the other side, who don’t think it is possible to do something so nonrational, disbelieve the dissonance, i.e. believe that they are lying.
It also depends partly on the fact that some fundies are so blatant in their efforts that the most reasonable explanation seems to be that they are indeed lying.
Damn you, Thomas Winwood! Daaaamn yooooooouuuu…
Jim Lippard: Syntax, semantics, and pragmatics are terms of linguistics, which mean pretty much what he described.
Ah, but “apobetics”! To paraphrase Scott Adams’ Dilbert: A doctor with a flashlight could show us where Werner Gitt obtained that word.
jackd: Quite right… they aren’t buying into “apobetics” even over at the Christian Forums!
You may be right when it comes to incorrect usage of the technical word “theorem”…but none of you have paused to address the point that you can’t get a computer to understand abstract information. Even if you write extensive code, you are really just giving the computer mindless instructions. If any of you can come up with a programming language that will understand layman sentences and paragraphs, I’d love to see it.
I think you could say that DNA has high information density in the same way that a 2 Gigabyte SD card has high information density–only DNA’s information density is a lot higher.
You are all using a logical fallacy called “ad hominem” to sidestep the difficult arguments that are presented. Perhaps fewer insulting comments, and more good, logical scientific discussion?
That’s the Chinese Room fallacy. Where, precisely, does consciousness reside? Can you point to it? Nope. It arises out of the entire system of our brain. No piece has consciousness, but the entire thing does. Same thing with computers (and the Chinese Room). If we have a sufficiently advanced computer and an appropriate program, we’ll get consciousness. The brain, however, is still orders of magnitude more powerful than our computers.
All right, DNA gets to use individual elements of only a handful of molecules. We haven’t gotten to that level yet. We will. Is there a point?
Reread Mark’s post. Yes, he makes fun of the guy. He also demolishes everything the paper says. Gitt is writing gibberish and trying to pass it off as math. Mark went through Gitt’s major points, and showed how they were absolutely wrong. 100%. If you want to challenge what Mark said, great, that’s what discussion is for. Explain Gitt’s paper to us. Instruct us on what we were missing. As skeptics, we really *do* love learning new things, and if Gitt’s ‘completion of Shannon Information Theory’ is correct, we’d embrace it. That’s how skepticism works. In the meantime, since it appears that Gitt’s paper is a load of gibberish, we’ll treat it as such.
As for us commenters… Well, we’re angry people. ^_^ And when you’ve already got Mark saying all the good stuff, there isn’t a whole lot of reason to say much else. Pop in to some of the other comment threads – we get into some heady stuff there.
My point, such as it is, is that *there is nothing to respond to* in Gitt’s article. He’s *claiming* to be talking about math and information theory; and yet there isn’t a *single* *statement* in the entire article that actually cites any evidence, does any math, or makes any argument to support his argument concerning the nature of information.
He *asserts* without support that all information is a message, and that a message can only be generated by an intelligent agent.
He *asserts* without support that DNA contains information *in his sense* of a message.
He then concludes from these two unsupported assertions that DNA must have been created by an intelligent agent.
Sorry, but that doesn’t even qualify as an argument in my book. That’s an *assertion*, not an argument. All of the key steps in the entire article are merely bald, unsupported assertions.
You say that consciousness resides only in the brain. How do you know this to be true? What scientific *proof* do you have? If the mind is only a computer, then how is the brain able to take over the jobs of parts of the brain that have been removed? You are doing exactly the same thing that you accuse Gitt of doing – making an assertion without evidence.
You seem to think that because “there isn’t a single statement…that actually cites any evidence…”, the entire article is gibberish. The problem is that YOU don’t use any evidence, either. In fact, you use *less* than he does!
You *assert* that Gitt got all his information from Dembski
You *assert* that, no, DNA is not the highest information density. (Where’s the evidence?)
You *assert* that information can be generated by a random process. Where’s the proof? (And don’t try that evolution argument, please.)
You *assert* that no math=no sense. I see no math in your article. I don’t even see a direct logical argument against what he says. You make a big deal about the incorrect usage of “theorem”, and about the guy missing some equations, but you sidestep his arguments in a major way.
You are quick to point out that his conclusions are based on assertions, but you never set out to actually disprove any of them. This renders your argument a red herring.
As good scientists (I assume you folks are scientists), it is good practice not to prejudge this argument because it is found on the “Answers in Genesis” website. You should read it objectively, and not discredit it because of where it’s found. Doing otherwise is bad science.
Actually, I argued that the statement that DNA is the highest information density is nonsense because the *information density* is not a well-defined term. Gitt makes the *claim* that DNA has the highest information density, but doesn’t bother to define *what* information density *means*. He claims to be arguing in terms of information theory; but information theory contains no notion of “information density” that would allow us to compare the information density of a radio wave emitted by a pulsar with the information density of a DNA molecule. Gitt’s argument there is *meaningless*; I can’t refute it, because as it stands, *it doesn’t mean anything*.
I can say “DNA has the lowest possible information entrigonopathy”, and this disproves that it could be created by anything but a random process. Can you refute that claim?
As for information generated by a random process: again, Gitt claims to be arguing in terms of information theory. In information theory, the very *definition* of information is based on randomness.
And I do *not* assert that no math = no sense. What I’m arguing is that *Gitt* is claiming to make an argument based on *information theory*, which is a field of mathematics. His own basic claim is that his argument is formulated in terms of mathematics. But his argument is mathematically meaningless. *That* is a serious problem for him.
His argument, such as it is, is entirely circular. It’s got two basic assumptions, which it doesn’t attempt to prove (or even to justify); those two unproven assumptions contain his conclusion. (His argument reduces to “All information is a message originated by an intelligent agent”, and “DNA contains information in the *Gitt* sense of being a message”; therefore, DNA must have been created by an intelligent agent.)
It’s a garbage argument, dressed up in the terminology of mathematics to give it false credibility. But it’s garbage: a circular argument based on a bunch of ill-defined and unsupported assertions.
Because we’ve never found anything else that causes consciousness? If we remove parts of the brain, your ‘consciousness’ dies. There’s nothing else we have found that, when disabled, does the same thing. Thus, we conclude that consciousness resides in the brain. If one could show that consciousness actually resides in something else, such as an ethereal mind or a divine soul, it would be a wonderful scientific discovery and all would trumpet its praises. But that’s never been shown, so we stick with what we know.
I don’t see how the first part of your sentence connects to the second. You *seem* to be asserting that a computer, by definition, is unable to recover from damage. That it is unable to reroute functions from damaged hardware to functioning hardware. Since the brain *can* do this, the brain must not be a computer.
If this isn’t what you mean, please correct me. I don’t want to argue past you. But if it is, then you’re wrong. Parallel systems do that all the time. Look at Google – it’s composed of a ton of cheap bits of hardware. Pieces of it are *always* failing – it’s a truth of statistics when you get that size. And yet they don’t lose functionality from it, because they are able to route around the damage and use the still-functioning parts to perform what the damaged parts were supposed to. A desktop computer can’t do this, true, but that’s because it’s not designed to be parallel like Google’s systems or the brain is. It’s simply a consequence of the design of the system, not a failing of computers in general.
As for your critiques of Mark:
Well, it wasn’t quite an assertion – he did say “it looks like”. The reason he did so, of course, is because Gitt is speaking gibberish about IT, which Dembski is also known to do. It has nothing to do with the argument itself, though.
First, he stated that the concept of ‘information density’ isn’t even defined and gave a reason and an example why. Can you tell whether DNA or an EM wave off of a pulsar is more information dense? Not until you define what, precisely, information density means. If we pretend that it has meaning, though, then he gave an example of something that would probably have a higher density – diamond.
I believe as Mark does, that information density doesn’t make any sense, at least not as Gitt is using it. It depends on what we’re encoding. If we’re looking at genetics, then DNA uses a nice-sized molecule for each unit of information (depending on how you look at it, either 1 or 2 bits per unit). But what if we instead were encoding a message into the errors in DNA? That would probably be lower density, though it uses the same medium. What if we encoded info directly into the individual atoms? Again, same medium, but higher density this time. Until you can actually define what information density means, you get weirdness like this.
It’s not an assertion, it’s straight from Shannon. Random processes generate *more* information than other processes typically, because it’s usually harder to compress a random string than a string generated from a given non-random process. Go read Mark’s post a few days ahead of this about Information Theory, where he responds to an email sent to him. The comments there are very good in explaining exactly what IT is, and what information is.
His actual words were this, “Despite the fact that he claims to be completing Shannon theory, there is nothing to do with math in the rest of this article. It’s all words. Theorems like the ones quoted above, but becoming progressively more outrageous and unjustified.”
As you can see, he did *not* say that no math=no sense, in any sense. He said that the guy is trying to complete a mathematical theory without using any math. That doesn’t make any sense. That’s what Mark is arguing against. Now, Mark’s argument doesn’t directly use any math either, but that’s because he’s not trying to complete a mathematical theory. He’s demolishing an argument, not trying to remake part of mathematics. Mark at least actually references Shannon theory, showing that he actually knows what he’s talking about. Or at least can fake it better than Gitt. ^_^
You’re really hung up on the term “information density”. I agree; information is not tangible, and cannot be referred to as taking up space. However, DNA is the most efficient means of information storage that we know of today, and that is what he’s GITTing at (excuse the pun!).
Can you give me a provable example of information (meaningful information, not noise) having been generated by random chance? As far as we know, it is impossible.
Let’s use a mathematical example: Imagine a checkerboard. Now imagine that you have a cup full of checkers, both black and red. Suppose you want to create a pattern of checkers on this checkerboard. It would be easy enough. But let’s suppose that you must create this pattern by randomly throwing the checkers in the air, and letting them fall on the checkerboard. The odds against this pattern occurring are 3 to the 64th power, or about 3.43 × 10^30, to one. This means that you’d have to throw the checkers on the checkerboard 6550831464233300 times per second…for at least ONE BILLION years. You can do the math, if you doubt me. The fact is, as far as we know, random chance never generates meaningful information.
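The 3^64 figure in the checkerboard example is easy to check (presumably it counts three states per square: red checker, black checker, or empty – an assumption, since the commenter doesn’t say):

```python
# Each of the 64 squares holds a red checker, a black checker, or
# nothing, giving 3**64 possible board configurations.
configs = 3 ** 64
print(f"{configs:.3e}")      # 3.434e+30 -- matches the "3.43 x 10^30" figure
print(f"{1 / configs:.1e}")  # 2.9e-31 -- chance of one specific board per throw
```

Of course, verifying the arithmetic says nothing about whether “improbable” equals “information”, which is the actual point in dispute.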
Moving on to the brain: I say that the mind is more than the brain. You say that there is nothing more than the brain. Logically, it is impossible to prove your side, because you can’t prove that something doesn’t exist. On the other hand, I do have evidence in my favor.
I agree that Gitt has some problems. But in comparison, your arguments have a lot more.
See, here’s where you trip up. This is where *everyone* trips up. This is where Gitt, and everyone else who misuses information theory in support of creationism or ID trips up.
Information – Shannon Information – has NOTHING to do with the content. If there is any meaning, it doesn’t matter. It was an unfortunate choice of names, because information has several meanings in English, but it has one, very specific meaning in IT.
A much better word for it is Surprise. This term was actually used in some IT writings. Basically, you measure how surprised you would be at receiving the next value. If we have the string “1111111…”, it has a *very* low surprise value. We quickly come to expect that the next digit will be a 1. A string like “101001000100001…” would have higher surprise, because there’s a repeated pattern, but it’s more complex. A random string, though, surprises us with every digit, because we don’t have any idea what the next one will be. This is maximum Surprise.
Substitute the word Surprise for Information in IT, and you eliminate a lot of the ambiguity. It represents the same thing, but prevents the confusion of “useful information” versus “shannon information”.
Now, actually, Gitt doesn’t claim that IT uses the meaning of a message. (Everyone else does, though, including you.) This is what his theory is supposed to do – ‘complete’ Shannon IT to include meaning. However, he doesn’t use any math to do this. None at all. How can you *possibly* hope to ‘complete’ a mathematical theory without using math? It’s ridiculous! This is why Gitt’s paper is rubbish.
Responding to your other statement:
Science doesn’t prove anything. It shows that something is very likely to be true, by attempting to disprove it as hard as possible. All the information we have about how the brain works points to the brain being sufficient for consciousness. There have been many studies on brain damage showing a loss of specific higher functions when you disable specific sections. I can point to some of those, if you wish.
On the other hand, I’ve never heard of any experiments that show that one can damage something besides the brain and still get loss of sections of consciousness. If there was any, it would lend credence to your hypothesis that the brain isn’t the entirety of consciousness. Would you mind citing some of the evidence that you say exists?
I think we might be suffering from miscommunication. Let’s take for example, two jpeg pictures. One is of plain static, while the other is of a family vacation to Hawaii. Both pictures will probably take up somewhat equal amounts of space on the computer, because they are both statistical information. To the computer, this static picture is no different from a normal picture, because the computer thinks in terms of ones and zeros.
But a person can easily tell the difference between the two pictures, because the second has semantic value. Though statistically and syntactically they may be the same, it is obvious that semantically the second picture contains information. That’s what Gitt is talking about.
Let’s take another example. Currently there is a search going on for extraterrestrial intelligence. If, as you say, all information is the same, then it will be impossible to distinguish alien signals from the noise of space. We have to be able to recognize information beyond the statistical and syntactical value.
Back to the brain. You misunderstood what I was saying. It has been proven (google it if you don’t believe me) that parts of the brain can take over the jobs of parts that have been removed. It is repeatable, and it is observable. It is something that a computer cannot do without the help of an outside agent. And, I have never heard any other explanation for how this happens other than that there must be more to the mind than the brain.
Xanthir mostly beat me to the punch here; he’s said what I was going to.
Gitt’s article *claims* to be arguing from information theory. It in fact claims to be *completing* information theory. So the argument *has* to use information theory. It doesn’t.
*Your* arguments defending Gitt are shifting the definition of information theory (as well as demonstrating how little you understand it). Information *is* randomness, *by definition* in information theory. You can scream and shout about meaning until you’re blue in the face; it doesn’t matter. Gitt’s argument is based on information theory, and so it’s got to use the information theory definition of randomness – or present an alternate mathematical formulation of information that subsumes the one from IT. Gitt didn’t do that, and you aren’t doing it.
WRT your misunderstanding of IT:
If you take a picture of random static and a picture of a Hawaiian beach, and encode them both in JPEG, they *won’t* be the same size. The static picture will be *dramatically* larger. Why? Because it contains *more information*.
In the Hawaiian beach photograph, there are large rectangular areas that are pretty much the same color; in the bitmap, those are represented as numerous copies of the same RGB values. JPEG does compression based on blocks, and it will recognize the duplication, and use it for compression. The static, on the other hand, will be *uncompressible*, because it doesn’t contain redundant information.
For a trivial example, I took the previous paragraph of this comment – it’s 398 bytes. Putting it through a simple compression (gzip – a *less* efficient compression than jpeg), I can reduce it to 272 bytes. Using a bit of Scheme code, I generated a sequence of 398 random bytes, and pumped it through gzip; I ended up with 402 bytes; the file was effectively uncompressible, but the gzip file format needs metadata, so there was a net add of 4 bytes.
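The same experiment is easy to rerun in Python (a sketch, not the original Scheme; the exact byte counts will differ from the numbers above, but the direction never does):

```python
import gzip
import os

# Ordinary English prose is redundant, so gzip can shrink it.
text = (b"In the Hawaiian beach photograph, there are large rectangular "
        b"areas that are pretty much the same color; in the bitmap, those "
        b"are represented as numerous copies of the same RGB values. "
        b"JPEG does compression based on blocks, and it will recognize "
        b"the duplication, and use it for compression.")

# The same number of random bytes has no redundancy to exploit.
noise = os.urandom(len(text))

print(len(text), len(gzip.compress(text)))    # the text shrinks
print(len(noise), len(gzip.compress(noise)))  # the noise grows: gzip's own
                                              # metadata is a net add
```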
Information *is* tangible in the sense that it can be measured. The problem with statements like “DNA has the highest information density” is that it *is not a meaningful statement at all*.
As I keep asking: how do you compare the “information density” of a strand of DNA to the information density of a radio wave? There are radio waves emitted by things like pulsars that are incredibly information rich. How do you compare the density of the information content of a strand of DNA to the density of the information content of the pulsar radio wave?
Actually, if you took two bitmaps with the images you described, both would be equal in size, because a bitmap is simply a description of every pixel in the picture (it’s a map of the bits ^_^). If they were jpegs, though, the vacation picture would be smaller than the static picture, because of the jpeg compression.
Think about that for a moment. Compression shows you how much information is actually in the file. The string “11111…” would be compressed to almost nothing, while a random string cannot be compressed at all (in general). If the vacation picture compresses to a smaller size than the random static, this indicates that the vacation picture has less information than the static. This is because the vacation picture consists of long pieces of similar information. The sky might consist of large patches of a couple shades of blue. A red shirt would be the same. On the other hand, the static picture has no large patterns. There is, in general, no place where you can say, “And now, repeat this string x number of times”.
Now, if we want to talk about telling the difference between the pictures, we have to start talking about systems. Yeah, to an ordinary graphics program, the two pictures are basically the same. It can’t tell a difference, because all it does is look at the picture and display it. The entire system of picture+graphics program isn’t much more complex than the picture itself.
On the other hand, when a human looks at a picture, they bring to bear all the complexity of our pattern-recognition abilities and memories. Two pictures of static may look the same because they don’t trigger our abilities, but pictures of two different people will be recognized as different. The entire system of picture+human brains *is* a lot more complex than the picture itself.
As a third alternative, you have pattern-recognition software that *can* tell pictures of people apart from other pictures. When you use Google Image Search, by default it has SafeSearch turned on. This ability looks at pictures and determines whether or not they have enough skin on them to qualify as pornographic. If you showed it a picture of static, a picture of a clothed person, and a picture of a naked person, it would be able to tell which one showed a naked person (generally – the algorithm is defeated sometimes). There are others which can look at pictures and tell if the faces in the pictures match other pictures that are on file. In general, programs *can* tell pictures apart – sometimes very general classes of pictures – if they’re designed to do so.
Or, in other words, all of this is already described in IT – it’s the information of the system. Gitt is trying to take all that info and cram it into the message itself.
Again, you’re confusing the standard meaning of information with the technical meaning. Information in IT can just as well be described as “surprise.” Information, in the way we usually talk about it, can’t. This is because the two are referring to completely different things. There is no hope of understanding Information Theory unless one understands what IT means by Information, and how that is different from the standard meaning of Information.
The problem is the same in evolution. The word evolution existed long before Darwin – it basically meant change or growth. This is what we mean when we talk about stellar evolution. The evolution that biologists talk about, though, is completely different, and a lot of confusion results from this.
Oh, I believe you on that score. It’s 100% true that the brain can recover from losing parts of itself. Other parts change to take over the missing part’s functions. This doesn’t, by itself, prove that consciousness exists outside of the brain. What it actually shows is that, although the brain has specialized sections to do particular things, in theory any part can take over. If you’ve never heard an explanation of it, you haven’t looked very hard. After a disablement like this, scans of the brain show that the brain activates differently. If, for example, looking at pictures of women would normally activate the removed section, then if the person recovers enough, you’ll find that entirely new sections of the brain activate when the person looks at women – sections that don’t activate under the same circumstances in normal people. There’s lots and lots of research on this.
On the other hand, this process certainly isn’t perfect. Sometimes people regain full use of the lost functions. Sometimes they gain partial use. Sometimes they never regain it. This is because the brain is not perfect.
However, you are definitely wrong about computers not being able to do it without an outside agent. Look up the Google File System. Or just click on that link. It is designed, as I said before, specifically to be able to lose sections of itself and automatically recover without losing any functionality. This is because, on a server farm as large as Google’s, it is guaranteed that a certain percentage of hardware will be broken at any one time.
As I said, your desktop computer won’t be able to do this, because it’s not designed to. Doing so requires massive parallelism, something that your brain and Google’s server farm both use. But there’s nothing special that prevents computers from doing so – it’s simply a matter of designing them correctly.
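A toy sketch of the idea in Python – nothing like the actual GFS protocol, and every name and number here is made up for illustration – showing how replication lets a system lose a whole machine and repair itself with no outside agent:

```python
import random

MACHINES = {m: {} for m in range(10)}  # machine id -> {chunk id: data}
REPLICAS = 3                           # every chunk lives on 3 machines

def write(chunk_id, data):
    # Place copies on three distinct machines.
    for m in random.sample(sorted(MACHINES), REPLICAS):
        MACHINES[m][chunk_id] = data

def read(chunk_id):
    # Any surviving copy will do.
    for store in MACHINES.values():
        if chunk_id in store:
            return store[chunk_id]
    raise KeyError(chunk_id)

def heal():
    # Count surviving copies, and re-replicate anything below quota.
    counts = {}
    for store in MACHINES.values():
        for cid, data in store.items():
            counts.setdefault(cid, [0, data])[0] += 1
    for cid, (n, data) in counts.items():
        for m in sorted(MACHINES):
            if n >= REPLICAS:
                break
            if cid not in MACHINES[m]:
                MACHINES[m][cid] = data
                n += 1

for i in range(20):
    write(i, f"data-{i}")
del MACHINES[0]   # a machine dies, taking all its replicas with it
heal()            # the system repairs itself...
assert all(read(i) == f"data-{i}" for i in range(20))  # ...and nothing was lost
```

The design choice is the same one the brain and the server farm share: don’t make any single part irreplaceable.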
I apologize for the long comment. I was attempting to be comprehensive, and I do that best with examples and restatements from different angles. It also appears to have lost some line breaks that would have made it easier to read.
Mark: Happy to help! ^_^ Don’t have much else to do during the day besides talk to people who don’t know how to use computers, and fix mistakes that they’ve made.
I believe information density is a valid concept. It can be observed here:
And no doubt DNA has an impressive information capacity for its size, even more so for DNA’s read/write efficiency. But given a few billion years of evolution, I personally would have been disappointed by (and bet against) anything less.
However neither magnetic storage nor DNA is anywhere near the theoretical limit:
I also suspect that using light (or any RF wave) could go even further, as MCC suggests, although it might be tricky to store for later use.
Careful, you’re mixing definitions!
When we talk about capacity and density of disk drives and how they store information, we’re not talking about information in the IT sense. For example, a gigabyte of disk space could be taken up in repetitions of “1000100100”, and we’d say the disk was “storing 1G of information”; but to information theorists, that’s only a couple of dozen bytes of information at most!
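That gap is easy to demonstrate in Python: a disk’s worth of the repeating pattern collapses to almost nothing under compression (the exact compressed size depends on the compressor, but the ratio is dramatic):

```python
import zlib

# A "full disk" of the repeating pattern 1000100100: a million bytes of
# storage used, but almost no information in the IT sense.
disk = b"1000100100" * 100_000
packed = zlib.compress(disk, 9)

print(len(disk), len(packed))  # the compressed copy is a tiny
                               # fraction of the "stored" size
assert zlib.decompress(packed) == disk  # and it's fully recoverable
```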
The problem with information density in the IT sense is that IT talks about information basically as a bundle of bits. It doesn’t talk about space, or time. But to talk about density, you need to have some notion of information per *something*; per cm, per second, per ml – you need some unit to express density. The problem with things like Gitt’s claim is that he’s making a *blanket* assertion about information density without specifying *how* that can be computed in a meaningful way.
If you want to specify that DNA is the “densest” information storage medium, then you need to describe *how* you measure density in a way that allows you to meaningfully compare the information density of different things.
Even if you’re talking about something like bits/cm3, to make a claim like Gitt’s, you need to show *why* the density of DNA is the highest, rather than just asserting it. As I think I said in the original post, if you look at a diamond, it’s an incredibly dense crystal, but it’s full of imperfections and impurities. What’s the information density expressed by the structure of particles inside of a diamond in bits/cm3? More or less than DNA? What’s the information density of the splatter pattern of particle collisions in a supercollider? A collider slams together *atoms* of gold; from the collision of a handful of gold atoms, we gather *terabytes* of data.
You guys sure are pros at putting text on the screen fast 🙂
OK, I don’t claim to be an expert at information theory. I’m just taking what this guy says at face value.
You can sit and argue that static contains a lot of information. This is true. But it is base level information. The fact is, no one cares about watching static on tv, because static is base level, practically worthless information. What people care about is higher level information, and that’s what Gitt is talking about.
It is true that a “noise” picture takes up more space on a computer than a regular picture. What does this tell us? Computers can only deal in statistical information. The computer looks at a picture, and doesn’t care what the picture is of, only if there are repeating patterns. However, we look at a picture, and distinguish a pattern of semantic value.
You used the example of pattern-recognition software. I am writing a program right now that compares thousands of words against each other, and returns a boolean similarity value (T = They are “similar”, F = They aren’t). My program is able to read from a thesaurus to find similar words, and use it to determine whether or not two words are similar. You might watch it run and say “Ha! The computer is determining the semantic value of these words!” But this is not true. Roget wrote a thesaurus, I converted it, and now the computer can read it. It’s not the computer’s doing, it’s Roget’s and mine. Same with the pattern-recognition software. The computer can make mistakes, but ask a person to tell if a picture is pornographic or not, and the person will get it right every time.
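For concreteness, here is a toy Python version of that kind of program (the synonym table is a tiny hand-made stand-in, not Roget’s actual thesaurus): all of the “semantics” sits in a table a human supplied, and the code itself only does dumb set lookups.

```python
# A hand-entered fragment standing in for the converted thesaurus.
SYNONYMS = {
    "big": {"large", "huge"},
    "happy": {"glad", "joyful"},
}

def similar(a: str, b: str) -> bool:
    """Boolean similarity: True if the words match or the table links them."""
    if a == b:
        return True
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

print(similar("big", "huge"))   # True
print(similar("big", "glad"))   # False
```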
Because of the massive number of processors, etc. in Google’s farm, removing one hard drive from it would remove, say, 1/10000 of the entire system. Therefore, the effects would be minuscule. But try removing a sixth of any computer system, and it will not recover on its own.
Again, I’m not an information expert, and I may be using the wrong term. But the way science is advanced is by questioning science.
There is no such thing as “base level information”. Information is information. Period.
You’re making the error of confusing *semantics* with *information*. Semantics is meaning; information is just data.
As I keep saying, over and over again in this thread: Gitt’s stated motivation is: “Information theory is incomplete, and I’m going to fix it”. What he does to supposedly complete the mathematical theory of information is *throw math out the window*, and make a bunch of unwarranted, unsupported assertions, label them as theorems, and say “Poof! See, god exists”.
Here’s a summary of his argument in a very small number of bullet points. Unless you can show me that there’s something invalid about how I’m summarizing, then his argument is bullshit.
1. Information theory is incomplete, because it only considers the “statistical” level of information.
2. If you consider the other levels of information, then it becomes necessary that information *only* appears in the context of a *message*.
3. A message can *only* be generated by an intelligent agent.
4. DNA contains information, not just at the *statistical* level of information theory, but at my higher levels where information is a message; DNA is therefore a message.
5. Since DNA is a message, it requires an intelligent agent to send it.
6. Since life couldn’t exist before DNA, that means that the agent must be something that predates life on earth – the creator of life.
The problem with this argument, as I keep saying, is that the assertion that information *only* exists as a message transmitted by an intelligent agent, *and* DNA contains information in this higher level semantic-message sense is hard-wiring the assumption that there is a god. For the argument to stand up, you need to support the argument that there is a meaningful distinction between “message information” and “statistical information”, and show that DNA contains “message information”. Gitt utterly fails to even *try* to make that argument.
In the interest of fair play I believe we need to give Gitt’s (that name still sounds funny) scientifically challenged position a boost.
So even if he is off by orders of magnitude, it’s not relevant.
I’d just give him this one too (again, not relevant), though I expect it’s provably false.
However, the idea that the core information system of life would evolve to the limit of efficiency is practically a definition of natural selection.
Then he goes supernatural; this is all too awesome, so he gives up and blames the big ID.
But, as to the formal definition of information, I’m not sure we have anything over him.
I feel you’re giving him too much credit (even if you’re not giving him much). As soon as I realized he can’t even use the word “theorem” properly, I felt confident concluding that the entire thing is, mathematically speaking, bullshit. There’s not a single rigorous statement of any kind evident in that piece of work.
Pedantically correcting myself, there’s not a single rigorous novel statement in that piece of work.
As Mark said, you’re still getting tripped up in incorrect definitions. Information is defined in a very specific way in IT. Information is the amount of novel stuff. It’s the degree to which you’ll be surprised at the next bit. It’s the degree to which it resists compression. These are all equivalent ways of stating information. “Has meaning to a human” is not equivalent – it’s using a completely different definition of information, one that Information Theory does not use.
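The “surprise” definition has an exact formula: Shannon entropy, H = -Σ p·log2(p), in bits per symbol. A minimal Python sketch:

```python
import math
from collections import Counter

def entropy_per_char(s: str) -> float:
    """Shannon entropy in bits per character: the average 'surprise'."""
    n = len(s)
    # sum of p * log2(1/p) over the observed symbol frequencies
    return sum((c / n) * math.log2(n / c) for c in Counter(s).values())

print(entropy_per_char("aaaaaaaa"))          # 0.0 -- no surprise at all
print(entropy_per_char("abcdabcdabcdabcd"))  # 2.0 -- four equally likely symbols
```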
You must use this definition when you’re talking about Information Theory, or provide a way to translate this idea of information into your new version.
Gitt does not do this. He simply expands the definition and starts making conclusions about it. There is *no* definition of what his information truly represents, *no* way to translate between the original definition of information and his new one, and in general *no* math at all! You can’t extend a mathematical theory without using any math. This is the basic reason why Gitt’s paper is rubbish.
To me it seems that Gitt is doing the pseudo-math equivalent of seeing the face of the virgin Mary in his grilled cheese.
Again, you seem to really want to stick to the Information Theory definition of information. But, if you think about it, there is more to “information” than just Shannon’s definition. The dictionary defines information as:
1: Knowledge derived from study, experience, or instruction.
2: Knowledge of specific events or situations that has been gathered or received by communication; intelligence or news.
3: A collection of facts or data
4: The act of informing or the condition of being informed; communication of knowledge
The term “information” is not unique to information theory. Gitt makes it clear that he thinks that the IT definition of information applies only to, what he calls, the statistical level.
If you think that there is only one kind of information, then nothing has any meaning. You can look at my post, and look at a sequence of random characters, and get the same amount of information from both.
Part of Gitt’s argument is that you can’t explain Semantics, Pragmatics, or Apobetics mathematically, because math deals only with statistical data.
Gitt’s argument is bullshit. How many times do I have to repeat the same thing?
His argument starts by saying he’s going to fill in gaps in information theory. He then writes a paper that uses the *terminology* of math, but without any *actual* mathematics.
And worse, his argument is completely circular. He *asserts* without support that all information is in the form of a message. He *asserts* without support that DNA contains information *in his higher level form that requires it to be a message*. He then uses that to conclude that God is the person who *sent* the message embedded in DNA.
That’s pure gibberish.
For an argument like Gitt’s to be meaningful, he would need to actually *define* his higher levels of information, and tell us how to *objectively recognize* the existence of information at each of his higher levels.
He doesn’t. He merely *asserts* that information is best described by his five levels (for no particular reason); and he asserts that information is always a message (for no particular reason), and he asserts that DNA is a message (for no particular reason).
Let me try a slightly different tack. Based on Gitt’s definitions, how can we identify something that contains information in the Shannon sense, but *not* in Gitt’s sense?
If I give you a file that’s a random bunch of bytes, is it information?
What if I then tell you that it’s actually encrypted, but I don’t know the password. Does it contain Gitt’s higher order information? What if I say it *might* be an encrypted file. How can you tell whether it contains Gitt information?
Those questions *can’t* be answered based on Gitt’s paper.
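A concrete way to see why: a one-time pad sketched in Python (just XOR with random key bytes – an illustration, not production crypto). With a truly random key, the ciphertext is statistically indistinguishable from pure noise, so no Shannon-style measurement can separate “encrypted message” from “random junk”:

```python
import gzip
import os

message = b"Meet me at the observatory at midnight. " * 10
key = os.urandom(len(message))                            # one-time pad
ciphertext = bytes(m ^ k for m, k in zip(message, key))   # XOR encryption
junk = os.urandom(len(message))                           # plain random bytes

# The plaintext compresses; the ciphertext and the junk both refuse to,
# and nothing short of the key tells them apart.
print(len(gzip.compress(message)) < len(message))        # True
print(len(gzip.compress(ciphertext)) > len(ciphertext))  # True
print(len(gzip.compress(junk)) > len(junk))              # True
```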
Optimistically, I would guess three more (you know, Pi times)
Generally I concur with you (MCC) with the following exception:
The message concept for information is a valid perspective. Even Shannon deals with it in this way: his efforts were all about getting a certain quantity of information from A to B within a quality limit (time, signal/noise).
Maybe we just need a different term for this slightly richer concept.
Axiom 1: digital-info-message is a collection of bits transferred from a generator to an observer (could be any kind of entity at either end) with a mutually agreed (or guessed) context (language, including boundaries if applicable).
Otherwise, as you say, gibberish.
Curiously, I am Ok with DNA as a d-i-m, and even that it arises from intelligent agency. But I don’t agree with Gitt since I believe an intelligent agent can easily be an emergent property of a complex chaotic system (e.g. primal soup), and would shamelessly offer myself as an example (and of course everyone else).
Logically, it is impossible for me to prove that static does not contain an incredibly advanced encryption. It is true that information can be found in 3-d stereograms, which look like static unless you know how to look at them.
However, take the Enigma as an example. The Enigma at the time was the most advanced encryption system that existed, and it made no sense to those without a decryption device. However, the code was *still broken* because they were able to find an intelligent pattern in the information.
It has been proven that a computer cannot generate truly random numbers. Eventually, it will start repeating itself. Noise usually repeats itself as well, whereas information with meaning *probably* will not.
Because evolution is *not* provable, I would like to see a better example of an intelligent agent arising from a complex chaotic system.
There’s the problem. Gitt’s “theory of information” requires that you be able to distinguish between noise (aka “statistical” information without meaning) and “meaningful” information (aka info with higher-level content). But he can’t show you how to do that.
So how does he conclude that DNA is *message* content with an intentional sender? He doesn’t. He just says that it must.
So he can’t tell us how to recognize information with an intentional sender; and he can’t tell us how to *disqualify* information that does *not* have an intelligent sender. So what value does his entire argument have?
(As an aside, part of the reason that Enigma was broken was because the allies were lucky enough to acquire an enigma machine!)
Not true. In fact, the *majority* of noise sources will *not* generate repeating patterns. Check Shannon’s papers, or any decent text on information theory.
In other words, “I’m going to eliminate from consideration the only demonstrable example that we have of an intelligent agent, and then declare that I win the argument if you can’t show me an example of an intelligent agent.”
Let’s put this in logical form:
“Some information is information that contains meaning”
Now, logically, the ONLY other statement that truly contradicts this is:
“NO information is information that contains meaning”
I know that that isn’t what you’re saying. Therefore, you must hold one of three positions:
1. “ALL information is information that contains meaning”
2. “Some information is not information that contains meaning”
3. [Gitt’s position] “Some information is information that contains meaning”
We know that DNA resembles a set of instructions. It has been called “the code of life”. The body is put together based on the instructions given by its DNA.
Imagine a teacher giving her students an assignment. She encodes all the work that she wants her students to do, along with a due date, by using a graphite pencil to inscribe letter-like strokes on a sheet of paper that form English words. The students take this paper home, look at each stroke, determine what letter each stroke forms, combine stroke forms to determine words, put the words together to understand what the assignment is, and do the assignment.
If you watch this interaction, you know that the student was able to find a meaning in the assignment. Even if you couldn’t read, and had no concept of writing, you would be able to realize that the student was given instructions, and that the student did them. The same is true for DNA…except DNA IS the assignment. The body is the student. Now…who is the teacher?
(By the way, the allies actually found some enigma codebooks, not an actual machine. Even if they had a machine, it would be statistically impossible to get the right combination of wheels and plugs at the right time.)
If you say that evolution is the only demonstrable example of an intelligent agent arising from a complex chaotic system, and yet it is possible, then you have committed the fallacy of circular reasoning.
You cannot PROVE scientifically that evolution happened, because it is not repeatable.
You cannot PROVE scientifically that a complex chaotic system can act as an intelligent agent, because you don’t have an example that is repeatable.
In order to PROVE scientifically that an intelligent agent can arise out of a chaotic system, you must give an example that *is* repeatable.
Events don’t have to be repeatable in order to draw scientific conclusions. Only observations. Science quite happily draws conclusions about historical, unrepeatable events all the time. But the observations used to draw those conclusions are repeatable.
This is the last time I’m going to explain this; I think you’re being quite deliberately obtuse here.
Gitt claims to make the argument that life *must* have been created by god. The way that he does this is by *purportedly* extending information theory, introducing new levels of information. *According to Gitt*, these new levels of information *require* an intelligent agent – they cannot be created without one.
He then asserts that DNA contains information *in this new sense* – this sense that requires the *deliberate* action of a communicating intelligence.
Asserting that DNA contains *meaningful* information is *not* the same thing as asserting that DNA contains Gitt’s notion of meaningful information. Gitt’s notion requires not just that it be meaningful – but that it be a message, deliberately transmitted by an intelligent agent.
An ice core drilled from Antarctica contains information in its layers. That information is *meaningful*; it records how much snow fell each year. But it clearly is *not* information in the Gitt message-transmission sense: the ice core was not deliberately laid down as a message from an intelligent agent.
Davis already covered the science issue, so I’ll just chime in in agreement. “Repeatability” in science doesn’t mean that we have to be able to repeat an event. I don’t think that anyone would say we can’t prove anything about how stars form because we can’t repeatably create a star. I don’t think anyone would say we can’t prove that a crater was caused by an asteroid impact because we can’t repeatably create craters by slamming asteroids into the earth on demand. The *observations* that lead us to those conclusions are repeatable. *That* is the repeatability criteria used by science.
You are not the first to suggest that a message from me does not provide evidence of intelligence.
I suspect a robust mathematical definition of an intelligent agent is out of my reach, but appears to be a requirement for your (Gitt’s) position. What if the “Intelligent Designer” was so dumb as to be no better than random?
How about a mountain stream (on a quest for the ocean) sends a message to the bedrock (“get out of my way”) using small particles (use of a tool implies intelligence) to carve the grand canyon, creating art!
P.S. You have it backwards, random noise repeats less than signal, just look at this thread!
(As scientists, I hope you know this)
No event is scientifically provable. It is only historically provable, and there is a difference. Scientific proof is 100%, absolutely, no question, I’ll-show-you-if-you-don’t-believe-me proof, because it is repeatable, you can watch it, and (ignoring experimental error) is accurate. Historical proof is more along the lines of “all the evidence points to this conclusion, so we’re 99% sure that it happened”. Evolution is not scientifically provable. It was an event. Even if it did happen, it will never happen again. (At least, not when you are able to see it). If you then try to say that you have “scientific proof” because evolution happened, then *you* are the one using bad science.
I’m sorry if my posts have led you to believe that I don’t understand what you’re saying. I read your post, and every time I understand it, and wonder why you keep putting the same thing up.
Gitt is stating the obvious. He is just assigning names to it.
Have you ever *seen* an intelligent agent rise from an unintelligent source? Neither has Gitt. What he says may not be scientifically provable, but all the evidence *is* in his favor.
I don’t want to get into an evolution/creation debate here because it’s off topic. But think about it! THE ONLY EXAMPLE YOU HAVE OF RANDOM CHANCE GENERATING MEANINGFUL “MESSAGE” INFORMATION IS EVOLUTION. Evolution is *not* scientifically provable! If you want to say that you have scientific proof that random chance can generate a meaningful message, the laws of science *demand* that you provide a repeatable, observable example! I don’t know how many times I need to say this–I would think that you guys are all experienced enough as scientists/mathematicians to know it!
You said use of a tool “implies” intelligence. ‘Implies’ does not mean ‘proves’. And where is the information transfer? Even according to Shannon, there is no actual information transfer!
No. We do *not* have random chance generating “meaningful message information” in evolution. That’s the point you keep missing. Gitt asserts *without evidence* that DNA is *message* information. DNA *does* contain information. That information *does* have meaning. But making the leap from “DNA contains information with meaning” to “DNA contains deliberately transmitted message information” is *not* obvious, and neither you nor Gitt has demonstrated it.
(And incidentally, “implies” does mean “proves”. Implication is one of the steps of the logical deductive process of proof in predicate logic: Given “∀ x : A(x) implies B(x)” and “A(y)” for some y, you can prove “B(y)”.)
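(For the formally inclined, that deduction step checks mechanically; here it is written out in Lean, purely as an illustration:)

```lean
-- From "∀ x, A x → B x" and "A y", conclude "B y":
-- universal instantiation followed by modus ponens.
example (α : Type) (A B : α → Prop)
    (h : ∀ x, A x → B x) (y : α) (ha : A y) : B y :=
  h y ha
```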
Ah ha, the core of our disagreement.
If you say that DNA contains no (Gitt)message information, then you must also say that your computer code contains no message information. Computer code *is* information that contains a message, and it is also information that only an intelligent source can generate.
A computer is not capable of thinking up a program for itself, writing meaningful code to itself, and executing itself. (You may be able to write a program that does something similar, but remember that the result will be entirely predictable. It is you who ultimately wrote the code…not the computer)
The same is true for DNA. DNA is a code that has been “written”, “compiled” into DNA form, and “executed” by the body. To turn your back on this fact would be to discard all genetic science as we know it. Almost all biologists agree that DNA is a code, a set of instructions, to the body. And it is more complicated than any computer program that has ever been written.
If you have a blank computer with no programming on it, it will not create programming for itself. You can kick it, scream at it, coax it, or shake it, but it will *never* write a meaningful program on its own. How can it be true, then, that DNA wrote itself, compiled itself, and executed itself on its own?
(601, where is the math…or the evidence…that shows that implementation of a tool implies intelligence?)
You can’t just *say* “DNA is a code that has been written”. You need to *prove* it. You’re just continuing Gitt’s circular argument: “Messages need a sender”, “DNA is a message”. You need to show why DNA is a message. Asserting that it has meaning isn’t sufficient: there are plenty of natural sources of meaningful information that we don’t consider messages. Using a metaphor for program code as DNA isn’t sufficient; you need to show *why* it’s a valid metaphor. Just because you can say “DNA looks like instructions, just like a computer program; therefore anything that I can conclude about a computer program, I can conclude about DNA” is not valid logic. (It’s an example of the old fallacy: “All men are mortal. My dog is mortal. Therefore my dog is a man.”)
The fact that I don’t just blindly accept the assertion that DNA is a message is not denying that DNA is part of the biochemical process of life, that it performs a crucial function in that process, or that it encodes information that is needed by the biochemical processes of life. But none of that that requires that DNA be a message.
The point is that if you want to make the claim that something is a message, you need to provide an objective criterion by which one can determine whether or not something is a message.
Is an ice core a message? It clearly encodes information with meaning. But is it a message?
Is the radio wave emitted by a pulsar a message? It clearly encodes information with meaning. (The pulse encodes information that describes the size and speed of rotation of the star.)
If those two are *not* messages, then how do you differentiate between things that contain “meaningful” information that are messages and those that are not?
Because no scientific law has yet been established, it is very difficult to scientifically prove that DNA is, in fact, a message.
Here is a logical proof:
1. All information that contains a set of instructions is information that contains a purpose.
//(This is true by the very definition of ‘instructions’. All instructions are given with a purpose.)
2. All information that contains a purpose is generated by an intelligent source.
//(This is a scientific theory, if not a law, going by the scientific method. The observation *has* been made and tested multiple times. There is no contradicting evidence: i.e., we have *never* seen a message with purpose generated by an unintelligent agent. Thus far, there is no contradicting scientific evidence.)
3. DNA is information that contains a set of instructions.
//(This has also been proven. We *know* that the body is built on DNA’s instructions.)
Therefore: DNA is generated by an intelligent agent.
I know that you are going to disagree with my second premise. But what is the scientific method?
1. Make an observation, it becomes a hypothesis
2. Test the hypothesis
3. If the hypothesis passes these steps, it becomes a theory.
4. Test the theory
5. If the theory passes these steps, it becomes a scientific law.
6. If, at any time, the hypothesis/theory/law is contradicted by scientific evidence, it is rendered disproven.
My second premise has filled the requirements for it to become a law. You have given no **repeatable** scientific proof against it.
No, it’s not. You clearly don’t understand science very well if you think anything is ever that certain. There’s only one thing with that degree of certainty, and that’s mathematics. But mathematics is not science, even though it is an essential tool for science.
You’re equivocating left-and-right with your meanings here, in order to beg the question. Give me a rigorous definition of “instructions,” “purpose,” and “intelligent;” without rigorous definitions, your “proof” is garbage, as there are numerous senses of all these words.
Your statement of the scientific method is like the oversimplified version from a middle school science class, rather than what scientists actually do. Do you actually have any scientific training?
That’s what we call “definition games”. That is, your definition of “instructions” includes the notion of deliberate purpose imposed by an intelligent being. You then say that DNA is a set of “instructions” – but you do that using a different definition of instructions – the informal one that “instructions” consist of anything that describes how to construct something.
What you’ve omitted is the crucial step of demonstrating that your intelligent message definition of “instruction” is the same as the definition of “instruction” that you used for DNA.
So your argument is ultimately invalid – exactly the same way that Gitt’s is. It’s got two fatal flaws. It consists of:
(1) All things that are “instruction1s” have a purpose.
(2) All things that have a purpose are created by an intelligent being.
(3) DNA consists of a set of “instruction2s”.
(4) Therefore DNA has a purpose
(5) Therefore DNA was created by an intelligent being.
The two errors?
First, “purpose” is set up with a circular definition. “purpose” is set up in step 1 to mean “created by an intelligence to perform a task”; so step 2 really means that if something was created by an intelligent being, then it was created by an intelligent being.
Second, “instruction1” and “instruction2” are not the same thing. You use the same word, but you use them in totally different senses: “instruction1” means “created to perform a specific task by an intelligent being”; “instruction2” means “information describing a sequence of steps”. You have *not* shown that “x ∈ instruction2” implies “x ∈ instruction1”. (In fact, you haven’t even really shown that DNA ∈ instruction2; you’ve just pointed at informal descriptions of DNA by biologists using the term. That’s not proof.)
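The shape of that equivocation can be sketched in a few lines. The sets and their members below are stipulated purely for illustration (hypothetical names, not established facts):

```python
# Two distinct senses of "instruction", modeled as sets. Membership is
# stipulated for illustration only, not asserted as fact.
instruction1 = {"recipe", "assembly manual"}   # authored by an intelligence for a task
instruction2 = instruction1 | {"DNA"}          # anything describing a sequence of steps

# The premises license conclusions only about instruction1, but the
# argument only ever places DNA in the broader set instruction2:
assert "DNA" in instruction2
assert "DNA" not in instruction1   # the step the argument never proves
```

Because instruction2 is strictly larger, membership in it says nothing about membership in instruction1; that gap is exactly the missing step in the proof.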
I was just trying to get my point across. Scientific proof is as close as we can get to absolute proof, even if it is only 99.9998% accurate. But the “I’ll show you if you don’t believe me” part *is* true–something scientifically provable is repeatable. (How many times do I need to say it?)
I think the definitions of “instructions”, “purpose” and “intelligent” are kind of obvious, but I don’t mind posting them anyway.
INSTRUCTIONS: “An authoritative direction to be obeyed; an order” Directions, order, list of tasks.
PURPOSE: “A result or effect that is intended or desired; an intention” A motive, having an end result in mind.
INTELLIGENT: 1: “Showing sound judgment and rationality” 2: “Appealing to the intellect, intellectual”. The ability to judge, deduce, and produce beyond the amount of data input.
(Definitions in quotes are from the American Heritage Dictionary of the English Language)
I didn’t want to go into a detailed analysis of the scientific method. I gave a simple summary for a reason. If I missed something important there, please tell me. The *general* idea is: test a theory until it is proven wrong. You **still** haven’t shown me any *repeatable* scientific proof against what I said!
We’re playing the definition game.
We can go on for hours on why the other’s definition is wrong.
You are using the straw man fallacy in both cases. DNA is a set of instructions, because we can observe that the body follows the instructions, and that if the instructions are changed, the body changes accordingly. If you have a better mathematical way of determining whether or not information is instructions (and if you can prove that DNA is not a set of instructions) I would love to see it.
You both are majorly ignoring my main points and focusing on the details.
I am not ignoring your main point: I’m focusing on the crucial things that make your point stand or fall. You’re claiming that you and Gitt have an actual defensible logical proof that DNA was created by an intelligent agent.
But you have no such thing. You have a circular argument that includes a major logical fallacy (affirming the consequent).
You’re *defining* instructions in a way that *requires* them to be created by an intelligent agent – so that anything that isn’t created by an intelligent agent isn’t instructions. But then you switch definitions – you assert that DNA is instructions because it directs a process. But you’ve never made the essential connection – between the intelligent agent and DNA.
DNA *looks like* instructions in the sense that it provides the guide for a series of chemical processes. But that doesn’t make it *a set of instructions created by an intelligent agent for a purpose*. It makes it a chemical that acts as a guide for a series of chemical processes.
I don’t know a mathematical way of determining whether or not information is a series of instructions. I don’t know a mathematical way to determine if information is *meaningful* in the Gitt sense of including his “higher levels” of information. In fact, I don’t believe that it’s *possible* mathematically to do that.
But Gitt (and you) are asserting that *you can* do that. You need to *show* that you can. If you cannot show an objective way to recognize the difference between information with “message meaning” and information without; or the difference between information that consists of “instructions” and information that doesn’t, then *you don’t have an argument at all*. Your entire structure depends critically on that distinction – you need to be able to show it in a better way than “Well, DNA sure *looks* like instructions, and since I’ve defined instructions as requiring an intelligent creator, then poof! god exists!”.
You can say it as many times as you like, but that doesn’t change the fact that your version of “scientifically provable” does not reflect what scientists actually do. You keep asking for events to be repeatable, while ignoring the fact that it is observations which must be repeatable. This is a hugely important distinction.
By your requirement, you could never scientifically prove anything outside the lab.
Yes, DNA contains instructions. The machinery of the cell endows them with semantics: this sequence corresponds to that amino acid or this protein, or some enzyme. The sender of the message is the machinery-message pair. The organism puts a copy of the message into each offspring. The purpose of the message is to create more copies of the message. It’s exactly like a chain letter: the purpose of the chain letter is to create more chain letters. It replicates using people and computers.
The major question is whether the first such message arose at random or by design. Supporters of evolution believe that it arose at random. Here’s an analogy.
Assume that there are bazillions of computers, all running the same program: it interprets e-mails as though they were programs. This is analogous to the chemical environment of earth, a massively parallel machine waiting for code to execute. It gets emails from neighbors, runs the code, and sends the results. The line is somewhat noisy, so there are mutations.
Depending on the instruction set, self-reproducing emails could be common or rare. That’s a question for the biochemists. But once one 2-quine appears, a program that prints its own output twice, there’s an exponential increase in the number of 2-quines being sent around. If there are N computers, then in about log N time, all of them are sending around this program. If a variation causes one copy to change from being a 2-quine to a 3-quine, then it reproduces exponentially faster and conquers the world. Code that’s fragile tends to malfunction when it gets mutated, so quines that incorporate error correction thrive.
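The exponential takeover described here is easy to simulate. A rough sketch, where the "send to one random machine per step" model and all the numbers are illustrative assumptions:

```python
import math
import random

def steps_to_saturate(n_machines, seed=0):
    """Every step, each machine that carries the self-reproducing email
    sends one copy to a randomly chosen machine. Returns the number of
    steps until every machine carries it."""
    rng = random.Random(seed)
    infected = {0}                      # one initial self-replicator
    steps = 0
    while len(infected) < n_machines:
        for _ in range(len(infected)):  # one send per current carrier
            infected.add(rng.randrange(n_machines))
        steps += 1
    return steps

# Takeover time grows roughly like log2(N), not like N:
for n in (100, 10_000):
    print(n, steps_to_saturate(n), round(math.log2(n), 1))
```

Since the number of carriers can at most double each step, saturation can never take fewer than log2(N) steps, and in this model it doesn't take much longer.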
Here’s a great page on quines:
If you look at these, you’ll usually see some code (cell machinery), usually printing stuff, as well as some string data (cell DNA) that describes the machinery.
Notice that there’s no need to have an “intelligent” sender in the sense you describe. The machine-message pair gets sent around because messages that aren’t self-replicating don’t get resent!
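Mike’s code-plus-string description of a quine can be made concrete with a standard minimal Python quine (a textbook example, not taken from the page he mentions). The string literal plays the role of the DNA; the print statement plays the role of the cell machinery:

```python
# A minimal self-reproducing program (quine): running it prints its own
# source text. The string is the "data" copy; the print is the "machinery".
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the output as a program prints the same text again; that fixed-point property is exactly what a self-replicator needs.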
I very clearly defined “instructions” in my last post. Did you see it?
I am not simply asserting that instructions must be generated by an intelligent agent. On the contrary, I am trying to prove it logically. We’ve been over this multiple times, and you still have not provided any contrary evidence!
You said that it is impossible to prove mathematically whether information includes instructions. Finally, something we agree on 🙂 I hope you also agree that it is possible to prove that information includes instructions logically. If you go by observation (noticing that the body operates on the basis of DNA, and that if you alter the DNA the body changes), you can see that DNA is a set of instructions.
Have you read my earlier posts? There is a difference between scientific proof and historical proof. You cannot prove scientifically that Columbus existed, but you can prove scientifically that the world is round. Scientific proof can disprove or help prove historical events historically. (For example, if we were able to prove that the world was flat, we could disprove the fact that Magellan sailed around it)
You aren’t really answering the question: “Where did the information come from?” Earlier in this thread, I posted an example. Let me repost it here:
Imagine a checkerboard. Now imagine that you have a cup full of checkers, both black and red. Suppose you want to create a pattern of checkers on this checkerboard. It would be easy enough. But let’s suppose that you must create this pattern by randomly throwing the checkers in the air, and letting them fall on the checkerboard. The chances of this pattern occurring are 1 in 3^64, or about 3.43 × 10^30 to one. This means that you’d have to throw the checkers on the checkerboard 6550831464233300 times per second…for at least ONE BILLION years. You can do the math, if you doubt me. The fact is, as far as we know, random chance never generates meaningful information.
Let’s say that we’re using the checkerboard to represent a piece of binary code (say, red = 0, black = 1). The chance that randomness could generate meaningful binary code is INCREDIBLY small! Now imagine the chances that the meaningful instructions in DNA happened by chance. DNA is SO MUCH MORE complicated than any checkerboard pattern, and the odds would truly be astronomical.
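As an arithmetic aside, the counting step is easy to verify. A quick sketch, taking the implied assumption that each of the 64 squares is independently empty, red, or black:

```python
# 64 squares, 3 states each (empty / red / black) gives 3**64 layouts.
patterns = 3 ** 64
print(patterns)          # about 3.43e30, matching the figure quoted above

# Throws per second needed to run through every layout in a billion years:
seconds_in_a_billion_years = 10**9 * 365.25 * 24 * 3600
print(patterns / seconds_in_a_billion_years)   # about 1.1e14 throws per second
```

Note that the required rate comes out near 1.1 × 10^14 per second on these assumptions, somewhat lower than the per-second figure quoted in the comment.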
Remember, when you talk about evolution, you talk about starting with NOTHING. No pre-built computer code, no emails. All this information must come from 100% pure randomness…no machinery to help it along.
Oh man, now we trot out the old canards.
First, evolution doesn’t start with nothing. Evolution starts with a self-reproducing system. Evolution doesn’t say anything about how this self-reproducing system comes into being, it just describes how this system changes and adapts over time. For origins, you’re talking about abiogenesis.
Second, theories of abiogenesis don’t assume that random atoms came together in a random manner. They typically assume that natural processes created various organic molecules (which is reproducible in any chemistry lab), and these molecules combined into more complex molecules. Eventually a self-reproducing molecule is produced, and it soon takes over. See Mike Stay’s comment above yours, talking about 2-quines.
A self-reproducing system quickly dominates the environment it finds itself in, because its children are *not* produced randomly like all the other molecules are.
I should hope that it is obvious to the readers at home that this is *exactly* what Mark was talking about when he complained about redefinition. Let me pull out the relevant quotes from this post:
“I am not simply asserting that instructions must be generated by an intelligent agent.”
“If you go by observation, noticing that the body operates on the basis of DNA, and that if you alter DNA the body changes, that DNA is a set of instructions.”
Yes, DNA gives precise details on how to produce a body. By that definition, then, it is a set of instructions. However, this does NOT show that DNA is a set of instructions generated by an intelligent agent.
*This* is one place where you keep messing up. You say you are proving one thing, but instead you continue to prove something similar which has very important differences in our discussion. Instructions are instructions – they tell you how to build something. Instructions generated by an intelligent agent, though, do a *lot* more. They *imply* a lot more. And they require more proof.
Are you saying that you aren’t starting with nothing? What is your theory of abiogenesis?
Sure, you might get some more complex molecules from random chance, but never anything as complicated as DNA. As a matter of fact, the only organic compounds that have been generated by chance are amino acids and the like. But amino acids can’t reproduce by themselves. They only reproduce with the aid of DNA, RNA and tRNA. By the way, just what kinds of randomly created molecules can reproduce?
This is a logical summation of my viewpoint on instructions:
All instructions are information that was created by an intelligent agent.
I assume that this is your viewpoint on instructions:
Some instructions are not information that was created by an intelligent agent.
Now, in order to defend your statement, you must provide an example of a set of instructions coming into being without the aid of an intelligent agent.
Remember, you *cannot* use evolution as an example, because evolution is what we are arguing about.
Your example also needs to be repeatable for it to be scientifically provable.
This is the point that everyone is avoiding. No one has yet provided such an example.
The burden of proof is on you. *You* are the one making an assertion, that *all instructions are information created by an intelligent agent*.
You and Gitt both make that assertion. You can’t just assert it and say “I’m right, I’m right, nyah nyah nyah”. That’s a very strong assertion, and you need to justify it.
You keep circling around the central issues, because you can’t address the real point. Both you and Gitt are claiming that there is an another kind of information, which can only be created by an intelligent agent.
But you cannot identify it. You can’t measure it. You can’t tell if it’s there unless you already know it is.
That’s the ultimate circularity of your argument. DNA is *instructions*, which have to have been created by an intelligent agent. Why? Because it contains *a special kind* of information. How do you know? Because *you assumed that from the start*. You *assume* that DNA is something special that could only have been created by an intelligent agent, and then use that assumption to prove that DNA is something special that could only have been created by an intelligent agent.
Imagine this: a completely ignorant scientist is doing an experiment on a dog, to see what a dog can breathe. He puts the dog in a chamber containing oxygen, in which the dog survives. He makes the statement:
“Oxygen is the only substance that this dog can breathe to survive!”
The scientist has very little proof currently. However, he then tests the dog in a chamber of pure carbon dioxide, in which the dog begins to suffocate. His statement now has a little more credibility.
The scientist moves on to test the dog in chambers of carbon monoxide, nitrogen, hydrogen (that poor dog!), all of which suffocate the dog. The scientist’s statement is now fairly credible.
Eventually, the scientist has tested the dog in every possible gas known to mankind. At this point, we can accept the scientist’s statement as true.
We’re not simply asserting without proof. Every single set of instructions that we have seen (excluding DNA for the purpose of the debate) has an intelligent creator.
Now, of course, if another scientist produced a gas called x, put the dog in a chamber with it, and showed that the dog could survive, the first scientist’s statement would be proven false.
Notice what you’re doing. You’re saying “You’re not right. I’m right, because you can’t prove that you’re right”
Like I said earlier, you have still not offered any contrary evidence. It’s more like you are asserting that we’re wrong. I, on the other hand, have (literally) worlds of evidence to support my position.
You haven’t been able to define your terms, at least not in a meaningful way.
If there is no way that you can tell me whether or not a given piece of information is a set of instructions, then there is no way that anyone can offer any contrary evidence. Because your definition of instructions is set up to require that it be something that you will interpret as being the product of an intelligent agent.
You insist on alternately phrasing things as mathematical or scientific arguments, according to what’s more convenient. And when you set it up as a scientific argument, you use a convenient misrepresentation of how science works.
Gitt – and you – have claimed that there is a level of information, in the information theoretic sense, which describes information beyond the so-called statistical level. You’ve made all sorts of stupid mathematical errors talking about that (like the assertion that random noise will repeat, while information created by intelligence won’t). And you’ve been completely unable to show whether or not this “higher level” information is a meaningful concept.
But you claim to be able to draw conclusions based on that concept. You can’t tell me any way to recognize the difference between information with your higher level stuff, and information without it. The only thing you’ve been able to do is to *say* that DNA has “instruction” information, because it meets some informal ad-hoc definition. Which is very convenient, because it allows you to rule out any information that anyone proposes as “instructions”.
Yeah, as long as we’re talking about evolution, I’m not starting with nothing, because evolution is a process that operates on self-reproducing systems. It’s nonsense to talk about evolution before you have such a system set up. I’m not a biologist, so I don’t have any good theories on abiogenesis. I know that the process of evolution works, and for various reasons I’m reasonably certain that there was not any outside interference in life on earth, so self-reproducing molecules had to have started by themselves at some point in early Earth. Plus, as I argue below, biologists have synthesized various self-reproducing molecules themselves that aren’t excessively complex. They are *much* less complex than the DNA&family that life currently uses, and so presumably within the range of possibility.
The ‘and the like’ captures quite a bit. ^_^ Your last question basically asks “has life been created again?” The answer is, I don’t know, but I doubt it. First, earth conditions are quite a bit different now than they were billions of years ago. Second, life has become very efficient at consuming resources, so any brand-new self-reproducing molecule is pretty much going to starve and then be killed and eaten. The earth is pretty much guaranteed to never see a new form of life survive on it, because brand new life is pretty sucky at reproducing itself at first, at least when compared to current life which has had billions of years to improve itself.
Of course, we *have* created self-reproducing organic molecules in the lab. They typically resemble proto-RNA or peptides. A simple google search on the phrase “self-replicating molecules” will turn up many references to scientific papers in both computer science and molecular biology dealing with the synthesis of various self-replicators.
Grr! Sorry for the bad blockquote tags. >_<
http://dllab.caltech.edu/avida/
That kinda sends a chill up my spine, the natural universe can be so awesome (sans magic).
Also, it seems some of my own code has evolved a stealth defense against the regression test suite.
Let’s have a closer look at this.
What does Gitt mean by “re-presentation”? The word literally means “make present again”; but that’s not what we usually mean by the word. The ambassador of country X to country Y re-presents X in Y. This is the more usual meaning: a re-presentation is a substitute presentation that somehow can function as a full presentation.
If I own 100 cows, I can present those cows. That’s fine, if I have to do it at my farm, and I only have one farm. But say I have them at two different locations; then a direct presentation just to say that I own 100 cows becomes somewhat awkward. Instead I can draw 100 pictures of a cow on a piece of paper; that’s a re-presentation, an abstraction. I could go further and simply write “100 cows”, which would be an even higher abstraction level; but certainly also more practical.
If we consider spoken language, this does not comply very well with Gitt’s “theorems”, which apply somewhat better to written language; but even here, there are a few problems for the good doctor.
The oldest Egyptian inscriptions we have exhibit a mixture of drawings and symbolic signs with no clear distinction between these. The hieroglyphs remained a mixture; but from the hieroglyphs were derived the hieratic script for mundane purposes, and from this was derived the Phoenician script, a fully abstract script.
To some extent, Gitt has things in the wrong order. It’s not the code that is necessary for the information; it’s a need to communicate information that necessitates the code. And that code may evolve over time.
Like IDists in general, Gitt appears to lack an historical sense, a sense that occasionally even human products evolve without that evolution necessarily being a conscious effort.
The Germanic equivalents of Latin words frequently sport an ‘h’ where Latin has ‘c’ (e.g. ‘hundred’ versus ‘centum’), and a ‘k’ where Latin has ‘g’ (e.g. ‘kin’ versus ‘genus’). Did any committee decide this? Hardly; it just happened without anybody designing it to happen.
How do we know that the genetic code hasn’t evolved? The third base in a codon is in most cases of no significance. Isn’t it possible that a two-bases-per-codon code preceded the present three-bases-per-codon one?
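The third-base redundancy is easy to see in the standard codon table. A small sketch (RNA codons; these eight families really are fully fourfold-degenerate):

```python
# Third-base ("wobble") redundancy in the standard genetic code: for
# these codon families the third base never changes the amino acid.
families = {
    "GC": "Ala", "CG": "Arg", "GG": "Gly", "CC": "Pro",
    "AC": "Thr", "GU": "Val", "CU": "Leu", "UC": "Ser",
}
for first_two, amino_acid in families.items():
    codons = [first_two + third for third in "UCAG"]
    print(amino_acid, codons)   # four codons, one amino acid
```

In each of these families the first two bases alone determine the amino acid, which is what makes a two-base ancestral code at least conceivable.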
In short, I find that Gitt is too quick to draw conclusions, and that he has too narrow a view of what might count as information.
Well, just my $0.02.
I am defining instructions as something that must be generated by an intelligent agent. In short:
ALL instructions are instructions that are generated by an intelligent agent.
In order to disprove my statement, (as I have said before), you MUST provide a counterexample! Proving whether or not something is information is beyond science, as you yourself have said. Therefore, I am resorting to logic to prove my point.
Perhaps you could define your terms? Your continuing argument is that I haven’t defined my terms. Look at my posts! I have, more than once, laid out my definitions in dictionary format! On the other hand, you can only claim that I can’t define my terms. But exactly what is your definition of “instructions”?
Regardless of whether or not you are starting with nothing (and if you aren’t starting with nothing, what are you starting with?) you still have to deal with the fact that more complex (or, let’s say, more highly evolved) beings have a greater amount of meaningful information. This means that information was either generated randomly, or inserted into the system by something else.
If something more complicated than amino acids were produced by chance, I would have said so. When I say “and the like”, I refer to other similar, less relevant, and less complex structures. Sure, you can come up with a so-called “self reproducing molecule”. So a molecule can take its 1 kilobyte’s (or less) worth of information and reproduce it…if it’s in the right circumstances. Now, show me a system that can reproduce terabytes of data contained in millions of microprocessors, and create a completely separate physical entity, using a system that was not created by an intelligent source.
I don’t really think Gitt’s “theorem 4” is even worth debating. A code can take on almost any form, and the only way we can detect it is by reading the code. Even if you are right, it’s not the core of his argument.
You don’t get to make claims, and put the burden on other people to prove them. Your goal is to prove that DNA is generated by an intelligent agent. You need to show that DNA is generated by an intelligent agent.
I can’t show you a counterexample – because *you* cannot give me a meaningful definition of what an *instruction* is. Your definition of “instruction” is basically “something that I recognize as an instruction”. Without an *objective* method of recognizing whether or not something is an instruction, then there is no way to create a counterexample – because you can just simply respond: that’s not an instruction.
For example: I think that if we view DNA as a set of instructions for managing the chemical processes of living things, then a prion is a set of instructions for how to fold a protein. When the correct protein contacts a prion, it goes through a specific sequence of geometric transformations resulting in a different fold shape.
Is a prion instructions? Why or why not?
To put it another way, with a bit of silliness. With due credit to Douglas Adams, suppose I claim that that the universe was sneezed out the nose of a being called the great purple arklesneezure.
As evidence for this, I say that every mucousic substance in the universe is created by the biological processes of intelligently-created beings. To prove me wrong, all you need to do is show me a mucousic substance that was created artificially.
What’s a mucousic substance? Well, *obviously*, it’s something that dripped out of a stuffy nose.
So when my daughter watches repeats on Nickelodeon of that show where they drop buckets of green slime on people, I conclude that there must be something at nickelodeon with a really nasty cold. Because that slime is clearly mucousic – I mean, it’s slimy and viscous, almost exactly the consistency of the goo that drips out of my nose when I have a really bad sinus infection. The color is a bit bright, but that could just be the lighting in the studio.
Now, most people would argue that the slime on Nick is just a bunch of goo that some prop guy mixed together, not that there’s someone with a cold sitting with a bucket under their nose. But the show isn’t produced anymore, and so there are no extant samples of the slime that they used. Can you prove to me that the slime is mucousic, but that it wasn’t dripped from somethings nose?
I *am* showing you that DNA is created by an intelligent agent. As I have stated multiple times, I am attempting to prove the theory that all instructions are generated by intelligent agents.
Look at it this way. Using the Naive Bayes algorithm for prediction, we look at all examples of instructions. The variable “n” will represent all the examples of instructions that were generated by an intelligent agent. Now, unless you can give an example of instructions that were not generated by an intelligent agent, the chances of DNA not being generated by an intelligent agent would be 1/n. And, since I can provide just about as many examples as I want, I’d say that I’m supported by overwhelming odds.
A prion would not be instructions, but a means for carrying them out. A jig for a woodworker is not the instructions for how to cut the wood. Information was used to create this “jig”, but it is not instructions in itself.
On to your example of the Great Purple Arklesneezure. While it was a nice try, it is a *very* faulty analogy. If, say, you wanted to try to prove that the world was composed of a mucousic substance, you would have to offer a lot of proof. The only thing you have is the Nickelodeon slime example. Not only is it easy to disprove (I could find evidence online, talk to the prop guys who mixed the goo, ask the people on whom the goo was dumped, etc.) but even if it were true, the possibilities would still be 118++/n(1) against your hypothesis.
(Remember, n represents provable examples. I just thought I’d use the number of elements on the Periodic table as the numerator.)
Therefore, the chances are 118 to 1 that the earth is made out of mucus (and I’m being generous, here!)
Looking at my hypothesis again, the chances are about, oh let’s say 0/5,000,000, in my favor.
And by the way, I actually think you might agree with me. You know that this post is generated by a human being, and not a computer. I can even come up with a set of instructions: “Mark, go eat some ice cream”. How possible do you really think it is, that that was generated by random chance?
*Why* is a prion *not* a set of instructions?
As I said – you have *not* provided a real definition of what a set of instructions *is*, or how to recognize it.
I can’t come up with a counterexample if your definition of instruction is “something that Peanut thinks is an instruction”. You have yet to provide any meaningful, objective definition or means of recognizing an instruction.
A prion is not very different from DNA in terms of *what it does* chemically. A prion is a chemical that directs another chemical through a complex and specific sequence of transformations. That sequence of transformations is embedded in the structure of the prion.
And why is a jig *not* an instruction? It tells the woodworker *what to do* in a very specific and precise way. A jig for cutting a specific shape is actually remarkably similar in many ways to a DNA molecule. The DNA molecule contains a “structural template” which is used to assemble a protein.
What makes DNA instructions, but jigs and prions *not* instructions? Beyond, of course, the fact that for your argument, you *want* DNA to be recognized as instructions, but you *don’t* want prions to be?
WRT the silly mucous example… The point is, there is *no way* you can prove that there’s something mucousic that didn’t come from a nose unless *I tell you what mucousic means*. If I don’t tell you, then no matter what “counterexample” you produce, I can just say “Nope, not mucousic”. Pretty much like you just did with the prions.
You continue to insist that DNA is instructions, and instructions only come from intelligent beings. But you refuse to define instructions in any way more meaningful than “Something that Mister Peanut recognizes as instructions”.
Challenging me to provide an example of something as undefined as your idea of instructions is meaningless.
Isn’t almost any catalyst a set of instructions?
I provide an example of an instruction (or several) for how to play the game of Nim perfectly, produced randomly, guided by evolution, in the post “Evolving a Nim player” at shuusaku.blogspot.com.
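For context, the perfect strategy an evolved Nim player has to discover is remarkably compact: move so that the xor ("nim-sum") of the pile sizes becomes zero. A sketch of that standard result (the pile values are illustrative):

```python
from functools import reduce
from operator import xor

def nim_move(piles):
    """A winning move in normal-play Nim, if one exists: remove stones so
    the xor ("nim-sum") of the pile sizes becomes zero. Returns
    (pile_index, stones_to_remove), or None from a losing position."""
    nim_sum = reduce(xor, piles)
    if nim_sum == 0:
        return None                 # every move hands the win to the opponent
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:           # this pile can be reduced to target
            return (i, pile - target)

print(nim_move([3, 4, 5]))   # (0, 2): leaves [1, 4, 5], whose xor is 0
print(nim_move([1, 1]))      # None: a lost position under perfect play
```

That the whole strategy fits in a dozen lines is part of what makes Nim a tractable target for evolutionary search.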
I can think of a definition of “instructions” under which DNA would qualify as “instructions” but prions and enzymes would not.
…’course if we were to go by the definition I’m thinking of, RNA (which can catalyze reactions and stuff) doesn’t qualify as “instructions” either… so that’s not gonna be able to help Mr. Peanut much.
What’s an “intelligent agent”?