A Glance at the Work of Dembski and Marks

Both in comments, and via email, I’ve received numerous requests to take a look at
the work of Dembski and Marks, published through Professor Marks’s website. The site is
called the “Evolutionary Informatics Laboratory”. Before getting to the paper, it’s worth
taking just a moment to understand its provenance – there’s something deeply fishy about
the “laboratory” that published this work. It’s not a lab – it’s a website; it was funded
under very peculiar circumstances, and hired Dembski as a “post-doc”, despite his being a full-time professor at a different university. Marks claims that his work for
the EIL is all done on his own time, and has nothing to do with his faculty position at the university. It’s all quite bizarre. For details, see here.

On to the work. Marks and Dembski have submitted three papers. They’re all
in a very similar vein (as one would expect for three papers written in a short period
of time by collaborators – there’s nothing at all peculiar about the similarity). The
basic idea behind all of them is to look at search in the context of evolutionary
algorithms, and to analyze it using an information theoretic approach. I’ve
picked out the first one listed on their site: Conservation of Information in Search: Measuring the Cost of Success


There are two ways of looking at this work: on a purely technical level, and in terms of its
presentation.

On a technical level, it’s not bad. Not great by any stretch, but it’s entirely reasonable. The idea behind it is actually pretty clever. They start with the No Free Lunch theorem (NFL). NFL says, roughly, that if you don’t know anything about the search space, you can’t select a search that will perform better than a random walk.
If we have a search for a given search space that does perform better than a random walk,
in information theoretic terms, we can say that the search encodes information
about the search space. How can we quantify the information encoded in a search algorithm
that allows it to perform as well as it does?
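
To make the NFL claim concrete, here’s a tiny toy sketch (mine, not anything from the paper): enumerate every possible 0/1-valued cost function on a four-point search space, and two completely different fixed visiting orders end up with exactly the same average number of evaluations to find the best value.

    from itertools import product

    DOMAIN_SIZE = 4

    def evals_to_find_max(order, f):
        # Number of evaluations a fixed visiting order needs before it first
        # sees the best value that f takes anywhere on the domain.
        best = max(f)
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                return i
        return len(order)

    def average_performance(order):
        # Average over every possible 0/1-valued cost function on the domain.
        funcs = list(product([0, 1], repeat=DOMAIN_SIZE))
        return sum(evals_to_find_max(order, f) for f in funcs) / len(funcs)

    print(average_performance([0, 1, 2, 3]))  # 1.6875
    print(average_performance([3, 1, 0, 2]))  # 1.6875 -- identical, as NFL predicts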

So, for example, think about a search algorithm like Newton’s method. It generally homes in extremely
rapidly on the roots of a polynomial equation – dramatically better than one would expect in a random
walk. For example, if we look at something like y = x² – 2, starting with an approximation of a zero at x=1, we can get to a very good approximation in just two iterations. What information is encoded in Newton’s method? Among other things, the fact that it’s working in a Euclidean space on a continuous, differentiable
curve. That’s rather a lot of information. We can actually quantify that in information theoretic
terms by computing the average time to find a root in a random walk, compared to the average time
to find a root in Newton’s method.
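
As a rough sketch of that comparison (my own toy code and arbitrary tolerances, not a computation from the paper):

    import random

    def newton_steps(x0=1.0, tol=1e-2):
        # Newton's method for f(x) = x^2 - 2: exploits continuity,
        # differentiability, and the known derivative f'(x) = 2x.
        x, steps = x0, 0
        while abs(x * x - 2) > tol:
            x = x - (x * x - 2) / (2 * x)
            steps += 1
        return steps, x

    def random_guess_steps(lo=0.0, hi=2.0, tol=1e-2):
        # Blind guessing over the same interval, exploiting no structure at all.
        steps = 0
        while True:
            steps += 1
            x = random.uniform(lo, hi)
            if abs(x * x - 2) <= tol:
                return steps, x

    print(newton_steps())        # 2 iterations from x = 1
    print(random_guess_steps())  # typically a few hundred guesses

The gap between those two counts is, loosely speaking, the information that Newton’s method brings to the problem.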

Further, when a search performs worse than what is predicted by a random walk, we can
say that, with respect to the particular search task, the search encodes negative information – that it actually contains some assumptions about the location of the target that
actively push it away, and prevent it from finding the target as quickly as a random walk would.
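
The measure the paper uses for this – “active information”, if I’m reading it right, and as the commenters below describe it – is just a log of a ratio of success probabilities, so the sign behavior is easy to see with toy numbers of my own choosing:

    import math

    def active_information(p_baseline, p_search):
        # log2(q/p): positive bits if the search beats blind sampling of the
        # same space, negative bits if it does worse.
        return math.log2(p_search / p_baseline)

    p = 1 / 1024                             # blind single query, target is 1 point out of 1024
    print(active_information(p, 1 / 64))     # +4.0 bits: the search "knows" something useful
    print(active_information(p, 1 / 4096))   # -2.0 bits: its assumptions actively mislead it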

That’s the technical meat of the paper. And I’ve got to say, it’s not bad. I was expecting something really awful – but it’s not. As I said earlier, it’s far from being a great paper. But technically, it’s reasonable.

Then there’s the presentation side of it. And from that perspective, it’s awful. Virtually every
statement in the paper is spun in a thoroughly dishonest way. Throughout the paper, they constantly make
statements about how information must be deliberately encoded into the search by the programmer.
It’s clear where they intend to go with this – they want to say that biological evolution can only work if information was coded into the process by God. For example, evolving the use of beta-alanine as a catalyst for digestion would have had to be predisposed by information already placed in the DNA. Here’s an example from the first
paragraph of the paper:

Search algorithms, including evolutionary searches, do not
generate free information. Instead, they consume information,
incurring it as a cost. Over 50 years ago, Leon Brillouin, a
pioneer in information theory, made this very point: “The
[computing] machine does not create any new information,
but it performs a very valuable transformation of known information”. When Brillouin’s insight is applied to search
algorithms that do not employ specific information about the
problem being addressed, one finds that no search performs
consistently better than any other. Accordingly, there is no
“magic-bullet” search algorithm that successfully resolves all
problems.

That’s the first one, and the least objectionable. But just half a page later, we find:

The significance of COI [MarkCC: Conservation of Information – not Dembski’s version, but from someone
named English]
has been debated since its popularization through the NFLT [30]. On the one hand, COI has a
leveling effect, rendering the average performance of all algorithms equivalent. On the other hand,
certain search techniques perform remarkably well, distinguishing themselves from others. There is a
tension here, but no contradiction. For instance, particle swarm optimization [10] and genetic algorithms
[13], [26] perform well on a wide spectrum of problems. Yet, there is no discrepancy between the
successful experience of practitioners with such versatile search algorithms and the COI imposed inability
of the search algorithms themselves to create novel information [5], [9], [11]. Such information does not
magically materialize but instead results from the action of the programmer who prescribes how knowledge
about the problem gets folded into the search algorithm.

That’s where you can really see where they’re going. “Information does not magically materialize, but
instead results from the action of the programmer”. The paper harps on that idea to an
inappropriate degree. The paper is supposedly about quantifying the information that
makes a search algorithm perform in a particular way – but they just hammer on the idea
that the information was deliberately put there, and that it can’t come from
nowhere.

It’s true that information in a search algorithm can’t come from nowhere. But it’s
not a particularly deep point. To go back to Newton’s method: Newton’s method of root
finding certainly codes all kinds of information into the search – because it was created
in a particular domain, and encodes that domain. You can actually model orbital dynamics
as a search for an equilibrium point – it doesn’t require anyone to encode the law of gravitation into it; it’s already a part of the system. Similarly, in biological
evolution, you can certainly model the amount of information encoded in the process – which
includes all sorts of information about chemistry, reproductive dynamics, etc.; but since those
things are encoded into the universe, you don’t need to find an intelligent agent
to have coded them into evolution: they’re an intrinsic part of the system in which
evolution occurs. You can think of it as being like a computer program: a programmer doesn’t need to specifically add code to a program to specify the fact that the computer it’s going to run on has 16 registers; every program for that computer has that wired into it, because it’s a fact of the “universe” for the program. For anything in our universe, the basic facts of our universe – of basic forces, of chemistry – are encoded in its existence. For anything on earth, facts about the earth, the sun, and the moon are encoded into its very existence.

Dembski and Marks try to make a big deal out of the fact that all of this information is quantifiable.
Of course it’s quantifiable. The amount of information encoded into the structure of the universe
is quantifiable too. And it’s extremely interesting to see just how you can compute how much information
is encoded into things. I like that aspect of the paper. But it doesn’t imply anything about
the origin of the information: in this simple initial quantification, information theory cannot distinguish between environmental information which is inevitably encoded, and information
which was added by the deliberate actions of an intelligent agent. Information theory can
quantify information – but it can’t characterize its source.

If I were a reviewer, would I accept the paper? It’s hard to say. I’m not an information theorist; so
I could easily be missing some major flaw. The style of the paper is very different from any other
information theory paper that I’ve ever read – it’s got a very strong rhetorical bent to it which is very
unusual. I also don’t know where they submitted it, so I don’t know what the reviewing standards are – the
reviewing standards of different journals are quite different. If this were submitted to a theoretical
computer science journal like the ones I typically read, where the normal ranking system is (reject/accept
with changes and second review/weak accept with changes/strong accept with changes/strong accept), I
would probably rank it either “accept with changes and second review” or “weak accept with changes”.

So as much as I’d love to trash them, a quick read of the paper seems to show
that it’s a mediocre paper, with an interesting idea. The writing sucks: it was written to push a point that its technical content can’t support, and it pushes that point with all the subtlety of a sledgehammer.

Comments on “A Glance at the Work of Dembski and Marks”

  1. Andrew

    Even if the information theoretic part is okay, I would be appalled to see this paper in a journal. What if someone wrote an (otherwise) excellent paper about a new quantum programming technique, but constantly throughout the paper kept saying how it proved coat hangers are edible?

  2. krisztian pinter

    one can say: this paper tells a lot of interesting and true things. the only problem is that true things in it are not interesting, and interesting things are not true.

  3. secondclass

    Mark,
    Your analysis is very well stated. As you point out, the technical side of their work and the presentation thereof are two separate matters. I agree that the presentation has some serious problems. It serves to obscure rather than clarify concepts that I find rather trivial, and it seems intended to create an impression of ID-friendliness.
    As for the technical side, they simply explore the various ways that searches can be adjusted to increase their efficiency. I’m sure that this has been done many times before, even in homework assignments.
    The parts that I have problems with are:
    a) Casting the concepts in terms of information
    b) The quantification of the increase in efficiency
    c) The arbitrary nature of selecting a baseline search
    In classical information theory, one information measure of a message is the surprisal, which is the negative log of the probability of the message. This is the measure that D&M use, but they use it in a strange way: They take the probability that a search will succeed — that is, the “message” is binary, either “success” or “failure” — but instead of associating the information with the success/failure outcome of the search, they associate it with the search itself, which is confusing to say the least.
    What’s even more confusing is their definition of active information. AI is not the negative log of a probability; rather, it’s the negative log of a ratio of probabilities. So how do we pinpoint the “message” that contains this information?
    Here’s an attempt. The event A associated with the active information is defined such that P(A)=P(B)/P(E) where B is the success of a baseline search and E is the success of an efficient search. Let’s pretend that the parameters of the efficient search were chosen from all possible parameters; we’ll call this selection event X. NFL tells us that P(X)*P(E|X) = P(X). After the negative log transformation, I(A) this paper, they explore two blind searches, one of which is much more effective than the other (or so they say) because of its search structure. Which of those should be used as a baseline? It seems rather arbitrary.
    (If you’ve read this far, I’ll throw in another tidbit. I’m pretty sure that the numerical results and conclusions in the paper cited above are completely wrong, so D&M will have to rewrite it. You heard it here first.)

  4. secondclass

    Dang less-than’s and greater-than’s. Here’s another try:
    “Here’s an attempt. The event A associated with the active information is defined such that P(A) = P(B)/P(E) where B is the success of a baseline search and E is the success of an efficient search. Let’s pretend that the parameters of the efficient search were chosen from all possible parameters; we’ll call this selection event X. NFL tells us that P(X)*P(E|X) <= P(B), so P(X) <= P(B)/P(E|X), so P(A) >= P(A). After the negative log transformation, I(A) <= I(X). That is, the active information measure gives us a lower bound on the information associated with the selection of the particular search.”

  5. Lepht

    i gotta say, i’m with Andrew. the technical ideas in the paper are acceptable, yeah, but they’re being abused (blatantly and inelegantly abused at that) in a vain attempt at making a theological point. it ruins the paper’s conclusions, it stops any less critical reader from getting the most out of the information they provide and it gives them free publicity; it doesn’t belong in a computing journal, imo.
    Lepht

  6. n3w6

    So, to generalize: what if one were to write a very simple but very general algorithm (let’s call it POKE-AROUND) that would quickly but randomly create other algorithms, a tiny fraction of which might turn out to be useful for search? Would it be conceivable that, in time, our POKE-AROUND algorithm would stumble across a more efficient search algorithm that may actually encode, by chance, some knowledge about the search space? And because POKE-AROUND itself is a very simple program, could it itself be a result of chance, especially given a long time?

  7. Mark C. Chu-Carroll

    n3w6:
    Nope. Using a meta-search to find a search doesn’t work. A meta-search that chooses from some set of options or random inputs to find a search is really just itself another search – and falls victim to NFL in the same way as any other search.
    The thing is, that’s not a problem. Dembski likes to try to claim NFL as a much stronger result than it actually is. NFL only talks about the properties of searches averaged over all possible search spaces. It doesn’t say anything about how searchable particular sets of spaces are.
    For some search spaces, it’s very easy to find search algorithms that converge on solutions very quickly. NFL relies on the fact that you’re talking about performance averaged over all possible search spaces – and like most mathematical structures, most theoretically possible search spaces are highly irregular, and have no properties that are easily exploitable.
    Think of it like this: most functions – the overwhelming majority of functions – are neither differentiable nor continuous. Given an arbitrary function which you know nothing about, there’s no way to find its zeros faster than just randomly guessing until you get one. But if you’re working with polynomials, then you can easily find their zeros using a simple search process.
    Search is exactly the same way. Given a search space about which you know nothing, there’s no way to pick an algorithm that will do better than random. But if you know that your landscape is a smooth, continuous, differentiable surface in R³, there are a ton of search algorithms that will perform better than random.
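
    Here’s a quick toy illustration of that (mine, nothing from the paper): the same dumb hill-climber walks straight to the minimum of a smooth landscape, but scramble the very same values – keeping the contents, destroying the structure – and it gets stuck almost immediately.

        import random

        N = 1000
        smooth = [(x - 371) ** 2 for x in range(N)]   # one smooth bowl, minimum at x = 371
        scrambled = random.sample(smooth, k=N)        # same values, structure destroyed

        def hill_climb(cost, start=0, budget=5000):
            # Greedy local search: move to the better neighbor until stuck.
            x, evals = start, 0
            while evals < budget:
                candidates = [c for c in (x - 1, x + 1) if 0 <= c < N]
                evals += len(candidates) + 1
                best = min(candidates + [x], key=lambda c: cost[c])
                if best == x:
                    break
                x = best
            return cost[x], evals

        print(hill_climb(smooth))      # (0, 1115): walks right down to the true minimum
        print(hill_climb(scrambled))   # usually stuck quickly at a much worse value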

  8. Mu2

    “those things are encoded into the universe”
    Yep, and who encoded the universe …
    I think it’s just another way to say “if there’s evidence of evolution it’s only because God wanted it to be that way”. Same reason he went through the trouble of burying those fern leaves in the coal; artistic license so to speak.

  9. secondclass

    Correction to my correction: Replace “so P(A) >= P(A)” with “so P(A) >= P(X)” in my second post above.

  10. Jason Rosenhouse

    Mark-
    Interesting post, but you should prepare yourself psychologically for the inevitable distortion of what you wrote over at Dembski’s blog. A while back I wrote a similar post about one of Dembski’s papers, describing things in much the same way as you did. I concluded that technically the paper was acceptable, but that it was abysmally written and that its broader conclusions were not correct. Along the way I remarked that the proofs seemed to be correct and that Dembski knew how to manipulate his symbols. It wasn’t long before Salvador Cordova was gushing over at Uncommon Descent that I had said Dembski’s paper was correct. Ugh.
    Also, I think you mean “random search” as opposed to “random walk.” A random walk usually refers to a situation where you are moving through some discrete space in which from every point you have known probabilities of moving to certain subsequent points. A random search is when you choose the points to sample at random from the search space. Thus, there is no connection between the point you sample next and the previous points you have already sampled. I think it was the latter meaning that is intended in the NFL theorems.

  11. RBH

    Let me put your critique in a nutshell to check my understanding. If a given search method works better than a random walk in some search space, that fact tells us about (some) properties of the search space. However, it tells us nothing about where those properties came from, and nothing about how that search method came to be used on that search problem. Is that it?

  12. Jonathan Vos Post

    See Mathworld for the hotlinks and citations to the mathematical literature for this excerpted definition:
    Random Walk. From MathWorld–A Wolfram Web Resource.
    “A random process consisting of a sequence of discrete steps of fixed length. The random thermal perturbations in a liquid are responsible for a random walk phenomenon known as Brownian motion, and the collisions of molecules in a gas are a random walk responsible for diffusion. Random walks have interesting mathematical properties that vary greatly depending on the dimension in which the walk occurs and whether it is confined to a lattice.”

  13. David vun Kannon

    Also see the recent PT thread
    http://www.pandasthumb.org/archives/2007/09/how_does_evolut.html
    where at least one Tom English weighs in, not so much on the math as on the Baylor controversy.
    I agree that the technical points about “active information” are valid, but only mildly interesting. re his Chengdu keynote, I find it hard to believe that the “ev” algorithm contributed negative information per these definitions. Is Marks being misleading or is he cherry picking an example, the way UD bloggers like to harp on Dawkins’ WEASEL?
    ev is itself an interesting case. Schneider makes some effort to make it a “biologically realistic” model, and makes some argument that this realism is important. But then so much about the model isn’t realistic that it kind of undercuts his argument.

  14. Tom English

    Mark,
    I appreciate your balanced response. I’m “somebody named English” who wrote in 1996 that NFL is a consequence of conservation of Shannon information in search. I’m affiliated now with Bob Marks and the EvoInfo (virtual) lab, but I’m an adversary of ID. You can find more about that at The Panda’s Thumb.

    NFL only talks about the properties of searches averaged over all possible search spaces. It doesn’t say anything about how searchable particular sets of spaces are.

    Not to nitpick, but in the NFL framework the search space (aka solution space) is the domain of the cost functions, which is fixed. I wish Wolpert and Macready hadn’t spoken so much of averages in their plain-language remarks. Their theorems actually show that all algorithms have identically distributed results. A search result is, loosely, the sequence of costs obtained by the algorithm, and search performance is a function of the search result. If all algorithms have identically distributed results, then all algorithms have identically distributed performance and identical average performance.

    For some search spaces, it’s very easy to find search algorithms that converge on solutions very quickly.

    Easy for an intelligent agent like you, but easy for some algorithm? Dembski wants precisely to show that you, and not an algorithm, can easily design a search algorithm with higher performance.

    … and like most mathematical structures, most theoretically possible search spaces are highly irregular, and have no properties that are easily exploitable.

    Yes, almost all cost functions are algorithmically random, or nearly so. A consequence is that for the typical cost function, almost all algorithms obtain good solutions rapidly. (Intuitively, there are just as many good solutions as bad ones, and they’re scattered all about. It’s hard not to bump into one.) To put it another way, all search algorithms are almost universally efficacious. See Optimization Is Easy and Learning Is Hard in the Typical Function.
    My next paper will demonstrate clearly that this is a theoretical result, not a practical one.

    Given a search space about which you know nothing, there’s no way to pick an algorithm that will do better than random.

    Some care is required in speaking of an algorithm being better or worse than random search. There is the performance of a particular realization of random search of a cost function, and there is also the expected performance of random search of that function.
    When there is no information to suggest that any (deterministic) algorithm will perform better than any other on a cost function to be drawn randomly (not necessarily uniformly), selecting an algorithm uniformly is the best you can do. But a single run of random search is precisely equivalent to uniformly selecting a deterministic algorithm and running it (see No More Lunch). So picking an algorithm randomly vs. applying random search is a distinction without a difference.
    Hope I didn’t talk your ear off. I love this stuff.

  15. Torbjörn Larsson, OM

    The Newton’s method comparison impressed me with how much information the problem space contributes. In evolution the problem space is constrained by both physical and chemical laws.
    And as evolution is also a natural process there is a lot of contribution of inherent information here too. A lot of mechanisms aren’t accounted for in these papers. If we measure information as randomness, there is randomness inherent in evolution. Both in evolutionary mechanisms, such as crossovers in sexual reproduction, and evolutionary processes, such as fixation during selection, or drift.
    For the papers the discussion we have here and elsewhere will help creationists generally (the interest and rational analysis stroking their egos) and specifically (in making the papers better). But as they can’t support what they pretend to support (teleology in evolution) it will not matter much.
    Perhaps they can find that biologically inspired software models have amounts of “artificial” information inserted. But they don’t need to as targets can be randomly selected and constraints are natural. After all, evolutionary theory is explicitly non-teleological in its description of biological systems behavior so we can make natural models the same.
    Ironically, observations in sciences are most often artificially produced from experiments, and they can still be used to test predictions. But ID wants to tear the whole of science down, so they ignore that.

  16. Torbjörn Larsson, OM

    Is Marks being misleading or is he cherry picking an example … Schneider makes some effort to make it a “biologically realistic” model, and makes some argument that this realism is important. But then so much about the model isn’t realistic, it kind of undercuts his argument.

    He is cherry-picking for all I can see. He runs a comparison with ev over a wide range of parameters outside the area Schneider expressly says will model the biological situation and hence evolution in action. There isn’t really a discussion in the paper, but in the conclusion Marks and Dembski weasel-word the following:

    The success of ev was not due to its evolutionary search procedure but to a fortunate matching
    between the search structure and the problem being solved.

    Which is exactly true, of course. The fortunate matching is when the parameters match the biological model.
    Also, Marks totally disregards that the ev perceptron itself models the genetic machinery that has earlier resulted from evolution. So as I understand it there are at least two technical problems in that paper.
    Yes, Schneider’s model isn’t fully realistic; he discusses a lot of approximations and omissions. Also, he assumes (which is the fortunate matching) that the program should mimic independent variation. This is the usual situation in nature, but note that the biological theory itself neither demands or requires that, only “variation”.
    But as a demonstration that the genome accepts Shannon information from the environment, it is an interesting experiment. (I wouldn’t say test, since evolutionary theory doesn’t concern information.) But I wish Schneider had picked a simpler and more clearcut model to demonstrate it with.

  17. Torbjörn Larsson, OM

    Oops. Too hasty there:

    He is cherry-picking for all I can see.

    As it was put here, misleading or cherry-picking, he is also misleading IMO. The cherry-picking is in using an evolutionary algorithm modeling a natural system instead of examples of designed algorithms. The misleading is in the discussion of Schneider’s choice of parameters and the weasel words in the conclusion.

    biological theory itself neither demands or requires that,

    biological theory itself neither demand or predicts that,

  18. Jonathan Vos Post

    When I beta tested John Holland’s book on the Genetic Algorithm (1975-1976) and was the first to use it to find solutions to an unsolved problem in the scientific literature, I was guided by Oliver Selfridge (Father of Machine Perception).
    He had a bunch of grad students compete in a learning problem, the repeated “coin guessing” problem (2×2 payoff bimatrix, but similar in learning complexity to rock-paper-scissors).
    My program came in second of over a dozen competing. It was a GA with its own parameters coded into the “gene” string, which varied its chromosome length and other parameters based on the history of the competition.
    Mine lost to a program that bundled many smaller programs within it, and passed the token between them to the one whose score would have been best so far if it had been the one with the token all along.
    The (British) author of the winner was hired by IBM, and moved to Florida to work on the not-yet-released PC.
    Few people today seem to understand the implicit parallelism in GA (in that it is exponentially evolving sampled “schema” of chromosomes in a larger search space concurrently with evolving chromosomes in the base search space).
    I still have unanswered questions on the meta-GA which evolves its own parameters concurrently with evolving populations. My questions and partial results of 1/3 century ago have been cited in refereed papers by Prof. Philip V. Fellman.
    In the No Free Lunch Theorem, and its abuses, I am still unsure about the definition of “algorithm” and “search space” and “cost function” and the like. I suspect that there are hidden assumptions about distributions and the spaces of possible spaces.

  19. Coin

    Easy for an intelligent agent like you, but easy for some algorithm? Dembski wants precisely to show that you, and not an algorithm, can easily design a search algorithm with higher performance.
    What is an “intelligent agent”?

  20. Tom English

    What is an “intelligent agent”?

    Hmm, coin. Know anything about collective intelligences (COINs)?
    I was trying to echo Dembski. I should have written “an intelligence like yours” to stay clear of embodiment and agency. In intelligent design, an intelligence is a supernatural (ID proponents used to say “non-natural,” and now say “non-material”) source of information. If the active information in a search seems too much to have arisen naturally, Dembski will say it must have come from an intelligence.
    Few who seriously investigate intelligence in animals believe there is any one thing that constitutes intelligence, or that intelligence is anything but a hypothetical construct. The norm is to define intelligence operationally, and definitions differ hugely from study to study. Unfortunately, some very bright scientists and engineers slip into treating intelligence as a vital essence that inheres in some systems and not in others. They haplessly play into the hands of ID advocates who are better philosophers than they.
    No doubt many ID proponents secretly equate “unembodied” intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities.

  21. Tom English

    Hi, David.

    I find it hard to believe that the “ev” algorithm contributed negative information per these definitions.

    The measure is relative, and is negative simply because the ev doesn’t perform as well as random search does on average.
    There have been several times over the years that I have suggested in reviews of conference papers that the authors compare their fancy new algorithms to random search. Given that random search is, loosely speaking, the average search, using it to establish a baseline makes a lot of sense.

  22. Anonymous

    Tom English:
    “So picking an algorithm randomly vs. applying random search is a distinction without a difference.”
    Isn’t this true when the algorithms being selected contain no information about the target, but not true if they do?

  23. Anonymous

    Tom English:
    “No doubt many ID proponents secretly equate “unembodied” intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities.”
    Well said. So why don’t you agree?

  24. Mark C. Chu-Carroll

    Anonymous:

    Tom English:
    “No doubt many ID proponents secretly equate “unembodied” intelligence with spirit. That is, humans are able to create information because they, created in the image of God, are spiritual, and not just physical entities.”
    Well said. So why don’t you agree?

    That would be because it’s nonsense. First – when we’re talking about information theory, the ideas of “unembodied”, “spiritual”, and “not just physical” are all undefinable concepts. They just don’t mean anything in terms of the theory. If you want to adopt information theory for an argument, you’re stuck working in terms of the concepts that are defined in the framework of information theory.
    Second, according to the definition of information in information theory, information is *constantly* produced by what are presumed to be un-intelligent, purely physical entities, by natural processes that are effectively random.
    The ID folks want to create some special distinguished kind of “information” which can only be produced by intelligent agents. That’s really the idea behind specified complexity, irreducible complexity, and several other similar arguments. The problem is, they can’t define what an intelligent agent *is* by anything other than a silly circular argument. What’s an intelligent agent according to Dembski? An agent that can produce specified complexity. What’s specified complexity? Complexity which has a property that could only be created by an intelligent agent.
    They drown those ideas in dreadful prose and massive amounts of hedging, to try to distract people from noticing the ultimate circularity of it. But look at anything by Dembski: does he *ever* offer a precise definition of specification, which doesn’t contradict his definition of complexity?

  25. Mark C. Chu-Carroll

    Tom English:
    You’re entirely welcome to talk my ear off all you want.
    Two of my favorite things on my blog are having people who know more than me drop by and teach me something; and having someone involved in something I’m writing about come by to join the conversation.

  26. Jonathan Vos Post

    It is unlikely that this conversation will resolve the question: “is intelligence the result of entirely physical processes?”
    That is a metaphysical question.
    Most neurophysiologists assume that intelligence can be reduced to an emergent property of neurons (possibly with DNA, RNA, Protein interaction of some sort as well as electrochemical) of specific structure in a network of specific structure which learns by specific structural changes.
    Most practitioners or theorists of Artificial Intelligence assume that intelligence is an emergent property of software (perhaps in AI languages) running on commercial hardware.
    The argument about existence or nonexistence of “spirits” of various kinds, elves, angels, demons, gods, hinges on the metaphysical stance.
    The argument about animal rights, on the basis that animals are intelligent in the same way (albeit a different quantity) as humans hinges on the metaphysical stance.
    After spending several years operating within the cult of Strong AI, back in the early and mid 1970s in grad school, I have retreated to being a strong AI agnostic.
    The late Alex the Parrot slightly shifted my belief in animal intelligence in the direction that John Lilly tried to persuade me decades earlier about dolphins.
    I think that ID is a metaphysical stance trying to pretend that it is a Scientific Theory. It is rather difficult to apply Math to Metaphysics. I have joked here before about Theomathematics and Theophysics. But the ID advocates are not joking.

  27. Lino D'Ischia

    Anonymous says:

    First – when we’re talking about information theory, the idea of “unembodied”, “spiritual”, “not just physical” are all undefinable concepts. They just don’t mean anything in terms of the theory.

    Really? Have you read what Dembski says about “unembodied designers” in NFL? It’s quite brilliant, you know.

    If you want to adopt information theory for an argument, you’re stuck working in terms of the concepts that are defined in the framework of information theory.

    Dembski has no problem incorporating “unembodied designers” into information theory via quantum mechanical probabilities. You should read it.

    Second, according to the definition of information in information theory, information is *constantly* produced by what are presumed to be un-intelligent, purely physical entities, by natural processes that are effectively random.

    Keith Devlin, I believe it is, wrote a review of NFL. In it he points out the rather severe limitations of both Shannon information and Kolmogorov complexity. CSI is a much more realistic concept of what we generally mean by “information”. Let’s remember that both Shannon and Kolmogorov were dealing with digital codes; hardly the stuff of everyday life (except for code writers).

    The ID folks want to create some special distinguished kind of “information” which can only be produced by intelligent agents. That’s really the idea behind specified complexity, irreducibly complexity, and several other similar arguments. The problem is, they can’t define what an intelligent agent *is* by anything other than a silly circular argument. What’s an intelligent agent according to Dembski? An agent that can produce specified complexity. What’s specified complexity? Complexity which has a property that could only be created by an intelligent agent.

    Really? Is it a circular argument? Let’s see: we find in nature something that is both complex and specified by some independent pattern; and if the complexity is of sufficient magnitude, then design is inferred. What’s circular about that? It’s the conjunction of a specified pattern and a high level of complexity that allows us to draw such an inference. There is no special property of complexity. Complexity ends up being simply the inverse of Shannon information. You seem entirely comfortable with that notion, right?

    They drown those ideas in dreadful prose and massive amounts of hedging, to try to distract people from noticing the ultimate circularity of it. But look at anything by Dembski: does he *ever* offer a precise definition of specification, which doesn’t contradict his definition of complexity?

    The problem with defining ‘specification’ is that it involves a simultaneous intellectual act, and to define its mathematical constituents is not easy, nor does it lend itself to simple exposition. It’s generally the recognition of a pattern which induces a rejection region in the extremal ends of a uniform probability distribution of such magnitude as to exceed the universal probability bound of 1 in 10^-150.
    Now, if you want circularity, how’s this: Who survives? The fittest. Who are the fittest? Those who survive.

  28. Mark C. Chu-Carroll

    Lino:
    I’ve read Dembski’s NFL stuff, and I’ve commented on it on this blog multiple times. It doesn’t do anything to define just what an “unembodied” intelligence is.
    Specified complexity is, as I’ve argued numerous times, a
    nonsensical term. Dembski is remarkably careful in presentations and writings to never precisely define just what specification means.
    There’s a good reason for that. Because specification, as
    he defines it informally, translated into formal terms, means one of two things.
    One possibility, the more charitable one, is that specification is a kind of subset property of information. That is, a specification of a system is a partial description of it – a description which includes some set of properties that the full information must have. A system that matches the specification contains the properties described by the specification – in information theoretic terms, the embodiment of the specification contains a superset of the information in the specification. The problem with this one is that under this definition of specification, every complex system is specifiable. You can always extract a subset of the information in a system, and use it to create a specification of that system; and every specification can be realized by an infinite number of complex systems. If everything complex has specified complexity; and every specification can be realized by a variety of complex systems, then SC is useless and meaningless.
    The other possible sense of specification is the opposite of complexity. Under this definition, a specifiable system is a system that can be completely described by a simple specification. But if the specification is simple and completely describes the system, then according to information theory, the system cannot be complex. Using this definition (which Dembski implies is the correct one in several papers, while leaving enough weasel-space to wiggle out), a system with “specified complexity” is a system which has both high information content (complex) and low information content (specification) at the same time.
    And I’ll point out that you engage in exactly the same kind of weaseling as Dembski. You can’t define specification. You want to claim that Dembski’s math defines some new kind of information theory, and that that theory gives you a handle on how to capture ideas which cannot be represented in conventional information theory. But you can’t give a mathematical definition. You can’t define what specification means, or how to compute it. Why is that?
    Finally, your question about the circularity of survival of the fittest: any time you reduce a complex scientific theory down to a trivial one-sentence description, you’re throwing out important parts of it. If all evolution said was embodied in “survival of the fittest”, then you’d be right that it would be an empty, meaningless thing that explained nothing: the individuals that live to reproduce are the individuals that live to reproduce.
    But in fact, that description of evolution is an example of the first possible definition of “specification” as given above. The fact that some individuals survive and some do not, and only the ones that survive reproduce – is a crucial
    ingredient in the process of evolution. You could call it a specification of one necessary aspect. But just like that definition of specification doesn’t do what Dembski wants it to do, it doesn’t work well in this case. Because it’s incomplete, and can be matched by both the real observed phenomenon of evolution, and numerous other phenomena as well.
    “Survival of the fittest” leaves out crucial parts of the real definition of evolution. Evolution isn’t just the fact that some survive and reproduce, and some don’t. It also includes change: the population of individuals is undergoing a constant process of change. Every individual has mutations in their genes. When those mutations help, the individual might manage to survive when others wouldn’t. When those mutations hurt, the individual might not survive where others would. The effect of change combined with differential success means that the genetic makeup of the population is changing over time.
    Even that is a simplification, but a far more informative and complete one than “survival of the fittest”. And it demonstrates why there’s no real circularity.
    On the other hand, Dembski, by refusing to provide real definitions of specification, intelligence, etc., turns his voluminous writings into a meaningless pile of rubbish, because at their foundation, they have no meaning. Because it all lacks any actual meaningful foundation, the whole thing collapses under its own weight. It’s just a smoke-screen, trying to hide the fact that there’s nothing really there.

  29. Torbjörn Larsson, OM

    Lino D’Ischia :

    Dembski has no problem incorporating “unembodied designers” into information theory via quantum mechanical probabilities.

    And they differ from classical probabilities how?

    It’s generally the recognition of a pattern which induces a rejection region in the extremal ends of a uniform probability distribution of such magnitude as to exceed the universal probability bound of 1 in 10^-150.

    This is crap à la Dembski.
    First, there is no “universal probability bound”. Sometimes it is useful to exclude improbable events, but that is always made in a specific model which tells you what limits to use.
    Second, you assume that the process you observe has a uniform probability. That is uncommon in natural processes. Every energy driven process that dissipates energy will see the system visit improbable states where it is driven. Dissipation requires such states or the energy would be conserved. And the biosphere is energy driven by the sun and dissipating into space.
    Third, we know that selection enhances evolution rates so that new traits appear and fixate on much shorter time scales than the above bound implies. For example, human populations have evolved lactose tolerance several times in recent history, when effective population sizes have been a few thousand in the herders’ areas. So we are discussing evolution rates for new traits of at least 10^-6 traits/generation or so, in sexual populations. Those traits aren’t planned but are the process’s response to the environment.

  30. Flex

    Mark C. Chu-Carroll wrote, “It also includes change: the population of individuals is undergoing a constant process of change.”
    Which was precisely the argument that disconcerted a couple of evangelicals who came to my door yesterday.
    I pointed out that things designed by humans are largely identical. There is little variation in the shape of a door or a window, extruded vinyl siding is incredibly homogenous; things that are designed by an intelligence are often made as identical copies to the best of our abilities. (Which is one of the points of the six-sigma initiatives.)
    Things growing by natural processes have far greater differences than those designed by man. (With obvious exceptions of course.)
    I pulled a few leaves off the ivy and showed them the vast differences found even on the same plant. Size, shape, and color, were all explainable by natural processes but not even close to what we see in items which are designed.
    I didn’t convince them, but I think they might have seen my point. To them, I suspect, it made their creator even more impressive.

  31. Tom English

    “So picking an algorithm randomly vs. applying random search is a distinction without a difference.”
    Isn’t this true when the algorithms being selected contain no information about the target, but not true if it does?

    I’m giving this a very casual response. For any algorithm with information there’s a corresponding algorithm with misinformation (negative information). If you fix the cost function and randomly draw algorithms a large number of times, the positive and negative information cancel one another out.

  32. Tom English

    Mark,
    I’ve read and enjoyed your comments many times. When I was in grad school, a friend of mine used to say, as he headed off to teach, “Well, guess I’ll go stomp me out some ignorance.” You keep on stomping, guy.
    Tom

  33. David vun Kannon

    Tom,
    Thanks for responding to my comment. I understand the claim that ev was relatively worse than random search and therefore contributed “negative information”. My question, looking at the slides in chengdu.ppt on the EvoInfo resources page, was how that claim is supported. Even allowing for all the skipped steps that I would expect in a keynote, not a rigorous presentation, I find Marks’ claims difficult to believe. The numbers thrown around on those slides just don’t make a coherent argument to me.
    I’m happy to discuss the weaknesses of ev if it comes to that, just like I’m happy to discuss the weaknesses of WEASEL. That’s what I call cherrypicking a weak example. However, if Marks’ numbers are wrong, that is what I would (charitably) call misleading.

  34. Jonathan Vos Post

    http://www.thenation.com/doc/20071008/hacking
    Root and Branch
    by IAN HACKING
    The Nation
    [from the October 8, 2007 issue]
    First the bright side. The anti-Darwin movement has racked up one astounding achievement. It has made a significant proportion of American parents care about what their children are taught in school. And this is not a question of sex or salacious novels; the parents want their children to be taught the truth. None of your fancy literary high jinks here, with truth being “relative.” No, this is about the real McCoy.
    According to a USA Today/Gallup poll conducted this year, more than half of Americans believe God created the first human beings less than 10,000 years ago. Why should they pay for schools that teach the opposite? These people have a definite and distinct idea in mind. Most of the other half of the population would be hard-pressed to say anything clear or coherent about the idea of evolution that they support, but they do want children to learn what biologists have found out about life on earth. Both sides want children to learn the truth, as best as it is known today.
    The debate about who decides what gets taught is fascinating, albeit excruciating for those who have to defend the schools against bunkum. Democracy, as Plato keenly observed, is a pain for those who know better. The public debate about evolution itself, as opposed to whether to teach it, is something else. It is boring, demeaning and insufferably dull.
    [truncated]
    The Discovery Institute, a conservative think tank, states that “neo-Darwinism” posits “the existence of a single Tree of Life with its roots in a Last Universal Common Ancestor.” That tree of life is enemy number one, for it puts human beings in the same tree of descent as every other kind of organism, “making a monkey out of man,” as the rhetoric goes. Enemy number two is “the sufficiency of small-scale random variation and natural selection to explain major changes in organismal form and function.” This is the doctrine that all forms of life, including ours, arise by chance. Never underestimate the extraordinary implausibility of both these theses. They are, quite literally, awesome.
    [truncated]

  35. Lino D'Ischia

    Jeff Shallit:
    “Dembski’s CSI is utter crap. See my paper with Elsberry, http://www.talkreason.org/articles/eandsdembski.pdf, which explains in detail why CSI is incoherent and doesn’t have the properties Dembski claims.”
    It’s taken me a little while to work through your paper; so sorry for the delay. As to the paper, I don’t see any substantive criticism by you and Elsberry that makes any serious dents in Dembski’s explanation of CSI. What I detect in your criticism, in most instances, is a confusion between the notion of “information” and “Complex-Specified-Information” = CSI, and between “specifying” and the more formal “specification”. Now, having said that, pinning down the whole notion of what a “specification” is is no easy task. (That’s what I alluded to in the previous post.) So it’s very understandable that there is a struggle to fully grasp the concept (it proves to be a rather slippery concept), but most, if not all, of your objections, I believe, can be countered.
    Not being of the mind to write a 90 page paper to rebut every argument you make, I would be happy to discuss any of these arguments with you. Just select one.
    If I may, to get things started, I’ll just give one (almost glaring) example of where you fail to distinguish between “information” and CSI, with the result that your argument ends up dissolving away.
    In Section 9, “The Law of Conservation of Information”, your argument runs along these lines: Ω0 ⊆ Σ*, where Σ and Δ are finite alphabets . . . Dembski justifies his assertion by transforming the probability space Ω1 by f⁻¹. This is reasonable under the causal-history-based interpretation. But under the uniform probability interpretation, we may not even know that j is formed by f(i). In fact, it may not even be mathematically meaningful to perform this transform, since j is being viewed as part of a larger uniform probability space, and f⁻¹ may not even be defined there.
    This error in reasoning can be illustrated as follows. Given a binary string x we may encode it in “pseudo-unary” as follows: append a 1 on the front of x, treat the result as a number n represented in base 2, and then write down n 1’s followed by a 0. . . . If we let f: Σ* → Σ* be the mapping on binary strings giving a unary encoding, then it is easy to see that f can generate CSI. For example, suppose we consider a 10-bit binary string chosen randomly and uniformly from the space of all such strings, of cardinality 1024. The CSI in such a string is clearly at most 10 bits. Now, however, we transform this space using f. The result is a space of strings of varying length l, with 1025 ≤ l ≤ 2048. If we viewed this event f(i) for some i we would, under the uniform probability interpretation of CSI, interpret it as being chosen from the space of all strings of length l. But now we cannot even apply f⁻¹ to any of these strings, other than f(i)! Furthermore, because of the simple structure of f(i) (all 1’s followed by a 0), it would presumably be easily specified by a target with tiny probability. The result is that f(i) would be CSI, but i would not be.”
    The first error I see is that you have equated CSI with a 10-bit string. But Dembski very clearly assigns an upper probability bound of 10^150, or 2^500, or 500 bits. You acknowledge the upper probability bound in Section 11 (CSI and Biology). Since 10 bits falls well short of the 500 bits necessary, it is meaningless to speak of CSI. IOW, both i and f(i) do not exhibit CSI. Now, if you were to use string lengths i of sufficient length (i.e., ≥500), using this “pseudo-unary” program, we would find that the output f(i) would then be between 10^140 and 10^150 1’s. Now there are only 10^80 particles in the entire universe, so even if you lined up all the atoms that exist in the world, you would be way short of what you needed.
    The second error occurs in the next paragraph on p. 26 where you invoke the Caputo case as an instance of “specification”, much like you did in the penultimate sentence I quoted above. The “reference class of all possibilities” in the Caputo case was about half a trillion. The 40 D’s and 1 R was simply one “event” that belonged to that reference class. In order for CSI to be present, the reference class would have to be comprised of at least 10^150 elements/events. So, indeed, the 40 D’s and 1 R of the Caputo case is certainly “specified”, but it doesn’t constitute a “specification” because the “rejection region” it defines is not of sufficient complexity.
    The third error I see again involves “specification”. As I just mentioned, in Dembski’s technical definition of CSI, a “specification” is a true “specification” when the pattern that is identified by the intelligent agent induces a rejection region such that, including replicational resources and specificational resources, the improbability of the conceptual event that coincides with the physical event is less probable than 1 in 10^150.
    I’ve already gone farther than I intended. But, before I leave, I want to ask you something about your SAI, formulated in Appendix A. Below are two bit strings, A and B. Using any compression programs you have available to you (I have none; or if I do have them available I sure don’t know how to get to them), which of the two ends up with the smallest input string; i.e., which has the greater SAI? And, then, if you can tell me, which of the two is “designed”?
    Here they are:
    A:
    1001110111010101111101001
    1011000110110011101111011
    0110111111001101010000110
    1100111110100010100001101
    1001111100110101000011010
    0010101000011110111110101
    0111010001111100111101010
    11101110001011110
    B:
    1001001101101000101011111
    1111110101000101111101001
    0110010100101100101110101
    0110010111100000001010101
    0111110101001000110110011
    0110100111110100110101011
    0010001111110111111011010
    00001110100100111
    A:
    1001110111010101111101001
    1011000110110011101111011
    0110111111001101010000110
    1100111110100010100001101
    1001111100110101000011010
    0010101000011110111110101
    0111010001111100111101010
    11101110001011110
    B:
    1001001101101000101011111
    1111110101000101111101001
    0110010100101100101110101
    0110010111100000001010101
    0111110101001000110110011
    0110100111110100110101011
    0010001111110111111011010
    00001110100100111

  36. Lino D'Ischia

    Sorry. I don’t know how the two bit-strings got duplicated. But that is what it is: a simple duplication. So please ignore the repeat.

  37. Unsympathetic reader

    I’d heard that Salvador was going offline. Is that true?
    In any case, with regard to disembodied designers, I do wonder what the bandwidth of information transfer is “at the limit” as the energy approaches zero.

  38. Lino D'Ischia

    Unsympathetic Reader:
    I’d heard that Salvador was going offline. Is that true?
    In any case, with regard to disembodied designers, I do wonder what the bandwidth of information transfer is “at the limit” as the energy approaches zero.

    If you’re interested in just what “unembodied designers” can do, Dembski talks about that very thing in NFL. He has a very interesting QM take on it. It’s really quite brilliant.
    As to Sal, what kind of commentary on the biology community is it when someone like Sal has to disappear from blogs so as not to threaten his newly started university education?
    Is this modern-day Lysenkoism?

    Reply
  39. Mark C. Chu-Carroll

    If you’re interested in just what “unembodied designers” can do, Dembski talks about that very thing in NFL. He has a very interesting QM take on it. It’s really quite brilliant.

    Sure, it’s quite brilliant, provided what you mean by “quite brilliant” is utter nonsense cleverly written to make it appear as if it says something deep while actually saying absolutely nothing.
    Dembski is a master at weaseling around, making compelling looking arguments while leaving enough gaping holes in the argument to allow him to weasel out of any possibly critique.
    One of the sad things about quantum theory is how it’s become a magnet for liars. Because pretty much no one really understands it, it’s easy for people like Dembski to jump in, wave his hands around shouting “quantum, quantum”, and pretending that it somehow supports what he’s saying.
    As for what Sal’s disappearance says about the biology community, I’d argue that what it really says is: “If you want to have any chance of being taken seriously as a researcher, you probably don’t want to be known as a slimy, quote-mining, lying sycophant to a bunch of loonie-tune assholes”.

    Reply
  40. Anonymous

    Mark C. Chu-Carroll:
    “One of the sad things about quantum theory is how it’s become a magnet for liars. Because pretty much no one really understands it, it’s easy for people like Dembski to jump in, wave his hands around shouting “quantum, quantum”, and pretending that it somehow supports what he’s saying.”
    Mark, I would agree with you on this point. You quite frequently find people extending and extrapolating QM to places and in ways that it should never be. But what Dembski does is quite legitimate. He simply points out that the statistical nature of QM permits events taking place that don’t involve the imparting of energy but simply a rearranging of the elements of the probability distribution. I don’t think I would have ever thought of it.

    Reply
  41. creeky belly

    He simply points out that the statistical nature of QM permits events taking place that don’t involve the imparting of energy but simply a rearranging of the elements of the probability distribution.
    The last part of your sentence doesn’t make any sense. Are you talking about measurement of entangled states? This is a cop-out: which distribution?
    Here’s the quote from Dembski (via talk.origins):
    “Thermodynamic limitations do apply if we are dealing with embodied designers who need to output energy to transmit information. But unembodied designers who co-opt random processes and induce them to exhibit specified complexity are not required to expend any energy. For them the problem of “moving the particles” simply does not arise. Indeed, they are utterly free from the charge of counterfactual substitution, in which natural laws dictate that particles would have to move one way but ended up moving another because an unembodied designer intervened. Indeterminism means that an unembodied designer can substantively affect the structure of the physical world by imparting information without imparting energy.” [p. 341]
    “For now, however, quantum theory is probably the best place to locate indeterminism.” [p. 336]
    The problem: the processes are not random, they’re stochastic. The results will follow QM distributions.
    Where is this information being imparted? In atoms? Fermions? Bosons? Spin states? Momentum states? Will I always roll a spin-up? 1st excited state? Left circular polarization? You still need energy to create the perturbation that would favor a quantum state with certain information.
    Consider teleportation: If the “unembodied designer” wanted to simply copy his quantum information into the quantum information of another atom, he would still require two extra atoms (or photons, electrons, Josephson junctions) along with some perturbations to both couple and change the atoms’ state.

    Reply
  42. creeky belly

    …Josephson junctions) along with some perturbations to both couple and change the atoms’ state.
    I should mention that the perturbations in this case are creating the two extra atoms, since they can’t be co-opted from others (what state would they be in?).
    More information on teleportation here.

    Reply
  43. Jeffrey Shallit

    Lino:
    Well, I’ll give you credit for one thing: at least you’ve actually read the paper and responded to it, which is more than Dembski has done.
    To respond to your critiques: first, you claim that one must have 500 bits to constitute CSI. I say, take that up with Dembski, then, because on page 159 of his book “Intelligent Design”, Dembski says, “The sixteen-digit number on your VISA card is an example of CSI”.
    Second, you object to our simple example of how CSI can be generated if one doesn’t specify the probability space correctly. But you have failed to understand the objection. The point is that f(i), when viewed as an element of the space of binary strings, does exhibit CSI, since it has 1024 bits. i itself does not because it is too short, but that is precisely our point! Here we have constructed CSI out of applying a function to something that isn’t — something that Dembski claims is impossible.
    As for specification, I think you also fail to understand that Dembskian concept. Specification only deals with the assignment of an event to a subset of a reference class of events; there is nothing inherent in a specification that says it must refer to a subset with low probability. Go read section 1.4 of No Free Lunch again. Or go to page 111, where Dembski writes, “The ‘complexity’ in ‘specified complexity’ is a measure of improbability”. So if the word complexity refers to improbability, it follows that the specified part must not, in itself, be related to probability.
    As for your last question, I think you are confused. I am not claiming that Dembski or SAI can “detect design”. It is the whole point of our paper that “detecting design” is not something one can determine by mathematical arguments alone.

    Reply
  44. 386sx

    You still need energy to create the perturbation that would favor a quantum state with certain information.
    No you don’t. Unembodied designers can do whatever the hell they want. All Dembski said was that, “for now, however, blah blah indeterminism, blah blah.” (I’m paraphrasing.)
    Note the tentative “for now, blah blah blah blah.”
    Lol, “unembodied designers”. What hooey!

    Reply
  45. Tyler DiPietro

    “Indeterminism means that an unembodied designer can substantively affect the structure of the physical world by imparting information without imparting energy.”
    So essentially, the “unembodied designer” is a perpetual motion machine?

    Reply
  46. Mark C. Chu-Carroll

    Anonymous:
    That’s exactly what I mean by chanting “quantum, quantum” while waving hands around.
    Quantum physics says that there’s some level where we don’t understand what’s going on, and which we can only describe in terms of a probability distribution.
    Dembski’s argument is, basically, saying that because we don’t understand what’s happening on that level, he can stick the actions of his “disembodied designer” into that unexplained level.
    It’s a clever way of arguing, because it’s playing with something that is, genuinely, deeply mysterious. And since we don’t have a particularly good understanding of what’s going on on that level – even the best experts find it largely incomprehensible – it’s very hard for a layman to make any argument against it. So the laymen can’t really respond. But Dembski *also* doesn’t actually show where/how his “unembodied designer” fits into the intricate and subtle math of quantum physics – so it’s too vague for an expert to form a good argument against.
    In other words, it’s classic Dembski. It sounds very impressive, it’s full of obfuscatory math to make it look and sound complicated, but it’s so vague and ultimately meaningless that you can’t pin it down enough to conclusively debunk it as the nonsense that it is: any attempt at debunking it will simply be met with “But that’s not what I meant”.

    Reply
  47. Lino D'Ischia

    Jeff:
    To respond to your response: First, you dispute my claim that 500 bits of information are necessary to have CSI. But then in responding to my second objection, you say: “The point is that f(i), when viewed as an element of the space of binary strings, does exhibit CSI, since it has 1024 bits.” And in disputing my claim you quote Dembski from his book “Intelligent Design”, which is why, I guess, in the preamble of your paper you indicate that unless Dembski refutes something from his prior writings, you consider everything he wrote in play (since, as is clear to anyone who compares, the section on Visa cards and phone numbers in “Intelligent Design” has been deleted from NFL).
    Secondly, in responding to my objection to your example of the “pseudo-unary” function, you say that the output represents 1024 bits, far beyond the 500 bits necessary. But, of course, these are 1024 “pseudo-bits”, since the output is a unary output in binary form. Prescinding from this for the moment, for the sake of argument, let’s say this really did represent 1024 bits of information. The question is this: Does this, or does it not, represent CSI? I guess you think that this output bit string represents CSI because, like Caputo’s string of 40 D’s and 1 R, this bit string is “specified”. Well here, as I mentioned the first time, I would say you’ve missed the technical meaning of “specification”. CSI is an ordered pair of events (T,E) with T inducing a rejection function that in turn forms a rejection region within the reference class of events. IOW, CSI represents the conjunction of a physical event and a conceptual event. [This is all abundantly clear in “No Free Lunch”]. In the case of this 1024 bit-string, which represents the “physical event”, what is the “conceptual event” that describes it and, in describing it, induces a rejection region? You don’t provide any such description nor rejection region. We’re left with one-half of CSI, and so we can’t call an ordinary bit-string CSI.
    Further, as is clear in Dembski’s discussion on pp. 152-154, T induces a rejection region onto Ω_0. In the example you use, no such rejection region is mentioned or specified in any way. So, for the sake of argument, let’s say that your definition of a 10-bit string, the input parameter, represents T_0, the rejection region in the reference class Ω_0. The cardinality of this rejection region, as you point out, is 1024. Now let us suppose that the “conceptual event”, C_0, falls in this rejection region, and is identical with the physical event E_0. Then the probability of C_0 = E_0 is 1 in 1024. Now let’s look at the output reference class Ω_1. The function f transforms T_0 to T_1, the rejection region in Ω_1, C_0 to C_1, and E_0 to E_1. Now if the size of the rejection region hasn’t changed, then the probability measure of the CSI involved doesn’t change, and so the CSI remains unchanged in terms of bits. T_0, defined on Ω_0 as a 10-bit binary string, has 1024 elements. T_1, defined on Ω_1, the “pseudo-unary” output, also involves just 1024 elements; that is, the unary strings of all ones ending with a zero, with length between l = 1025 and l = 2048. You allude to this indirectly when, in your paper, you and Elsberry write: “If we viewed the event f(i) for some i we would, under the uniform probability interpretation of CSI, interpret it as being chosen from the space of all strings of length l. But now we cannot even apply f^-1 to any of these strings, other than f(i)!” If it were any different from this, then the probability of T_0 would be different from that of T_1. In both cases, however, it is 1 in 1024, as Dembski claims it should be.
    Third, we come back to “specification” and what that entails. Here’s what you say about specification: “Specification only deals with the assignment of an event to a subset of a reference class of events; there is nothing inherent in a specification that says it must refer to a subset with low probability.” This is not how I read Dembski. Specification indeed involves the identification of a rejection region (subset) within some reference class of events. But this is called “T”, as in the example above. Now, for there to be a specification, there now has to be a physical event, E, which, as you state, falls into the rejection region. And it is imperative that the rejection region be of fantastically low probability. We agree, you and I, on an event, E, falling into a prescribed region of a reference class of events. But to say that there is “nothing inherent in a specification that says it must refer to a subset with low probability” is too vague, and hence misleading. By phrasing it this way, you almost imply a refutation of what Dembski means by a rejection region, yet, if properly understood, there isn’t a problem. Here’s what I mean: yes, the rejection region can consist of events of infinitely high individual probability (is that what you mean by “[there is] nothing inherent in a specification that says it must refer to a subset with low probability”?); yet, nevertheless, it’s possible for these events of infinitely high individual probability to form a subset/rejection region, relative to the overall reference class, that is nevertheless extremal, that is, of extremely low probability, and which, thusly, constitutes a legitimate rejection region. (There are two tails in a Gaussian distribution.) Dembski makes this point quite clear, I think.
    In advising me to go to page 111, where I would find Dembski writing, “The ‘complexity’ in ‘specified complexity’ is a measure of improbability”, were you aware that Dembski, in the paragraph you quote from, was stating Elliott Sober’s criticism of his mathematical work? From what I read, the words in question are meant to be taken as Sober’s words, not Dembski’s. But maybe “talk.origins” wasn’t careful enough and accidentally “quote-mined” Dembski. Obviously, any conclusions you might want to draw based on this quote lose any force they might otherwise have had.
    Finally, Dembski makes the claim that the identification of CSI allows us to infer design. You’re now saying that SAI can’t do that. So, obviously, SAI and CSI differ in this regard. But you might object saying: “Well Dembski might think he can do that, but there’s no way he can do that.” Well, if that’s the case, then I’m not sure why you would bother to compare them. My only conclusion is that you see some value in the concept of CSI, but, that you see problems with it, and, for the sake of the much more limited objective of identifying ‘information’, SAI is better equipped.
    So, then, I won’t ask you to tell me which one is designed. However, which one, according to SAI, has more information? A, or B?
    A:
    1001110111010101111101001
    1011000110110011101111011
    0110111111001101010000110
    1100111110100010100001101
    1001111100110101000011010
    0010101000011110111110101
    0111010001111100111101010
    11101110001011110
    B:
    1001001101101000101011111
    1111110101000101111101001
    0110010100101100101110101
    0110010111100000001010101
    0111110101001000110110011
    0110100111110100110101011
    0010001111110111111011010
    00001110100100111

    Reply
  48. Lino D'Ischia

    creeky belly:
    “The problem: the processes are not random, they’re stochastic. The results will follow QM distributions.
    Where is this information being imparted? In atoms? Fermions? Bosons? Spin states? Momentum states? Will I always roll a spin-up? 1st excited state? Left circular polarization? You still need energy to create the perturbation that would favor a quantum state with certain information.”

    Here’s what Dembski writes on pp. 340-341:
    “Consider, for instance, a device that outputs 0s and 1s and for which our best science tells us that the bits are independent and identically distributed so that 0s and 1s each have probability 1/2. (The device is therefore an idealized coin tossing machine; note that quantum mechanics offers us such a device in the form of photons shot at a polaroid filter whose angle of polarization is 45 degrees in relation to the polarization of the photons–half the photons will go through the filter, counting as a “1”; the others will not, counting as a “0”.) Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields an English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)? We have therefore precluded that a designer imparted a positive amount of energy (however miniscule) to influence the output of the device. . . . Any bit when viewed in isolation is the result of an irreducibly chance-driven process. And yet the arrangement of the bits in sequence cannot reasonably be attributed to chance and in fact points unmistakably to an intelligent designer.” (For those interested, you should read the fuller account in Section 6.5 of NFL)
    Mark C. Chu-Carroll:
    “So the laymen can’t really respond. But Dembski *also* doesn’t actually show where/how his “unembodied designer” fits into the intricate and subtle math of quantum physics – so it’s too vague for an expert to form a good argument against.”
    I think what Dembski describes simply suggests that it is possible for an “unembodied designer” to “act”, that is, impart information, without energy being imparted. Remember, his jumping-off point for this is Paul Davies’ remark that “At some point God has to move the particles.”

    Reply
  49. 386sx

    It’s a clever way of arguing, because it’s playing with something that is, genuinely, deeply mysterious.
    Sorry but I don’t see what’s so clever about it. His “unembodied designer” (lol) is immune from everything and…
    “For now, however, quantum theory is probably the best place to locate indeterminism.”
    …”for now”, he will stick it in quantum theory. But if that doesn’t work out very well then hey too bad because the unembodied designer didn’t need no quantum theory anyway because it is immune from everything no matter what.
    What is so clever about that? It sounds frakking stupid to me. Shrug!

    Reply
  50. Tyler DiPietro

    “But, of course, these are 1024 “pseudo-bits”, since the output is a unary output in binary form. Prescinding from this for the moment, for the sake of argument, let’s say this really did represent 1024 bits of information.”
    Well, no. From what I read in the paper, Shallit is describing a reversible encoding function, not much different from bijective mappings of bits onto the natural numbers. As a description method it does indeed represent 1024 bits of information. But that, of course, presumes a Kolmogorov definition of “information”. I’m not entirely sure what definition you are implying here.
    “In the case of this 1024 bit-string, which represents the “physical event”, what is the “conceptual event” that describes it and, in describing it, induces a rejection region?”
    The problem here is that you are assuming that Dembski’s CSI can meaningfully measure physical events. That hasn’t been demonstrated by any means, much less from Dembski himself.
    “If it were any different from this, then the probability of T_0 would be different from that of T_1. In both cases, however, it is 1 in 1024, as Dembski claims it should be.”
    Well, no. A factorial calculation of all possible strings of length between 1025 and 2048 forms a set with cardinality that dwarfs the set of all possible strings of length 10. This is a pretty elementary error, and may be the cause of confusion.
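    To make the cardinality point concrete, here is a toy version of the kind of encoding under discussion. The exact formula is a guess at the shape described in this thread (10-bit input, all-ones-then-a-zero output of length 1025 to 2048), not necessarily the precise function in Elsberry and Shallit’s paper, but the counting argument is the same for any encoding of this shape.
        def f(i: int) -> str:
            """Map a 10-bit integer to a pseudo-unary string of length 1025..2048."""
            assert 0 <= i < 1024
            return "1" * (1024 + i) + "0"

        def f_inverse(s: str) -> int:
            """Defined only on the 1024 strings that are actual outputs of f."""
            if not (s.endswith("0") and set(s[:-1]) <= {"1"} and 1025 <= len(s) <= 2048):
                raise ValueError("not in the image of f")
            return len(s) - 1025

        valid_outputs = 1024                                  # size of the image of f
        all_strings = sum(2 ** l for l in range(1025, 2049))  # every binary string of those lengths
        print(valid_outputs, "valid outputs inside roughly 2^%d strings" % all_strings.bit_length())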

    Reply
  51. Tyler DiPietro

    “As a description method it does indeed represent 1024 bits of information.”
    This should read, “as the result of a reversible encoding function, it does indeed represent 1024 bits of information.”

    Reply
  52. Coin

    “Consider, for instance, a device that outputs 0s and 1s and for which our best science tells us that the bits are independent and identically distributed so that 0s and 1s each have probability 1/2. (The device is therefore an idealized coin tossing machine… Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields an English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?
    Okay, reasonable enough then: Obtain the cure for cancer by scrying, and then you can make an argument about the scientific value of Intelligent Design.
    Until that happens, though, you don’t appear to have actually answered creeky belly’s question. Creeky belly’s question as I understood it was: how can anything, unembodied or otherwise, inspire a stochastic process to output something specific instead of random data as according to its probability distribution? Instead of answering, you simply gave a hypothetical example in which a stochastic process does output a specific message, rather than randomly as according to its probability distribution. Okay. How did that happen? How does it work? If the example occurred according to the actual laws of quantum physics, you’d have to do it the way creeky belly described– you’d have to expend energy to perform some operation that changes the probability distribution. Right?
    I think what Dembski describes simply suggests that it is possible for an “unembodied designer” to “act”, that is, impart information, without energy being imparted.
    Okay, so God can perform miracles. But why does God need quantum physics to perform miracles? Why does it make any more sense to suggest that supernatural beasties can influence the outcome of a stochastic system, than it would to suggest they can influence the outcome of a classical one? As long as we’re making up new laws of physics, why not think big?

    Reply
  53. Lino D'Ischia

    Tyler DiPietro:
    Well, no. A factorial calculation of all possible strings of length between 1025 and 2048 forms a set with cardinality that dwarfs the set of all possible strings of length 10. This is a pretty elementary error, and may be the cause of confusion.
    It’s essentially a unary output. What factorial calculation does this imply?
    Coin:
    Okay. How did that happen? How does it work? If the example occurred according to the actual laws of quantum physics, you’d have to do it the way creeky belly described– you’d have to expend energy to perform some operation that changes the probability distribution. Right?
    No, you wouldn’t have to expend any energy. That’s really the point. And, to find out “how” it happened—well, good luck, because QM, being limited by its probabilistic interpretation, and because of the Uncertainty Principle, just doesn’t allow much peering around to see just what happened. That’s why I said it’s brilliant. Wish I had thought of it first!
    Coin:
    “Okay, so God can perform miracles. But why does God need quantum physics to perform miracles? Why does it make any more sense to suggest that supernatural beasties can influence the outcome of a stochastic system, than it would to suggest they can influence the outcome of a classical one? As long as we’re making up new laws of physics, why not think big?”
    Well, we know that the universe was built from the smallest particles upwards, not from the largest downwards. So, you would have to think that if an “unembodied designer” is going to tinker, it would be done at the particle (QM) level, and not at the classical level. When you’re at the classical level, that kind of tinkering, indeed, implies what is normally meant by “miracle.”

    Reply
  54. Tyler DiPietro

    “It’s essentially a unary output. What factorial calculation does this imply?”
    1. The output of the encoding is unary, but neither this nor the encoding is known to the observer. All that is known is that it is one of all possible strings of binary digits of length l, where 1025 ≤ l ≤ 2048.
    2. The factorial calculation is used for calculating all possible combinations of n objects. Shallit was talking about two completely different sets when talking about the number of possible permutations of binary digits in strings of length 10 (which has cardinality 1024) and possible permutations of binary digits in strings of all possible lengths between 1025 and 2048.

    Reply
  55. Tyler DiPietro

    The end of my first point was also eaten.
    Continuing: Furthermore, the reversal (decoding) of the function, f^-1, is defined for f(i) but for no other strings in the set. Thus f(i) has CSI while its image does not, which contradicts Dembski’s claim that functions cannot generate CSI.

    Reply
  56. Anonymous

    As a biologist, I usually skip over ID arguments since they would clutter my brain with useless information — it’s hard enough keeping relevant information at hand. But, this discussion piqued my interest since it relates to some things I have been thinking about recently.
    Reading through the Dembski and Marks paper at least taught me where those “bit scores” in sequence searches come from. One ID argument is the big-number, low-probability rationalization that chance cannot explain even the evolution of a single enzyme. In the paper, D&M use the cytochrome c protein with ~400 bits of information, with the search space reduced to ~110 bits because of the different frequencies of amino acids arising out of the genetic code. They then go on to say that the ~10^-35 probability is still small. However, in a very quick search with a bacterial cytochrome c, I found an example of two functionally identical proteins from different bacteria. The “bit score” was 51 bits out of a possible 122 bits for an exact match, with 44/169 identical amino acids. This is not a raw bit score, which seems to be what D&M use, but one that depends on the size of the database searched. The probability of finding a sequence in the protein database with the same bit score is actually 4×10^-5, much larger than the ~10^-35 quoted in the paper. For a real description of the probabilities, see the BLAST short course at NCBI.
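    For readers wondering how a bit score turns into a database-dependent probability, the standard Karlin-Altschul conversion is sketched below; the query and database sizes are made-up placeholders, not the numbers from the search just described.
        import math

        def blast_expect(bit_score: float, query_len: int, db_len: int) -> float:
            """Expected number of chance alignments scoring at least this well:
            E = m * n * 2**(-S'), with S' the bit score (Karlin-Altschul)."""
            return query_len * db_len * 2.0 ** (-bit_score)

        # Hypothetical sizes: a ~100-residue query against ~10**8 database residues.
        E = blast_expect(bit_score=51, query_len=100, db_len=10 ** 8)
        P = 1.0 - math.exp(-E)  # chance of seeing one or more such hits at random
        print(f"E ~ {E:.2e}, P ~ {P:.2e}")
        # Doubling the database roughly doubles E, which is why the same bit score
        # maps to different probabilities in databases of different sizes.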
    My point here is that the set of sequences that can perform the same catalytic reaction is larger than is usually assumed in ID arguments. From a practical point of view, most biologists who use these searches for similar proteins really don’t need to know the specifics of the probabilities. They can use them as one uses a scale to weigh something – it gives a useful measure, but you don’t need to know Newtonian physics to use it.
    Setting up a search algorithm that mimics evolution is difficult: How does one score the virtual protein in terms of functionality so that the program selects the winners in a population? For a typical enzyme, there are only a handful of amino acids that actually contact its substrate. Typically, changing these amino acids results in inactivation, but because of the chemical redundancy in the 20 amino acids there are “conservative” changes to a similar amino acid that may not significantly affect function.
    In essence, the basic requirement for function is that a few amino acids are positioned correctly in an environment that is conducive for catalysis. Thus, the other amino acids are there to provide the structure and environment, and the rules measuring these parameters are very flexible. Hidden Markov methods have been used to deal with this flexibility, and several programs have been built so that you can take any protein sequence and ask “What sort of function may this protein have?” At best, these analyses only assign a protein to a family of enzymes catalyzing the same molecular rearrangements but with different substrates. One extreme example is the active site of RNA polymerase. The yeast and bacterial enzymes share little sequence identity, but their active site structures can be aligned with only a few Angstroms difference.
    If it is difficult to estimate the probability of a specific sequence having function X, are there independent ways to arrive at the probability? Enter evolution in a test tube. A while ago, Keefe and Szostak reported a method where they were able to translate proteins in vitro and keep the protein attached to the message from which it was translated. They used this to select for proteins that bound to ATP from a pot of ~10^12 different proteins 80 amino acids in length. They obtained 4 different proteins whose sequences did not match any known ATP binding site. They and others have followed up on this, but it has not caught on widely because it is technically difficult, with many potentially good binding sites lost due to insolubility of the protein. With regard to this discussion, the main point is that for any specific function there may be many different sequences with different structures that are functionally equivalent. In other words, the set of sequences that can perform a specific function is likely much larger than what can be estimated from known examples, leading to a relatively high probability of obtaining something that works. High enough so that ID need not be invoked.

    Reply
  57. Coin

    No, you wouldn’t have to expend any energy.
    Why not?
    And, to find out “how” it happened—well, good luck, because QM, being limited by its probabilistic interpretation, and because of the Uncertainty Principle, just doesn’t allow much peering around to see just what happened.
    People ask for an explanation of how something, which you claim to be possible, works; you repeatedly respond only with “well, if it happened, you wouldn’t be able to explain why”. Do you not see why people might not take this entirely seriously?
    You’ve given us no reason to expect this “rearranging of the elements of the probability distribution” thing is possible or would ever happen– no reason to expect there is any need to “peer around to see what happened”. Rather than saying anything to convince us the effect you’re describing exists, you’re just giving us claimed reasons why the effect you’re describing would be undetectable and inexplicable. But that makes the proposition of its existence less credible, not more; you’re basically reinforcing what MarkCC said in comment #50.
    That’s why I said it’s brilliant. Wish I had thought of it first!
    Um, well it’s not like it’s that original– I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you’ve been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can “randomly” collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference…

    Reply
  58. 386sx

    Um, well it’s not like it’s that original– I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you’ve been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can “randomly” collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference…
    Right on man. I don’t think it’s “original”. I don’t think it’s “brilliant”. I just think it’s stupid. 🙂

    Reply
  59. Lino D'Ischia

    Tyler DiPietro:
    “The factorial calculation is used for calculating all possible combinations of n objects. Shallit was talking about two completely different sets when talking about the number of possible permutations of binary digits in strings of length 10 (which has cardinality 1024) and possible permutations of binary digits in strings of all possible lengths between 1025 and 2048.
    Furthermore, the reversal (decoding) of the function, f^-1, is defined for f(i) but for no other strings in the set. Thus f(i) has CSI while its image does not, which contradicts Dembski’s claim that functions cannot generate CSI.”
    The confusion here is that reference classes and rejection regions are being equated. Once you separate these two out, it is all conformable with what Dembski writes.
    In Shallit’s example, he makes no mention of a rejection region. You cannot have a “specification” a la Dembski if you don’t describe, mathematically, some rejection region. So, if you don’t describe a rejection region, then it is impossible to talk about CSI, even if you’re talking about a million bits of “information” (which is not equatable with CSI). If you then set as the rejection region in Ω_0, the space of all binary digits (which is then the same as Ω_1), the subset of the first ten binary positions, a ten-bit string, this encompasses 1024 elements. After the function is performed, the rejection region in the “pseudo-unary” output has 1024 elements as well. This “rejection region” resides in the space of all binary numbers of length l = 1025 to 2048. But this is simply a subset of the reference class of all binary numbers. When you reverse (invert) the function, the function only operates on those 1024 elements of the new subspace that constitute the new rejection region. Since the event E is one of those 1024 elements in the one rejection region and then the other, the probabilities remain the same, and hence if there is sufficient complexity (improbability) to constitute CSI in the one rejection region, then it will still represent CSI in the other rejection region.

    Reply
  60. Lino D'Ischia

    Anonymous:
    “The probability of finding a sequence in the protein database with the same bit score is actually 4×10^-5, much larger than the ~10^-35 quoted in the paper.”
    Your point seems to be that evolution can simply build new function upon old function and that these two are not separated by much improbability.
    What you’re leaving out here is the importance of cytochrome c. Cytochrome c is essential to cellular respiration, and hence, is one of the most highly conserved proteins encountered. This is why, I’m sure, Dembski and Marks worked with cytochrome c. Whatever your thoughts regarding the ease of switching from enzyme function A to enzyme function B, it is all completely irrelevant if cytochrome c doesn’t exist. So the improbability of its coming into existence sets a minimum barrier for all future function. But thanks for pointing out their paper. I didn’t know they were working with proteins. I’ll have to now go and read it.

    Reply
  61. Lino D'Ischia

    Coin:
    “No, you wouldn’t have to expend any energy.”
    Why not?
    Dembski makes it clear that no energy was needed in the case of photons. Are photons particles? Do they have momentum? Well, their binary output should turn out to be 1/2 went up and 1/2 went down even when ‘coding for a cancer cure’. Lo and behold, that’s exactly what binary strings end up doing: averaging out to 1/2 0s and 1/2 1s. The probability distribution is undisturbed, hence no addition of energy.
    Coin:
    “Rather than saying anything to convince us the effect you’re describing exists, you’re just giving us claimed reasons why the effect you’re describing would be undetectable and inexplicable.
    Haven’t you ever heard of “spooky” quantum effects? In your last comment about collapsing wave-functions and such, you’ve touched on the problem of QM: no one knows how this collapse takes place. It’s hidden from us. That’s just the nature of the beast. If I point out what is undetectable and inexplicable, it is because nature is such. I can’t help that.

    Reply
  62. Mark C. Chu-Carroll

    Lino:
    The problem is that you’re playing the same old quantum mystery game that I criticized Dembski for originally.
    That is – you don’t have *any* mechanism for what you’re claiming. You don’t have any math to support its possibility. All you have is that wonderful word – “quantum”, and the mystery that surrounds it. Anyone can project anything that they want into that mystery, and for the moment, no one can disprove it, because we just don’t understand it.
    That’s why I say that, far from being a “brilliant” discussion of the capabilities of a disembodied designer, it’s nothing but a clever-sounding smoke-screen. Dembski isn’t saying anything remotely deep or interesting – he’s just taking advantage of something we don’t understand to shove his pre-conceived idea of a designer behind the curtain.
    If Dembski actually bothered to do the math – to show what he’s claiming that his intelligent designer can do without expending any energy, how much of an effect it could produce, how it works mathematically – then it might be an interesting argument. But that’s not what he does. He doesn’t demonstrate *any* understanding of the actual math of quantum phenomena. It’s just a smokescreen: invoke the magic “quantum”, wave your hands around, and you’re a brilliant scientist!

    Reply
  63. Unsympathetic reader

    Anonymous writes: “He [Dembski] simply points out that the statistical nature of QM permits events taking place that don’t involve the imparting of energy but simply a rearranging of the elements of the probability distribution.
    As creeky belly notes, this would make teleportation possible. And FTL communication. Whee!
    And how is the probability distribution altered non-energetically? Morphic resonance? Lino D’Ischia, I’m afraid the Dembski quote provides nothing in the way of a mechanism for altering the probability distributions. You can rephrase his words but there’s still no “there” there. Is Dembski a believer in psychic phenomena too? After all, if someone can alter any probability distribution at will, they can easily control the firing of neurons in the brain. Maybe ID has application in explaining “visions” and occult phenomena as well?
    In any case, perhaps Bill and the DI should consider brewing a really warm cup of tea and recording the swirling patterns for signs of a Bible Code or something. At least then he might have a leg to stand upon.
    I agree that it’s a pity Salvador left. He was a useful foil. But if you make a habit of saying many ridiculous things that reveal horrendously bad scientific judgement, it’s going to catch up with you eventually.

    Reply
  64. Anonymous

    Mark CC:
    “That is – you don’t have *any* mechanism for what you’re claiming. You don’t have any math to support its possibility. All you have is that wonderful word – “quantum”, and the mystery that surrounds it. Anyone can project anything that they want into that mystery, and for the moment, no one can disprove it, because we just don’t understand it.”
    Isn’t this a silly contention on your part?
    “Yes, you say that an ‘unembodied designer’ can act without being detected. Well that’s fine and dandy. But what mechanism do you propose? Without a mechanism, this is just hand-waving?”
    So what you’re asking for is that I come up with a mechanism for an action that is undetectable. This is ludicrous. If an action is undetectable, how would you know that your mechanism accounts for it? What if you were wrong: how would you know that? This is just silliness.
    There’s another way of looking at this, for those hung up on ‘energy’. We live in a world that can only be tested down to certain limits. There is, after all, a Planck time and Planck length. For example, the human eye has a flicker rate of 57 flickers/sec. That’s why electricity is 60 cycles/sec, so that even though the light is going on and off 60 times a second, as far as we’re concerned, it’s on all the time. Well, let us suppose that the universe has some kind of a flicker rate, operating at a rate likely greater than the inverse of the Planck time. (BTW, I think this kind of ‘flickering’ is what lies at the heart of quantum tunneling.) Now, the Heisenberg Uncertainty Principle is generally described using position and momentum, but it can also be applied to energy and time. For energy, it’s: ΔE · Δt ≥ ħ/2. So, if you have an ‘unembodied designer’ who can act ‘infinitely fast’, then an ‘infinite’ amount of energy could be imported into the universe without detection. Just like we can’t detect the on/off behavior of your average light bulb, so we wouldn’t be able to observe such an input of energy if it is done infinitely fast, or even nearly so.
    Absurd, you say. Well, let me ask you this: we know that it is possible for particles to travel faster than light. Since the universe is expanding at a fast rate, light traveling along the fabric of space is then traveling super-luminally because space itself has a velocity. So, please explain to me where this added space comes from allowing for the expansion of the universe. Do you have any kind of answer at all? And, if you do, please indicate to me the ‘mechanism’ that’s at work.

    Reply
  65. Anonymous

    Unsympathetic Reader:
    “As creeky belly notes, this would make teleportation possible. And FTL communication. Whee!”
    You accuse me of “horrendously bad scientific judgment”. Well, UR, do you read scientific literature? It has already been proposed that the theoretic capability exists to teleport an atom from over here to over there. It seems that it will just be a matter of time before scientists are able to do this. Of course, this is a long way from ‘teleporting’ Mr. Spock up to the Enterprise, but it is ‘teleportation’ nonetheless.
    And in an article that appeared in the last two weeks, scientists working with quantum tunneling say that they have measured FTL travel. So, whose scientific judgment is really in question here?

    Reply
  66. Lino D'Ischia

    I got the message: “Thanks for signing in Lino D’Ischia”, and there was no slot for a name, and yet it still came up anonymous. I’m lost. Anyway, post# 71 is mine.

    Reply
  67. Jeffrey Shallit

    Lino:
    I agree with you entirely that Dembski’s claims have changed over time; that’s what makes it so hard to write a definitive refutation. To a Dembski believer, no claim is subject to refutation because one can always find another passage in Dembski’s works that implies the exact opposite. If credit cards are not an example of CSI, why did Dembski say they were? And if he no longer believes that to be the case, maybe you can point me to a passage in his voluminous writings where he explicitly disavows this? Thanks.
    You say, You don’t provide any such description nor rejection region. We’re left with one-half of CSI, and so we can’t call an ordinary bit-string CSI. I’m sorry, I thought you would be bright enough to provide the details yourself. The specification is “all bit strings of a given length with at most one 0”. This is exactly the same as the specification given by Dembski for the Caputo case, so you can’t turn around and say this is not a specification. The rejection region is exactly the same as in the Caputo case; namely, I define a function f that counts the number of 0’s in the string, and I set up a rejection region corresponding to f(x) exactly the same as in the Caputo case, that is, f(x) ≤ 1.

    Reply
  68. Unsympathetic reader

    Lino writes: “You accuse me of “horrendously bad scientific judgment”. Well, UR, do you read scientific literature?
    I was speaking of Salvador. Case in point: his continual droning about a decay in the speed of light that points to a young universe. There are many others (e.g., defense of Walt Brown on kcfs.org, etc.)
    Yes I read the literature. About “atom teleportation”: It’s about the transfer of *quantum states* between particles. This is different from altering the probability distributions such that all the atoms of an object localize a few feet (or miles) to the left or right. Funny thing about that work on quantum state transfer: It involves a testable mechanism and embodied designers.
    Lino elsewhere: “So what you’re asking for is that I come up with a mechanism for an action that is undetectable. This is ludicrous. If an action is undetectable, how would you know that your mechanism accounts for it? What if you were wrong: how would you know that? This is just silliness.
    The silliness is proposing an undetectable mechanism for an event and thinking one can leave it at that, not in asking for a mechanism or details about the actual feasibility. One can posit a million undetectable mechanisms, much like an emperor can wear any number of “invisible” clothes.

    Reply
  69. Mark C. Chu-Carroll

    Lino:
    You’re making my argument for me.
    The point is: you’ve said that Dembski has made a brilliant argument for the capabilities of an “unembodied designer”. In fact, he hasn’t – he hasn’t described the capabilities of anything. What he’s done is wave his hands and declare that his unembodied designer exists in a realm where his actions and capabilities are completely beyond our ability to describe or understand. That’s not a description of the capabilities of the designer – that’s just a typical quantum handwave.
    In terms of the understood math of quantum physics: what can the designer do without expending any energy? The answer to that isn’t the usual quantum babble about “Oh, he can tweak the stuff we can’t observe” – in *the math* of quantum physics, what does Dembski say about the capabilities of the unembodied designer?
    If he doesn’t say anything in terms of the actual math, then he isn’t saying anything: he’s just blowing smoke. You see, when it comes to quantum phenomena, we don’t know what’s going on. We don’t understand it. All we have is math: we have some very good mathematical descriptions of how things behave. Any actual arguments about what can and can’t happen at a quantum level can *only* be done in math.

    Reply
  70. Unsympathetic reader

    Hmm…
    One thing is for sure, Mark: You needn’t propose a mechanism for events that you don’t know happened*.
    *e.g. Unembodied designers loading CSI into objects such as plant chromosomes in order to create pretty fragrances.

    Reply
  71. secondclass

    Several responses to Lino:
    First, you dispute my claim that 500 bits of information are necessary to have CSI.
    I think Dembski would dispute it also. You can probably find a quote in which Dembski sets the CSI threshold at 500 bits, but his method and examples say otherwise. His most oft-used example of specified complexity, the Caputo sequence, falls far short of 500 bits, as you yourself pointed out. According to the GCEA, the determination of “complexity” takes into account probabilistic resources, not necessarily the UPB.
    As I just mentioned, in Dembski’s technical definition of CSI, a “specification” is a true “specification” when the pattern that is identified by the intelligent agent induces a rejection region such that, including replicational resources and specificational resources, the improbability of the conceptual event that coincides with the physical event is less probable than 1 in 10^150.
    This is incorrect. You can take into account the actual probabilistic resources, or you can just use the number 10^150, not both.
    Here’s what you say about specification: “Specification only deals with the assignment of an event to a subset of a reference class of events; there is nothing inherent in a specification that says it must refer to a subset with low probability.” This is not how I read Dembski.
    Shallit’s usage is consistent with Dembski’s in TDI and NFL, but in this paper Dembski’s usage of “specification” includes probability. AFAIK, that usage was unique to that paper, and he hasn’t used it that way since.
    But, of course, these are 1024 “pseudo-bits”, since the output is a unary output in binary form.
    The point is that we don’t know that these are “pseudo-bits” because we don’t know the causal story. Dembski himself admits that CSI is not conserved when the causal story is unknown in his response to Erik Tellgren, which raises the question of how CSI can be useful in inferring design.
    “The ‘complexity’ in ‘specified complexity’ is a measure of improbability”, were you aware that Dembski, in the paragraph you quote from, was stating Elliot Sober’s criticism of his mathematical work? From what I read, the words in question are meant to be taken as Sober’s words, not Dembski’s.
    Dembski equates “specified complexity” with “specified improbability”. See, for instance, endnote 15 of his Dover expert report.
    My only conclusion is that you see some value in the concept of CSI, but, that you see problems with it, and, for the sake of the much more limited objective of identifying ‘information’, SAI is better equipped.
    So, then, I won’t ask you to tell me which one is designed. However, which one, according to SAI, has more information? A, or B?
    SAI is well-defined (relative to a reference UTM) but not computable. CSI, on the other hand, is not well-defined. I think the point of SAI is to show what the CSI concept might look like if it were fleshed out with an unambiguous technical definition, and show that pinning it down thusly reveals its inability to indicate design.
    You cannot have a “specification” a la Dembski if you don’t describe, mathematically, some rejection region.
    Since Dembski has never mathematically described the rejection region for the flagellum, can we conclude that he has failed to show that it’s specified?

    Reply
  72. Lino D'Ischia

    Unsympathetic Reader:
    “The silliness is proposing an undetectable mechanism for an event and thinking one can leave it at that, not in asking for a mechanism or details about the actual feasiblity.”
    Let’s say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that? Could you come up with it in a million years? No. So why is a mechanism being demanded? That is silliness. What Dembski suggests is that there might be a way for “unembodied intelligence” to express itself bodily without any residue. Why not take it for what it’s worth, a plausible interpretation, and not get silly and ask for a mechanism that is impossible to provide?

    Reply
  73. Anonymous

    Mark CC:
    “Any actual arguments about what can and can’t happen at a quantum level can *only* be done in math.”
    You make my point: did the cure for cancer violate the math involved in quantum mechanics?

    Reply
  74. Unsympathetic reader

    Lino: “Let’s say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that?”
    Invisible Pink Unicorns. Why bother with quantum handwaving when IPUs explain everything just as well? Sheesh.
    So why is a mechanism being demanded?
    It’s not that a mechanism for a miracle is demanded, it’s that Dembski *himself* outlined a highly questionable mechanism. Neither he nor you have seen fit to flesh out this ‘brilliant’ idea with actual, um, facts. If he’d like to retract that idea as being merely a highly speculative trial balloon, that is fine with me, because so far he’s not even demonstrated that any phenomenon requires such an explanation.
    Here’s another idea: *You* run the photon polarization experiment and get back to us if anything interesting pops up. It sounds like a great way to demonstrate the existence of an unembodied designer. I guess the only question that would remain then is who would get the patent for the cancer cure?

    Reply
  75. Tyler DiPietro

    “But this is simply a subset of the reference class of all binary numbers. When you reverse (invert) the function, the function only operates on those 1024 elements of the new subspace that constitute the new rejection region.”
    That’s the point. The reason the pseudo-unary output is significant is because the decoding function is only defined for those outputs. Identifying a binary string that decodes into the preimage is an event (physical, if you’d prefer) that exhibits CSI as Dembski defines it, yet this observation contradicts his claim that functions cannot generate CSI.
    “Since the event E is one of those 1024 elements in the one rejection region and then the other, the probabilities remain the same, and hence if there is sufficient complexity (improbability) to constitute CSI in the one rejection region, then it will still represent CSI in the other rejection region.”
    The rejection region corresponds, as far as I can tell, to all the outputs of 10-bit strings from f(x). It’s trivially true that both the images and targets of the function would have the same cardinality. When the latter set is embedded within the broader class of all binary strings of length between 1025 and 2048, you have a larger class where the elements of said set are “improbable”, a la Dembski’s formulation, and thus constitute CSI.

    Reply
  76. creeky belly

    Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields an English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?
    Let’s say you ran the experiment that Dembski outlines, using a polarization filter to turn photons into a 0 and 1 machine. Out pops the cure for cancer in binary code, maybe even in English. What would be the mechanism producing that? Could you come up with it in a million years? No. So why is a mechanism being demanded? That is silliness. What Dembski suggests is that there might be a way for “unembodied intelligence” to express itself bodily without any residue. Why not take it for what it’s worth, a plausible interpretation, and not get silly and ask for a mechanism that is impossible to provide?
    This is monkeys on a typewriter. If the output of the detector is 50/50, there is no way to fix the output beforehand, except by changing the polarization. That’s the definition of stochastic, and it’s why they’ve been able to make truly stochastic random number generators. Quantum cryptography relies on being able to detect eavesdroppers in a quantum circuit, and the no-cloning theorem prevents you from both measuring and transmitting a quantum state.
    But here’s what I would do: I would set up the machine with a BB84 protocol and perform key validity checks. If information is being introduced on the channel, it will be detected through error rates. Just because it’s quantum doesn’t mean we can’t figure out what’s inside the box!
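    As a toy illustration of how error rates expose tampering, here is a minimal intercept-resend simulation; it is only the textbook case with arbitrary parameters, not a model of any particular experiment.
        import random

        def bb84_qber(n=20000, eavesdrop=True, seed=0):
            """Estimate the sifted-key error rate for a toy BB84 run."""
            rng = random.Random(seed)
            errors = sifted = 0
            for _ in range(n):
                bit = rng.randint(0, 1)        # Alice's bit
                a_basis = rng.randint(0, 1)    # 0 = rectilinear, 1 = diagonal
                basis, value = a_basis, bit    # state on the channel
                if eavesdrop:                  # intercept-resend eavesdropper
                    e_basis = rng.randint(0, 1)
                    if e_basis != basis:
                        value = rng.randint(0, 1)  # wrong-basis measurement randomizes the bit
                    basis = e_basis                # photon is resent in Eve's basis
                b_basis = rng.randint(0, 1)
                result = value if b_basis == basis else rng.randint(0, 1)
                if b_basis == a_basis:         # sifting: keep only matching-basis rounds
                    sifted += 1
                    errors += result != bit
            return errors / sifted

        print("QBER, quiet channel   :", round(bb84_qber(eavesdrop=False), 3))  # ~0.0
        print("QBER, intercept-resend:", round(bb84_qber(eavesdrop=True), 3))   # ~0.25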
    BTW, polarization is a byproduct of going through a material with asymmetric oscillation susceptibility, so how is a new polarization induced without a change in the transverse EM field?

    Reply
  77. secondclass

    Dembski, as quoted by Lino:
    Now, what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields an English text-file in ASCII code that delineates the cure for cancer (and thus a clear instance of specified complexity)?

    And yet the arrangement of the bits in sequence cannot reasonably be attributed to chance and in fact points unmistakably to an intelligent designer.

    I see a few problems with this. One problem is that ASCII English text does not have the statistical properties predicted by QM, so the proposed result would indicate that QM is wrong. But this problem seems reparable — we’ll just suppose that the output does have the right statistical properties, and that it yields English text when it’s run through a standard decompression algorithm.
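    A minimal way to see that first point, using nothing but an arbitrary English sample and the fair-coin model the device is supposed to obey (the sample text is illustrative only):
        text = "We have therefore precluded that a designer imparted energy to the device."
        data = text.encode("ascii")
        bits = "".join(format(b, "08b") for b in data)

        ones = bits.count("1")
        print(f"fraction of 1s: {ones / len(bits):.3f} (i.i.d. model predicts 0.500)")
        print("leading bit of every byte is 0:", all(b < 128 for b in data))
        # Under the fair-coin model the chance that the leading bit of all len(data)
        # bytes comes up 0 is 2**-len(data); for a file of even a few hundred bytes
        # that already drops below Dembski's own universal probability bound
        # (2**-500 < 10**-150).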
    But does that really fix the problem? According to my understanding of mainstream QM, the output of the polarized filter is genuinely undetermined. If there is a causal agent, embodied or not, that can determine it, then QM needs to be revised to account for it. Why is Dembski okay with tweaking QM but not the conservation of energy?
    And then we have a problem of semantics in the description of the experiment: “…what happens if we control for all possible physical interference…” What’s the distinction between physical and non-physical interference? Can non-physical entities cause physical effects? We’ve crossed from science into metaphysics here.
    And finally, we have the ever-present problem in Dembski’s conclusion of design, namely that Dembski has never given a definition of design that makes sense. He can’t logically conclude design if he doesn’t have a logically coherent definition of the word.

    Reply
  78. Mark C. Chu-Carroll

    Lino:

    You make my point: did the cure for cancer violate the math involved in quantum mechanics?

    Probably, yes.
    But since you’re not doing the math to show how that text is allegedly being generated in terms of quantum phenomena, I can’t answer the question.
    If I assert that x²+y=z has no real roots, can you prove that I’m wrong? It should be easy, after all, quadratic equations are well understood, right?
    Obviously, you can’t prove that. Because I’ve deliberately
    left the problem underdefined – so any attempt to prove that there are real roots to that equation is easy for me to refute.
    That’s the same game that you and Dembski are playing. You’re creating a fake phenomenon, and claiming that it could be the result of some unspecified phenomenon without breaking the rules of quantum behavior. But you’re not doing any math. The actual rules of quantum behavior are very intricate and very subtle. As far as I understand quantum physics, I don’t think that what you’re describing would be at all consistent with the mathematical properties of actual quantum behavior. But since you’re not specifying what the actual mathematical properties of your scenario are, anything that I can say to show that can be refuted by saying “but that’s not what I meant”.
    If you want to make the argument that what you’re describing is anything more than an attempt to hide bullshit behind the curtain of the word “quantum” – you need to actually do enough math to show us that what you’re claiming is actually what you’re claiming it is. Can you describe any way that your scenario is actually consistent with quantum phenomena, in terms of the actual math of quantum phenomena?
    Pretty much anything that you can come up with that specifically describes some way that a quantum process could output

  79. Blake Stacey

    Coin (#62):

    Um, well it’s not like it’s that original – I mean, I can think of two noteworthy science fiction novels that centrally hinge on the basic idea you’ve been describing here (sentient beings gain the ability to preferentially pick among the potential ways quantum waveforms can “randomly” collapse, thus obtaining godlike powers), and both of them predate at least The Design Inference

    I’m guessing that one of them is Greg Egan’s Quarantine, but I’m drawing a blank on the other.

  80. Jonathan Vos Post

    Re: #62, #85,
    There are many fine novels which touch on this, of which I’ve read most. I am personally fond of Moving Mars and Hard Questions (where the viewpoint character is locked into a box and experiences being a Schrödinger cat) for the egotistical reason that I was an informant to the authors of those two as they were writing, and Ian Watson credited me as such in his acknowledgments, just as Greg Bear did in an earlier novel (The Forge of God) where my wife and I appear under our own names as characters.
    To use the list from here
    http://nextquant.wordpress.com/quantum-computer-sci-fi/
    * Brasyl (2007) by Ian McDonald
    Features illegal quantum computing and parallel universes.
    * Simple Genius (2007) by David Baldacci
    This recent thriller describes quantum computers as being worth countries going to war for.
    * Shanghai Dream (2005) by Sahr Johnny
    Quantum neural networks achieve a breakthrough in Artificial Intelligence.
    * The Traveler (2005) by John Twelve Hawks
    Quantum computer communicates with other realms and tracks interdimensional travel.
    * The Labyrinth Key (2004) by Howard V. Hendrix
    Quantum computer and the Cold War between China and the U.S.
    * Blind Lake (2003) by Robert Charles Wilson
    Self-improving neural quantum supercomputers allow visual observation of distant planets.
    * Dante’s Equation (2003) by Jane Jensen
    A quantum computer named Quey is used to solve a previously intractable physics problem. The book also involves parallel universes.
    * Hominids (Neanderthal Parallax) (2003) by Robert J. Sawyer
    A failed quantum computer experiment transfers a Neanderthal scientist from a parallel universe into our world.
    * The Footprints of God (2003) by Greg Iles
    The secret Trinity Project involves some of the best minds in the world in order to create the first practical quantum computer. Quantum OS Trinity by D-Wave Systems was named after The Trinity Project.
    * Light (2002) by M. John Harrison
    A serial killer invents a quantum computer that enables interplanetary travel.
    * Schild’s Ladder (2002) by Greg Egan
    Future humans abandon physical bodies and transfer their minds to a quantum computer named Qusps.
    * Finity (1999) by John Barnes
    Using quantum computers one can jump into an alternate parallel universe.
    * Timeline (1999) by Michael Crichton
    Quantum computer “faxes” objects and persons into parallel universes.
    * Digital Fortress (1998) by Dan Brown
    NSA operates a code-breaking quantum computer named TRANSLTR.
    * Factoring Humanity (1998) by Robert J. Sawyer
    Quantum computers are used for integer factorization and code breaking. Their working principle is based on parallel universes.
    * Hard Questions (1996) by Ian Watson
    A powerful quantum computer operates in parallel universes, becomes self-aware and creates own realities.
    * Moving Mars (1993) by Greg Bear
    The book features self-aware quantum computers.
    * Quarantine (1992) by Greg Egan
    One of the first sci-fi books using the concept of quantum computation.

  81. David vun Kannon

    Desperately trying to drag the thread back on topic…
    The point of the analysis in the Marks and Dembski paper is that ev works, but slowly. So slowly that random search is faster (on average). This is something that ev’s author acknowledges.
    The implicit admission of “it works slowly” is that “it works”. Marks and Dembski are very careful to avoid saying that ev is not relevant to biology, even if it is a crude model.
    Schneider’s slowpoke ev is at least trying for biological relevance. The random search comparison is not at all a relevant model of how biology works.
    (And why is ev so slow? Ridiculously small population size and no crossover (asexual reproduction) are to blame.)
    There might be some shifting of goalposts here, but at least it is in the right direction. Previously, UD stalwarts such as DaveScot and GilDodgen held out that evolutionary algorithms all worked by front loading and hiding the goal in the code. If the discussion has moved on to “yeah, but they are so slow they are not realistic” that is an improvement.
    BTW, Marks and Dembski are wrong about how many trials ev took in the runs in Schneider’s paper. They quote a number around 45,000, arrived at by multiplying the population size by the number of generations. Actually the number of trials was half that, since only half the population got replaced in each generation. It doesn’t inspire confidence in the rest of their math.
    An interesting way to test Marks and Dembski’s assertion on how fast random search can solve the binding problem of ev is to set the population size to some number greater than 439. Since the first generation of ev’s population is generated randomly, a population greater than 439 should on average contain at least one perfect scoring individual, according to M&D. A population of 4390 should contain 10 on average. Since ev’s source code is available on the internet, it would have been a good check for M&D to include in their paper.
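    The expectation behind that check is easy to sketch (this is just the expected-count arithmetic under M&D’s figure, not a run of ev itself; a real test would plug in ev’s actual scoring):

    import random

    def perfect_count(pop_size, p_perfect):
        # Stand-in for "generate a random first generation and score it with ev":
        # each random genome is counted as perfect with probability p_perfect.
        return sum(random.random() < p_perfect for _ in range(pop_size))

    # Marks and Dembski's figure amounts to p_perfect being roughly 1/439, so:
    for n in (439, 4390):
        print(n, n / 439, perfect_count(n, 1 / 439))   # expect about 1 and about 10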

  82. Jonathan Vos Post

    David vun Kannon:
    Yes, I had wandered away from the centroid of the thread.
    True, the evolutionary algorithm runs much faster with cross-over, i.e. sexual reproduction, as I verified in my 1973-1977 doctoral work, and was frustrated that the MIT AI lab did not consider it when I got them to do their own EA work. Don’t know about Dembski, et al, but my parents reproduced sexually.
    How MUCH faster is the question at the core of the evolution of sexual reproduction, and of those species which have both sexual and asexual reproduction (including some vertebrates).
    Classical Population Genetics has plenty of equations about evolutionary rate as a function of population size. Infinite populations and continuous evolution are another matter, with a different set of known results and open problems.
    Dembski et al are either genuinely ignorant of all of the above, or pretending to be, or mildly aware but totally confused. Hard to tell. The Institutional No Free Lunch Theorem says that it is arbitrarily hard to distinguish between malice and incompetence.

  83. SteveF

    With regards to the speed of evolution, people might find the following paper to be of interest:
    Kashtan, N. et al. (2007) Varying environments can speed up evolution. PNAS, 104, 13711-13716.
    Simulations of biological evolution, in which computers are used to evolve systems toward a goal, often require many generations to achieve even simple goals. It is therefore of interest to look for generic ways, compatible with natural conditions, in which evolution in simulations can be speeded. Here, we study the impact of temporally varying goals on the speed of evolution, defined as the number of generations needed for an initially random population to achieve a given goal. Using computer simulations, we find that evolution toward goals that change over time can, in certain cases, dramatically speed up evolution compared with evolution toward a fixed goal. The highest speedup is found under modularly varying goals, in which goals change over time such that each new goal shares some of the subproblems with the previous goal. The speedup increases with the complexity of the goal: the harder the problem, the larger the speedup. Modularly varying goals seem to push populations away from local fitness maxima, and guide them toward evolvable and modular solutions. This study suggests that varying environments might significantly contribute to the speed of natural evolution. In addition, it suggests a way to accelerate optimization algorithms and improve evolutionary approaches in engineering.

  84. secondclass

    David vun Kannon:
    BTW, Marks and Dembski are wrong about how many trials ev took in the runs in Schneider’s paper. They quote a number around 45,000, arrived at by multiplying the population size by the number of generations. Actually the number of trials was half that, since only half the population got replaced in each generation. It doesn’t inspire confidence in the rest of their math.
    My understanding is that the losing half of the population is replaced by copies of the winning half, and then about 3/4 of the population experiences a point mutation. After that, the whole population is evaluated again. If we don’t count repeated evaluations for unmutated organisms, then the correct number of queries should be about 34000. I suspect, though, that Marks and Dembski didn’t exclude repeated queries for unmutated organisms.
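    To spell out the arithmetic (the population size and generation count below are my own illustrative assumptions, chosen only to be consistent with the roughly 45,000 figure that M&D quote):

    # Back-of-the-envelope version of the query counts discussed above.
    pop, gens = 64, 704                     # assumed for illustration

    md_queries = pop * gens                 # population x generations, M&D's counting
    mutated_per_gen = 0.75 * pop            # roughly 3/4 of the organisms get a point mutation
    new_queries = mutated_per_gen * gens    # only mutated organisms need re-evaluation

    print(md_queries)                       # 45056, i.e. about 45,000
    print(new_queries)                      # 33792, i.e. about 34,000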
    An interesting way to test Marks and Dembski’s assertion on how fast random search can solve the binding problem of ev is to set the population size to some number greater than 439. Since the first generation of ev’s population is generated randomly, a population greater than 439 should on average contain at least one perfect scoring individual, according to M&D. A population of 4390 should contain 10 on average. Since ev’s source code is available on the internet, it would have been a good check for M&D to include in their paper.
    Yes, that’s a clever way to quickly check M&D’s results. Out of curiosity, what do you think would happen if we tried this? Do you intuitively think that a population of 4390 would contain 10 perfect organisms? (I’m not looking for a right or wrong answer; I’m just curious to know what your intuition tells you.)

  85. Coin

    Stacey / Vos Post:
    The first book I was thinking of was indeed Quarantine.
    The other was Robert Anton Wilson’s Schrödinger’s Cat trilogy.
    Since Schrödinger’s Cat was written way back in the 1980-1981 period, it might actually be possible to say its use of the idea was somewhat original and creative. However, I can’t say this for sure without knowing when exactly Blake Stacey was in the sixth grade…

  86. Lino D'Ischia

    Jeff:
    I take offense at your comment that I would not accept any refutation of what Dembski has written. In fact, please point any such defects out.
    On the other hand, you seem to imply that unless Dembski publicly backs away from the examples that he didn’t carry over into “NFL” from “The Design Inference”, then somehow that represents a refutation of Dembski. This is an argument from ignorance. You don’t know why he left it out. You’re inferring the rest.
    As to design and CSI, or, specified complexity, here’s a quote from Dembski: “The aim of this book has been to elucidate, make precise, and justify the key empirical marker that reliably signals design, namely, specified complexity.” Well, when phone numbers, or credit card numbers are doled out, doesn’t that process involve design? Doesn’t someone have to sit down, calculate probabilities, and make a decision as to how many numbers are needed to provide a reference class of permutations sufficient to provide enough telephone numbers to area providers in the first place, and to provide enough probability to render the numbers used so improbable as to avoid being generated artificially by outsiders in the case of credit card numbers? Dembski links, in the quote, specified complexity with design. It is in that sense that both the credit card numbers and phone numbers represent specified complexity–but, let’s remember, we’re dealing with two known instances of design. (I suppose this is what you mean by “causal history”) But, when someone invokes CSI, it generally is meant to refer to specified complexity involving 500 bits of information. So, in our everyday world, intelligent agents are everyday generating specified complexity, CSI, if you will. For example, evolutionary algorithms represent CSI being generated. But, if one wants to “infer” design from some physical event found in our world–i.e., without knowing the causal history–then there must be at least 500 bits of information involved before such a design inference is made.
    Bottom line, my opinion is Dembski retracted his statements not because they were wrong, or because he misspoke, but because critics like you can use such feeble distinctions to confuse others.
    You write: “The specification is “all bit strings of a given length with at most one 0”. Yes, wonderful. And that generates a rejection region consisting of 1024 elements (events). But, again, that is one-half of what is needed. On page 144, Dembski writes: “In this case the ordered pair (T,E) constitutes . . . complex specified information. Formally justifying this is straightforward: Since T and E are identical, T clearly subsumes E [Dembski’s dealing with the prime number sequence from the film “Contact”; normally T and E wouldn’t be identical, but instead T would just simply contain (subsume) E]; as a sequence of prime numbers, T is readily seen to be detachable from E [which is just a 1000 long bit string] and therefore constitutes a specification …. finally, by having probability 1 in 2^1000 or approximately 1 in 10^300 and is therefore complex….” T, here, is the extremal set of the function f coinciding with the rejection region. Okay, so we have T–i.e., “all bit strings of a given length with at most one 0”–so where is E? You haven’t provided E, the “physical event”. You still have provided but one-half of the ordered pair that is needed to properly define specified-complexity.
    You mention the Caputo case. Well, on p. 80, we find: “In step #1, a subject S, here the New Jersey Supreme Court, learns that the following event has occurred:
    DDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDDD.” A little farther down the page we find: “In step #3, the New Jersey Supreme Court identifies a rejection function f that counts the number of Democrats given the top ballot line in any sequence in Ω. This function induces a rejection region R that is an extremal set of the form Tγ = {ω ∈ Ω | f(ω) ≥ γ} where γ= 40. In other words R = T40 = {ω ∈ Ω | f(ω) ≥ 40}. The rejection region R includes E.”
    T and E are not the same thing. You provide the rejection region; as in the Caputo case, it is a simple counting function. But where is E? You know, the “DDDDDDDDDDDDDDDDDDDDRDDDDDDDDDDDDDDDDDDD”?
    As to your contention that there is nothing in Dembski’s works saying that the rejection region has to be of low probability, well, how about this:
    “The ordered pair (T,E) now constitutes specified information provided that the event E is included in the event T and provided that T can be identified independently of E . . . Moreover, if T also has high complexity (or correspondingly small probability) . . . , then (T,E) constitutes complex-specified information, or CSI.” (p. 142)
    Are you prepared to say that you’ve misunderstood Dembski?
    Regarding SAI, I asked you to determine whether bit string A or B was designed. You said SAI couldn’t do that. I asked then that you determine which one had more information. You respond by saying SAI is “about distinguishing strings that would likely arise from the flipping of a fair coin, versus those that probably don’t.” Then you add that “Kolmogorov complexity is not a computable quantity; it can only be approximated”. Contrary to what you assert, I did in fact read the section carefully. And, indeed, I noticed that you said it was only approximate. And, of course, my immediate thought was, “Well why bother defining SAI?” One of the two bit strings is, in fact, the product of a series of coin flips, the other is not. I suppose that your response about “approximate” means that you can’t distinguish between the two using normal compression programs. Well, Dembski’s ideas are about detecting design–which SAI can’t do–while SAI has something to do with determining the difference between randomly generated bit strings, i.e., those arising from coin flips, and those that aren’t. But then you say that SAI is only approximate. You wrote: “the purpose of SAI is simply to show how, if one wanted, one could put Dembski’s ideas on a theoretically sound footing.” Have you really done that?

  87. Lino D'Ischia

    Mark:
    I asked you if the experiment Dembski describes violates the math of quantum mechanics. You haven’t answered this question yet.

  88. Coin

    I asked you if the experiment Dembski describes violates the math of quantum mechanics. You haven’t answered this question yet.
    In comment #84, MarkCC wrote:

    Probably, yes.
    But since you’re not doing the math to show how that text is allegedly being generated in terms of quantum phenomena, I can’t answer the question.

    Since you’re apparently having trouble understanding what this means: The answer appears to be yes, this would violate the math; but you’ve described the question much too vaguely for anyone to give anything but a vague answer. If you would actually explain what it is that is happening in your hypothetical “experiment”, then maybe someone can answer as to whether it’s possible or not.
    (In the meanwhile, I think part of the problem here is that you don’t seem to understand what it means for a theory to make “predictions” in a probabilistic setting. You seem to think that just because quantum physics expects the behavior of the photon-code system described to be stochastic, that quantum physics is effectively saying “ANYTHING can happen”; that is, you’re thinking that because the math of quantum physics assigns a nonzero probability to the idea of a certain thing happening, then that thing happening would be compatible with the math of quantum physics. You’re missing or ignoring that even though quantum physics admits an absurdly wide range of things as possible with nonzero probability, it still specifies different probabilities to different things; those probabilities represent a prediction, and we can test that prediction, for example by performing a test multiple times and comparing the frequencies of different kinds of events against their computed probabilities. You’re also ignoring that some events may have computed probabilities which are vanishingly small, and although this isn’t the same thing as “impossible” this equates to a prediction that this event will not occur on any particular single trial. (Which if I’m understanding your perspective right makes your approach actually kind of ironic for a Dembski supporter, since so much of what Dembski tries to say is bound up in conflating “improbable” with “impossible”!) You seem to be trying to maneuver someone into admitting that there is a nonzero probability of a random number generator outputting a binary string describing a cure for cancer, taking this as an admission of “possibility”, and then moving forward from there into some sort of “a-HA! so then…” statement.)
    On the other hand, you seem to imply that unless Dembski publicly backs away from the examples that he didn’t carry over into “NFL” from “The Design Inference”, then somehow that represents a refutation of Dembski.
    What we’re saying is that unless Dembski comes out and explains what the hell it is he’s trying to say, one way or the other, then no one of his contradictory statements can be anything but meaningless. In order to say whether anything Dembski proposes is right, wrong, possible, or impossible, we first need him to specifically tell us what it is he’s proposing. You know– the exact same thing you’re refusing to do here; when you’re demanding the people here take a position on whether “the experiment Dembski describes violates the math of quantum mechanics”, while simultaneously refusing to describe the experiment in specific enough terms to tell whether the experiment violates any math, or can be described in terms of math at all.

  89. Jeffrey Shallit

    Lino:
    You don’t seem to be following the argument. You say You haven’t provided E, the “physical event”. But I have. The event is the receipt of the bit string we are discussing. If your complaint is the completely trivial one that I did not say what precise physical event this corresponds to, then let it be a signal received from outer space, in exactly the same way that Dembski discusses the supposed signal involving prime numbers. Or let it be a succession of observations of temperatures in Tucson, Arizona, with “1” representing “temperature above 80 degrees F” and “0” representing otherwise. The point is, it doesn’t matter, as long as some physical event corresponds to the string, what the physical event is. I’m sorry, I regard objections like these as completely stupid.
    Your quote from Dembski demonstrates exactly the point I have been making about separating the notion of specification from the notion of low probability. According to Dembski, the definition of specification itself says nothing about probability. It is only when one wants to go to “CSI” or inferring design that considerations of probability become important. I find it truly remarkable that you can quote this passage from Dembski, which demonstrates exactly what I have been saying all along, as evidence for your mistaken view. Remember? You were the one who claimed something I wrote was not a specification because it wasn’t low enough probability. Are you now prepared to admit your error?
    Finally, you ask what SAI is good for. Well, once again, but more wearily this time, I suggest you read the paper of Li and Vitanyi on the universal distribution. You can lead a horse to water, but ….
    For your strings, the only compression routine I have easy access to, Unix’s compress, provides the same compression for A and B, so the approximation to SAI for both strings is the same. Based on the asymmetric distribution of 0’s and 1’s, I suspect neither string arose from flipping a fair coin. A’s low count of 0’s bounds only about 1% of all strings; B’s only about 6%. (And somehow I can’t imagine you sitting there flipping a coin 193 times, recording the result.) But please note that SAI is not “information”, it is “anti-information”. Dembski’s notion of information is the opposite of the notion used by every other information theorist.
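    The calculation behind those percentages is just a binomial tail, by the way. A minimal sketch (the zero-count used here is a made-up stand-in, since I am not retyping your strings):

    from math import comb

    def tail_fraction(n, k):
        # Fraction of all n-bit strings having at most k zeros -- equivalently,
        # the chance that n fair coin flips produce so few zeros.
        return sum(comb(n, i) for i in range(k + 1)) / 2 ** n

    print(tail_fraction(193, 80))   # a hypothetical 80 zeros out of 193: about 1% of strings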

  90. Mark C. Chu-Carroll

    Lino:
    Yes, I did answer the question about whether Dembski’s experiment violates the math of quantum physics. The answer is that *I can’t say* whether it violates the math – because it’s too vague.
    It goes back to one of my favorite sayings here on the blog: the worst math is no math.
    You’re trying to defend Dembski’s “unembodied designer” using a scenario that supposedly describes how quantum phenomena could produce a particular output by being tweaked by some non-corporeal entity without violating the laws describing the behavior of quantum phenomena. You’re describing those phenomena in extremely vague, non-mathematical terms – and then asking whether or not your scenario is compatible with the math of quantum physics.
    One answer would be “yes”, simply because the description asserts that it’s compatible. If the phenomenon is described as “Doing X without violating the rules of quantum physics”, then we can shallowly say “Since the description says it doesn’t violate the math of quantum stuff, it doesn’t violate the math of quantum stuff.”
    That’s an answer – but it’s a very unsatisfactory answer. Because the description could be utter nonsense: there might be no such phenomena – no possible way to do X without violating the rules of quantum behavior.
    The thing is – we can’t say with any certainty whether or not X is actually possible under the math of quantum phenomena. Because X is ill-defined.
    Quantum phenomena are impossible to describe in an accurate way using informal prose. They are intuition defying, nigh-incomprehensible things. Subtle differences between different cases can have dramatic impacts. The only way to really describe quantum phenomena is to drop into the language of math, and describe things precisely in terms of the mathematical structures that seem to govern the phenomena.
    You’re not giving us a description of your scenario in terms of the math. And as long as you don’t do that, there’s no way to answer your question in a genuine way.
    To return to the metaphor: if I give you the equation “x²+y=z” where y and z are specific fixed values, and tell you that there are real values of x for which this is true, can you tell me if I’m right or wrong? Are there real values of x which satisfy that equation?
    Obviously not. If y=2 and z=1, then the equation reduces to x²=-1 – the canonical example of a non-real solution. If y=1 and z=2, then x=1 is clearly a real solution. If I don’t tell you anything about the values of y and z, then you can’t say.
    This is a very typical example of one of Dembski’s standard ploys. He throws around a lot of mathematical terminology that looks really impressive, but carefully leaves it underdefined. That way, when someone refutes his argument, he can just hand-wave his way past it, by saying “That’s not what I meant”.
    Dembski has never precisely defined specified complexity. It’s remarkable when you look over his books and papers and various writings and lectures: he’s absolutely scrupulous about leaving holes in the definition. And when someone – like Shallit – refutes his nonsense, he pulls exactly the scam that I mentioned above. Any refutation of his CSI-based gibberish is met by “But that’s not *really* CSI.”, supported by the wiggle-room of the imprecise definition.
    Just go look at your responses in this thread. They’re typical of what Dembski would do. Notice the constant quibbling about definitions. No matter how anyone criticizes Dembski’s work, it’s always met by “No, that’s not what the definition of CSI says”.

  91. David vun Kannon

    secondclass in #91,
    I think you are correct. I assumed that only the new half of the population was mutated, but rereading the paper I agree with you.
    My intuition is that Marks and Dembski are close to correct (maybe off by an order of magnitude). A very small population solves the problem easily. A larger population would solve it very fast, perhaps immediately.
    ev is not a great test of “what can EAs do?”, it’s closer to “can we simulate the evolution of transcription without doing full-on molecular modeling”. So the result should be read as “evolution of transcription is easy (modulo not trying to simulate how the molecules go bump)”, not as “EAs suck, therefore special creation is necessary to explain life”, which is the subtext put on it by M&D.
    Viewed as an issue in allocation of trials, if you have an optimisation problem L bits in size, the first O(L) trials should be random (this makes NFL bigots happy) and then you can sit back and think about what to do next. EAs with O(L) population size do this. David Goldberg’s wonderful book The Design of Innovation has sizing results such as this.
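    A toy illustration of that allocation (OneMax is just a stand-in problem; nothing here is specific to ev or to Goldberg’s sizing results):

    import random

    def init_population(L, c=2):
        # An EA with a population of size c*L gets its first O(L) random
        # trials "for free" as the initial generation.
        return [[random.randint(0, 1) for _ in range(L)] for _ in range(c * L)]

    def onemax(bits):   # stand-in fitness: count the 1s
        return sum(bits)

    L = 20
    pop = init_population(L)
    print(len(pop), max(onemax(ind) for ind in pop))   # 2*L random trials, best score so far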
    Since the cost of a trial will eventually dominate other costs (for interesting/hard problems) minimising the number of trials is important. If L > 10^9, what to do? IlliGAL has an interesting blog entry on the subject at http://www.illigal.uiuc.edu/web/blog/2007/07/11/billion-bit-paper-wins-best-paper-award-in-eda-track-at-gecco-2007/.

  92. secondclass

    Lino:
    As I just mentioned, in Dembski’s technical definition of CSI, a “specification” is a true “specification” when the pattern that is identified by the intelligent agent induces a rejection region such that, including replicational resources and specificational resources, the improbability of the conceptual event that coincides with the physical event is less probable than 1 in 10^150.
    This is incorrect. You either take into account the actual probabilistic resources or you use the number 10^150, not both.
    You cannot have a “specification” a la Dembski if you don’t describe, mathematically, some rejection region.
    So much for Dembski’s flagellum analysis.
    On the other hand, you seem to imply that unless Dembski publicly backs away from the examples that he didn’t carry over into “NFL” from “The Design Inference”, then somehow that represents a refutation of Dembski. This is an argument from ignorance. You don’t know why he left it out. You’re inferring the rest.
    First of all, it was not in TDI that Dembski said that phone numbers and Visa card numbers are CSI. TDI was pre-CSI.
    Second, most of TDI was not carried over into NFL. It would seem that, by your logic, Dembski is not accountable for most of TDI.
    But, if one wants to “infer” design from some physical event found in our world–i.e., without knowing the causal history–then there must be at least 500 bits of information involved before such a design inference is made.
    Interesting, then, that Dembski has no problem inferring design from the Caputo sequence, which falls far short of 500 bits.
    Bottom line, my opinion is Dembski retracted his statements not because they were wrong, or because he misspoke, but because critics like you can use such feeble distinctions to confuse others.
    Where did Dembski retract his statements?

  93. Lino D'Ischia

    Jeff:
    I’m sorry, I thought you were bright enough to realize that Step #1, in which the event E occurs, comes before Step #3, in which the rejection function f and rejection region R are formulated. Why do you insist on doing things backwards? It is E, and the circumstances surrounding the production of E, that induces the rejection function and region. You suggest that to ask for an event is somehow “trivial”. Really? Let’s look at the two examples you blithely throw out: (1) “the signal received from outer space, in exactly the same way that Dembski discusses the supposed signal involving prime numbers”, and (2) “a succession of observations of temperatures in Tucson, Arizona, with “1” representing “temperature above 80 degrees F” and “0” representing otherwise”. Well, how do these “events” correspond to the putative rejection region you’ve provided–that is, a 10 bit binary string, and a “pseudo-unary” string of length 1025 ≤ l ≤ 2048?
    For case (1): In the “pseudo-unary” output, we certainly have the bit string length to accommodate the 1000 0s and 1s that make up the “signal involving prime numbers”, BUT, the output bit strings are restricted to strings having but one 0, and having that at the end of the string. Obviously, none of the 1024 output bit strings correspond to Dembski’s example since there isn’t enough room for all the 0s in the “event”. If we now consider the “input” bit strings, although binary, and thus allowing all the 0s we need, the bit string length is limited to 10, and we need 1000.
    For case (2): In the “pseudo-unary” output, if applied to Arizona weather, according to the circumstances involved in the event E, considering that the “output” bit strings are strings of all 1s ending with a 0, that means for l = 1025, the temperature in Tucson, Arizona would have to have been over 80 degrees F for almost three years consecutively. Not very likely, eh? For the “input” bit string, since it permits both 0s and 1s, this means a very reasonable fluctuation in temperatures; but, for only 10 days, which would mean that the event E is not to be found in the “output” bit strings. It’s been lost.
    In both cases, something is terribly wrong. Why? It’s because the function you’ve been using is for integers, and for nothing else. Now if the event E was an integer between 0 and 1024, say 237, then this could be identified in both the input and output bit strings (the input would simply be the binary equivalent of said integer, and in the output, said integer would be determined by counting the number of 1s, and subtracting 1024 from that).
    With the event E=237, we would then see the probability of this event as being 1 in 1024 for both the input and output strings, meaning that its improbability, hence complexity, would be the same in both input and output strings, hence negating your contention that this “pseudo-unary” function can generate CSI. Remember, Jeff, that’s what we were arguing about.
    Going on, when you say that “the definition of specification itself says nothing about probability” this seems wrong in the extreme. When you look in the index of “NFL” for “specification-definition”, you’re referred to page 63. Here’s the quote: “Given a reference class of possibilities Ω, a chance hypothesis H, a probability measure induced by H and defined on Ω (i.e., P(·|H)), and an event/sample E from Ω; . . . Any rejection region R of the form Tγ = {ω ∈ Ω | f(ω) ≥ γ} or Tδ = {ω ∈ Ω | f(ω) ≤ δ} is then said to be detachable [independent] from E as well. Furthermore, R is then called a specification of E, and E is said to be specified.” The Ts that are used refer to “extremal sets”, and the function f in them is a probability density function. So how can you say that a specification says nothing about probability when a probability density function is included in its very definition?
    As to the strings and SAI, believe me, one of the two bit strings I included involves 193 coin-flips (I’m a persistent cuss’). One was designed. SAI can’t tell us which one is random, which has information, which is designed. Can’t we just put it to sleep? I’m sure it’s of some utility for computer science, but why mention it in the same breath as CSI? They’re entirely separate critters.

  94. Anonymous

    second class:
    This is incorrect. You either take into account the actual probabilistic resources or you use the number 10^150, not both.
    Without checking (which is what I should have done the first time), I believe you’re right.
    So much for Dembski’s flagellum analysis.
    Dembski states that biological function is specified, not that it is a “specification”, which has mathematical overtones.
    First of all, it was not in TDI that Dembski said that phone numbers and Visa card numbers are CSI. TDI was pre-CSI.
    I misspoke (mis-wrote?): it should have been Intelligent Design.
    Interesting, then, that Dembski has no problem inferring design from the Caputo sequence, which falls far short of 500 bits.
    But Dembski didn’t infer it was designed, the New Jersey Supreme Court did.
    Where did Dembski retract his statements?
    Poor choice of a word. I should have written that Dembski ‘deleted’ the portion dealing with Visa card numbers, etc.

  95. Tyler DiPietro

    Lino,
    Your latest post clearly demonstrates that Professor Shallit is right, you’re not following the argument.
    As for specification, your latest quote from Dembski doesn’t demonstrate anything aside from what Professor Shallit said. There is nothing in the quote to indicate that specification refers to a set of events with low probability. The “specification” of the event can refer to a region of high probability, but in that case you don’t have enough complexity to infer design. Hell, Shallit outlined this very argument a few posts above. Here is the quote.

    “You have provided no evidence for your claim that a specification necessarily corresponds to identifying a subset S of low probability. There is nothing in Dembski’s works that says this, and you have provided no quote to justify it. To identify something as CSI, you need two things: a specification, and the probability induced by that specification that is sufficiently low. These things are independent. As I read Dembski, you can have a specification that gives a region with relatively high probability, in which case you don’t get enough CSI to infer design. But this doesn’t mean it’s not a specification; it’s just not a good enough specification to infer design (pace Dembski). I contend you continue to misunderstand this distinction.”

    As for your objections to the examples, they come off as mostly gibberish. I can only gather that you are ignoring the difference between causal history interpretations of the probability and the so-called universal probability bound interpretation, which is Dembski’s much more grandiose (and absurd) claim.
    I’ll only mention in passing that it takes a special kind of arrogance to tell a computer scientist about the subjects he studies as if he’s completely unfamiliar with them, especially one of Prof. Shallit’s stature and research pedigree.

  96. Lino D'Ischia

    Mark CC:
    As to photons: look, if you split the photon stream, the math requires that half of the photons go up, and half go down. But that doesn’t mean that if we looked at the detectors we’d see 101010101010101 both top and bottom; but if a bit string were produced that had 50% 0s and 50% 1s in both the top and bottom detectors, and still coded the cure for cancer, again, what quantum math is overturned? If, on the other hand, the photons ended up with 30% 1s in the top and 70% 1s in the bottom, but 50% 0s in the top and 50% 0s in the bottom, obviously something is wrong. What more can be said? It’s hypothetical.
    Dembski has never precisely defined specified complexity. It’s remarkable when you look over his books and papers and various writings and lectures: he’s absolutely scrupulous about leaving holes in the definition. And when someone – like Shallit – refutes his nonsense, he pulls exactly the scam that I mentioned above. Any refutation of his CSI-based gibberish is met by “But that’s not *really* CSI.”, supported by the wiggle-room of the imprecise definition.
    I would agree that there are a lot of subtle nuances that are hard to keep track of at times. But what I find is that the more I read (and re-read), the book, the overall exposition seems to get more solid.
    As to CSI and specified complexity, here’s a thought. On pages 72-73 Dembski is laying out his “generic chance elimination argument”. It is “generic”, and he uses simply α as the significance level cutoff (or something like that). He concludes with:
    “S [the subject] is warranted in inferring that E did not occur according to any of the chance hypotheses in {H_i}_{i∈I} and therefore that E exhibits specified complexity.”
    So CSI allows you to eliminate every single chance hypothesis that anyone could possibly come up with (this is purchased by the 500 bits of complexity), but with lesser amounts of complexity one can still eliminate a particular, or a set of particular, chance hypothesis/es.
    second class:
    This just came to mind: Dembski uses the 500 bits of complexity to infer design, yes, but then the question becomes, What permits him to infer design? The answer to that is that with the 500 bits of complexity, he has “eliminated” ALL chance hypotheses; whereas, in the Caputo case, there was only ONE chance hypothesis; namely, that the string of 40 Ds and 1R was (more or less) the result of flipping a coin with D on the top and R on the bottom. He then calculates the number of replicational resources, etc, and does the Nα

  97. Lino D'Ischia

    Tyler:
    Do you have anything to say about the fact that in trying to use the two examples Jeff gave as specifications neither worked; yet when I used an integer one could find the event in both the input and output strings with no problems? Is that just gibberish?
    Hopefully, I’m not arrogant. But I am a persistent cuss’.

  98. Tyler DiPietro

    Lino,
    The problem is that you’ve once again misunderstood the argument. Where did he say, for instance, that the signal from outer-space involved prime numbers? What he was talking about was the receipt of the output strings from outer-space. And you’ve once again missed the point that attaching it to a specific physical event was trivial. Dembski claims that he has a universal probability bound that can infer design from any physical event with sufficient CSI, which Shallit’s example demonstrably has, given Dembski’s universal probability interpretation. The point is that this demonstrates that Dembski’s “CSI” is meaningless, pseudomathematical pablum.
    Your integer example only proves one thing: you can use an input to generate an output (if you write the integer in binary, and it constitutes at most ten bits). You once again completely ignore the fact that the output of the encoding is embedded within a larger class of possibilities, within which it constitutes CSI.
    It’s also worth mentioning that you completely bungled the decoding function he defined in the paper (very briefly: you count the 1’s in the pseudo-unary string, write the result in binary, and delete the first one. You don’t subtract the binary value from anything).
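    Just so we are all looking at the same function, here is a quick sketch of the encoding and decoding as I read them from the quotes in this thread (my own reconstruction, not code from the paper):

    def encode(bits):
        # Prepend a 1, read the result as a binary number n, and output
        # n ones followed by a single terminating 0.
        n = int('1' + bits, 2)
        return '1' * n + '0'

    def decode(pseudo):
        # Count the 1s, write the count in binary, and delete the leading 1.
        return bin(pseudo.count('1'))[3:]   # bin() gives '0b1...', so strip '0b1'

    print(encode('01'))                                   # 111110
    print(decode(encode('01')))                           # 01
    print(len(encode('0' * 10)), len(encode('1' * 10)))   # 1025 and 2048

    Every 10-bit input maps to a run of between 1025 and 2048 ones followed by a single 0, which is exactly the range under discussion.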
    As for being persistent: yes, you are being persistently ignorant and arrogant.

  99. Lino D'Ischia

    Tyler:
    I’m sure the good professor will have an answer. I’ll wait for his. In the meantime, let me just point out that in saying the following, you’ve completely missed the point of all this:
    “It’s also worth mentioning that you completely bungled the decoding function he defined in the paper (very briefly: you count the 1’s in the pseudo-unary string, write the result in binary, and delete the first one. You don’t subtract the binary value from anything)”
    What you’re describing (from memory, without going back and making sure) is how you reverse the function, that is, using the output string to get back to the original input string. What I was demonstrating was–in contrast to the problems associated with whimsically applying any string of data to any defined region within a reference class–how there’s no difficulty in using this rejection region if an event is properly chosen for it. The ‘subtraction’ had to do with ‘identifying’ the chosen integer within the output string. It’s there. I simply gave the formula for extracting it. All this points out that the rejection region must be determined with the event already in hand.
    And please note that the problem with Dembski’s string had nothing to do with the prime numbers it contained, but with the fact that it contained 0s other than at the end of the string.
    These problems arise because Jeff chose a function at random without taking into account any kind of event. Now in the case of Arizona weather, why would you even attempt to use Dembski’s methodology on something that is obviously not designed? The Explanatory Filter simply eliminates events like that. In the case of Dembski’s Contact bit string, if you look at it and say, “Well this looks like a binary string, so I’ll use binary numbers to represent it”, that’s fine. And Jeff defines the rejection region in terms of a binary string. No problem. Except, as I pointed out originally, if you use the “pseudo-unary” function to transform a 1000 long binary bit string, the output would be almost infinitely long (10^300). Jeff conveniently chose a bit string length of 10 for the input. But what if he had chosen 100? Then the output would contain 10^30 1s with a 0 at the end. As I say, he conveniently chose it to be 10 bits long to cover up this problem. (But, of course, as I originally pointed out, no CSI was produced in any event!)
    I hope I don’t sound arrogant pointing this all out.

  100. Lino D'Ischia

    secondclass: (who’s really first class)
    The last part of my last sentence kept getting cut-off. I thought I had fixed it before posting. It should read: “He then calculates the number of replicational resources, etc, and does the Nα ≤ 1/2 calculation.”
    [I hope this works. It’s still acting funny.]

  101. Stephen Wells

    When you say that the Arizona weather is “obviously not designed”, could you specify how exactly you determine that?

  102. secondclass

    Lino:
    Dembski states that biological function is specified, not that it is a “specification”, which has mathematical overtones.
    If “specified” doesn’t entail a “specification” in Dembski’s terminology, then clarity certainly isn’t his forte. But I think Dembski would be surprised to learn that his flagellum analysis didn’t include a specification.
    But Dembski didn’t infer it was designed, the New Jersey Supreme Court did.
    The Caputo case is Dembski’s poster child for design inferences. His application of the GCEA to the Caputo case in NFL is the closest he’s ever come to applying his full argument, and he definitely comes to the conclusion of design when he applies it.
    but if a bit string were produced that had 50% 0s and 50% 1s in both the top and bottom detectors, and still coded the cure for cancer, again, what quantum math is overturned?
    Randomness implies more than just a 50/50 split for the frequency of 0’s and 1’s. In particular, all substrings of any given size should be uniformly distributed also.
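    A quick way to see the difference (the text and the block size here are arbitrary, purely for illustration):

    import random
    from collections import Counter

    def block_frequencies(bits, k=3):
        # Frequencies of the non-overlapping k-bit blocks in a bit sequence.
        blocks = [tuple(bits[i:i + k]) for i in range(0, len(bits) - k + 1, k)]
        counts = Counter(blocks)
        total = sum(counts.values())
        return {b: c / total for b, c in counts.items()}

    text_bits = [int(b) for ch in "the cure for cancer is ..." * 20
                 for b in format(ord(ch), '08b')]
    coin_bits = [random.randint(0, 1) for _ in range(len(text_bits))]

    # A genuine 50/50 source gives every 3-bit block a frequency near 1/8 = 0.125;
    # ASCII English is visibly lopsided.
    print(max(block_frequencies(text_bits).values()))
    print(max(block_frequencies(coin_bits).values()))

    The particular test doesn’t matter; the point is that “looks 50/50” is a much weaker condition than “statistically indistinguishable from a fair quantum coin.”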
    I would agree that there are a lot of subtle nuances that are hard to keep track of at times.
    I don’t think that the problem is subtlety. I agree with Mark that Dembski has never resolved the ambiguities in his approach, and even worse, he isn’t always consistent with himself.
    The answer to that is that with the 500 bits of complexity, he has “eliminated” ALL chance hypotheses; whereas, in the Caputo case, there was only ONE chance hypothesis; namely, that the string of 40 Ds and 1R was (more or less) the result of flipping a coin with D on the top and R on the bottom.
    Actually, this isn’t the case. 500 bits simply means that P(T|H) is 2^-500, so it applies only to the chance hypothesis H. A higher bit count does not mean that the probability applies to more chance hypotheses.
    secondclass: (who’s really first class)
    Flattery will get you everywhere. Everyone please go easy on my friend Lino.

  103. Mark C. Chu-Carroll

    Lino:
    You’ve done an excellent job of making my point for me on two counts.
    First: your description of your supposed scenario for an unembodied intelligence sneaking information into a quantum process is not consistent with the math of quantum phenomena. As I keep saying, you can’t describe quantum effects using prose – you really need to get into the math. The quantum probability distribution properties do not just mean that considered as binary, there’ll be an equal number of 1s and 0s. It says a whole lot more than that. By trying to keep playing with it on the level of informal prose, you’re making exactly the kind of mistakes that I predicted.
    Second: you’ve provided a perfect example of the Dembski shuffle. As a reminder, that’s my term for Dembski’s little game where he provides an incomplete definition of something, and then claims to refute all criticisms of it by saying that they didn’t understand the definition. How much more perfect could it get than saying that one of Dembski’s own examples of CSI isn’t valid, and we’re just all confused because we don’t understand that when Dembski says that something is specified, that doesn’t mean that he’s saying it has a specification. We’re all just confused about the distinction between things that are specified, and things that have a specification – a distinction that Dembski never makes, which makes no sense, and which flies in the face of all of Dembski’s arguments. But he’d probably approve: it gets him off the hook for one of his screwups.

  104. Tyler DiPietro

    Lino, you are demonstrating the old maxim that “a little knowledge is a dangerous thing”. You’ve obviously failed to comprehend even the most basic points of the argument, yet are completely convinced that you have.
    Look closely at your integer example: you are only demonstrating that you can take an input and generate an output with a defined function. You are taking a decimal integer and converting it to a binary string. If you wanted to provide an analogue to Shallit’s example, the output would be embedded within the set of all possible binary strings of length between 1 and 11 (of which your outputs are a subset). Shallit’s example is embedded within all possible strings of length between 1025 and 2048, making his a better example of how functions generate CSI.
    You have also missed the point of Shallit’s function altogether. The reason why he chose 10 bit binary strings is because it provides a perfect example of how the image of the function can fail to possess CSI while its target can. This is something contrary to Dembski’s claim that a function cannot generate CSI.
    As for Dembski’s explanatory filter, the problem is that it demonstrably doesn’t have the capabilities you claim for it. The explanatory filter is essentially an argument from ignorance dressed in some unnecessarily turgid mathematical notation. Dembski has never demonstrated the power of his explanatory filter by applying it to an empirical situation. His claims that he can infer design through some relatively simple mathematics are absurd.

  105. Tyler DiPietro

    “The reason why he chose 10 bit binary strings is because it provides a perfect example of how the image of the function can fail to possess CSI while its target can.”
    This should read “the image of its domain possesses CSI while its domain does not.”

  106. Lino D'Ischia

    second class:
    If “specified” doesn’t entail a “specification” in Dembski’s terminology, then clarity certainly isn’t his forte. But I think Dembski would be surprised to learn that his flagellum analysis didn’t include a specification.
    The function of the bacterial flagellum “specifies” the flagellum, but is not the “specification”. Yes, this is true. However, in Section 5.10 that function is given its mathematical underpinnings. Information has to be both “specified” and “complex”. His mathematics in 5.10 fleshes out the complexity of the flagellum. Another condition for a specification is “detachability”, or, roughly, the independence of the event E from the rejection function that it induces. So when Dembski says that biological functions are “specified”, my understanding of what he is saying is that since biological functions are “independent” of anything we could artificially impose on them, then there is zero likelihood that we would come up with them on our own. Rather, we look at biological systems, and find functions that are defined within the systems themselves.
    The Caputo case is Dembski’s poster child for design inferences. His application of the GCEA to the Caputo case in NFL is the closest he’s ever come to applying his full argument, and he definitely comes to the conclusion of design when he applies it.
    I’m glad you mentioned the GCEA. Dembski finally says that if you calculate all of the probabilistic resources available, multiply that times the significance level of the extremal sets defined by the event and the rejection function it induces, and if this is less than 1/2, then we can infer design. This can be done in the Caputo case because we know its “causal history”. We know that Caputo told the court that he chose the D and R blindly from an urn (or a system equivalent to coin-tossing). That is “THE” “chance hypothesis”. There is no other. When you do the calculation (and Dembski does do it in the book), then N (the probabilistic resources) times α is less than 1/2. The only chance hypothesis has been shown to be less than 1/2; hence, chance is ruled out. (Actually, in the Caputo case it is well below 1/2.) Design is left as the only other possible inference.
    Randomness implies more than just a 50/50 split for the frequency of 0’s and 1’s. In particular, all substrings of any given size should be uniformly distributed also.
    But if it were uniformly distributed, then it wouldn’t be the cure for cancer. This violates the mathematics for random events (e.g., coin-tossing), but it doesn’t violate QM expectations.
    I don’t think that the problem is subtlety. I agree with Mark that Dembski has never resolved the ambiguities in his approach, and even worse, he isn’t always consistent with himself.
    Well, Gary Player, the famous golfer, once said, “The more I practice, the luckier I get.” Reading Dembski over and over makes what he says more and more clear. But, of course, I am very sympathetic to what he writes. For someone who is not so sympathetic, I shouldn’t be surprised that they don’t read it in its entirety, let alone re-read it.
    Actually, this isn’t the case. 500 bits simply means that P(T|H) is 2^-500, so it applies only to the chance hypothesis H. A higher bit count does not mean that the probability applies to more chance hypotheses.
    That’s where α comes in. That’s the significance level, and in the case of CSI it’s 2^-500. But then the “specificational/probabilistic resources” come in as well. The idea is that if our “background knowledge”, K, were truly great, then there would very likely be more than just ONE chance hypothesis that could explain the event E and develop a rejection region that includes E. As a result, all the elements of those rejection regions have to be added together, and so enter into the probability calculation. So if you look at the top of p. 77, you’ll find that Dembski addresses this. He writes: “Actually, this is one place where we have to be careful about including the entire set of relevant chance hypotheses {H_i}_{i∈I}” It’s too tedious to include all the HTML tags that are needed for the next few sentences, but if you read them I think you’ll agree with my understanding here. In the end, Nα < 1/2.
    Flattery will get you everywhere. Everyone please go easy on my friend Lino.
    LOL.

  107. Mark C. Chu-Carroll

    Randomness implies more than just a 50/50 split for the frequency of 0’s
    and 1’s. In particular, all substrings of any given size should be
    uniformly distributed also.

    But if it were uniformly distributed, then it wouldn’t be the cure for cancer.

    That is exactly the point. You’re proposing a scenario, based on Dembski’s quantum babble, which isn’t
    possible. If you express it in terms of the math of quantum behavior, you’d see that it just doesn’t work. You can’t meddle with quantum phenomena without any energy expense, and produce a printout of a cure for cancer.

    This violates the mathematics for random events (e.g.,
    coin-tossing), but it doesn’t violate QM expectations.

    Yes, it does. And that’s exactly why you continually refuse to put your and Dembski’s argument about “disembodied designers influencing things via non-energetic alteration of quantum states” into anything resembling a mathematical frame.
    Because you can’t put it into the math of quantum physics, and show how it doesn’t violate any of the properties of quantum behavior, and doesn’t alter the energy state of the system.
    It’s all just the Dembski shuffle: keep the argument nice and vague; use math for handwaving value, but *never* actually do the math for a real example, never do the math to support an argument.
    Tell me Lino, why do you think that, after the years Dembski has spent doing this stuff, he’s still never shown a complete computation of the CSI in any of his non-contrived examples?

    Reply
  108. Lino D'Ischia

    Tyler DiPietro:
    You’re entitled to your opinion about Dembski and his Explanatory Filter. I don’t accept them.
    You wrote:
    If you wanted to provide an analogue to Shallit’s example, the output would be embedded within the set of all possible binary strings of length between 1 and 11 (of which your outputs are a subset). Shallit’s example is embedded within all possible strings of length between 1025 and 2048, making his a better example of how functions generate CSI.
    This has forced me to look more closely at the output range that Shallit specifies. Here’s a 10-bit string: 0100000000. This has to be one of the 1024 elements of the domain of the function. But, in binary form, 0100000000=01. Shallit tells us that the output bit string for 01 is 111110. And 0000000000=0, which would become 10 in the output string. So, unless I’m missing something, the length of the output bit-strings should run from 2≤l≤2049, and the 1024 elements corresponding to the original region would be defined by 2n+1 1s, with a zero added at the end (where n=the integer we select). So, for my example above, n=237. The output bit string would consist of 475 1s with a zero at the end. There would be 1024 such strings. For any given n, the probability for the domain and range of the function would be 1/1024. The complexity hasn’t changed. Hence CSI hasn’t changed. In the meantime, we’ve detected another error.

    Reply
  109. Lino D'Ischia

    Mark CC:
    “Because you can’t put it into the math of quantum physics, and show how it doesn’t violate any of the properties of quantum behavior, and doesn’t alter the energy state of the system.”
    Random distributions are the province of mathematics. Quantum mechanics is the province of the real world. The cure for cancer upsets what we would expect if we were looking for a random distribution, but it wouldn’t upset the real world (quantum world) at all.
    There really isn’t any further we can go with this, is there?

    Reply
  110. Mark C. Chu-Carroll

    Lino:
    That’s a typically Dembskian weasel move.
    *You* proposed that scenario, as an example of how Dembski’s math demonstrates the way that an unembodied designer can influence the world.
    It’s a totally unreal scenario – totally made up by you to show how brilliantly Dembski had used quantum mechanics to explain this.
    You’ve repeatedly insisted that your scenario works in terms of quantum phenomena.
    But after all these rounds of discussion, when it’s totally obvious that your scenario is wrong, and that your claims that it’s consistent with quantum physics are just nonsense – then, suddenly, math is no longer relevant.
    As I keep saying: we’ve got beautiful math that describes quantum phenomena to an astonishingly precise degree. These phenomena have been verified by experiment, and the math has been as close to perfect as we can measure. We *can’t* understand what happens at a quantum level, *except* via the mathematical framework of quantum theory.
    You didn’t dispute that all the way through this discussion. In fact, *you* are the one who proposed this scenario as an example of something that was consistent with our understanding of quantum physics.
    You’re wrong. It’s that simple. You just don’t have the integrity to admit it. So you weasel out, and say “If we observed this, then you’d have to admit that it happened”.
    Yeah, if we observed it happen, I’d have to admit that it happened. But if it did – it wouldn’t be consistent with our understanding of quantum physics – which was your original claim. If that were to happen, it would throw our entire understanding of quantum phenomena right out the window. It would be one of the most important and remarkable scientific observations of all time.
    But it still wouldn’t salvage your argument. Because what you argued was that your scenario was compatible with the
    mechanics of quantum physics without any alteration of energy states. It’s not.

    Reply
  111. Lino D'Ischia

    Mark CC:
    “But after all these rounds of discussion, when it’s totally obvious that your scenario is wrong, and that your claims that it’s consistent with quantum physics are just nonsense – then, suddenly, math is no longer relevant.”
    This is a silly statement. You’re not going to equate “Statistics” with “Quantum Mechanics”, are you? They’re independent. The axioms of one are not the axioms of the other. And this is exactly what provides the room needed for the scenario Dembski entertains. This has been clear in my mind from the first time I read Dembski’s example.
    Have you ever studied QM? The statistics involved with QM have to do with the definition of the inner product as defined over Hilbert Space. Uniform distributions are not required at all. Measurements collapse the wave equation by definition (it’s one of the “axioms of QM”). What you measure needs, therefore, to match up with what theory predicts. So, when you’re dealing with the spin of photons, they’re spin 1/2 as are all bosons, and that simply means that if you shoot a stream of photons through some kind of splitter and measure them (collapse the wave-function), that statistically you should get 50% in the ‘up’ detector and 50% in ‘down’ detector. That’s it. The actual distribution matters not. It’s the total count between up and down that matters. QM in no way requires a uniform probability distribution to be found amonst the spin up and spin down photons detected. Bottom line, the statistical nature of QM smears reality. And…..you can sneak in information because of this smearing.

    Reply
  112. Tyler DiPietro

    Lino,
    There are so many basic flaws in your analysis that it is difficult to know where to start. From here:
    “But, in binary form, 0100000000=01.”
    This is untrue. When converting a binary string to decimal you multiply each digit by 2 exponentiated by its place value, and then add the resulting values together. You’ve hacked off the wrong side of zeros, 0100000000 is actually equal to 100000000 (which in decimal is 512).
    You’ve also overlooked the step where you append 1 to the string before calculating its value and generating a string of n 1’s followed by a zero.
    It’s increasingly obvious that you believe you know more about these subjects than you really do.

    Reply
  113. David vun Kannon

    secondclass –
    I just read through the C source code for ev. There is a maximum population size of 128. So M&D could have hacked the code to change that (simple enough) and documented their hack, or they could have run several single generation runs with different random seeds. Either way, the point is that there is an independent way of checking M&D’s claims.
    Anyone out there with a spare Linux machine that wants to try it?

    Reply
  114. Mark C. Chu-Carroll

    Lino:
    Still just doing the Dembski shuffle.
    Still no math. Lots of jargon, but no math. I didn’t say that statistics equals quantum mechanics. I said that your scenario violates the math of quantum mechanics. When it started becoming obvious that you’d screwed up, and your stuff wasn’t going to be able to be wedged into the mathematical description of quantum mechanics, you’re the one who punted and said math doesn’t matter.
    *You* have claimed that an extraordinary scenario is fully compatible with quantum mechanics. It’s your job to demonstrate that your claim is correct – and that’s what you’ve refused to do, despite my asking time and time and time again.
    How about you actually either:
    (A) show the math of what you’re proposing – the math that describes your scenario in terms of quantum phenomena, and shows how your scenario is possible without any change to energy states; or
    (B) admit that you can’t do it, and you’ve been talking out your ass this entire time.

    Reply
  115. Aaron F.

    Does it really make sense to characterize natural selection as a search algorithm? If I drop a marble into a round-bottomed bowl, the marble will eventually come to rest at the bottom of the bowl, but I certainly wouldn’t say that the marble is “searching” for the lowest point! If I dump a bunch of stuff into a lake, some of it will sink and some of it will float, but I wouldn’t say that the lake is “searching” for low-density objects! I know that search algorithms, like simulated annealing, are often inspired by physical processes… but can you learn anything of value by describing physical processes as search algorithms?

    Reply
  116. creeky belly

    So, when you’re dealing with the spin of photons, they’re spin 1/2 as are all bosons, and that simply means that if you shoot a stream of photons through some kind of splitter and measure them (collapse the wave-function), that statistically you should get 50% in the ‘up’ detector and 50% in ‘down’ detector. That’s it. The actual distribution matters not. It’s the total count between up and down that matters. QM in no way requires a uniform probability distribution to be found amonst the spin up and spin down photons detected. Bottom line, the statistical nature of QM smears reality. And…..you can sneak in information because of this smearing.
    WRONG. Photons are spin 1; in fact all bosons are integer spin. Fermions (such as electrons, neutrons, protons) are half integer spin.
    How do you introduce information into a uniformly distributed, stochastic process? You say that uniform distributions aren’t necessary, but that’s all you argue from. 50/50, 50/50, up-down. That is a uniform distribution by definition: each outcome is as likely as the other. The fact that you say this tells me that you really have no idea what Hilbert space really represents. If information is being introduced onto the channels through this process, there are quantum protocols for discovering eavesdroppers.
    You’re also hung up on the concept of stochastic: it means that you can’t know deterministically which outcome you’ll get for a particular initial condition. You could get 1010011110 or you could get 1111111111 if you wait long enough. If you try to alter the state of the beam of light on the way there, however, there’s no way to do it without scattering off matter (or light) and changing the polarization (detectable).

    Reply
  117. Unsympathetic reader

    Lino writes: “The actual distribution matters not.”
    Actually, from what I’ve read, *it does*. If there aren’t non-local, hidden variables in QM then the up/down sequence will display true randomness. *Non-correlation* is the expected behavior and the channel certainly cannot be used for communication (I’m not certain it could be used for communication even if there were hidden variables in QM). There is no decent evidence for these hidden variables and they are problematic “additions” to quantum theory at this point. Dembski’s proposal thus relies on a set of properties that have never been substantiated.*
    While one can always “rescue” a pet theory with the addition of more variables, there is a problem of justification. At some point, as Elliott Sober notes, a ‘crufty’ theory is perceived to collapse under the weight of the ad hoc additions.
    *Aside: Nonetheless, this notion remains popular among some fans of psychic phenomena.

    Reply
  118. Lino D'Ischia

    Tyler DiPietro:
    “This is untrue. When converting a binary string to decimal you multiply each digit by 2 exponentiated by its place value, and then add the resulting values together. You’ve hacked off the wrong side of zeros, 0100000000 is actually equal to 100000000 (which in decimal is 512).”
    So what you are saying is that we’ve detected even another error in Elsberry and Shallit’s example, since the output for the digit 512 would have been, according to your BCD approach, 1026 1s with a 0 at the end, and Elsberry and Shallit show only 5 1s with a 0 at the end for 01.
    “It’s increasingly obvious that you believe you know more about these subjects than you really do.”
    Since I was following Shallit’s lead (after all, he was the one proposing the CSI generating algorithm with its associated examples), I guess this is meant for him, right?

    Reply
  119. Lino D'Ischia

    Mark CC:
    Please don’t have a nervous breakdown. You’re taking this much more seriously than you ought. QM can’t prove Dembski’s hypothetical one way or the other. Why not take it for what it’s worth–a conjecture?
    Here’s what Unsympathetic Reader wrote:
    If there aren’t non-local, hidden variables in QM then the up/down sequence will display true randomness.
    Turn this statement around and it reads: “If there are non-local, hidden variables present, then the up/down sequence will display non-randomness.”
    So, there you have it, the “unembodied designer” is a non-local hidden variable–just like we thought! Smile if you can, please.

    Reply
  120. Tyler DiPietro

    “So what you are saying is that we’ve detected even another error in Elsberry and Shallit’s example, since the output for the digit 512 would have been, according to your BCD approach, 1026 1s with a 0 at the end, and Elsberry and Shallit show only 5 1s with a 0 at the end for 01.”
    *sigh*
    You don’t get it. Let’s calculate those results using the algorithm and binary conversion.
    1. Start with a string 01 and append 1 to the front, making it 101.
    2. 101, after binary decimal conversion, is 5.
    3. The result is then 5 1’s followed by a zero, exactly as Shallit and Elsberry described.
    Just stop. It’s getting embarrassing.
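    If it helps, the whole mapping fits in a few lines of Python (my own sketch of the construction as described above; the function name is made up):

    def pseudo_unary(x: str) -> str:
        # Prepend '1' to the binary string x, read the result as a binary
        # number n, and output n ones followed by a single zero.
        n = int('1' + x, 2)
        return '1' * n + '0'

    print(pseudo_unary('01'))           # 111110: five 1's and a 0, as above
    print(len(pseudo_unary('0' * 10)))  # 1025: a 10-bit input gives at least 1024 ones plus the 0
    print(len(pseudo_unary('1' * 10)))  # 2048: the largest 10-bit input gives 2047 ones plus the 0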

    Reply
  121. Lino D'Ischia

    creeky belly:
    “If information is being introduced onto the channels through this process, there are quantum protocols for discovering eavesdroppers.”
    You say “if information is being introduced onto the channels through this process, . . . ” So you admit that information–such as the cure for cancer–can be sent via these channels then?
    As to the reference, what does it have to do exactly with the discussion we’re having? Hilbert Space is the space that spans vector space, i.e., it’s complete.

    Reply
  122. Lino D'Ischia

    Mark CC:
    Here’s what Dembski writes: “Consider, for instance, a device that outputs 0s and 1s and for which our best science tells us that the bits are independent and identically distributed so that 0s and 1s each have probability 1/2. The device is therefore an idealized coin tossing machine; note that quantum mechanics offers such a device in the form of photons shot at a polaroid filter whose angle of polarization is 45 degrees in relation to the polarization of the photons–half the photons will go through the filter…… what happens if we control for all possible physical interference with this device, and nevertheless the bit string that this device outputs yields an English text-file in ASCII code that delineates the cure for cancer. . . .” (p.340 NFL)
    When you ask me for the math underlying this hypothetical of Dembski, that’s like asking me to demonstrate the math underlying coin-tossing. There is no math “underlying” coin-tossing. But there is a “math” (statistics) that deals with its output.
    In a prior post I asked Jeff Shallit to tell me which bit string was random and which was designed. Well, one of the two strings encodes a message in ASCII (with a twist thrown in, of course), and the other one is the result of coin flips. Analytically, he can’t tell the difference between the two; nor could anyone just looking at them. Do you get it now?

    Reply
  123. Lino D'Ischia

    You know, Tyler, all your huffing and puffing will get you nowhere.
    You write:
    “You don’t get it. Let’s calculate those results using the algorithm and binary conversion.
    1. Start with a string 01 and append 1 to the front, making it 101.
    2. 101, after binary decimal conversion, is 5.
    3. The result is then 5 1’s followed by a zero, exactly as Shallit and Elsberry described.
    Just stop. It’s getting embarrassing.”

    Let’s look at step #1: “Start with a string 01 and append 1 to the front, making it 101.”
    Excuse me. In the previous post you wrote this: “You’ve hacked off the wrong side of zeros, 0100000000 is actually equal to 100000000 (which in decimal is 512).” So now we’re back to what I said originally, which was that 01 represented 010000000, and that we could drop all the 0s to the right of the 1. Remember that Shallit talked about 10-bit strings. 01 doesn’t fit. So it must be converted to 0100000000. And, according to your interpretation about 100000000 representing 512 [which you corrected to 256], that means that the “front” of the bit string is to the right, not the left.
    Enough, Tyler. It’s getting embarrassing alright.

    Reply
  124. creeky belly

    You say “if information is being introduced onto the channels through this process, . . . ” So you admit that information–such as the cure for cancer–can be sent via these channels then?
    You can perturb the channels in a time dependent fashion to spell out whatever you want, but that will make the distribution non-uniform [either 100%(|0>), 0%(|1>) or 0%(|0>), 100%(|1>)]. However, I don’t know what your setup is. Is it one machine with 500 bits that can be entangled? Or is it 500 separate measurements of the one bit? If it’s the former, you can create a quantum circuit to figure out what operator is being applied. If it’s the latter, you can figure out if an operator is being applied through the BB84 protocol.
    But this is what I’m hearing:

    send in: |0> + |1>
    designer applies {+} operator
    measure: |0>
    send in: |0> + |1>
    designer applies {-} operator
    measure: |1>
    repeat until cure for cancer is found.

    This can be rooted out by BB84 error rate. The designer would also require energy to create the matter for repolarizing the light.
    As to the reference, what does it have to do exactly with the discussion we’re having? Hilbert Space is the space that spans vector space, i.e., it’s complete
    If you go to page 7 there’s a discussion on how to determine if someone is trying to either measure or screw up your quantum transmission line. If you really understood Hilbert space, you’d understand the concept of a stochastic process.
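    If it helps, here is a toy intercept-resend simulation of that BB84 point (very much a sketch, not the full protocol; the function is mine, and the ~25% disagreement on basis-matched check bits is the standard signature of a meddling interceptor):

    import random

    def bb84_error_rate(n_bits: int, intercept: bool) -> float:
        # Sender encodes random bits in random bases ('+' or 'x').  An
        # interceptor who measures and resends in a basis of its own choosing
        # disturbs roughly a quarter of the bits that sender and receiver
        # later compare in matching bases.
        errors = checked = 0
        for _ in range(n_bits):
            bit = random.randint(0, 1)
            send_basis = random.choice('+x')
            state_bit, state_basis = bit, send_basis
            if intercept:
                meddle_basis = random.choice('+x')
                if meddle_basis != state_basis:
                    state_bit = random.randint(0, 1)  # wrong basis: outcome randomized
                state_basis = meddle_basis
            recv_basis = random.choice('+x')
            recv_bit = state_bit if recv_basis == state_basis else random.randint(0, 1)
            if recv_basis == send_basis:              # only basis-matched bits are compared
                checked += 1
                errors += (recv_bit != bit)
        return errors / checked

    print(bb84_error_rate(100_000, intercept=False))  # ~0.0
    print(bb84_error_rate(100_000, intercept=True))   # ~0.25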

    Reply
  125. Lino D'Ischia

    creeky belly:
    “This can be rooted out by BB84 error rate. The designer would also require energy to create the matter for repolarizing the light.”
    Am I supposed to be impressed that BB84 can root out the “information” that is being imparted? The discussion here has focused on whether all of this violates QM math. You’ve given an example of the math above. There is no violation of the math; the operator imparts no energy. And, that it can be “rooted out”, only implies that it was there to begin with. But why would you want to “root it out” if you already have the code for cancer. BB84 is only good for eavesdroppers. But, in the meantime, it does serve to illustrate that what Dembski writes is possible.

    Reply
  126. creeky belly

    Enough, Tyler. It’s getting embarrassing alright.
    Speaking of embarrassing, hypocritical, glass house, bone-headed statements, maybe you should have cracked open an elementary particle physics book before you made your Boson/Fermion blunder. Photons are part of a group of particles called bosons (integer spin), which include such marvelous things as photons (EM force), pions, gluons (strong force), W and Z bosons (weak force), and perhaps the graviton (gravity, if it’s ever observed) as well. Unlike fermions, they are not bound by the Pauli exclusion principle, so they like to huddle up near each other. Fermions (half-integer spin) include all marvelous things like electrons, protons, neutrons, neutrinos, taus, muons, and quarks. Under the Pauli exclusion principle no two of them can occupy the same quantum state, which is why, for instance, two electrons can share an orbital only with opposite spin states (up/down).
    But please continue.

    Reply
  127. HWSOD

    Um, almost any computer scientist would change 01 to 0000000001, not 0100000000, because the first two both have the value 1 and the last has the value 2^9.

    Reply
  128. creeky belly

    There is no violation of the math; the operator imparts no energy. And, that it can be “rooted out”, only implies that it was there to begin with. But why would you want to “root it out” if you already have the code for cancer. BB84 is only good for eavesdroppers. But, in the meantime, it does serve to illustrate that what Dembski writes is possible.
    No, because you lose the uniform, stochastic distribution. There’s no way to have a quantum process where you can fix the outcome without changing the polarization. To change polarization, you need to interact with matter. Changing the polarization, without matter, violates the conservation of momentum and energy. Case closed. Mathematics violated.

    Reply
  129. Jeffrey Shallit

    I’m sorry, Lino, I’ve lost my patience here. Too many demands on my time, and I’d rather devote it to students who are actually listening to what I have to say. When you say things like “…why would you even attempt to use Dembski’s methodology on something that is obviously not designed” it is clear you have no interest in actually testing Dembski’s definitions. In order to test them, you need to see if they produce false positives; that’s the whole point of my example.
    Indeed, you and I agree that the weather in Tucson is not designed; yet, with the uniform probability interpretation of Dembski’s CSI, he would conclude it was. This explains why the uniform probability interpretation is not tenable; despite this, Dembski often uses it. (And you’ve misunderstood that Tucson example, too; there was no intent that the observations represented consecutive days; I was thinking of them as representing consecutive hours.)
    When you say things like “But, in binary form, 0100000000=01”, it is clear you don’t understand the definition of my mapping. Could I encourage you to take a course in theory of computation? In the first two weeks of such a course, you learn about strings of symbols and their properties. It is not true that 0100000000 = 01, because the left side is a string of length 10, and the right side is a string of length 2. You are making the elementary mistake of confusing a string with the number it represents in base 2. This isn’t rocket science.
    Good luck with your cheerleading for Dembski.

    Reply
  130. Lino D'Ischia

    Jeff:
    Indeed, you and I agree that the weather in Tucson is not designed; yet, with the uniform probability interpretation of Dembski’s CSI, he would conclude it was.
    I’ve read Dembski’s book. In the first place, as I already mentioned, the explanatory filter would throw this out. Again, Step #1 comes before Step #3. You seem to think that you can interpret his methodology any way you want. You start with an event, and an appropriate rejection region ensues. Assuming that you mean “hours” for Tucson weather, fine (but even there, 2048 hours=85 days+). But on one side of the function you have ten bits, on the other 1025 to 2048 bits: how can you equate weather in that fashion? If you take the weather on the 1025-2048 side, then you’ll have nothing but 1s with a 0 at the end. Kolmogorov complexity-wise, it would reduce to “Print 1024 1s, Print 0”. This kind of compression, per Kolmogorov, would imply “information”. Well, I suppose it’s some kind of information. IOW, Kolmogorov complexity might falsely lead us to think it was designed. OTOH, per Dembski, you would have a “pattern” of 1024 1s followed by a 0. Well, why would I argue that this is designed? It’s just a bit string. Based on my background knowledge, what “specification” do I see present in the string? I don’t know about you, but I don’t see any. So in what way would Dembski say it’s designed? Because we’re dealing with more than 1000 bits? Well, under that definition, if I flip a coin 1000 times, that’s CSI. Doesn’t that seem strange to you?
    As to my interpretation of 010000000=01, this is due to your having said of the “pseudo-unary” function: “append a 1 on the front of x”. Well, if binary numbers are counted right to left, then you should have said, “append a 1 on the end of x”. That explains the range of the function. Indeed, it’s not rocket science; but you misspoke slightly. I now understand your mapping; but I don’t understand your logic in applying this to an increase in CSI, or, in thinking that any-old kind of information can be plugged into it.
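    As a crude illustration of the compressibility point above (a sketch only; zlib compression is just a rough stand-in for Kolmogorov complexity):

    import os
    import zlib

    ones = b'\x01' * 1024           # stand-in for "1024 1s followed by a 0"
    noise = os.urandom(1024)        # stand-in for 1024 coin-flip outcomes

    # A highly regular string compresses to almost nothing; a random one doesn't.
    print(len(zlib.compress(ones)))    # a handful of bytes
    print(len(zlib.compress(noise)))   # roughly 1024 bytes or more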

    Reply
  131. Mark C. Chu-Carroll

    “This can be rooted out by BB84 error rate. The designer would also require energy to create the matter for repolarizing the light.”

    Am I supposed to be impressed that BB84 can root out the “information” that is being imparted? The discussion here has focused on whether all of this violates QM math. You’ve given an example of the math above. There is no violation of the math; the operator imparts no energy. And, that it can be “rooted out”, only implies that it was there to begin with. But why would you want to “root it out” if you already have the code for cancer. BB84 is only good for eavesdroppers. But, in the meantime, it does serve to illustrate that what Dembski writes is possible.

    My goodness but you’re dense…
    Your entire claim is that an “unembodied designer” can influence quantum events *without* altering energy states – that the disembodied designer can do it without adding any energy or matter to the system.
    You’re responding to someone who actually bothered to do some of the math, and showed why your “disembodied designer” would need to add either energy or matter to do what you claim – and claiming that they support you.
    *No*, they’re not supporting you. No, Dembski’s argument is incorrect. The kinds of perturbations of quantum phenomena that you’re talking about *do* violate the basic rules of quantum behavior as we understand them.
    No amount of weaseling can get you past that. You and Dembski’s scenario *does not work*. It is inconsistent with the very thing it claims to be consistent with.

    Reply
  132. Mark C. Chu-Carroll

    When you ask me for the math underlying this hypothetical of Dembski, that’s like asking me to demonstrate the math underlying coin-tossing. There is no math “underlying” coin-tossing. But there is a “math” (statistics) that deals with its output.
    In a prior post I asked Jeff Shallit to tell me which bit string was random and which was designed. Well, one of the two strings encodes a message in ASCII (with a twist thrown in, of course), and the other one is the result of coin flips. Analytically, he can’t tell the difference between the two; nor could anyone just looking at them. Do you get it now?

    I’ve been getting it all along – the problem is that you don’t.
    There *is* math underlying your scenario. You’re claiming that a “disembodied designer” can encode a message into a quantum phenomenon without violating any of the rules of quantum physics. There’s a very well understood mathematics of quantum behavior. Your scenario is inconsistent with that.
    The basic math of quantum behavior includes a whole lot of information about the statistical behavior of particles. Changing that distribution costs energy. You’re trying to argue that there is a way of altering that distribution that is consistent with the standard behavior of quantum phenomena, but which has no energy cost. To make that argument stand up, you need to show how your scenario fits
    the standard math of quantum physics – that what you’re proposing is possible without creating an energetic perturbation in the quantum states being studied.
    You keep weaseling around, to avoid admitting the fact that your argument doesn’t work. You obviously *can’t* do the math to show that it’s consistent. But you’ll do just about anything to avoid admitting that. Just like your idol, Dembski. Lots of hand-waving, lots of jargon, lots of shallow bullshit lifted from wikipedia to make it look like you know something – but you’ll continually wiggle, weasel, and squirm to avoid actually admitting that you’ve made an incorrect argument.
    It’s fine to say that your scenario doesn’t fit the math of quantum physics as we understand it, because we have an incomplete understanding. But then your argument fails, because the entire argument was that it *was* consistent with quantum physics. If what you’re really proposing is that quantum physics is wrong, and that this is possible, that’s fine. But then you can’t claim that it is possible within the current theory of quantum physics, because it isn’t.

    Reply
  133. Unsympathetic reader

    Lino: “There is no violation of the math; the operator imparts no energy.”
    Then the bandwidth of the channel is zero. How long of a bit string can you pass through a channel with zero bandwidth?

    Reply
  134. David Marjanović

    Well, we know that universe was built from the smallest particles upwards, not from the largest downwards. So, you would have to think that if an “unembodied designer” is going to tinker, it would be done at the particle (QM) level, and not at the classical level.

    1. Non sequitur.
    2. So much, then, for claims that ID “theory” does not make any statements about the Designer.

    Reply
  135. Lino D'Ischia

    creeky belly:
    Your entire claim is that an “unembodied designer” can influence quantum events *without* altering energy states – that the disembodied designer can do it without adding any energy or matter to the system.
    What’s the nature of the operator? Is it the identity operator, or the inverse identity operator? And what if the average number of identity operators matches the average number of inverse identity operators….what does that do to the overall statistics? It leaves them unchanged! You can write what you consider to be the “hypothetical” mathematics of what has happened, but you can’t ferret that out. All we have are measurements. And as long as you have 50% in one detector and 50% in the other, you have no idea whether your math applies or not. Remember, if the output is in ASCII code, you wouldn’t be able to tell it very much from random strings. So, for the last time: we DON’T see the math, we SEE detectors going off, and as long as it’s 1/2 up and 1/2 down, QM theory is perfectly happy. And the underlying operators remain “hidden”—as in “hidden variables”.

    Reply
  136. Lino D'Ischia

    David Marjanović;
    Thank you for your opinion. Why don’t you read Dembski’s “No Free Lunch” before making comments about what ID says or doesn’t say?

    Reply
  137. Anonymous

    Unsympathetic Reader:
    Then the bandwidth of the channel is zero
    This IS a problem for “embodied designers”, I concede.

    Reply
  138. Mark C. Chu-Carroll

    What’s the nature of the operator? Is it the identity operator, or the inverse identity operator? And what if the average number of identity operators matches the average number of inverse identity operators….what does that do to the overall statistics? It leaves them unchanged! You can write what you consider to be the “hypothetical” mathematics of what has happened, but you can’t ferret that out. All we have are measurements. And as long as you have 50% in one detector and 50% in the other, you have no idea whether your math applies or not. Remember, if the output is in ASCII code, you wouldn’t be able to tell it very much from random strings. So, for the last time: we DON’T see the math, we SEE detectors going off, and as long as it’s 1/2 up and 1/2 down, QM theory is perfectly happy. And the underlying operators remain “hidden”—as in “hidden variables”.

    That’s not what quantum mechanics predicts. It does *not* say that all is well so long as things measure out as 50-50 after your message is complete. It predicts much more than that.
    You keep trying to hand-wave past the fact that you can’t actually show the math for your scenario. There *is* a reason for that, as I keep saying. The math doesn’t work. Your scenario is not consistent with quantum mechanics. And no amount of hand-waving is going to
    change that.
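    A small sketch of what that “much more” looks like even classically (my own illustration; the message text is arbitrary): an i.i.d. 50/50 source makes every fixed-length block of bits equally likely, and a message-bearing string fails that test even when its single-bit counts look unremarkable.

    import os
    from collections import Counter

    def block_freqs(bits: str, k: int = 4) -> Counter:
        # Frequencies of non-overlapping k-bit blocks in a 0/1 string.
        return Counter(bits[i:i + k] for i in range(0, len(bits) - k + 1, k))

    # ASCII text rendered as bits, standing in for the hypothetical message...
    message = ''.join(f'{b:08b}' for b in b'the cure for cancer ' * 50)
    # ...and genuinely random bits of the same length.
    noise = ''.join(f'{b:08b}' for b in os.urandom(len(message) // 8))

    print(message.count('1') / len(message))     # singles look fairly even (~0.44)
    print(noise.count('1') / len(noise))         # ~0.50
    print(block_freqs(message).most_common(3))   # a few blocks hugely over-represented
    print(block_freqs(noise).most_common(3))     # all 16 blocks near 1/16 each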

    Reply
  139. David vun Kannon

    Aaron F. @122

    but can you learn anything of value by describing physical processes as search algorithms?

    The algorithm is just another model, just another abstraction of the reality. If you can prove/demonstrate certain things about the model then you can cautiously apply those findings to reality.
    Dembski clearly believes that the iterative variation/selection algorithm cannot produce the observed results in the time/space/energy allowed since the formation of the planet. In this case I think his model is propping up a pre-existing belief. OK, his anti-model…
    Before I was banned there, I commented to Dembski that he should put some effort into bounding the effectiveness of evolutionary algorithms – “What EAs Can’t Do.”, “EAs Considered Harmful”, etc. Having proven those bounds to the satisfaction of the other researchers in the field (and these results can be very helpful) he could embark on mapping the results back into the biological world.
    Maybe these papers are his starting down that path.

    Reply
  140. Unsympathetic reader

    Lino: “And the underlying operators remain “hidden”—as in “hidden variables”.
    Don’t confuse hopeful assertions with facts. Replace “hidden variables” with “Invisible Pink Unicorns”. IPUs have the same level of evidence demonstrating their existence.

    Reply
  141. creeky belly

    What’s the nature of the operator? Is it the identity operator, or the inverse identity operator? And what if the average number of identity operators matches the average number of inverse identity operators….what does that do to the overall statistics? It leaves them unchanged!
    On paper, it’s a mathematical construct. In reality, an operator constitutes a physical interaction. A physical polarizer doesn’t need energy to change polarization, but you still need the matter there to interact, which requires energy (E=mc^2).

    Reply
  142. Lino D'Ischia

    Unsympathetic Reader:
    Replace “hidden variables” with “Invisible Pink Unicorns”.
    Replace “Invisible Pink Unicorns” with “unembodied designer”. So what. The point is that the artifactor can’t be detected.

    Reply
  143. Anonymous

    Unsympathetic Reader:
    Replace “hidden variables” with “Invisible Pink Unicorns”.

    Replace “Invisible Pink Unicorns” with “unembodied designer”. So what. The point is that the artifactor can’t be detected.

    No, the point is that it’s silliness masquerading as a serious argument. Your “hidden variables” that involve no changes to energy or matter are inconsistent with quantum physics.

    Reply
  144. Tyler DiPietro

    Okay, I’m going to try one more time.
    “Excuse me. In the previous post you wrote this: “You’ve hacked off the wrong side of zeros, 0100000000 is actually equal to 100000000 (which in decimal is 512).” So now we’re back to what I said originally, which was that 01 represented 010000000, and that we could drop all the 0s to the right of the 1.”
    Let’s try this one more time: YOU CAN’T FUCKING DO THAT. The value of the integer in binary is converted to decimal by multiplying each digit by two exponentiated by its place value. Like Shallit said, this isn’t rocket science. You are making elementary errors in calculation that a first year computer science major, fresh out of an introductory course in discrete mathematics, would not make. You are clueless; you are tossing around mathematical jargon and notation without any understanding of the concepts behind them.
    “Enough, Tyler. It’s getting embarrassing alright.”
    Yes, it is enough. It’s time for you to admit that you are simply incompetent on every subject you’ve been speaking on. But you quite obviously lack the intellectual honesty to do that. Dembski has certainly attracted fans not too dissimilar from himself.

    Reply
  145. Lino D'Ischia

    Mark CC:
    This is my last post on this:
    A team sets up an experiment involving a photon gun, polarizing filter, detectors, etc. They monitor all their equipment to make sure there are no energy leaks of any sort. They run an experiment 1000 times, using 10,000 photons each time. After the 1000th time, they take down their equipment, store the bit records, and go to Europe for an extended vacation. The bit records subsequently get lost and aren’t discovered until five years later. Some young statistics student discovers the bit records and analyzes 10 of the bit strings statistically. Then for the heck of it, he wonders if the bit strings can be converted into ASCII. The first one can’t, nor the second, nor the third. But with the fourth one, lo and behold, it does code for ASCII. He runs a program to translate the bit string using ASCII and out pops the “cure for cancer”.
    This is the best way I can explain things. If you don’t see that no physical law was broken, that no energy was detected, that the experimenters saw nothing statistically wrong with their experiments when conducting them–though one of them later turned out to code for the cure for cancer, then there’s nothing further I can say.
    Here’s the logic: If the experiment codes for the cure for cancer, then it violates the laws of physics. How do you know that it violated the laws of physics? Well, because it coded for the cure for cancer. I don’t buy this circular reasoning.

    Reply
  146. Lino D'Ischia

    creeky belly:
    On paper, it’s a mathematical construct. In reality, an operator constitutes a physical interaction. A physical polarizer doesn’t need energy to change polarization, but you still need the matter there to interact, which requires energy (E=mc^2).

    In reality, the polarizer was there at the beginning of the experiment and at its end.

    Reply
  147. Lino D'Ischia

    Mark CC:
    That’s not what quantum mechanics predicts. It does *not* say that all is well so long as things measure out as 50-50 after your message is complete. It predicts much more than that.
    I’ve been asking you all along: point out this “much more”.

    Reply
  148. Tyler DiPietro

    I swear I’m going to shoot myself in the fucking head if I stay in this thread trying to communicate with Lino any longer. I’m done, and I’d encourage everyone else to follow my lead purely for mental health related reasons.

    Reply
  149. Lino D'Ischia

    Tyler:
    Tsk, tsk. Such language. Try to control yourself.
    Now, did you read this in my last post to Shallit?
    As to my interpretation of 010000000=01, this is due to your having said of the “pseudo-unary” function: “append a 1 on the front of x”. Well, if binary numbers are counted right to left, then you should have said, “append a 1 on the end of x”. That explains the range of the function. Indeed, it’s not rocket science; but you misspoke slightly.

    Reply
  150. Tyler DiPietro

    Okay, I think I can manage one-last post before being induced to commit suicide.
    “As to my interpretation of 010000000=01, this is due to your having said of the “pseudo-unary” function: “append a 1 on the front of x”. Well, if binary numbers are counted right to left, then you should have said, “append a 1 on the end of x”.”
    The bolded statement is false. The leftmost digit is the largest place value, in binary as it is in all number bases. Therefore, the “front” of the number would be that digit.
    And you obviously haven’t comprehended the meaning of “pseudo-unary”. You have managed to extrapolate something that Shallit and Elsberry never even so much as alluded to in their paper, no doubt due to your already demonstrated lack of understanding about anything you are talking about.
    Please, pick up an introductory text on discrete math or the theory of computation before making an abject fool of yourself any further.

    Reply
  151. Mark C. Chu-Carroll

    Lino:
    I understand your scenario. The point is, it’s a scenario
    that can’t happen. Hypotheticals are great fun for thinking about, but they do not make for a scientific argument. Just because you can imagine a hypothetical doesn’t mean that it’s possible. If you want to argue that it’s possible, you have to do more than repeatedly state the hypothetical: you need to demonstrate that it’s possible.
    It’s not our job to do the math of quantum physics for you. Quantum physics predicts a probability distribution that has a great amount of detail. If you want to know more about it, pick up a textbook on quantum mechanics, and study it. I’ll give you one hint: the word “fractal” has been bandied about in relation to the probability distribution.
    As I’ve been saying from when you first proposed this: you need to show the math that says this works, that this fits the predictions of quantum mechanics. If it doesn’t fit the predictions of quantum mechanics, then you need to show some kind of experimental evidence that quantum mechanics is wrong.
    You’re basically saying “My scenario fits with quantum mechanics because I’ve defined it as fitting with quantum mechanics”. I can just as easily state a scenario that I assert is completely consistent with newtonian mechanics, but in which force and acceleration aren’t proportional for a fixed mass. That doesn’t mean that my scenario is possible.
    You’re proposing an impossible scenario. *You* made the extraordinary claim (or rather, Dembski made it, and you repeated it as a “brilliant” explanation) that your scenario is possible while remaining completely consistent with quantum physics.
    But you’ve refused to make any real attempt at demonstrating it. In fact, you’ve demonstrated that you have *no* actual understanding of the mathematics of quantum physics. You don’t understand what it means for something to be consistent or inconsistent with quantum mechanics. You don’t understand the mathematical formalism, or what it means, or how to apply it. In fact, all you know about quantum physics is from an extremely informal description of it. And quantum physics is one of those areas where informal descriptions are virtually useless.
    As the famous quote goes: “If you think you understand quantum physics, then you don’t understand quantum physics.”
    Another great quote found via Google:

    if you limit yourself to the mathematical formalism (without delving too deep into matters of rigor), and you limit yourself to those aspects of the formalism which are clearly identifiable with physical quantities (often “outcomes of measurement”), then you have a powerful and pragmatic machinery which is “up and running”, in the sense that you can calculate a lot of things, and compare it successfully with experiment. That’s essentially what has been done for the last 80 years.

    The only way to understand quantum physics, insofar as anyone actually understands quantum physics, is via the mathematical formalism. No amount of handwaving about how your scenario is possible is going to change that. If you really want to make the argument that Dembski is correct, and your scenario is really consistent with quantum mechanics, then you’re going to need to break out some textbooks, and do some work to show that it’s possible.

    Reply
  152. Anonymous

    In reality, the polarizer was there at the beginning of the experiment and at its end.
    BZZZT. Wrong again. It can’t be the same polarizer, otherwise you’d just get a string of just 0’s or just 1’s. The only way to spell out the message is to have a time dependent polarization scheme, which requires energy to rotate it every time you want to fix a bit.

    Reply
  153. Lino D'Ischia

    Mark CC:
    The only way to understand quantum physics, insofar as anyone actually understands quantum physics, is via the mathematical formalism. No amount of handwaving about how your scenario is possible is going to change that. If you really want to make the argument that Dembski is correct, and your scenario is really consistent with quantum mechanics, then you’re going to need to break out some textbooks, and do some work to show that it’s possible.
    Mark, isn’t it clear that if I proposed such a mathematical formalism, your response would be: “that’s impossible”? Why would you respond that way? Because your reaction would be: “You can’t do that without interfering with the apparatus”. That’s right. You couldn’t do it. I couldn’t do it. But, underneath the veil that the statistical nature of QM provides, it’s “theoretically” possible. Your effort here is to say: “It’s NOT theoretically possible”. I challenge you to prove that because, again, given QM’s statistical nature, it can’t be ruled out. Have we seen the “cure for cancer” pop out of an experiment so far? No. Do I expect that it will happen soon? Well, I’m not holding my breath. The only point of Dembski’s scenario is to illustrate the “wiggle-room” that nature provides. And, if we keep going on, back and forth, I’ll still end up with that “wiggle-room”, simply because nature is what nature is. Why not just leave it there?
    If you want to know more about it, pick up a textbook on quantum mechanics, and study it.
    I have several. I’ve audited a year long course. QM is nothing more than linear algebra: you have eigenvalues, eigenvectors, Hermitian matrices, etc, etc. The statistical nature of QM comes from the need to come up with real values, since all we have are measurements which require real values. That means we never see wave functions; we only see the collapse of the wave function. That’s where the hiddenness comes from. As I say, nature is what nature is.

    Reply
  154. Lino D'Ischia

    creeky belly:
    “BZZZT. Wrong again. It can’t be the same polarizer, otherwise you’d just get a string of just 0’s or just 1’s. The only way to spell out the message is to have a time dependent polarization scheme, which requires energy to rotate it every time you want to fix a bit.”
    BZZZT. Wrong again. That’s what an “embodied designer”, like you, would need.

    Reply
  155. Lino D'Ischia

    Tyler DiPietro:
    The bolded statement is false. The leftmost digit is the largest place value, in binary as it is in all number bases. Therefore, the “front” of the number would be that digit.
    Thank you for explaining this. And, yes, I’ve never taken a course on the theory of computation. But the argument here hasn’t really been about that explicitly, only implicitly.
    I’ve looked over some of my earlier posts, and Elsberry and Shallit’s paper, as well as Shallit’s responses. The problem as I see it is that I misunderstood Shallit’s argument. But the reason I didn’t understand it the way he intended it is because it has nothing to do with Dembski’s approach to design. Shallit continues to believe that in using the “pseudo-unary” program he has created CSI simply because there are enough “binary bits”. Well, that’s NOT how CSI is created, nor how it is analyzed. Shallit refuses to begin with an event and then develop a rejection region that is detachable. Shallit, from what I can see, insists on understanding Dembski in terms of Kolmogorov complexity. Why he chooses to do so, I don’t know. But having looked over everything, I can see why Dembski said of Shallit’s criticism of NFL: “Has he even read the book?”

    Reply
  156. Unsympathetic reader

    Lino writes: “The only point of Dembski’s scenario is to illustrate the “wiggle-room” that nature provides.
    Actually, it’s the ‘wiggle room’ that science fiction provides. Nature may have different ideas about what is theoretically possible. A phenomenon such as the one described by Dembski would require a significant reworking of QM theories.
    In any case, it appears the bandwidth for transmission of knowledge in this thread is zero. I think someone’s receiver is defective or turned off. No sense wasting more energy.

    Reply
  157. Mark C. Chu-Carroll

    Lino:
    As near as I can tell, your argument comes down to: “Dembski’s claim about an unembodied designer is consistent with quantum mechanics, provided that by ‘consistent with quantum mechanics’ you don’t actually mean ‘consistent with quantum mechanics’, but rather ‘consistent with Lino’s version of quantum mechanics’.”
    As I’ve said before, there’s a good reason why you refuse to describe your scenario mathematically in the math of quantum physics.

    Reply
  158. creeky belly

    BZZZT. Wrong again. That’s what an “embodied designer”, like you, would need.
    So you think the “unembodied designer” isn’t subject to physical conservation laws? You’re on your own with the pink unicorns now. You see, my “unembodied designers” fight over which message to write: the cure for cancer or “Lino doesn’t understand quantum mechanics”. That’s why the message comes out stochastic and uniform. My view is consistent with quantum mechanics because I can imagine it and say it’s consistent.

    Reply
  159. Unsympathetic reader

    creeky belly: You see, my “unembodied designers” fight over which message to write…
    That’s why the message comes out stochastic and uniform.
    Brilliant! It’s certainly a more coherent explanation. And I’ve not seen that idea in sci-fi books.
    You know, the ‘hidden variables’ idea doesn’t seem to provide a way out either, at least from the standpoint of zero energy input. Something has to manipulate the entangled object’s ‘counterpart’. Besides, how can an entangled particle have a manipulable ‘partner’ *outside* the universe? ‘Hidden variables’ seem to allow determinism (though that’s not certain), not ‘interruptive’ or supernatural communication.

    Reply
  160. secondclass

    His mathematics in 5.10 fleshes out the complexity of the flagellum.
    But Dembski’s method requires the complexity of the rejection region, not just the event. And since no rejection region was specified, it would seem that a design inference is unjustified.
    I’m glad you mentioned the GCEA. Dembski finally says that if you calculate …
    Yes, I understand the GCEA. The point is that Dembski applied the GCEA to the Caputo sequence and came up with a design inference, so it seems a little strange that you would say that Dembski didn’t infer it was designed.
    That is “THE” “chance hypothesis”. There is no other.
    What about the chance hypothesis at the top of page 7 in Elsberry and Shallit’s paper?
    For someone who is not so sympathetic, I shouldn’t be surprised that they don’t read it in its entirety, let alone re-read it.
    You mean reading the Cliffs Notes isn’t good enough? No wonder we critics can’t seem to get Dembski’s concepts right.
    That’s where α comes in. That’s the significance level, and in the case of CSI it’s 2^-500. But then the “specificational/probabilistic resources” come in as well.
    2^500 is the count of probabilistic resources, or rather an upper bound to use if we don’t feel like counting them. The UPB and Dembski’s approach to counting probabilistic resources are both logically flawed, but either one will get you below the heuristic p-value of .05.
    The idea is that if our “background knowledge”, K, were truly great, then there would very likely be more than just ONE chance hypothesis that could explain the event E and develop a rejection region that includes E. As a result, all the elements of those rejection regions have to be added together, and so enter into the probability calculation. So if you look at the top of p. 77, you’ll find that Dembski addresses this. He writes: “Actually, this is one place where we have to be careful about including the entire set of relevant chance hypotheses {H_i}_{i∈I}.” It’s too tedious to include all the HTML tags that are needed for the next few sentences, but if you read them I think you’ll agree with my understanding here.
    No, I think you misunderstand this paragraph. Dembski says nothing about aggregating rejection regions across chance hypotheses. Condition 2 says that each member of SpecRes needs to be detachable under all chance hypotheses, and condition 3 says that they should be at least as improbable as the specification for E under all chance hypotheses. (Dembski later dropped the latter condition after a logical flaw was pointed out.)
    The fact remains that you can choose to count the probabilistic resources or you can just use the number 2^500, regardless of how many chance hypotheses there are or how much of the causal history is known.

    Reply
  161. Jonathan Vos Post

    A nice hard number on how fast evolution goes. This relates to some spurious claims by Intelligent Design demagogues based on badly designed Genetic Algorithms and distorted claims about thermodynamics. Let’s see what actual organisms tell us, appropriately studied.
    http://www.physorg.com/news110478853.html
    Beyond a ‘speed limit’ on mutations, species risk extinction
    Harvard University scientists have identified a virtual “speed limit” on the rate of molecular evolution in organisms, and the magic number appears to be 6 mutations per genome per generation — a level beyond which species run the strong risk of extinction as their genomes lose stability.
    By modeling the stability of proteins required for an organism’s survival, Eugene Shakhnovich and his colleagues have discovered this essential thermodynamic limit on a species’s rate of evolution. Their discovery, published this week in the Proceedings of the National Academy of Sciences, draws a crucial connection between the physical properties of genetic material and the survival fitness of an entire organism.
    “While mathematical genetics research has brought about some remarkable discoveries over the years, these approaches always failed to connect the dots between the reproductive fitness of organisms and the molecular properties of the proteins encoded by their genomes,” says Shakhnovich, professor of chemistry and chemical biology in Harvard’s Faculty of Arts and Sciences. “We’ve made an important step toward finally bridging the gap between macroscopic and microscopic biology.”
    According to Shakhnovich, crucial aspects of an organism’s evolutionary fitness can be directly inferred by inspecting its DNA sequences and analyzing how the proteins encoded by those sequences fold. DNA sequences encode the order of amino acids in a protein, and amino acids act as the protein’s basic building blocks by arranging themselves into a structure that allows the protein to perform its biological function.
    The research was inspired in part by the longstanding recognition that knocking out essential genes, making them inactive, produces a lethal phenotype, or a physiologically unviable organism.
    “From there, we made the simple assumption that in order for an organism to be viable, all of its essential genes — those that support basic cell operations — have to encode at least minimally stable proteins,” says Shakhnovich. “What occurs over the long process of evolution is that random mutations can either encode slightly more or less stable proteins.”
    If enough mutations push an essential protein towards an unstable, non-functional structure, the organism will die. Shakhnovich’s group found that for most organisms, including viruses and bacteria, an organism’s rate of genome mutation must stay below 6 mutations per genome per generation to prevent the accumulation of too many potentially lethal changes in genetic material.
    The existence of a mutation limit for viruses helps explain how the immune system can perform its function. Because viral replication and survival can only occur at a limited rate, the body has a window of time to develop antibodies against infectious agents. Furthermore, if the mutation rate is high, the size of the genome in question must be small to stay within the bounds of the speed limit — thus organisms that tend to mutate quickly are those with concise genomes, such as viruses and bacteria.
    The Shakhnovich speed limit also offers an explanation for observed differences in genome sizes between organisms with genome error correction — such as bacteria, mammals, birds, and reptiles – and those without, such as RNA viruses: In more complex organisms, cells have evolved correction systems to detect and fix errors in DNA replication. These systems drastically reduce the number of mutations per replication, increasing the mutational stability of the genome and allowing more intricate and delicate biological systems to develop without the risk of interruptive mutations.
    “It’s an interesting corollary because it suggests that there is a fundamental tradeoff between evolutionary security and adaptive flexibility: Larger, more complex organisms have to have error correction to protect organismic viability, but this means the rate of evolution slows down significantly,” Shakhnovich says. “As organisms become more complex, they have more to lose and can’t be as radically experimental with their genomes as some viruses and bacteria.”
    Source: Harvard University
    This news is brought to you by PhysOrg.com
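    To put the qualitative claim in concrete terms, here is a minimal Monte Carlo sketch (my own illustration, not the Shakhnovich model; every parameter value is invented): each generation adds a random number of mutations, each mutation nudges the folding stability of an essential protein, and the lineage dies the moment that stability drops below the folding threshold. Sweeping the mutation rate shows the survival fraction collapsing once the rate gets large.

        import math
        import random

        GENERATIONS = 200     # how long a lineage must persist
        STABILITY_0 = 8.0     # initial folding stability (arbitrary units)
        THRESHOLD = 0.0       # below this, the essential protein no longer folds
        MEAN_EFFECT = -0.02   # mutations are slightly destabilizing on average
        SPREAD = 0.3          # spread of the per-mutation stability change
        TRIALS = 2000         # lineages simulated per mutation rate

        def sample_poisson(lam):
            # Knuth's simple Poisson sampler; adequate for small rates.
            limit = math.exp(-lam)
            k, p = 0, 1.0
            while True:
                p *= random.random()
                if p <= limit:
                    return k
                k += 1

        def lineage_survives(mutations_per_generation):
            stability = STABILITY_0
            for _ in range(GENERATIONS):
                for _ in range(sample_poisson(mutations_per_generation)):
                    stability += random.gauss(MEAN_EFFECT, SPREAD)
                if stability < THRESHOLD:
                    return False  # essential protein destabilized: lineage dies
            return True

        for rate in (0.5, 1, 2, 4, 6, 8, 12):
            survived = sum(lineage_survives(rate) for _ in range(TRIALS))
            print("mutations/genome/generation = %-4s  survival fraction = %.2f"
                  % (rate, survived / TRIALS))

    The paper derives its threshold analytically from protein thermodynamics; the sketch only shows why threshold-like behavior appears at all.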

    Reply
  162. Jonathan Vos Post

    Is it Intelligent Design that distinguishes fungi, algae and animals? No.
    It’s a very simple mutation mechanism, and not the point mutations that obsess Dembski et al.
    Stability domains of actin genes and genomic evolution
    Authors: E. Carlon, A. Dkhissi, M. Lejard Malki, R. Blossey
    Comments: 9 Pages, 7 figures. Phys. Rev. E in press
    Subjects: Biomolecules (q-bio.BM); Statistical Mechanics (cond-mat.stat-mech); Biological Physics (physics.bio-ph); Populations and Evolution (q-bio.PE)
    In eukaryotic genes the protein coding sequence is split into several fragments, the exons, separated by non-coding DNA stretches, the introns. Prokaryotes do not have introns in their genome. We report the calculations of stability domains of actin genes for various organisms in the animal, plant and fungi kingdoms. Actin genes have been chosen because they have been highly conserved during evolution. In these genes all introns were removed so as to mimic ancient genes at the time of the early eukaryotic development, i.e. before introns insertion. Common stability boundaries are found in evolutionary distant organisms, which implies that these boundaries date from the early origin of eukaryotes. In general boundaries correspond with introns positions of vertebrates and other animals actins, but not much for plants and fungi. The sharpest boundary is found in a locus where fungi, algae and animals have introns in positions separated by one nucleotide only, which identifies a hot-spot for insertion. These results suggest that some introns may have been incorporated into the genomes through a thermodynamic driven mechanism, in agreement with previous observations on human genes. They also suggest a different mechanism for introns insertion in plants and animals.

    Reply
  163. Torbjörn Larsson, OM

    I was alerted the other day to the fact that I had missed a fun thread here due to lack of time. The alert was correct.
    Now I’m full of positive energy (read: I have laughed a lot) thanks to the denseness of the Salvador Cordova wannabe who guested on these pages, so I will work some of it off by making a comment. Hmm, I can’t add much to the already debunked points, so I will pick on the remainders of Lino D’Ishia’s misdirections and errors:

    Now, if you want circularity, how’s this: Who survives? The fittest. Who are the fittest? Those who survive.

    A futile attempt at misdirection. Fitness is a measure of reproductive success, and it can be measured in several independent ways.
    On your original point: Dembski makes a circular argument. So does Behe when he makes similar assertions, for example assuming that evolution doesn’t work in order to derive invalid generation counts for traits, and then using those numbers to claim that evolution doesn’t work.
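    To make the non-circularity concrete, here is a minimal sketch (my own illustration; the genotype labels and offspring counts are invented) of fitness as a measured quantity: mean offspring number per genotype, estimated directly from counts, with no reference to who happened to “survive”.

        # Offspring counted per individual of each genotype in one generation.
        offspring_counts = {
            "genotype_A": [2, 3, 1, 4, 2],
            "genotype_B": [1, 0, 2, 1, 1],
        }

        # Absolute fitness: mean reproductive output of a genotype.
        mean_offspring = {g: sum(c) / len(c) for g, c in offspring_counts.items()}
        best = max(mean_offspring.values())

        # Relative fitness: scaled to the most successful genotype.
        for genotype, w in sorted(mean_offspring.items()):
            print("%s: absolute fitness = %.2f, relative fitness = %.2f"
                  % (genotype, w, w / best))

    The definition comes first and the measurement is independent; “the fittest tend to survive” is then an empirical claim, not a tautology.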

    Whatever your thoughts regarding the ease of switching from enzyme function A to enzyme function B, it is all completely irrelevant if cytochrome c doesn’t exist. So the improbability of its coming into existence sets a minimum barrier for all future function.

    This is of course a complete strawman of how evolution works. There is no predetermined goal and no predetermined function. If a change is not due to genetic drift, it is an adaptation to the immediate selection pressure.
    Assuming a need for a specific or future function is not describing evolution.

    Reply
  164. Torbjörn Larsson, OM

    And then the infantile physics:

    let us suppose that the universe has some kind of a flicker rate,

    Let’s not, since it would violate Lorentz invariance. Time is continuous; it is our ability to resolve closely spaced events with physical measuring devices (clocks) that fails at small scales.
    Also, we don’t need to invoke spacetime uncertainty to get a finite (but very, very small) probability of a momentary mass-energy violation involving large objects. The problem with this idea is that it conflicts with the fact that we cannot have local hidden variables. And even if we go non-local, either we would detect the energy difference, or it wouldn’t change anything and no information would be imported.

    Since the universe is expanding at a fast rate, light traveling along the fabric of space is then traveling super-luminally

    Absurd! The speed of light is measured with clocks that travel along with the spacetime expansion, and light is observed to travel exactly as predicted. It is called relativity theory for a reason.

    So, please explain to me where this added space comes from allowing for the expansion of the universe. Do you have any kind of answer at all? And, if you do, please indicate to me the ‘mechanism’ that’s at work.

    Spacetime is created by the expansion itself, and the expansion is described by big bang cosmology within general relativity. The cause of the expansion is an asymmetric initial condition.
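    For the record, the textbook general-relativistic statement behind the last two replies (standard FRW cosmology, nothing specific to this thread): the metric is

        \[ ds^2 = -c^2\,dt^2 + a(t)^2\left[\,dr^2 + r^2\,d\Omega^2\,\right], \]

    light moves on null geodesics (ds^2 = 0), so every local measurement of its speed gives exactly c, while the growth of proper distances D(t) = a(t) r between comoving points comes entirely from the scale factor a(t), whose evolution is fixed by the Friedmann equation

        \[ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}. \]

    Nothing is “added” from outside; the expansion is a solution of the field equations for the given initial conditions.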

    Reply
  165. Torbjörn Larsson, OM

    Jonathan, I saw the Shakhnovich et al. paper discussed over at Sandwalk, so I browsed it quickly. (I didn’t check their math beyond verifying that they derived diffusion over some sort of functional space from the right type of PDE.)
    The biologists didn’t think much of it, though Moran (the biomolecular specialist who owns the blog) was interested in the topic.
    It was interesting to me because the authors derived a model from first principles for a phenomenon for which, they claim, only ad hoc models existed before. One cause for concern is that they used a selection of proteins to confirm one of their general predictions. But AFAIU, if it is validated, their model would cover enzymatic and structural RNA as well (ribozymes et cetera).
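    For readers wondering what “the right type of PDE” means here: a diffusion description of this kind is generically a Fokker–Planck equation for the probability density P(x, t) of some coordinate x (stability, fitness, or whatever parameterizes the functional space). Schematically, in its generic form (not the paper’s specific equation):

        \[ \frac{\partial P(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[A(x)\,P(x,t)\big] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\big[B(x)\,P(x,t)\big], \]

    where A(x) is the drift term and B(x) the diffusion coefficient.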

    Reply
  166. Wesley R. Elsberry

    A comment from David v.K. had me looking at the EIL site again, and the following is a copy of a comment I made in the PT thread.

    Of course, looking at another of the papers listed on the EIL site, I noticed that it includes a clear error that I informed Dembski of long ago.
    In fact, today is the seventh anniversary of that unregarded notification. This is the standard for unacknowledged errors that Dembski has set. Time will tell whether Robert Marks will be an apt pupil…

    Reply
  167. Wesley R. Elsberry

    I’ve sent two emails to Robert Marks concerning the seven-year-old problem in the analysis presented in the “active information” paper on the EIL site. One showed where I pointed out the problem seven years ago; the other showed another attempt I made to communicate the problem five years ago. I have heard nothing back from Marks, not even a request for better documentation of the error. The erroneous paper is still linked from the EIL site.
    Even a small amount of fact-checking should show Marks that there is a huge problem in the way they approach the topic.
    But it appears that Marks and Dembski are choosing the second of the two options that I have discussed as possible responses to the problem. If so, I look forward to making good on my promise to follow up their essay, wherever it appears, with a correcting letter documenting the long history of the error and how the authors carefully preserved their cherished misconception.

    Reply
