Turing Equivalent vs. Turing Complete

In my discussion with Sal Cordova in this post, one point came up that I thought was interesting and worth taking the time to flesh out as a separate post. It’s about the distinction between a Turing equivalent computing system and a Turing complete computation. It’s true that in informal use, we often tend to muddy the line between these two related but distinct concepts. But in fact they are distinct, and the difference between them can be extremely important. In some sense, it’s the difference between “capable of” and “requires”; another way of looking at it is “sufficient” versus “necessary”.

Let me start by giving you a definition of the two terms.

A computing system is called Turing equivalent if it is capable of performing any
computation that can be performed on a Turing machine. Another term for a Turing equivalent computing
system is an effective computing system. The importance of Turing equivalence comes down to the fact that computation has a fundamental limit; any Turing-equivalent/effective computing system can
perform any computation. There’s no computing device more powerful than a Turing machine – Turing machines and all other effective computing systems lie at the maximum point of capability of
mechanical computing devices.

A computation is Turing complete if it can only be computed by a Turing-equivalent computing
system. In fact, in the proper formal definition of Turing completeness, a computation C is
Turing complete if and only if the fact that a given computing device φ can compute the result of
C implies that φ can compute the result of any other Turing complete computation.

Why is this distinction so important?

The biggest reason is that being Turing equivalent is actually remarkably trivial. Intuitively, it seems like the maximum computational capability possible for a mechanical computing device should be hard to achieve. But it isn’t! As you can see by perusing the archives of the pathological programming language topic on this blog, there are lots of ways of producing
Turing-equivalent computing systems, and many of them are incredibly simple. In fact, it’s pretty hard
to design a device capable of performing mechanical computations which is not
Turing-equivalent. Almost any mechanical system that’s capable of performing any kind of computation
ends up being Turing equivalent; it’s only by imposing deliberate restrictions on it that we generally achieve less than Turing-equivalent capabilities. The only part of Turing equivalence that’s
difficult at all is unbounded storage; as long as you have the ability to perform anything like iteration or recursion, and you have some theoretically unbounded/expandable storage, you’ve got
a Turing-equivalent computing device.
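To make that concrete, here’s a minimal sketch (my own illustration, in Python) of a Minsky-style counter machine: a pair of unbounded counters, an increment instruction, and a decrement-or-branch instruction. That tiny instruction set is already Turing equivalent, given a suitable encoding of the machine’s input into the counters.

```python
# Minimal sketch of a Minsky-style counter machine. Two unbounded counters
# plus "increment" and "decrement-or-branch" suffice for Turing equivalence
# (with the input suitably encoded into the counters).
def run(program, c1=0, c2=0):
    """program: list of ('inc', reg, next_pc) or ('dec', reg, pc_if_positive, pc_if_zero)."""
    regs = {1: c1, 2: c2}
    pc = 0
    while 0 <= pc < len(program):       # halting = jumping past the program
        instr = program[pc]
        if instr[0] == 'inc':
            regs[instr[1]] += 1
            pc = instr[2]
        elif regs[instr[1]] > 0:        # 'dec' on a positive counter
            regs[instr[1]] -= 1
            pc = instr[2]
        else:                           # 'dec' on a zero counter: branch
            pc = instr[3]
    return regs

# Addition, counter-machine style: drain c2, bumping c1 once per step.
add = [('dec', 2, 1, 2),   # 0: if c2 > 0, decrement it and go to 1; else halt
       ('inc', 1, 0)]      # 1: increment c1, loop back to 0
print(run(add, c1=3, c2=4))  # {1: 7, 2: 0}
```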

Turing-completeness, on the other hand, is a rarer beast. Most computations simply don’t require the full capability of a Turing machine. There are a number of simpler classes of computing
devices; linear-bounded automata, primitive recursive automata, finite-state automata, etc. Most of the computations that we encounter are actually computable by a member of one of those simpler classes of devices.
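As a quick illustration of the weakest of those classes, here’s a sketch (mine, not from the post) of a finite-state automaton: it recognizes binary strings with an even number of 1s using nothing but a fixed, finite set of states. No tape, no stack, no unbounded storage of any kind.

```python
# A two-state DFA: accepts binary strings containing an even number of 1s.
# The machine's entire "memory" is which of its two states it is in.
def even_ones(s: str) -> bool:
    state = 'even'
    for ch in s:
        if ch == '1':                   # each 1 flips the parity state
            state = 'odd' if state == 'even' else 'even'
    return state == 'even'

print(even_ones('1001'), even_ones('10110'))  # True False
```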

To give a few concrete examples:

  1. The common factorial function is primitive recursive. It does not require Turing-equivalent computational capabilities to compute. (A short sketch of this follows the list.)
  2. Searching a telephone book for a specific person to find their phone number is at worst
    linear-bounded; it does not require a Turing-equivalent computing device.
  3. Compiling a program in most programming languages does not require Turing-equivalent capabilities. (The language may be capable of expressing Turing-complete computations, but the process of translating from the language into code for a specific computing device does not require Turing-equivalent computational capability.)
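To expand on the first example: primitive recursive functions are exactly the ones you can write using only loops whose bounds are fixed before the loop starts. Here’s a minimal sketch (in Python, my choice for illustration) of factorial in that style; there is no unbounded search, no while loop, and no recursion anywhere.

```python
# Factorial as a primitive recursive (bounded-loop) computation: the loop
# bound is determined by the input before the loop runs, and never changes.
def factorial(n: int) -> int:
    result = 1
    for i in range(1, n + 1):   # exactly n iterations, known in advance
        result *= i
    return result

print(factorial(5))  # 120
```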

The reason that this entered into my debate with Sal is that he, like many other people, confused these two ideas. Because the DNA molecule can, in fact, perform Turing-complete computations,
you can view DNA (or, more generally, cell biochemistry) as a Turing-equivalent computing system. This
does not mean that cells require Turing-equivalent computational capabilities.

My argument with Sal is basically this: he claims that the fact that DNA is Turing-equivalent means that life requires Turing-equivalent capabilities – which is another way of saying that some part of the basic biochemical processes that make up life is equivalent to a Turing-complete computation. But that is not true – it’s an invalid implication. The fact that living things possess a component that is capable of Turing-equivalent computation
doesn’t mean that living things require Turing-equivalent computation.

To say that life contains DNA, which is a Turing-equivalent computing system, is saying that
Turing equivalent computational capability is sufficient for life. It is not saying
that it is necessary for life. The latter is a much stronger claim, and requires
more evidence.

36 thoughts on “Turing Equivalent vs. Turing Complete”

  1. dakkar

    First, a nitpick: you wrote «any Turing-equivalent/effective computing system can perform any computation». Has Church’s thesis been demonstrated? I’d have written «we have found no computational problems that cannot be solved by a Turing-equivalent computing system, and we think that we never will».
    Second, to show a sample of languages that can require a Turing-complete compiler, I’ll call on C++ (without the artificial limit on the depth of template instantiation), Haskell, any derivative of typed lambda-calculus… in practice, anything whose type system is Turing-complete (cf. the Curry-Howard isomorphism).

  2. Mark C. Chu-Carroll

    dakkar:
    I didn’t think getting into the theoretical worries about the Church thesis was appropriate for the post.
    I know that C++ compilation is Turing complete as a result of the template instantiation rules. (Which is, in my opinion, a wretched botch of language design… Taking a language construct that was originally designed for a rather weak version of parametric types and expanding it into an ugly functional programming language orthogonal to the basic language underneath is just *wrong*.)
    I thought that Haskell and friends were NP-hard, but not Turing complete. I know that for SML, the logic that underlies the type system is constrained enough that type inference can be NP-hard in some pathological cases, but is pretty fast in most common ones. I was under the impression that Haskell’s type system retained the NP-hard properties of ML-style Hindley-Milner type inference. Do you have any references to where I can see what about Haskell’s type system makes it TC?

  3. Blake Stacey

    I’d like to request a segue from this topic into quantum computational classes. IIRC, this came up a few months ago with regard to whether quantum computation lets you do “super-Turing” tasks.

  4. dakkar

    Re Church’s thesis: I concur, that’s why I wrote «nitpick»
    Re C++: I couldn’t describe it better than «wretched botch» 😉
    Re Haskell: I may well be wrong. It’s just that (as far as I understood, and keep in mind I left my PhD in computer science after the first year), once the type system allows recursive types, you can have a full lambda calculus inside of it, which should be resolved at compile time…

  5. Coin

    I’d like to request a segue from this topic into quantum computational classes.
    I’d be quite curious about this as well. Every time I try to figure out exactly what the fundamental “model” is for quantum computations (i.e., what automaton is used to model a quantum algorithm instead of a Turing machine), or how or why Shor’s algorithm works, I get lost in the mess of Greek letters that the Wikipedia articles on the subject consist of. I also still can’t for the life of me figure out what the difference between BQP and BPP is, and frankly I’m still a bit fuzzy on how BPP itself works. I do not know what to read to better this situation.

  6. Mark C. Chu-Carroll

    Coin, Blake:
    That’s not a topic for a segue; that’s a hugely complicated
    subject.
    The short version of it is basically that a quantum computer can be treated like a non-deterministic or probabilistic machine (those are two different categories). How those two kinds of machines work is a huge issue, which would take a series of posts. It’s also something where I’d have to dig out some textbooks I haven’t looked at in over a decade. The one class I took that discussed the probabilistic models of computation was a nightmare, so I’ve avoided the subject ever since. I’ll probably try to get to it at some point, since people keep asking, but not this afternoon 🙂

  7. Blake Stacey

    There was a time when I could have explained Shor’s Algorithm, but that time is at least a year and a half dead. Unfortunately, when we encountered that topic, I hadn’t yet discovered how to type LaTeX as fast as my professors could lecture, so I don’t have a convenient document to pull out and post.
    Ah, a site-specific Google search takes me to this thread of 6 August 2006, in which MarkCC said the following:

    There aren’t any super-Turing machines. One of the things that we study a lot in theoretical computer science is the implications of various differences in how computers and computation could work. The thing is, a “super-Turing” machine – that is, a machine that could solve the halting problem – is impossible per Gödel. You can’t get around the fundamental Gödel limitation no matter what your machine can do: any mechanical system is fundamentally limited by the limits of mathematics itself.
    The big difference between a quantum machine and current hardware is performance. Different computing systems can solve all of the same problems; but some can be exponentially faster on some problems than others. Quantum machines, at least theoretically, have the capability to push things that are currently *intractable* into the realm where they become reasonable.
    Quantum computers aren’t super-Turing. What they *are* is non-deterministic, in the formal sense. The formal computational meaning of non-deterministic is that all possible paths are followed at the same time; so, for example, a search that required exponential time because of the number of paths that needed to be covered could be done in polynomial time with a quantum computer; it would effectively search all paths and collapse on the successful one.
    A quantum machine would make NP-complete problems tractable. The definition of NP is “nondeterministic polynomial time”: that is, things that can be done in polynomial time given a non-deterministic machine; NP complete problems are problems with no known deterministic polynomial-time solutions, but which are polynomial time on a non-deterministic machine, and which are part of an equivalence class so that if any could be solved in polytime, all could be solved in polytime. But since a quantum machine is non-deterministic, it means that the NP problems are polynomial time on the quantum machine.

    I also requested a GM/BM post on quantum computation in that thread. . . . Clearly, those “quantum mind” people have been bothering me for a while!

  8. jsr

    Re: Hindley-Milner’s complexity
    It is worst-case exponential-time. See:

    Assaf Kfoury, Jerzy Tiuryn, and Pawel Urzyczyn.
    An analysis of ML typability.
    Journal of the ACM 41 (1994) 368-398.

    H. Mairson and F. Henglein.
    The complexity of type inference for higher-order typed lambda calculi.
    Journal of Functional Programming 4 (1994) 435-478.

    Once you understand what is going on, you can amuse yourself by writing little ML programs that have monstrously large types.

  9. Xanthir, FCD

    Yep, I remember the quantum computing thread of a while back. After looking over some references, I realized I was mixed up. Quantum computers, being nondeterministic, can transform some intractable problems into tractable ones. I had accidentally conflated the tractable/intractable divide with the computable/noncomputable one.
    Operable quantum computers will still kick the crap out of a whole lot of very difficult problems (and render virtually every major encryption scheme used today or in the past useless in the process), but they won’t give us an oracle (which a super-Turing machine would be).

  10. Walker

    A quantum machine would make NP-complete problems tractable

    Whoa, whoa. Factoring (Shor’s Algorithm) is not NP-complete. Many people I have talked to theorize that QP (quantum polynomial) is between P and NP.
    Furthermore, just because a computation is feasible with respect to time does not mean it is feasible overall. We don’t know how quantum computers will be implemented yet, but it is very possible that in their implementation, energy usage is no longer proportional to the computation time, as it is with normal computers. So while a quantum computer may factor in polynomial time, for all we know Shor’s algorithm requires exponential energy. If so, I just pick a big enough number that you need a few suns to factor it.

  11. Walker

    Correction: I should have said that factoring is not known to be NP-complete. Many people think that it is not, but its fate is uncertain.

  12. Antendren

    I should hope not. If you could prove that something in NP was not NP-complete, you’d have solved P!=NP.

  13. Xanthir, FCD

    Walker:
    Heh, egg on my face again. ^_^ You’re right, factoring isn’t known to be NP-complete. I assumed it was. Well, that knocks down one possibility.
    However, as far as I know there’s no reason to assume that QCs will have extraordinary energy requirements, or anything else of the sort. Would you happen to have any references to that? I’m very interested.
    Antendren:
    Nico is right. NP-complete is a particularly important subset of NP. There are plenty of NP problems that could be shown to actually be in P without any major repercussions. There is, however, a class of problems called NP-hard; any problem in NP can easily (in polynomial time) be transformed into an NP-hard problem. The collection of NP-hard problems which are themselves in NP forms the NP-complete set. Thus, by proving that any NP-complete problem was in P, you’d have proved that P=NP.

  14. Torbjörn Larsson

    I concur with the interest in posts on computational complexity.

    One of the things that we study a lot in theoretical computer science is the implications of various differences in how computers and computation could work.

    Scott Aaronson on Shtetl-Optimized likes to go on and study physical consequences of computation. For example, he notes that P != NP would imply that closed timelike curves are definitely ruled out. (Globally, I guess, so no time machines whatsoever.)
    It also means that classical physical systems that form solutions to NP-complete problems (like solving Steiner trees with soap bubbles) will inevitably exhibit local minima and potentially long relaxation times. He also discusses quantum systems and computation. ( http://portal.acm.org/citation.cfm?id=1052804&dl=ACM&coll=ACM&CFID=15151515&CFTOKEN=6184618 )
    Lastly, he discussed computation and the anthropic principle, which caused some stir on his blog ( http://scottaaronson.com/blog/?p=181 ) and others (Cosmic Variance, for one). He probes an anthropic landscape with cloning, and finds that “No application of the Anthropic Principle can be valid, if its validity would give us a means to solve NP-complete problems in polynomial time.” (I guess that means anthropic universes in string theory can’t be infinite in both time and space.)
    Perhaps computation has physical consequences. That is not naive platonism, so it would be palatable for me.

    “quantum mind”

    Decoherence killed that cat (and Schrödinger’s), didn’t it? At least in the way that Max Tegmark used it, explicitly to show that Penrose’s quantum speculations were just speculations.
    The measurement problem perhaps remains. We discussed that on Wilkins’s blog a while back.

  15. Torbjörn Larsson

    “I guess that means anthropic universes in string theory can’t be infinite in both time and space” – because cloning would then give a polynomial-time solution to NP-complete problems.

  16. Blake Stacey

    Weekend reading: A. Litt et al., “Is the Brain a Quantum Computer?” Cognitive Science 30, 3 (2006): 593-603. PDF link. The abstract reads as follows:

    We argue that computation via quantum mechanical processes is irrelevant to explaining how brains produce thought, contrary to the ongoing speculations of many theorists. First, quantum effects do not have the temporal properties required for neural information processing. Second, there are substantial physical obstacles to any organic instantiation of quantum computation. Third, there is no psychological evidence that such mental phenomena as consciousness and mathematical thinking require explanation
    via quantum theory. We conclude that understanding brain function is unlikely to require quantum computation or similar mechanisms.

    The authors make the sensible point that

    explaining brain function by appeal to quantum mechanics is akin to explaining bird flight by appeal to atomic bonding characteristics. The structures of all bird wings do involve atomic bonding properties that are correlated with the kinds of materials in bird wings: most wing feathers are made of keratin, which has specific bonding properties. Nevertheless, everything we might want to explain about wing function can be stated independently of this atomic structure. Geometry, stiffness, and strength are much more relevant to the explanatory target of flight, even though atomic bonding properties may give rise to specific geometric and tensile properties. Explaining how birds fly simply does not require specifying how atoms bond in feathers.

    On the subject of quantum computation:

    The theory of quantum computation originated with the pioneering ideas of Feynman (1982) and the universal machine proposed by Deutsch (1985). Quantum computing employs qubits, whose states, unlike those of bits in classical digital computers, may exist simultaneously as coherent superpositions of both 0 and 1. Potentially, quantum computers may employ a unique sort of parallelism that would make tractable some problems currently considered intractable via classical computation. In computational complexity theory, a problem is considered tractable if it can be solved in polynomial time, that is, if the time required to solve it increases asymptotically by at most a polynomial function of the size of the input. Problems with optimal solution times increasing faster than this (e.g., as an exponential function of the input size for sufficiently large values) are considered to be intractable.
    The technological potential for quantum computing was first realized in the formulation by Shor (1994) of a polynomial-time quantum algorithm for the problem of factoring a number into its constituent primes, for which the best classical algorithms require exponential (or at least superpolynomial) time. The apparent intractability of prime factorization under classical computation is central to the security of the cryptographic schemes prevalent today. If implemented on a suitable quantum machine, Shor’s algorithm could potentially break the encryptions depended on by governments, banks, and millions of individuals.
    Indisputably, phenomena requiring quantum mechanical explanation exist throughout the brain, and are fundamental to any complete understanding of its structure and physical mechanics. […] However, none of these effects contribute essentially to explaining the overall functionality of the associated system, which can be fully described without explicit appeal to quantum-level phenomena. In our wing analogy, it is unnecessary to refer to atomic bonding properties to explain flight. We contend that information processing in the brain can similarly be described without reference to quantum theory. Mechanisms for brain function need not appeal to quantum theory for a full account of the higher level explanatory targets.

    And on the subject of noise:

    There is substantial evidence that the timescales for individual quantum events in the brain are not in accord with the temporal requirements for influencing
    neural firing in a consequential (i.e., non-noise) manner. Eliasmith (2001) reviewed empirical evidence that demonstrates that specifying firing times within about 1 msec captures nearly all of the available information in a neural spike train. This is because unreliable presynaptic vesicle release, disparate neurotransmitter concentrations in these vesicles, and neural spike timing “jitter” induced by axons limit the possible precision of neural information transfer. As a result, environmental noise severely constrains the useful temporal precision of neural spiking in most of the cortex. The fastest neural timescale sensitivities are found in highly specialized subsystems, such as the barn owl auditory system. Even there, however, precision is only on the order of microseconds (Carr & Friedman, 1999). Calculations by Tegmark (2000) explained why this implies that effects on the timescale of quantum phenomena can be safely averaged out as noise.

    There’s lots of good stuff in this paper, including references into the literature. My favorite bit might be the quotation they give from P. S. Churchland, who said, “The want of directly relevant data is frustrating enough, but the explanatory vacuum is catastrophic. Pixie dust in the synapses is about as explanatorily powerful as quantum coherence in the microtubules.”

  17. Antendren

    Nico and Xanthir, chase through the definitions.
    If P=NP, then all of P is NP-complete. For example, here’s how I would reduce TSP to the shortest path problem (easily solved with a breadth first search): Solve TSP in polynomial time, then create a trivial instance of the shortest path problem with the appropriate solution.
    My claim earlier is simply the contrapositive of the above statement: if there exists something in NP which is not NP-complete, then P!=NP. If P!=NP, then any element of P is such a thing, but now you’re begging the question.

  18. Coin

    My claim earlier is simply the contrapositive of the above statement: if there exists something in NP which is not NP-complete, then P!=NP.
    Antendren, I think there is some confusion over the terminology here – as usually happens when this subject comes up, since the terminology is a bit puzzling. “In NP” just means that the problem is solvable in polynomial time on a nondeterministic machine. Not that the problem requires polynomial time on an ND machine; just that it can be solved in polynomial time on an ND machine.
    Everything in P is also in NP. For that matter everything in L and everything that can be solved in constant time are in NP. NP is a superset of these other classes.
    This is why we have the separate terms “NP-hard” and “NP-complete”, which refer to problems to which everything in NP can be polynomial-time reduced.
    If we wanted to get really pedantic we could say that “P” and “NP-complete” are both subsets of NP, and P != NP if and only if P and NP-complete are disjoint.

  19. Antendren

    Coin, believe me, I fully understand that. As I showed, if P=NP, even something solvable in constant time is NP-complete. Thus you cannot show that addition is not NP-complete without solving a millennium problem.

  20. JBL

    Nitpicking and repeating things that people already know:
    Antendren’s statement is true, although it’s not actually the contrapositive (there is another logical step required). That “if P = NP, all of NP is NP-complete” is true, but only vacuously so (and so it’s not “interesting,” regardless of the status of P vs. NP). Assuming P!=NP, there are many problems in NP which are not NP-complete (including many not in P), which is an “interesting” statement (so long as we don’t know that P = NP). To be precise, Walker should have said, “assuming P!=NP, factoring is not known to be NP-complete.”

  21. Coin

    As I showed, if P=NP, even something solvable in constant time is NP-complete.
    …because the entire solution to the NP-complete problem could be snuck inside of the “polynomial-time reduction” to the constant-time problem. Okay, now I understand what you’re saying. Yes, I suppose that’s technically true then.

  22. Matthew

    As I understand it, quantum computers can’t compute any functions that aren’t already Turing-computable. What they can do is compute in polynomial time certain functions which take exponential time on a standard deterministic Turing machine.
    Why is this?
    Imagine a computer where, at each point the algorithm has to make a choice, it branches off two different threads to try each choice. But, somehow, the fact that it’s running this ever-growing tree of threads in parallel doesn’t slow it down.
    That is how a quantum computer works, my friend (or at least one way of thinking about it – it depends on your interpretation of quantum physics, I’m told).
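    To make that branching picture concrete, here’s a toy sketch (mine, not Matthew’s): a deterministic simulation of the “branch at every choice” model, applied to subset-sum. A real nondeterministic machine would follow all the branches simultaneously; the simulation below has to pay for each one, which is exactly the exponential cost being discussed.

    ```python
    # Deterministic simulation of a nondeterministic search: at each element
    # we "branch" into take-it and skip-it threads. An ND machine explores
    # all branches at once; this simulation walks up to 2^n of them, one by one.
    def subset_sum(nums, target):
        branches = [(0, 0)]             # live "threads": (index, running sum)
        while branches:
            i, total = branches.pop()
            if total == target:
                return True             # accept if any branch accepts
            if i < len(nums):
                branches.append((i + 1, total))             # branch: skip nums[i]
                branches.append((i + 1, total + nums[i]))   # branch: take nums[i]
        return False

    print(subset_sum([3, 9, 8, 4], 12))  # True (3 + 9, or 8 + 4)
    ```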

  23. Wesley Tanaka

    “being Turing equivalent is actually remarkably trivial”
    … but not for a machine that exists. A Turing machine includes an infinitely long tape, which could be anywhere from hard to physically impossible to replicate in real life.

  24. Mark C. Chu-Carroll

    Wesley:
    Since you’re nitpicking, I’ll nitpick back 🙂
    A Turing machine does *not* include an infinitely long tape; it contains an arbitrarily long tape. That may sound like quibbling, but it’s actually an important distinction.
    A Turing machine cannot access an infinite amount of storage. An algorithm that requires an infinite amount of storage is not computable on any effective computing system.
    The distinction is that you can’t compute how much space will be required for a particular computation in advance. The only way to know how much space a computation needs is to run it, and when it halts, check how much it used.
    It’s true that we don’t have any arbitrary resource machines; but in general, when we talk about concrete devices in the real world, what we look for is practically unbounded resources. So for example, if you think about a BrainF*ck style language, you can implement it on a PC by using a pointer for the tape head, and letting it run off into memory. Eventually you’ll run out of space – but given an address space of 4GB of physical memory, for all practical purposes, you can treat it as effectively unbounded.
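    To make the BrainF*ck picture concrete, here’s a minimal sketch (my own, in Python rather than raw memory): the tape is just a list that grows whenever the head walks off the right-hand end. At any given moment the tape is finite; it is merely extended on demand, which is exactly the arbitrarily-long-but-not-infinite distinction above.

    ```python
    # A BrainF*ck-style interpreter whose tape grows on demand: arbitrarily
    # long in principle, but always finite at any given moment.
    def run(program: str) -> list:
        tape, ptr, pc = [0], 0, 0
        while pc < len(program):
            op = program[pc]
            if op == '>':
                ptr += 1
                if ptr == len(tape):
                    tape.append(0)      # extend the tape only when needed
            elif op == '<':
                ptr -= 1
            elif op == '+':
                tape[ptr] += 1
            elif op == '-':
                tape[ptr] -= 1
            elif op == '[' and tape[ptr] == 0:
                depth = 1
                while depth:            # jump forward past the matching ']'
                    pc += 1
                    depth += {'[': 1, ']': -1}.get(program[pc], 0)
            elif op == ']' and tape[ptr] != 0:
                depth = 1
                while depth:            # jump back to the matching '['
                    pc -= 1
                    depth += {']': 1, '[': -1}.get(program[pc], 0)
            pc += 1
        return tape

    print(run('++>+++'))     # [2, 3]
    print(run('+++[->+<]'))  # [0, 3]: a loop moving one cell into the next
    ```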

  25. Thony C.

    A Turing machine does *not* include an infinitely long tape; it contains an arbitrarily long tape. That may sound like quibbling, but it’s actually an important distinction.
    Didn’t I already say this once a long way bach up the thread?

  26. Coin

    Didn’t I already say this once a long way bach up the thread?
    Technically no, you said it in the other thread 🙂

  27. Thony C.

    Didn’t I already say this once a long way bacK up the thread?
    Technically no, you said it in the other thread 🙂
    OOPPS! Sorry!

  28. Harald Korneliussen

    “The distinction is that you can’t compute how much space will be required for a particular computation in advance. The only way to know how much space a computation needs is to run it, and when it halts, check how much it used.”
    You said elsewhere that constructing languages that aren’t Turing complete is hard, but this distinction shows how it can be done. As far as I know, SPARK Ada isn’t Turing complete because it requires all programs to be bounded in space: no dynamic allocation, no recursion.
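    Here’s a toy sketch along those lines (mine, and only loosely analogous to what SPARK does properly, at the language level): a static check that rejects Python functions using unbounded constructs, in this case while loops and direct self-recursion. It’s a crude approximation that misses plenty (mutual recursion, unbounded iterators), but it shows the shape of the idea.

    ```python
    # A crude static check in the spirit of "no recursion, no unbounded loops".
    # It only catches while loops and direct self-recursion; real languages
    # like SPARK enforce such restrictions properly, at the language level.
    import ast
    import inspect

    def looks_bounded(func) -> bool:
        tree = ast.parse(inspect.getsource(func))
        for node in ast.walk(tree):
            if isinstance(node, ast.While):
                return False            # unbounded iteration
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id == func.__name__):
                return False            # direct self-recursion
        return True

    def fact(n):                        # bounded: a fixed-count for loop
        r = 1
        for i in range(1, n + 1):
            r *= i
        return r

    def collatz(n):                     # unbounded: nobody knows if it halts!
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
        return n

    print(looks_bounded(fact), looks_bounded(collatz))  # True False
    ```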

  29. Torbjörn Larsson

    neural spike timing “jitter” induced by axons

    Indeed. Biochemistry “machinery” is noisy, with reaction distributions and even random reversals at times. During a spike, some gates close randomly instead of staying open, and conversely. It is normally robust enough, but not precise or efficient. (Not being efficient doesn’t sound like a QM implementation either, btw.)

  30. Michael Ralston

    Harald: But that’s generally considered an implementation detail that’s beneath notice.
    The reason people care about Turing-equivalence in a language is because they want to be able to do various things that are Turing-complete. Many of these can be done with some small bound on memory! And most of those that can’t will also require more time than we have, so who cares?
    So a language that’s limited to some finite maximum amount of memory, while it is incapable of doing what an arbitrary Turing machine could do, is typically capable of doing more or less whatever a practical Turing machine can do in practical amounts of time. (and yes, that’s very qualified. That’s because these sorts of things are fuzzy and that’s part of why theoretical computer science tends to skip over them.)

  31. David Harmon

    ” To say that life contains DNA, which is a Turing-equivalent computing system, is saying that Turing equivalent computational capability is sufficient for life. It is not saying that it is necessary for life. The latter is a much stronger claim, and requires more evidence.”
    I’d also point out that even if we found a Turing-complete “subprogram” operating among the cellular machinery, that would not necessarily prove that TECC was necessary. (It would be an impressive feat for evolution’s trophy shelf!) On the other hand, it’s worth noting that “reality in general” has TECC, and our local conditions tend to display it strongly.

  32. Salvador T. Cordova

    Mark wrote:
    To say that life contains DNA, which is a Turing-equivalent computing system, is saying that Turing equivalent computational capability is sufficient for life. It is not saying that it is necessary for life. The latter is a much stronger claim, and requires more evidence.

    You missed the sense of the word “necessary”, and thus the way you are arguing over my usage becomes an equivocation.
    One can say, metabolism is a necessary condition of life, metabolism is not a sufficient condition for life. A Turing-equivalent computing system is a necessary condition for life as we know it, it can in no wise be a sufficient condition…
    You’ve equivocated the sense of the “necessary”.
    Salvador

  33. Mark C. Chu-Carroll

    Sal:
    You’re the one who’s missing the distinction. We know that DNA in a living cell is capable of performing Turing-equivalent computation. But we don’t know whether that computational power is a required capability. Unless you can show something that DNA does in a real cell that demonstrates that it requires Turing-equivalent computational capability, you can’t assert that life requires Turing-equivalent computational capability.
    As I said in the post: given the nature of computation, it’s not surprising to see a TE mechanism. As soon as you start doing anything like mechanical computation, you almost inevitably fall into the TE rat-trap.
    If you want to argue that life requires turing-equivalent computational capability, then you have to demonstrate that requirement. A demonstration would be quite simple: just show one thing done by DNA in a living thing that requires a Turing-equivalent computing system.

  34. Jonathan

    Hi Mark, I’m going to jump into the fray, though I see now that the conversation has been dead for a few days. I happened to come across this thread while searching for a related topic, and I can’t resist commenting to see how you’ll respond.
    You said:

    To say that life contains DNA, which is a Turing-equivalent computing system, is saying that Turing equivalent computational capability is sufficient for life. It is not saying that it is necessary for life. The latter is a much stronger claim, and requires more evidence.

    Without approaching the topic of Turing-equivalence or -completeness yet, I’d like to point out that it seems you’ve gotten your necessaries and sufficients mixed up. Life – as we know it – is always accompanied by DNA, which is to say that whenever life is present, DNA is present. However, the opposite is not true; it is *not* true that whenever DNA is present, life is present also.
    How do I mean this? I mean this by virtue of the fact that any organism that we define as being alive has inside of every cell a strand of DNA. On the other hand, an organism may be dead and still have DNA, or there may be a lone strand of DNA floating around, minding its own business, and not actually going through the processes of life.
    From this we can conclude that DNA is necessary but *not* sufficient for life. Taking the contrapositive of that statement, we can also conclude that the presence of life is sufficient for the presence of DNA, but not necessary for it.
    (Note that we may in the future discover that life can operate without DNA, but I’m speaking strictly of the broad forms of life that have evolved here on planet Earth.)
    Now, the part about Turing-equivalence and completeness I’m rather fuzzy on. As far as I can see, considering what I’ve been taught about Turing machines, DNA plus a few other cellular components seem to me to be the spitting image of one. DNA, strictly speaking, is a tape of arbitrary – and nearly infinite – length that contains “instructions” in the form of a four-letter alphabet. Codons – the “words” – are segments three letters long; with four letters in segments of three, there are sixty-four possible unique words, which are read by the ribosome and code for amino acids (although the words map onto only twenty amino acids), with particular words for starting and stopping the reading process. This is an extremely simplified version of the process, but I think it’s sufficiently robust for the present discussion.
    Would you agree that the DNA (the tape) plus the ribosome (the read head) form what we could call a Turing machine? As far as writing goes, either new segments of DNA can be inserted into the strand, or mutations occur.
    If it’s enough to say that DNA is a TM, what would stop us from concluding that it is Turing-equivalent, or that the processes that it execute are Turing-complete?
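    To give that tape-and-read-head picture something concrete to point at, here’s a minimal sketch (mine; the tiny codon table is a hypothetical stand-in for the real 64-entry genetic code): a reader that finds the start word, translates three-letter words, and halts on a stop word. Note that as written it only reads; a device that scans a tape without ever writing to it is just a finite transducer, not a full Turing machine, so the writing step matters for the question being debated.

    ```python
    # Reading a DNA "tape" in three-letter codons: find the start word, step
    # the read head one codon at a time, stop at a stop word. The table below
    # is a tiny hypothetical stand-in for the real 64-codon genetic code.
    CODON_TABLE = {'ATG': 'Met', 'GCT': 'Ala', 'TGG': 'Trp',
                   'TAA': 'STOP', 'TAG': 'STOP', 'TGA': 'STOP'}

    def translate(dna: str) -> list:
        start = dna.find('ATG')                 # position the "read head"
        if start < 0:
            return []                           # no start codon: nothing to read
        protein = []
        for i in range(start, len(dna) - 2, 3):  # advance one codon per step
            word = CODON_TABLE.get(dna[i:i + 3], '?')
            if word == 'STOP':
                break
            protein.append(word)
        return protein

    print(translate('CCATGGCTTGGTAA'))  # ['Met', 'Ala', 'Trp']
    ```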

  35. Mark C. Chu-Carroll

    Jonathan:
    You’re making the same mistake that Sal did – that is, expanding the context of the discussion from the computational capability of a cell to the existence of the cell.
    In terms of the computational capabilities needed by a cell, Turing equivalence is sufficient, but it has yet to be shown that it is necessary.
    It’s been shown in several places that DNA possesses Turing-equivalent computational capabilities. The question isn’t “Can DNA be a Turing-equivalent computing device?” but “Does the cell require Turing-equivalent computing capability?”
    Look at the example of the cellular automaton in the original post. It is absolutely true that Conway’s “Life” is a Turing-equivalent computational system – in fact, there’s a Turing machine implementation as a Life playfield. But there are millions of Life playfields that have been created by hobbyists over the years, and to the best of my knowledge, before the Life Turing machine was built, none of them actually did any computations that required Turing-equivalent capabilities.
    Just saying that DNA is present in cells doesn’t answer the question that I’m asking. As I keep repeating: Turing-equivalence is not rare. Virtually anything
    that can perform mechanical computations ends up being Turing equivalent unless artificial restrictions are placed on it. So the fact that DNA is TE is not surprising. But that doesn’t tell us whether the life of a cell requires that much computational power. It’s possible that no process in a living cell requires Turing-equivalent computational capabilities – that DNA is overpowered in some sense – but that because of the nature of computation, it’s easier to produce a TE mechanism than a sub-TE one.
    To say that a cell requires Turing-equivalent capability is to say that some process in the cell
    performs a Turing-complete computation. And the original post that started this comment thread defines what a Turing-complete computation is: a computation that cannot be performed on anything with less computational capability than a Turing machine.
    Do cells require TE computation? I don’t know. I really have no clue. I’m not aware of any process in a cell that corresponds to a Turing-complete computation, but I know very little about cell biology, so it may be well-known among cell-biologists or computational biologists that there is some TC process that is necessary for the life of the cell. It wouldn’t surprise me if there was, but then it also wouldn’t surprise me if there wasn’t. But if you want to make the case that cells require TE computation, then you need to show one – just one – Turing complete computation that is needed for the proper functioning of a living cell.

  36. Mike Hogan

    Interesting blog — just ran across it.
    I’m interested in this because there is a dopey “truism” among those promoting Domain Specific Languages that DSLs trade “expressiveness” for “simplicity”. In fact, there is no such trade-off as long as the languages are in the same computational class. If that class is “Turing equivalent”, then it’s a one-step proof.
    On type inference, I did a little work on parameterized polymorphic regular tree types a little while back (relevant to type-checking XML documents that contain equivalent information expressed using a different schema), and if I recall correctly, type inference on such systems is worst-case NP-complete (and frequently undecidable).
    BTW, aren’t all NP-complete problems ultimately pseudo-polynomial (i.e. exponential in powers of 2)?

