Basics: The Halting Problem

Many people would probably say that things like computability and the halting
problem aren’t basics. But I disagree: many of our basic intuitions about numbers and
the things that we can do with them are actually deeply connected with the limits of
computation. This connection of intuition with computation is an extremely important
one, and so I think people should have at least a passing familiarity with it.

In addition to that, one of the recent trends in crappy arguments from creationists is to try to invoke ideas about computation in misleading ways – but if you’re familiar with what the terms they’re using really mean, you can see right through their
silly arguments.

And finally, it really isn’t that difficult to understand the basic idea.
Really getting it in all of its details is a bit harder, but to grasp the basic idea that there are limits to computation, and to get a sense of just how amazingly common uncomputable things are, you don’t need to understand it in depth.

It makes sense to start with a bit of history. Before people actually built real
computers, there was a lot of interest in what could be done by a purely mechanical
process. In fact, the great project of mathematics in the early 20th century was the
attempt to create a perfect, complete mathematics: a new formalization of math in
terms of a logic in which every true statement was provably true; every false
statement was provably false; and, given any statement, you could follow a completely
mechanical process to find out whether it was true or false.

You most commonly hear about how Gödel wrecked that with his incompleteness theorems. But Alan Turing did something that demonstrates the same basic principle: defining the limit of what could be done by a purely mechanical computation process. In the end, what Turing showed about computability is really the same concept as what Gödel showed about logic with incompleteness: the limits of
mechanical/axiomatic processes.

Fundamentally, what Turing did was prove that there are things that cannot be
computed. He did that in two ways: first, by defining an easy-to-understand problem
which cannot be solved by any mechanical device; and second, by showing how common
non-computable things are.

The first of Turing’s two proofs is commonly known as the halting problem.
The simplest version of the halting problem is the following. Suppose you have
a computing machine M, and some program P which can be run on M. Can you write a program which, by looking at P, can determine whether or not running P will eventually stop?

I actually find the more formalized presentation to be clearer. Take a computing
machine, which we’ll call φ. (Don’t ask why φ; it’s just tradition!) Now,
suppose we have a program P for φ which computes some function on its
input. We’ll call that function φ_P. So φ_P(i) is the
result of running the program P on the input i using the machine φ.

So the halting problem is really the question: given a machine φ, is there
a program H such that φ_H(P, i) returns YES if and only if φ_P(i) will halt, and returns NO if φ_P(i) would not halt? The program H is called a halting oracle – so the question is: is it possible to write a halting oracle?

The answer is no. You can’t write a program like that – no such program can possibly exist. And it’s actually really quite easy to show you why – because if there were a halting oracle H, it’s very easy to write a program for which H is guaranteed to get the wrong answer.

Take the halting oracle program H. Write a program T which includes
H, and which does the following:

  • If φ_H(T, i) = YES (meaning that the halting oracle says that T
    will halt on input i), then T will run an endless loop, thus never halting.
  • If φ_H(T, i) = NO (meaning that the halting oracle says that T will
    not halt on input i), then T will immediately halt.
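
To make the construction concrete, here’s a minimal sketch of T in Python. Both halting_oracle and own_source are hypothetical names introduced purely for illustration – neither can actually be implemented (that’s the whole point); the sketch only shows why, if a halting oracle existed, T would defeat it:

    def halting_oracle(program, i):
        # Hypothetical: would return "YES" iff running `program`
        # on input i eventually halts. No such function can exist.
        raise NotImplementedError

    def own_source():
        # Hypothetical: would return T's own source text; a
        # quine-like technique makes this possible in practice.
        raise NotImplementedError

    def T(i):
        if halting_oracle(own_source(), i) == "YES":
            while True:  # the oracle says T halts, so loop forever
                pass
        else:
            return 0     # the oracle says T loops, so halt immediately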

So T completely defeats the halting oracle. No matter what you do, for any program that claims to be able to tell whether other programs halt, there is a program for which it can’t get the right answer. There’s always a program that, for a given halting oracle, will do exactly what the oracle says it won’t. And that, in turn,
means that there are some functions that simply cannot be computed. And here’s the neat twist: given a halting oracle H, it’s easy to write a program which will
generate the program that defeats H – that is, the Halting oracle is non-computable, but if it were computable, then the program that defeats it would
also be computable.

Turing, genius that he was, was not satisfied with that as a result. He wanted to
understand more about what could and could not be computed. Did you need to
have a program which looked at itself to cause a problem? Was it just meta-programs that were the problem? Or were there simply natural functions which just could not be computed?

The answer was that there are lots of things that can’t be computed – in fact, most functions are not computable! Just like most numbers are irrational, most functions are not computable. I’m not going to go through the proof here – but here’s a very basic sketch:

Suppose we limit ourselves to just thinking about functions on the natural numbers: {f | f : N → N}. How many functions are there in that set? It’s an infinite set – and it’s an infinite set larger than the infinite set of natural numbers. For a computing machine, you can map every one of its programs to a natural number – so the computing machine can only have as many programs as there are natural numbers. So there have to be functions in that infinite set that no program computes.
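
To give a feel for the “programs map to natural numbers” step, here’s a small runnable Python sketch. The base-256 encoding is just one arbitrary illustrative choice, not part of Turing’s construction:

    def program_to_number(source):
        # Read the program's source text as one big base-256 integer,
        # so every distinct program gets a distinct natural number.
        return int.from_bytes(source.encode("utf-8"), "big")

    def number_to_program(n):
        # Invert the encoding: recover the source text from the number.
        length = (n.bit_length() + 7) // 8
        return n.to_bytes(length, "big").decode("utf-8")

    n = program_to_number("print('hello')")
    assert number_to_program(n) == "print('hello')"

Since the encoding is invertible, there can be no more programs than there are natural numbers.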

31 thoughts on “Basics: The Halting Problem”

  1. Binil Thomas

    Mark,
    Thanks for explaining this. I could not follow the last part of your article.
    Suppose we limit ourselves to just thinking about functions on the natural numbers: {f | f:N→N}. How many functions are there in that set? It’s an infinite set – and it’s an infinite set larger than the infinite set of natural numbers.
    Can such an assertion be made, since both sets are infinite?
    For a computing machine, you can map every one of its programs to a natural number
    Did you mean that since each program is distinct, each can be named with a natural number – or am I missing something?
    Thanks,
    Binil

  2. Anonymous

    Can such an assertion be made, since both sets are infinite?

    Hey! That’s a wonderful suggestion for another “basics” post. (Binil: In short, yes, that assertion makes sense, and yes, you’re missing something. Hopefully something that will be covered in another basics post, or by the other commenters shortly)

  3. Daniel Martin

    I remember how bizarre I found it when I realized a few years ago that there are only countably many English-language descriptions of numbers. This means that there exist numbers between 0 and 1 which cannot be described. (“Described” here means a description which specifies the number uniquely. So “one-half square root of two” is an adequate description as is “one over pi”, but “irrational” isn’t)
    I sometimes imagine something out of the Cthulhu mythos bringing one of those numbers to drive a mathematician insane. (or not; without the axiom of choice it’s awfully hard to get ahold of those things, and I don’t know how the Elder Gods feel about the Banach-Tarski paradox)
    By the way, signing in with typepad only seems to cause my comments to appear as “Anonymous” (as in my short comment above)

  4. Tordek

    Aleph-0 vs Aleph-1 if you want the explanation of different infinite sizes.
    An example of the infinitely many functions F:
    F_1(n) = 1
    F_2(n) = 2
    …
    F_m(n) = m
    There you have infinitely many functions: each receives one variable and returns m (the function’s own number). Now, since you’ve “spent” every natural number numbering those functions, you just need to add one more, and you’ve run out of numbers (there is one more). If that is not enough, you also have another group of infinitely many functions: Add_n(x) = x + n.
    Repeat for multiplication, division, and all the operations.
    Correct me if I’m wrong.

  5. Zombie

    Tordek, I’m not sure that proof quite works; you could say the same thing about rationals: n/1 exhausts all the integers, but there is another mapping that is 1:1. In fact, the set of functions you give is countable, because they’re all computable and can be encoded with a finite bit string.
    However, if you assume integer functions are countable, then you can label every single one of them F_m(n) for some integer m. Now pick a new function G where G(n) != F_n(n). G is different from F_n for all n, but ought to be in the set of all F_m if integer functions are countable, which is a contradiction. So integer functions of integers are not countable.
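    Here’s a minimal runnable Python sketch of that diagonal step, using a toy stand-in for the assumed enumeration (the names are just for illustration):

        def diagonal_escape(F):
            # Given a claimed enumeration F, where F(m) is the m-th
            # integer function, build G so that G differs from every
            # F(m) at input m -- so G can't appear anywhere in the list.
            def G(n):
                return F(n)(n) + 1
            return G

        F = lambda m: (lambda n: m * n)  # toy "enumeration": F(m)(n) = m*n
        G = diagonal_escape(F)
        print(G(3), F(3)(3))             # prints "10 9": G disagrees with F(3) at 3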

  6. Chad Groft

    MarkCC:

    Take the halting oracle program H. Write a program T which includes H, and which does the following:

    * If φ_H(T, i) = YES (meaning that the halting oracle says that T will halt on input i), then T will run an endless loop, thus never halting.

    * If φ_H(T, i) = NO (meaning that the halting oracle says that T will not halt on input i), then T will immediately halt.

    This argument doesn’t work, because it’s not clear that you can write such a T. You certainly couldn’t embed the source code of T in the source code of T, for example.
    What you actually have to do is make the program part of the input, as follows: Suppose you can write H. Write a program T which halts on input i iff program i does not halt on input i. You can certainly do this, given H. Now, what happens if you run program T on input T? By the program’s own logic, it halts iff it does not halt. This is a contradiction, so our assumption (that a program H exists) is false.
    It’s almost exactly like Russell’s Paradox (T halts on precisely those programs that do not halt on themselves; does T halt on itself?). The trick is treating the input i both as a program and as input to a program.

  7. Chad Groft

    Binil/Tordek/Zombie: Yes, there are different infinite cardinalities. What Zombie says, both regarding Tordek’s “proof” and a corrected proof, is entirely correct. However, the distinction is not quite as simple as Aleph-0 versus Aleph-1.
    To be more specific, Aleph-0 (also called Beth-0) is the smallest infinite cardinality, that of the natural numbers. Aleph-1 is the very next smallest cardinality; it’s uncountable, but a set of size Aleph-1 embeds into any uncountable set. (No, it’s not obvious that there should be such a cardinality, but given the usual axioms of set theory (ZFC) there is.) Aleph-2 is the next cardinality after Aleph-1, and so forth.
    The set of all functions f: N -> N actually has the same cardinality as the set of all subsets of N, denoted P(N). This cardinality is sometimes called Beth-1. Beth-2 is the cardinality of PP(N), etc.
    Now, Beth-1 and Aleph-1 may well be the same. The statement “Beth-1 = Aleph-1” is one form of the well-known Continuum Hypothesis, and it may be true; Gödel showed it was consistent with ZFC. However, it may be false; Paul Cohen showed that its negation is also consistent with ZFC, and in fact Beth-1 can be “forced” arbitrarily high in the Aleph hierarchy (which goes much higher than just Aleph-n for all natural numbers n).
    Once again, we’ve gone much further than a “basics” article.

  8. noone

    Chad Groft: Actually, it does work, you just need some more basics – which I won’t give here, because it is a bit much, but provided you believe me that any sufficiently powerful programming language is equivalent, we actually can construct T explicitly (in Pseudocode – feel free to translate it to something you know):
    function T(x)
        if H(x) then        // if the program with index x halts (on input x),
            while true do ; // do an infinite loop.
        else                // but if it does not halt,
            return 0.       // return 0 and stop.
    Except for the call to the halting oracle H, this function is obviously computable; therefore T is computable if and only if H is computable.
    As we have constructed T, we can compute its index (easily, as it is already represented as a number within the computer). If we apply T to its own index, we get that T halts if and only if T does not halt, which is a contradiction. Therefore, T cannot be computable, and by the previous observation neither can H.
    As I said, this contains some assertions which are also basic, but too numerous for such a comment, so you better look them up yourself.
    Also, this commenting facility doesn’t work with text browsers.

  9. Michael Ralston

    The Kleene Recursion Theorem strikes!
    It’s possible to, given ‘programming languages’ that satisfy certain properties (which Turing machines do), actually take any program, and algorithmically modify it to produce a new program that has the same output as the first program if that first program were given itself as input.

  10. Mark C. Chu-Carroll

    Chad:
    If you’ve got a Gödel numbering of all programs in a language, it’s possible via a quine-like technique to include T’s number in itself.

  11. Confused

    Hey Mark,
    I’ve been following your blog for a couple of days now and it’s very interesting. But there are a couple of things that I don’t understand in this post.
    For instance:
    Take the halting oracle program H. Write a program T which includes H, and which does the following:
    * If φ_H(T, i) = YES (meaning that the halting oracle says that T will halt on input i), then T will run an endless loop, thus never halting.
    * If φ_H(T, i) = NO (meaning that the halting oracle says that T will not halt on input i), then T will immediately halt.
    So T completely defeats the halting oracle. No matter what you do, for any program that claims to be able to tell whether other programs halt, there is a program for which it can’t get the right answer. There’s always a program that, for a given halting oracle, will do exactly what the oracle says it won’t. And that, in turn, means that there are some functions that simply cannot be computed. And here’s the neat twist: given a halting oracle H, it’s easy to write a program which will generate the program that defeats H – that is, the halting oracle is non-computable, but if it were computable, then the program that defeats it would also be computable.
    This part defeated me. 🙂
    Could you possibly explain this at an even more basic level ?

  12. Xanthir, FCD

    Confused:
    Hopefully I can clear this up a bit. It is a bit confusing, especially if you don’t grasp that programs can include themselves within themselves (anything that’s equivalent to a Turing machine can do this).
    All right, so you’ve got the Halting Oracle. It takes some program P and some input I, and tells you whether or not P(I) will halt. Calling the Oracle looks like this: H(P,I). H will return YES if P(I) halts, and NO if P(I) loops forever. (Note that H doesn’t actually *run* P(I) – if it did that, it would loop endlessly whenever P did. It just knows how to analyze P so that it can tell what it will do without running it.)
    The important thing to hold in mind is that H can really take *any* program. If we ran, say, H(H,I) (passing H to itself), then it would always return YES, because the Oracle is guaranteed to halt and give an answer.
    Now, again, P can be any program at all. But we’re going to make a very special program, called T. T contains the code for H (so it can run it) and the code for itself (which is possible, though weird), and basically does just two things (I’ll write this in pseudocode):
    Program T(data I)
    {
        If (H(T,I) == YES)    (H says that T(I) will halt)
            Then loop forever (don’t halt)
        If (H(T,I) == NO)     (H says that T(I) will loop forever)
            Then stop         (halt immediately)
    }
    So, if H(T,I) would return YES, it has to return NO. If it would return NO, it has to return YES. It can’t do both, and so either answer produces a paradox (similar to the “This statement is false” paradox – both work on self-reference). Thus, the Halting Oracle cannot exist (proof by contradiction).
    The last paragraph of Mark’s is just a nice bonus. If the Halting Oracle actually existed (was computable), then T is guaranteed to be computable (it must exist as well). This prevents any objections of, “Well, that would be true IF we could construct T, but we can’t, so the Halting Oracle is still possible.”
    Does that help?

  13. grammar SS

    “Can you write a program which can by looking at P can determine whether or running P will eventually stop?”
    → Can you write a program which, by looking at P, can determine whether or not running P will eventually stop?

  14. krisztian pinter

    back in my university days, i thought math was a good but boring topic, with no surprises. recently, i read a lot of absolutely crazy math things, so i had to revise my opinion. math contains more surprises than, say, chemistry. to be on topic, here is a related outrageous statement: most of the real numbers are not computable.
    http://www.math.hmc.edu/funfacts/ffiles/30004.8.shtml
    to be precise, the article is about writing out the decimal digits, but i think it is not the major point here.

  15. Chad Groft

    If you’ve got a Gödel numbering of all programs in a language, it’s possible via a quine-like technique to include T’s number in itself.

    You’re probably right, but it’s certainly not clear. The version I proposed above is the one I’ve always seen in discussions of the subject, and it’s more immediate. To make it especially clear, one can think of the numbers as equivalent to text files, which one can then try to interpret as source code or as data.

  16. Xanthir, FCD

    krisztian pinter:
    Yep, that is a crazy but true statement. Most reals are not computable, and most languages aren’t Turing-recognizable.
    Basically, if a number is computable, then there is (obviously) a program that will compute it. You can translate any program into an integer (one way is to use an analog of Gödel numbering). Thus, the set of all possible programs is no larger than the set of integers. The set of reals is strictly larger than the integers, so there must be an infinite number of uncomputable reals!
    The proof that they give is probably easier, though, especially if given soon after the proof that the rationals are the same size as the integers (both proofs involve organizing the set into an infinite ordered set of finite groups).

  17. Foxy

    I think I understand that there are so many more uncomputable things than computable.
    However, the only description I’ve ever heard of them is the halting problem, or modifications therein.
    Are there any other distinct concrete examples of uncomputable problems?
    Or, sort of like NP problems mapping on to each other, could you establish some sort of equivalence between uncomputable problems?
    Honestly, I don’t know if I really understand what that means. But it sounds like an interesting question.

  18. Stephen Uitti

    Let’s say you have a program, p, which runs under Linux on an X86. Can you write a program q that can predict the output of p?
    Maybe. If p is inefficiently coded, a program q might execute quicker, and finish first. Or perhaps a copy of p, which we’ll call q, could be compiled with optimization turned on, and therefore finish first. If p takes sufficiently long to run, it might be quicker to wait until next year, when the new x86 is faster. Then a copy of p, which we’ll call q, could predict the result from p. For a little while, there was a program that could translate x86 code to Alpha native code – and run it faster than any existing x86.
    I took Operating Systems with Peter Denning. Denning came up with a proof that using a Least Recently Used algorithm was the best one could do for page replacement. Interesting, and gospel in the industry for years, but sadly wrong. One of the assumptions was that the operating system couldn’t predict the page access patterns of random programs. This is, of course, false. Many programs show sequential or random memory access patterns. The operating system can see this, recognize it, and perform much better than LRU. For example, if the program is doing sequential access, the OS can perform read ahead (and read more than a page at a time which can be much faster with disk drives), and as soon as a page is actually used, it can instantly schedule that bit for page out. If the page is clean (wasn’t written to), it can be simply deallocated. This sort of OS can allow programs to get closer to the performance of sequential programmed I/O using the memory system. Very practical.
    So, the difference between theory and practice is that, in theory, they’re the same. Great theory is no excuse to stop thinking. That would be prejudice.
    One of the old jokes was this: The new Cray supercomputer is so fast that it can compute an infinite loop in 17 minutes!
    Another: The new Cray supercomputer is so fast, that it takes five halt instructions in a row to get it to stop!
    On the subject of fast computers, i understand that there is a Linux laptop in the International Space Station. At 17,000 miles per hour, it’s the fastest computer ever!

  19. Ørjan Johansen

    I find it interesting that nearly every result mentioned in this article and its comments so far is frequently proved in essentially the same way: using the technique of diagonalization. That is, by making some kind of “function” which applies any argument in a set to itself. Either to prove that the function is not in the set, or to get a function that somehow evaluates to itself when applied to itself.
    This includes:
    The halting problem
    Gödel’s incompleteness theorems
    Kleene’s recursion theorem (I think)
    The construction of quines
    Russell’s paradox
    There being more real numbers than integers
    There being more integer functions than integers

  20. Mark W

    I wanted to know whether it is possible to write a perfect spam filter engine. My hunch, based on this article, is unfortunately no.

  21. Chad Groft

    Are there any other distinct concrete examples of uncomputable problems?

    Many. Let C be the set of all (partial) computable functions N -> N. Take a subset S of C, not the empty set, not all of C. There is no program that will accept an integer n as input and tell you whether it codes for a program whose function is in S. So there’s no program which will tell you
    * whether program n is defined on all inputs;
    * whether two programs m and n define the same function;
    * whether program m is an extension of program n (m and n are input).
    You can probably come up with some yourself.
    Moving away from computability theory per se, there are a number of problems in group theory that are not solvable, but I don’t see anything on group theory in the archives, so I’ll stay away from that.
    Let’s look at topology instead. Many “nice” topological spaces can be triangulated; that is, divided into (a finite number of) vertices, edges, triangles, tetrahedra, and so on up. A triangulation can serve as input to a program; one inputs the number of vertices, the pairs of vertices spanned by some edge, the triples spanned by some triangle, etc. There is no program which will take two triangulations as input and determine whether their spaces are equivalent, i.e., homeomorphic. There are other reasonable ways to specify a space (cutting out by algebraic equations comes to mind, but there are others), and they suffer from the same problem.

  22. Ørjan Johansen

    Are there any other distinct concrete examples of uncomputable problems?
    Or, sort of like NP problems mapping on to each other, could you establish some sort of equivalence between uncomputable problems?

    Yes. In fact the whole theory of NP-completeness and complexity has borrowed basic ideas from the older theory of Turing-completeness and computability/recursiveness.
    Finding uncomputable problems or Turing-complete systems by reducing to previously known ones (and ultimately to Turing machines) inspired finding NP-complete problems in the same way. There are long lists in each case, from many different areas of discrete mathematics. (Just the other day I saw a topic at Lambda the Ultimate about someone proving the undecidability of a programming language type system in this way.)
    As another example, the basic proof that there is an endless succession of distinct complexity classes (exponential time problems not solvable in polynomial time, for example) is yet another adaptation of diagonalization and the halting problem.

  23. Torbjörn Larsson

    This thread gives me the impression that while math has been busy mainly with discovering what is possible, computer science is busy discovering what is not – which is also useful. The only math impossibility result this layman can remember right now is an example of a differential equation that has no solutions, together with full descriptions of its singularities.

    there is a Linux laptop in the International Space Station. At 17,000 miles per hour, it’s the fastest computer ever!

    I hate to nitpick a good joke, but this should be the fastest directly-accessed computer. The Pioneers’ and Voyagers’ onboard computers must be the fastest.

    I wanted to know whether it is possible to write a perfect spam filter engine. My hunch based on this article seems to be no unfortunately.

    Interesting. I can’t help thinking that these things could be true and possibly connected by halting problems, and so extending Ørjan’s list:
    * Not possible to write a perfect spam filter. (Mark W’s hunch.)
    * Not possible to write a perfect compression program. (Torbjörn L’s hunch, from Mark W’s hunch.)
    * Not possible to catch all patterns in one complexity measure. (Murray Gell-Mann.)
    * Not possible to generally find simplest algorithms for given functions. (Chaitin?)
    * Not possible to find one optimization or filter method that handles all inputs. (NFL theorem?)

  24. Zombie

    The “perfect spam filter” isn’t really comparable to the rest, though, because the limit isn’t necessarily a matter of fundamental computability. It’s either a specification problem (define spam) or, better yet, it’s what’s semi-jokingly referred to as “AI-complete”, suggesting it requires a simulation of human consciousness to avoid the false positives/negatives inherent in any simpler filter, and to keep up with spammers’ attempts to circumvent filtering.

  25. Luke Breuer

    Surely it is possible to tell if certain programs will halt on certain inputs. For example, a program with no loops (like Hello, World!) is guaranteed to halt. Given this, there must be some way to describe programs for which the halting problem can be solved. Comments?

  26. Mark C. Chu-Carroll

    Luke:
    There are a number of classes of programs for which the halting problem is solvable.
    The largest easy-to-understand one that I can think of is the class of primitive recursive functions. Speaking very loosely, the primitive recursive functions are the class of all functions that have one parameter whose value reduces with each iteration, and the program halts when it reaches 0.
    A lot of common things that we do are primitive recursive. Most interesting mathematical functions that we use in everyday programming are PR; many (if not most) algorithms that work on common data structures are PR.

  27. Mark C. Chu-Carroll

    Stu:
    I’d need some time to look up the actual state table…
    There’s also some dispute about just how to measure the size of the machine… i.e., do the number of transitions count?
    Using a measure of states×symbols, Minsky did a 7-state 4-symbol machine, which is one credible candidate for smallest.
    Wolfram did an even smaller one, with 2 states and 3 symbols, but I think it’s sort-of cheating. The Wolfram solution is a Turing machine implementation of a cellular automaton, where you could implement a Turing machine using the CA. The reason that I think it’s cheating is because to be a real UTM, you need to be able to describe how to encode an arbitrary Turing machine as an input for your UTM; Wolfram has shown that you can do the encoding, but not how – and much of the real functionality of the TM being run by the UTM will need to be implemented as a part of the input.
    So I tend to support the Minsky approach.
    I’ll probably hunt down Minsky’s UTM, and do a post about it, showing how it works. That’s something I’ve wanted to do for a long time; maybe now’s a good time to bite the bullet and do it!

  28. Blake Stacey

    The Feynman Lectures on Computation give a diagram for a UTM. It’s not the smallest possible, since as Feynman says one could be slightly more clever and use the same state for multiple purposes. I don’t have the book in front of me, but I recall a statement to the effect that one could make a UTM with just two symbols and many states, or just two states and many symbols — so I guess we’d have to clarify our notion of “size” before we could decide upon a “smallest” UTM.
    MathWorld describes Minsky’s 1962 construction as a “7-state 4-color” machine, where in Wolfram-ese, “color” means what the rest of us call symbols on the tape.

  29. Blake Stacey

    While looking for papers about minimal UTMs, I stumbled across one by P. W. K. Rothemund entitled “A DNA and restriction enzyme implementation of Turing Machines” (1995, PostScript link). The abstract follows:

    Bacteria employ restriction enzymes to cut or restrict DNA at or near specific words in a unique way. Many restriction enzymes cut the two strands of double-stranded DNA at different positions leaving overhangs of single-stranded DNA. Two pieces of DNA may be rejoined or ligated if their terminal overhangs are complementary. Using these operations, fragments of DNA, or oligonucleotides, may be inserted and deleted from a circular piece of plasmid DNA. We propose an encoding for the transition table of a Turing machine in DNA oligonucleotides and a corresponding series of restrictions and ligations of those oligonucleotides that, when performed on circular DNA encoding an instantaneous description of a Turing machine, simulate the operation of the Turing machine encoded in those oligonucleotides. DNA based Turing machines have been proposed by Charles Bennett but they invoke imaginary enzymes to perform the state–symbol transitions. Our approach differs in that every operation can be performed using commercially available restriction enzymes and ligases.

    Rothemund accepts Minsky’s construction as “the smallest UTM known” (I believe it’s older than Wolfram’s CA-oriented trick), and he gives a DNA implementation in section 6.

  30. Ørjan Johansen

    I recall that another way to understand primitive recursive functions is as programs allowing loops, but only “for” loops where the maximal number of repetitions is calculated before the loop starts.
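    As a tiny Python illustration of that “bounded for loop” style (an ordinary example, not from the post):

        def factorial(n):
            # The number of iterations is fixed (at n) before the loop
            # starts, so termination is guaranteed -- the hallmark of
            # primitive recursion.
            result = 1
            for i in range(1, n + 1):
                result *= i
            return result

        print(factorial(5))  # 120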

  31. Xanthir, FCD

    Orjan: (And I apologize for the incorrect letter)
    I think that’s an accurate rendering of primitive recursion. It seems to be a pretty straightforward opposite of arbitrary recursion, where you determine the number of loops inside the loop code.

