Basics: Proof by Contradiction

I haven’t written a basics post in a while, because for the most part, that well has run dry, but once
in a while, one still pops up. I got an email recently asking about proofs by contradiction and
counterexamples, and I thought that would be a great subject for a post. The email was really
someone trying to get me to do their homework for them, which I’m not going to do – but I can
explain the ideas, and the relationships and differences between them.

Proof by contradiction, also known as “reductio ad absurdum”, is one of the most beautiful proof
techniques in math. In my experience, among proofs of difficult theorems, proofs by contradiction are the
easiest to understand. The basic idea is very simple. Want to prove that something is true? Look
at what would happen if it were false. If you get a nonsensical, contradictory result from assuming it’s
false, then it must be true.

Let’s be a bit more precise. The principle of proof by contradiction comes from the logical law of the
excluded middle, which says “for any statement S, (S or not S) is true” – that is, S must be either true or false. From that, we can infer that if S is true, not S must be false; if not S is true, then S must be false. There is no third option. So if we can prove that (not S) is false, then we know that S must be true. The way that we can prove that (not S) is false is by assuming that it’s true, and showing that
that leads to a contradictory result.

The alternative form (which is really the same thing, but can look different when it’s presented by
mediocre teachers) is proving that something is false. Just switch S and not S in the above discussion. Proving that S is false is just another way of saying that we want to prove that (not S) is
true. In a proof by contradiction, we can do that by assuming that S is true, and showing that that
leads to a contradictory result.

As always, things are best with an example. Since I’m a computer scientist, I’ll pull out
my favorite proof by contradiction: the proof of the Halting theorem.

Suppose you have a computer, which we’ll call φ. Every program for φ can be
described as a number. Similarly, every possible input for a program can be represented
as a number. The result of running a particular program on φ can be described as a function
φ(p,i) where p is the program, and i is the input.
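
To make the “programs are numbers” idea concrete, here’s a tiny illustration (my own sketch of the usual encoding trick, nothing specific to φ): a program’s source text is a finite sequence of bytes, and any finite byte sequence can be read off as a single integer.

```python
# A program's source is just bytes, and bytes can be read as one big integer.
source = "print('hello')"
as_number = int.from_bytes(source.encode("utf-8"), "big")
recovered = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big").decode("utf-8")

print(as_number)   # the program, encoded as a (huge) number
print(recovered)   # decoded back: print('hello')
```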

One thing about programs is that they can contain infinite loops: there are some programs
which for some inputs will run forever without producing any results. Something that we would
really like to know is: for a program p with input i, will φ(p,i) ever return a result? If
it does, we say that program p halts with input i. The big question is, can we
write a program h that takes any pair of p and i as inputs, and tells us whether p halts with input i?

The answer is no. The way that we prove that is by a classic proof by contradiction. So we
start by assuming that it’s true:

  1. Suppose that we do have a program h such that φ(h,(p,i))=true if φ(p,i)
    halts, and false otherwise.

  2. We can write a program q, where φ(q,i) runs φ(h,(q,i)) as a subroutine (see the sketch just
    after this list). If φ(h,(q,i)) returns true, then q enters an endless loop. Otherwise, q halts.
  3. For program q, if φ(h,(q,i)) says that q halts, then q doesn’t halt. If φ(h,(q,i))
    says that q doesn’t halt, then q halts. Therefore h isn’t a program which correctly says
    whether another program will halt. This contradicts the assumption in step one, so that
    assumption must be false.
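
To make step 2 concrete, here’s a rough Python sketch of the construction. The names are mine, and halts() stands in for the hypothetical oracle h; it can’t actually be implemented, which is exactly what the proof shows. Reading the file’s own source is a crude stand-in for q getting hold of its own description.

```python
# halts(program_source, input_value) plays the role of h: it is *assumed*
# to return True iff the given program halts on the given input.
# No such function can actually be written; it exists here only so we
# can derive the contradiction.
def halts(program_source, input_value):
    raise NotImplementedError("assumed halting oracle; cannot really exist")

# The troublemaker q from step 2: ask the oracle about q itself, then do
# the opposite of whatever it predicts.
def q(input_value):
    my_own_source = open(__file__).read()      # stand-in for the quining trick
    if halts(my_own_source, input_value):
        while True:                            # oracle said "halts" -> loop forever
            pass
    else:
        return                                 # oracle said "loops" -> halt at once
```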

For another example, one of the classic logic errors is the misuse of implication. If you have a logical statement
that for all possible values X, if A is true for X, then B must also be true for that X, and you know that A is true for some specific thing Y, then you can infer that B must be true for Y. There’s a common error where people get that backwards: from “for all X, if A is true for X, then B is true for X” and “B is true for some specific Y”, they infer that A is true for Y. That is not a valid inference – it’s false.

We can prove that that kind of inference is invalid. The way we’ll do it is by assuming it’s true,
and then reasoning from it.

  1. Assume that the following inference is valid: from “for all values X, if A is true for X, then B must be true for X” and
    “B is true for Y”, we can conclude “A is true for Y”.
  2. Take the classic example statement of this type: “If X is a man, then X is mortal”,
    and use it as an instantiation of the rule above: if we know that “if X is a man, then X is mortal”, and we know that X is mortal, then X is a man.
  3. The pet dog that I grew up with died about 15 years ago. Since he died, he must have been mortal.
  4. By the statement we just derived, we can conclude that my pet dog was a man.
  5. But my pet dog was not a man, he was a dog. So we have a contradiction, and
    that means that the assumed inference rule cannot be valid.

Presto, one proof by contradiction.
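
If you want to see that invalidity mechanically, here’s a tiny brute-force check (my own illustration, nothing deeper than the truth table): it looks for an assignment where “A implies B” and “B” are both true while “A” is false, which is exactly the counterexample that sinks the backwards inference.

```python
from itertools import product

# Search for a truth assignment where (A -> B) and B hold, but A fails.
# Finding one shows that inferring A from (A -> B) and B is not valid.
for A, B in product([True, False], repeat=2):
    a_implies_b = (not A) or B          # classical material implication
    if a_implies_b and B and not A:
        print(f"counterexample: A={A}, B={B}")   # prints A=False, B=True
```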

Often, as in the example above, when you do a proof by contradiction, what you do is find a specific
example which leads to a contradiction. If you can do that, that example is called a
counterexample to the disproven statement. Not all proofs by contradiction use specific
counterexamples, though: a proof by contradiction can also work in a more general way, using general
principles to show that there is a contradiction. Stylistically, it’s usually considered more elegant
in a proof by contradiction to show a way of constructing a specific counterexample. Both of the example
proofs above used counterexamples. The first used a constructed counterexample: it didn’t exhibit a
specific counterexample, but it showed how to construct one from any supposed halting oracle. The second
used a specific counterexample. Most proofs by contradiction rely on a constructed counterexample, but
sometimes you can simply show by pure logic that the assumption leads to a contradiction.

An important thing to be careful about in proofs by contradiction is that you are actually obtaining
true contradictions. Many creationist “proofs” about evolution, the age of the universe, radiometric
dating, and many other things are structured as proofs by contradiction; but the conclusion is merely
something that seems unlikely, without actually being a logical contradiction. A very common
example of this is the big-numbers argument, in which the creationist says “Suppose that life were the
product of natural processes. Then the following unlikely events would have had to occur. The probability
of that combination of events is incredibly small, therefore it couldn’t have happened.” That’s not
a proof of any kind. There is no logical contradiction between “X has a probability of 1 in 10^(10^10) of occurring” and “X occurred”. Improbable never equals impossible, no matter
how small the probability – and so it can’t create a logical contradiction, which is required for
a proof by contradiction.
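
A quick toy illustration of why “very improbable” is not a contradiction (my own example, not any particular creationist number): every specific sequence of 200 fair coin flips has probability 2^-200, roughly 1 in 10^60, yet some such “impossibly unlikely” sequence occurs every single time you flip the coin 200 times.

```python
import random

# Any *specific* sequence of 200 fair coin flips has probability 2**-200,
# yet one of those wildly improbable sequences is produced on every run.
flips = "".join(random.choice("HT") for _ in range(200))
print(flips)
print("probability of exactly this sequence:", 2.0 ** -200)
```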

As an interesting concluding side-note, there are schools of mathematical thought that do not
fully accept proof by contradiction. For example, intuitionism doesn’t accept the law
of the excluded middle, which limits proof by contradiction. Intuitionism also, by its
nature, requires that proofs of existence show how to construct an example. So a proof by
contradiction that shows that something exists, by showing that assuming its non-existence
leads to a logical contradiction, is not considered a valid existence proof in intuitionistic
logic.

0 thoughts on “Basics: Proof by Contradiction”

  1. Thony C.

    My two favourite proofs of all time are both fairly basic and are both reductio ad absurdum: the first is the proof that the square root of two is not rational and the second is Euclid’s proof that there is no greatest prime. I find both proofs elegant, easy to follow and totally convincing and both have the beauty of a good poem or piece of music to my mind. A close run third, in my book, is Cantor’s proof of the non-countability of the reals, a reductio ad absurdum of true simplicity and elegance.

  2. Elad

    Great post! I can see two pitfalls, though:
    1. In the Halting problem, you’re representing programs and inputs by integers. This is trivial of course, but confusing to the uninitiated. I can definitely see someone voicing a complaint like: “but you’re making a crazy assumption — that programs are numbers. Show me which programmer writes their programs by numbers”. It all follows from a trivial one-to-one equivalence, of course, but I would mention that in some way.
    2. Your objection to creationism sounds so tiny and irrelevant that you might come off as supporting creationism. I would make it clear that the problem with the “tiny probability” argument is not only that it’s not a proof, but that there’s a much bigger problem. Example of that problem: if there is a 10^(-9) chance of winning the lottery, then there’s still a person that wins it, and that person might as well say “what, there is a tiny tiny probability to win the lottery, so I obviously didn’t win”. However, someone _does_ win the lottery.

  3. Blaise Pascal

    With regard to computed counterexample/specific counterexample/no counterexample, how do you categorize proofs by infinite descent?
    Example: Proof that 2 has no rational square root:
    Assume 2 does have a rational square root m/n, with m, n being natural numbers. We have 2 = (m/n)^2 = m^2/n^2, and therefore 2n^2 = m^2. Because only even numbers have even squares, m must be even: m = 2m’. Therefore 2n^2 = (2m’)^2 = 4m’^2, or n^2 = 2m’^2. Similarly, n must be even, so n=2n’, yielding (2n’)^2 = 2m’^2, or 2 = (m’/n’)^2, with m>m’ and n>n’. No assumptions were made about m or n, so it is possible to repeat this procedure indefinitely, yielding two infinite decreasing sequences of natural numbers m>m’>m”>m”’… and n>n’>n”>n”’… But natural numbers can’t decrease indefinitely, so these sequences are an absurdity. Therefore, our assumption that an m/n with 2 = (m/n)^2 exists must be false. Therefore 2 cannot have a rational square root.
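
    Here’s a small Python sketch of that descent step (my own illustration, assuming we were somehow handed naturals m, n with m^2 = 2n^2): it produces the strictly smaller pair m’, n’ described above, so iterating it would give the impossible infinite descent. Of course, no pair of naturals actually satisfies the precondition, which is the point.

    ```python
    def descend(m, n):
        """Given naturals with m*m == 2*n*n, return the strictly smaller
        pair (m2, n2) with the same property, following the argument above."""
        assert m * m == 2 * n * n
        assert m % 2 == 0        # m^2 is even, so m is even: m = 2*m2
        m2 = m // 2
        # now n^2 = 2*m2^2, so n is even by the same reasoning: n = 2*n2
        assert n % 2 == 0
        n2 = n // 2
        # and m2^2 = 2*n2^2 again, with m2 < m and n2 < n
        return m2, n2
    ```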

  4. Elad

    Answer to Blaise Pascal: I would characterize them as proof by contradiction, where the proof uses induction. (or the other way around maybe? It’s late here).

  5. Sander

    @Blaise Pascal:
    I think that’s just a normal proof by contradiction. The assumption leads to the existence of an infinite decreasing sequence, which is just a normal contradiction. X->(there exists an infinite decreasing sequence of naturals) is just as good as X->(my pet dog was a man) or X->false.
    Of course you can also start with “Assume 2 does have a rational square root m/n with gcd(m,n)=1” and avoid this altogether.

  6. Anonymous

    MarkCC wrote:

    For example, intuitionism doesn’t accept the law of the excluded middle, which limits proof by contradiction.

    Intuitionism only rejects tertium non datur for situations involving actual infinities, so both of the Euclidean proofs by contradiction that I mentioned above are accepted by Intuitionists but the Cantor proof is not.

  7. Anonymous

    We can write a program q, where φ(q,i) runs φ(h,(q,i)) as a subroutine.
    Note that q is defined in terms of itself, in the sense that it must have access to its own source code (to plug into h). The recursion theorem says this can be done, and it’s not hard when you know the proof of the recursion theorem. However, if you hand someone a C implementation of a halting tester h and ask them to write code for q, they’ll find it tricky if they haven’t seen this before.
    At the cost of a little extra convolution, you can get around this issue by defining q differently. Define q so that φ(q,i) runs φ(h,(i,i)) as a subroutine and then does the opposite of whatever it outputs. (If φ(h,(i,i)) says φ(i,i) halts, then φ(q,i) will go into an infinite loop. If it says φ(i,i) doesn’t halt, then φ(q,i) immediately halts.) This definition does not define q in terms of itself, so we don’t need the recursion theorem.
    Now plugging in i=q shows that φ(q,q) halts iff φ(h,(q,q)) says φ(q,q) doesn’t halt, so h is wrong for input (q,q).
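
    A rough Python sketch of that alternative construction (the names are made up; h is still the assumed halting tester, which cannot really exist): here q only needs its input i, not its own source, so the recursion theorem isn’t needed.

    ```python
    # h(p, i) is the assumed halting tester: True iff program p halts on input i.
    # It is assumed only for the sake of the contradiction.
    def h(p, i):
        raise NotImplementedError("assumed halting tester")

    def q(i):
        # Ask the tester about program i run on itself, then do the opposite.
        if h(i, i):
            while True:     # predicted to halt -> loop forever
                pass
        else:
            return          # predicted to loop -> halt immediately

    # Plugging in i = q then makes h wrong about the pair (q, q).
    ```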

  8. Craig Helfgott

    Blaise: I recently saw a great (new!!) proof that sqrt(2) is irrational. Also proof-by-contradiction-by-infinite-descent.
    Assume sqrt(2) is rational. Then you can draw a 45-45-90 triangle ABC with integer sides. (Let C be the right angle).
    Draw the following picture (you don’t need to label points).
    Draw a circular arc, centered at A, with radius AC. It intersects the hypotenuse at D. Draw the tangent to the circle at D, it intersects BC at E.
    AD=AC is an integer, AB is an integer, so BD=DE=EC is an integer. BC is an integer, so BE is an integer, and BDE is a (smaller) 45-45-90 triangle with integer sides (D is the right angle).
    Repeat ad infinitum, and you get an infinite decreasing sequence of integral triangles. Pretty picture, too.

  9. Thony C.

    I don’t know what happened here but the “anonymous” post on intuitionism is mine but as I tried to post it the system said I couldn’t! Now I come back and the post is posted as anonymous!

  10. Doug

    I think Elad had a good point there about “cretinism”. One might show this better by using Zadeh’s idea of constructing probability vs. possibility distributions, with possibility indicating ‘degree of ease’. For instance, if we want to talk about the probability/possibility of eating a certain number of oranges tomorrow, and rounding to the first place after the decimal, we would have something like
    0 1 2 3 4 5 6 7 8 9
    Prob. .5 .4 .1 0 0 0 0 0 0 0
    Poss. 1 1 1 1 1 .8 .6 .4 .3 .1
    Or one can talk about eating a certain number of eggs for breakfast, the number of people that can fit in a car, the number of keystrokes someone will type in a day, or the probability vs. possibility of winning a lottery etc.
    Concerning intuitionism/property of the excluded middle… there still exist mathematical schools of thought which don’t accept proof by contradiction, or possible schools of mathematical thought at least. For instance, a logic with truth values of {T, U, F}, where any combination of U (undecided) and a member of {T, U, F} yields a U for the five basic logic operations (and, or, not, implication, logical equivalence), doesn’t have the property of the excluded middle, nor that of contradiction as a logical theorem. Of course, in such a school if a particular proposition has a truth value in {T, F}, those properties will still hold. But, first one has to establish the proposition as in {T, F} as opposed to {T, U, F}, which doesn’t get assumed in such a (possible) school of thought.

  11. Torbjörn Larsson, OM

    Many creationist “proofs” about evolution, age of the universe, radiological dating, and many other things are structured as proofs by contradiction; but the conclusion is merely something that seems unlikely, without actually being a logical contradiction.

    More generally it is their modus operandi, as they can’t make positive assertions or their scam would be given up. This reminds me of a relation to the structure of proof by contradiction in their false dilemmas, such as that if evolution is wrong then creationism is correct.
    (An added irony is that the most realistic alternative, lamarkianism, that could explain many if not all observations outside genetics and supplement evolution proper, even has a possible mechanism in epigenetics. The only thing is that it hasn’t been observed yet.)

  12. ruidh

    I use indirect proof in solving sudoku. I avoid the “easy” and “medium” puzzles and go straight for the “hard”, “challenging” and “brain twister” puzzles. Occasionally, I get stuck with about 2/3rds of the puzzle filled in. I find a cell that has only two possibilities and write one number in the upper left and the other in the lower right. Then I continue working both the upper left and lower right until I hit a dead end or find a contradiction. If I find a contradiction, I know that the lower right of my starting cell is a correct entry.
    Sometimes, I even find a cell that has the same entry for both the upper left and lower right. I know that’s a correct entry because A->B and ~A->B means B.

  13. Doug

    [An added irony is that the most realistic alternative…]
    That can make things interesting. One could almost pit epigenetics vs. genetics as causative in say a high school biology classroom, and we would have something very close/close enough to a real scientific controversy, or at least one that can become scientific depending on our scientific evidence. Or one could reasonably pit “nature vs. nurture” in biology and there exists something controversial among biologists. Think of Dawkins vs. Gould on this. I suppose that might qualify as more philosophy of biology than biology, but it comes close enough and it definitely affects how people do biology AND biologists DO have different takes on the issue. Maybe teaching such a controversy comes as a pedagogically poor idea depending on one’s educational philosophy, but at least “nature vs. nurture” or “epigenetics vs. genetics” presents something with real biological or, in the case of epigenetics vs. genetics, scientific content to it, as opposed to the lying tactics of “cretinists” which go so far that some of them committed fraud not just on a community, but a court of law. If those people really wanted alternatives and wanted extremely loose definitions of science, why didn’t they want to teach Lamarckism too?

  14. Torbjörn Larsson, OM

    Correction of my previous comment: “lamarkianism” – lamarckism.

    One could almost pit epigenetics vs. genetics as causative in say a high school biology classroom,

    I think they are, especially as AFAIU it is hard to test evolutionary mechanisms. (The lamarckist mechanisms probably await positive research results.)

    I suppose that might qualify as more philosophy of biology than biology,

    Or possibly it is unduly promoted as such, because I don’t see why you couldn’t call them different research strategies. I.e. for visible traits you have to choose a null hypothesis, and adaptation seems like a sound choice. Pluralists don’t like that, and perhaps they use drift as a null hypothesis.
    Similar circumstances apply for genetic vs epigenetic regulation et cetera.
    In the end it shouldn’t matter, as they should test their ideas. (Or they are just-so stories, an earlier complaint when adaptations were routinely used for explanations.)

  15. Anonymous

    Elad: “if there is a 10^(-9) chance of winning the lottery, then there’s still a person that wins it, and that person might as well say “what, there is a tiny tiny probability to win the lottery, so I obviously didn’t win”. However, someone _does_ win the lottery.”
    There’s another problem with most such arguments that ends up being even more significant. The point that someone wins the lottery becomes somewhat weaker when the probabilities claimed are something like 10^(-140) (which is closer to the order that I tend to see claimed by people making these arguments).
    Every argument I’ve seen that yielded such numbers misapplied probability theory. E.g. they’ll claim that the probability of forming a self-replicating system by chance is low because the probability of getting a chain of a given number of amino acids selected at random to form a given sequence is on such an absurdly small order (I’ve seen this argument presented seriously many times). Besides being a caricature of abiogenesis, this begs the question by assuming that there’s only one possible state that self-replicates.
    The fact is that it’s not quite so trivial to determine what fraction of the space of possibilities of some complicated chemical system replicate, but just assuming it to be one is the most egregious deck-stacking I tend to see in such arguments.

  16. Jason

    Dialethism or, more properly when you’re talking about the logic, paraconsistency, fucks it up good and proper. (As John knows, in a paraconsistent logic, some things can be both true and false.) The details depend on which paraconsistent logic you use, which is convenient for me because I can’t remember the details for any of them. However, it’s always going to be complicated, because in any paraconsistent system, by definition, it’s only SOME statements which can be both true and false, not all of them, and the logic itself doesn’t tell you which ones. See http://plato.stanford.edu/entries/dialetheism/ and http://plato.stanford.edu/entries/logic-paraconsistent/.
    Don’t know about supervaluationism. Good question.
    Thony, thanks for the point about intuitionism. I didn’t know that, or had forgotten.

  17. Torbjörn Larsson, OM

    Um, dialetheism, perhaps?
    So, with a true contradiction we know we can prove anything within ordinary math. But I see that paraconsistent logic is an out, and that it may not necessarily drop reductio.
    But:

    The approach taken in large software systems is keep the rules of weakening and double negation elimination and to restrict the rule of reductio ad absurdum (Hewitt [2007]).

    Intriguing. Yes, do tell, how does restriction or other alternatives work regarding such a proof?

    but just assuming it to be one is the most egregious deck-stacking I tend to see in such arguments.

    Besides that, the natural assumption is that since the likelihood for life is ~ 1 (seeing that we exist) we are more or less certain that there is some mechanism that gives a reasonable probability for life in the universe. I’m reminded of the cart and the horse.

  18. Mark C. Chu-Carroll

    Anonymous:
    You’re absolutely right, and that’s something that I’ve discussed extensively in other posts. I’ve got my own
    taxonomy of statistical errors that people use, and what you’re talking about is what I call the “fake numbers” gambit. It’s where someone wants to make an argument that something must be impossible without some kind of deliberate intervention. It’s clearly possible, because it happened, but they want to argue that it couldn’t have happened without God, or some intelligent agent. So they can’t really argue that it’s impossible. So they resort to arguing that it’s improbable, and coming up with some arbitrary probability number beyond which something is impossible. (Like Dembski’s “Universal probability bound”). Then they start slapping together a bunch of numbers to try to create a probability that’s beyond their bound.
    In every case that I’ve seen where someone tries to make an argument of that form, the numbers used to generate the probability are fake.
    Search the GM/BM archives for “Berlinski” for one of the most egregious examples of this from an arrogant SOB who knows better, but makes the dishonest argument anyway.

  19. Thony C.

    Out of interest, what does supervaluative logic and dialethism do to this proof?

    As reductio ad absurdum is dependent on tertium non datur any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.

  20. Torbjörn Larsson, OM

    any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.

    But “restriction” doesn’t seem to mean “elimination”. (See the quote in my comment #17.) However, when I looked at the reference on Hewitt, it seems to concur:

    Direct Logic supports the following nontriviality principles for paraconsistent
    theories:
    Direct Nontriviality […] which states that if the negation of a sentence holds, then it cannot be proved
    Meta Nontriviality […] which states that if the negation of sentence can be proved, then it cannot be proved.

    So the “restriction” seems to be that this elimination applies to “non-trivial” theories. What a wanker Wikipedia is at times!

  21. Jason

    > Um, dialetheism, perhaps?
    It can be spelled either way. Apparently Routley/Sylvan spelled it one way and Priest the other. Wikipedia’s official policy is to keep whichever spelling turns up in a given Wikipedia article first, unless there’s a good argument to the contrary. I hope to avoid flame wars by pointing this out!

  22. Doug

    Thony C, Torbjörn Larsson, and anyone else who thought the following correct, or might have an inclination to read on.
    [As reductio ad absurdum is dependent on tertium non datur any system of logic that strays from the strict dichotomy of a two valued logic would lose reductio as a method of proof.]
    NO! I said this basically in another comment on another post. For a reductio ad absurdum one needs the property of non-contradiction, and in some instances the property of the excluded middle.
    Proposition 1: For a bounded infinite-valued logic on [0, 1] with 1-a for negation which has max(0, a+b-1) for intersection i, min(1, a+b) for union u, i(a, c(a))=0 and
    u(a, c(a))=1.
    Demonstration: b=c(a) by definition. Consequently,
    i(a, c(a))=max(0, a+c(a)-1)=max(0, a+1-a-1)=max(0, 0)=0
    u(a, c(a))=min(1, a+c(a))=min(1, a+1-a)=min(1, 1)=1.
    Also, an infinite-valued logic on [0, 1] with 1-a as negation, and with drastic union and intersection still maintains the properties of excluded middle and contradiction. By drastic union and intersection, I mean
    u(a, b)=a if b=0
    b if a=0
    1 otherwise
    i(a, b)=a if b=1
    b if a=1
    0 otherwise.
    Proposition 2: For an infinite-valued logic on [0, 1] with drastic union, intersection and the 1-a operator for complement, i(a, c(a))=0, u(a, c(a))=1… the properties of contradiction and excluded middle.
    Demonstration: For the property of excluded middle
    Let a=0, then u(a, c(a))=u(0, 1)=1
    Let a>0, then u(a, c(a))=1, since neither truth value equalling 0 directly implies u(a, b)=1.
    So, u(a, c(a))=1
    For the property of contradiction
    Let a=1, then i(a, c(a))=i(1, 0)=0
    Let a<1, then i(a, c(a))=0 by the definition of drastic intersection. So, i(a, c(a))=0.

  23. Doug

    To perhaps buttress my argument here for someone who claims I haven’t shown anything about the contradiction and excluded middle properties, suppose we changed our truth values from the usual 1=T, 0=F, to 3=T, 5=F in a logic with {T, F} as its truth set. The properties now state
    i(a, c(a))=5
    u(a, c(a))=3
    Now, suppose we have an infinite-valued logic on [3, 5]. One can have such a logic with the following operators
    i(a, b)=a if b=3
    b if a=3
    5 otherwise
    u(a, b)=a if b=5
    b if a=5
    3 otherwise
    c(a)=5 if a=3
    3 if a=5
    4 otherwise.
    For i(a, c(a)) we’ll first
    let a=3, then i(a, c(a))=i(3, 5)=5 or false. Then,
    let 3

  24. Thony C.

    Doug; all you have shown is that if, by a series of definitions, you restrict your infinite valued logic to a sub-domain of two values then it behaves like a two valued logic! 😉

  25. Doug

    Thony C,
    [Doug; all you have shown is that if, by a series of definitions, you restrict your infinite valued logic to a sub-domain of two values then it behaves like a two valued logic!]
    If I could go back in time I would have suggested you check all logical properties of classical logic in the referenced logics before you attempt to make such an assertion. More on point, I don’t know why you’ve tried to pass that off, honestly, after I’ve given proofs (not just linguistic arguments as in other cases) which indicate otherwise. Those proofs show that “the logical conjunction of A and not A yields a truth value of false (in the classical sense of false)” and “the logical union of A and not A yields a truth value of true (in the classical sense of true),” for the logical systems involved, when they get translated into words. I suspect that even a semi-objective observer can see this and you’ve discredited yourself somewhat by your statement here, as I showed MORE than your statement about a sub-domain. I think (although maybe it wasn’t you) I’ve also shown you specifically that such logics do NOT behave like two-valued logic in another significant sense, as they don’t have ALL the same theorems as two-valued logic. I say all of this, as a sort of warning, in the hope you’ll see how you might make yourself look foolish here, if you didn’t already have an inclination that this might happen.
    Look, the domain of one of the proposed infinite-valued logics comes as [0, 1], as the inputs for the functions intersection, union, and complement get defined on [0, 1]. Rather clearly, if I restrict such a domain to {0, 1/4, 1/2, 3/4, 1}, I have a subdomain of [0, 1]. This doesn’t behave like two-valued logic in many ways. First, the input values can come as something other than a member of {0, 1}. Consequently, the notion of truth works out differently. The domain of the truth set works out differently. The methods used in calculation work out differently, as the rules of classical logic state that for the truth set {T, F}
    i(a, b)=T if a=T and b=T
    F otherwise
    u(a, b)=F if a=F and b=F
    T otherwise
    c(T)=F, c(F)=T.
    I didn’t restrict my infinite-valued logic to a sub-domain of two values, as for the max(0, a+b-1),
    min(1, a+b) values, if I let a=1/4 and thus c(a)=3/4 with c(a)=1-a, I get
    max(0, 1/4+3/4-1)=max(0, 0)=0
    min(1, 1/4+3/4)=min(1, 1)=1.
    Hopefully the aforementioned points, and especially the last part, would tip you off that some different behavior goes on. Perhaps, however, this comes to no avail. Fine. In classical logic, there exist two basic properties:
    idempotency, and distributivity, or in symbols
    i(a, a)=a, u(a, a)=a
    i(a, u(b, c))=u(i(a, b), i(a, c))
    u(a, i(b, c))=i(u(a, b), u(a, c))
    Suppose we use drastic union and intersection for these operations. Then it follows that,
    i(.5, .5)=0, u(.5, .5)=1, or more generally let a belong to (0, 1). Then, i(a, a)=0 which does NOT equal a, and u(a, a)=1 which also does NOT equal a. So, idempotency fails for a logic with drastic union and intersection.
    Also,
    i(.4, u(.3, .2))=i(.4, 1)=.4,
    i(u(.4, .3), u(.4, .2))=i(1, 1)=1. So, distributivity fails for drastic union and intersection.
    If you go and check for yourself you can see that distributivity and idempotency also do not hold as theorems for the max(0, a+b-1), min(1, a+b) operations. One can go further, as Klir and Yuan do in their text _Fuzzy Sets and Fuzzy Logic: Theory and applications_, on p. 87 and 88, and prove that for a logic which uses dual t-norms and t-conorms, meaning that
    c(i(a, b))=u(c(a), c(b)) and c(u(a, b))=i(c(a), c(b)), AND which satisfies the property of the excluded middle as well as the property of contradiction, both distributive properties do NOT hold as theorems.
    Again, I didn’t restrict an infinite-valued logic to a sub-domain of two values. And I didn’t show that a logic with the properties of the excluded middle and contradiction behaves like a two-valued logic.
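
    For what it’s worth, these claims are easy to check numerically. A small Python sketch (just restating the operators defined above): the max(0, a+b-1) / min(1, a+b) pair with c(a)=1-a satisfies the contradiction and excluded-middle identities for every a, while idempotency and distributivity fail for values strictly between 0 and 1.

    ```python
    def i(a, b): return max(0.0, a + b - 1.0)   # bounded-difference intersection
    def u(a, b): return min(1.0, a + b)         # bounded-sum union
    def c(a):    return 1.0 - a                 # 1-a complement

    for a in (0.0, 0.25, 0.5, 0.75, 1.0):
        assert i(a, c(a)) == 0.0    # contradiction identity holds
        assert u(a, c(a)) == 1.0    # excluded-middle identity holds

    a = 0.5
    print(i(a, a), u(a, a))                      # 0.0 and 1.0, not 0.5: idempotency fails
    print(i(a, u(a, a)), u(i(a, a), i(a, a)))    # 0.5 vs 0.0: distributivity fails
    ```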

  26. MF

    Doug,
    I find your post here interesting, and no doubt you showed more than Thony C. thought you did, but you’re basically wrong about the principles of excluded middle and contradiction, as well as about reductio ad absurdum. The principles of excluded middle and contradiction better stated say “The proposition ‘a is True or not a is True’ is True”, and “The proposition ‘a is True and not a is True’ is False.” The principles simply aren’t purely syntactical statements. They don’t say u(a, c(a))=T, i(a, c(a))=F. Of course, if one translates them into symbolic logic, one can write them that way… but something of meaning gets lost in such a translation. Yes, your “properties” get written symbolically the same way as the classical laws of logic and in a formalistic sense BEHAVE like them, but they simply don’t capture the essence of those statements. The symbolic translation loses that “is True” bit after a, and not a. For reductio ad absurdum proofs one doesn’t just assume the symbolic a^c(a)=F or however you want to write it. The reductio ad absurdum assumes the meaning in the statement “a is True, or a is not True.”
    Now, what’s more, I haven’t really and completely stated the classical laws of logic. They don’t just say what they say, they implicitly assume only True and False propositions as possible. Consequently, an attempt to write the classical laws of logic in non-classical logic fails. It furthermore fails, in that the terms “True” and “not True” necessarily correspond to the extrema of your fuzzy truth sets. Consequently, it is impossible to write those classical laws in any n/infinite-valued logic. Sure, you can write your properties, and claim the “behavior” works sufficiently similar, but it is just not enough. You don’t have the essence of classical logic without confining truth values to True and False only.
    I do want to say that you were on to something when you renamed the law of contradiction and excluded middle properties, as you talk about something different. This extends farther than those laws. There is no law of idempotency in the standard fuzzy logic, but in classical logic there is such for intersection. I mean to say, that in classical logic we have
    The proposition “A is True and A is True” is True. In a fuzzy logic with the minimum function it is
    i(A, A)=A.
    In other words, you can and probably should replace the identity predicate with the equality predicate, and consequently re-name all those principles, or laws, as properties. When they are true, of course.

  27. Xanthir, FCD

    I really don’t feel like getting into these arguments, because I quickly lose track of just what the hell we’re supposed to be talking about, but MF, I believe you’re wrong.
    (a || ~a) = T is the most essential and true form of that statement. It’s more true than the English statement, because it’s well-defined. It seems like it is perfectly captured by what Doug is saying, as well.
    Logic doesn’t care about meaning. It’s a game of symbols which have no intrinsic meaning. This is what makes it so powerful. We assign meaning to the symbols afterwards. As long as the behavior is the same, nobody cares what meaning you assign to two things – they are the same.

  28. Doug

    Xanthir,
    Can you explain the (a || ~a) = T statement… I don’t get the ‘||’. I’ve seen it used in set theory to mean “a is not comparable with b”… do you mean that?
    MF,
    I can translate your statements into my notational style like this:
    i(a=T, c(a=T)=F)=F
    u(a=T, c(a=T)=F)=T.
    Consequently, it seems that even though your phrasing looks more precise, it loses some meaning, as I can translate your versions as saying i(T, F)=F, u(T, F)=T. If you mean that by the principles of classical logic, well I can basically claim that fuzzy logic has those as axioms. To say that classical logic assumes only true and false propositions as possible and POC and POEM must likewise assume such begs the question.
    You do have a point about reductio ad absurdum assuming something different than the properties of classical logic. It doesn’t depend so much on tertium non datur, as I showed logical systems where the property of excluded middle holds, but I failed to show how reductio ad absurdum works in such logical systems. Even if tertium non datur means ‘a third-term not given’ this still doesn’t imply that one can’t use a reductio ad absurdum argument. Someone on my site made a comment which suggested to me the following idea.
    Suppose we have a notion of ‘absurdity’ or ‘internal contradiction’ within our given system. Suppose we have a three-valued system. Suppose we also know that either A or the negation of A or the other truth value for A holds. We also have the rule that if the assumption of a statement leads to an absurdity, then the reasoning works out as invalid and consequently our original statement works out as invalid. Well, we can then use reductio ad absurdum in the following way. Assume A holds, then deduce an absurdity. So, A doesn’t hold. Assume the negation of A holds. Deduce an absurdity. So, the negation of A doesn’t hold. Consequently, the other truth value for A holds. In other words, I propose that for an n-valued logic, reductio ad absurdum still can work… we just have to eliminate n-1 possibilities by reducing them each to absurdity. For an infinite-valued logic, we have to reduce all other possible cases than the one we seek to prove to absurdity. If this sort of perspective sufficiently works, one might just substitute “process of elimination” for “reductio ad absurdum”.

  29. Doug

    Maybe this qualifies as an example of my idea, maybe not. This sort of reductio/process of elimination seem almost too simple.
    Suppose that a member of {3, 5, 7} solves the equation 3x=15.
    Well, if we assume 3 works, then we have 9=15, a contradiction. Assume 7 works. Then we have 21=15, another contradiction. Consequently, given our assumption as true, 5 solves the equation.
    Perhaps better, suppose that a member of (-oo, +oo) solves the equation 4x=29. Well, for all cases where x<7.5, we have a contradiction. If x>7.5, then we also have a contradiction. Consequently, x=7.5.
    Although, maybe you consider such examples “poor” since we already know the answer and demonstrate such by simple substitution.

  30. Doug

    Maybe a better example of a reductio argument in three-valued logic comes as the following. Please note I don’t use the property of the excluded middle, and I don’t even need the property of contradiction for this example at least.
    Assume all statements are either true, false, or undecided. Assume there exists no distinction between an object language and a meta-language. A statement X becomes absurd, and is therefore rejected, when it has more than one truth value. Now, consider the statement
    “this statement is false.”
    Assume such a statement true (first truth value). Then, as it says, it consequently becomes false (second truth value). So, we have an absurdity, and thus such a statement doesn’t work out as true. Second, assume such a statement false. Then it speaks accurately about itself (remember… assume no object/meta language distinction), and thus becomes true. Again, we have an absurdity, this time by assuming such a statement false. Since we have either T, F, or U, and both T as well as F lead to absurdities, we have U by a reductio based on the uniqueness of truth values.

  31. Paul G

    Yep, great post – but Mark, could you proof-read it, please? There are quite a few typos that make it pretty hard to understand in places.

  32. Kristjan Wager

    Can you explain the (a || ~a) = T statement… I don’t get the ‘||’. I’ve seen it used in set theory to mean “a is not comparable with b”… do you mean that?

    I think that Xanthir means this as something closer to code, so it could be translated as ‘(a or not a) equals true’, where the two lines mean “or”.

  33. Xanthir, FCD

    Thank you, Kristjan. Yes, that’s what I meant. ^_^ C++ was my first language, so I still use its symbols for “and” and “or” when I’m typing. Very slightly simpler than using html entities, though I suppose I should put forth the effort to avoid confusion in the future.
    Consider the line to instead be (a ∨ ¬a) = T.

  34. Cléo Saulnier

    The halting problem as defined is a cop out. It only talks about algorithms q that call h. That’s recursive, so you could keep applying algorithms and what you’d get is a succession of results (true and false) that are correct each time at that particular point. Just because you implemented the function incorrectly doesn’t mean that you’ve proven anything.
    “The big question is, can we write a program h such that takes any pair of p and i as inputs, and tells us whether p halts with input i?”
    For most p and i, you haven’t proven anything. You’ve only shown that one particular case of recursion and one particular implementation, this doesn’t work. It doesn’t mean that there isn’t a solution. In fact, you say this:
    “If φ(h,(q,i)) returns true, then q enters an endless loop. Otherwise, q halts.”
    This is patently absurd and doesn’t do what you think it does. A program could look at this and see that q is dependent on h. Since q can both terminate and not terminate depending on h, both results are valid. So h1 = !h0. This isn’t a contradiction. It simply means that every successive h seen is the opposite of the next one.
    If the top-most h returns true, then the innermost h used by q would return false. Since q sees false, then it will halt. And indeed, the top-most h does return true that it halts. WOW! No contradiction. This is time-based results. Much of science and technology is based on oscillating results. There’s no contradiction here I’m afraid. Only time-based (or iteration based) results.
    Your example is completely arbitrary and says NOTHING about anything except an irrelevant example. It especially says nothing about software that is not dependent on h. And 100% of software in actual use is NOT dependent on h.
    Sorry, better luck next time.

  35. Xanthir, FCD

    You are incorrect, Cleo. The halting problem as defined by MCC does not involve iteration or oscillation in any way. The program h analyzes the structure of q to determine the result – it does not actually *run* q, as that would prevent it from returning an answer when q looped, but h is defined so as to always return an answer. Thus, h gives a particular answer, yes or no, to whether or not φ(q,i) halts. It can’t give a “maybe” or a “wait and see”, nor can it change its answer as time goes by (unless time is part of the input, but we’re talking about supplying it with a *particular* input).
    Because h is *defined* to always give an answer, and more importantly, must give the *same* answer, then the contradiction becomes obvious. No matter what answer h gives by analyzing the code of q, no matter how clever it is in deconstructing code and teasing out infinite loops, it must in the end return a Yes or No, a Halts or Loops, and q can then use that answer to do the opposite.

    This is patently absurd and doesn’t do what you think it does. A program could look at this and see that q is dependent on h. Since q can both terminate and not terminate depending on h, both results are valid. So h1 = !h0. This isn’t a contradiction. It simply means that every successive h seen is the opposite of the next one.

    See, this is where you make your mistake. You’re asserting that h returns different results given the exact same inputs. The function h has definite, finite code (by definition, as a function with infinite code could require an infinite amount of time to execute, and would thus be indistinguishable from an infinite loop, but h was defined as always halting). The function q has definite, finite code. The input i is finite as well. Thus, since everything involved here is well-defined and finite, h has a well-defined answer when you provide (q,i) as an input. There is no infinite regress of h’s and q’s that form an oscillating answer as you walk back up to the top level. H1=h0 by definition – there’s only one program h. H uses a finite block of code to evaluate finite inputs and comes up with the same answer every time. The only problem is that it can be wrong sometimes no matter what you do to improve it.

    Your example is completely arbitrary and says NOTHING about anything except an irrelevant example. It especially says nothing about software that is not dependent on h. And 100% of software in actual use is NOT dependent on h.

    Um, yeah, we know. The halting problem doesn’t have a thing to do with the vast majority of software. It’s a result of computational theory. It does end up limiting what is *possible* to compute, but most of the time we write programs well within the domain of ordinary computability, and so the halting problem doesn’t do a thing for us.

  36. Cléo Saulnier

    “The halting problem as defined by MCC does not involve iteration or oscillation in any way.”
    Of course it does, it includes a recursive dependancy.
    “The program h analyzes the structure of q to determine the result – it does not actually *run* q, as that would prevent it from returning an answer when q looped, but h is defined so as to always return an answer.”
    Why is h defined as to always return an answer? That’s the problem with your proof right there. You’re discarding valid answers. If I ask if something returns true or false and don’t accept any of those answers, it’s easy to say it’s undecidable. That’s what’s going on with this proof. It’s a complete joke.
    “Because h is *defined* to always give an answer, and more importantly, must give the *same* answer, then the contradiction becomes obvious.”
    Even if we accept that h always gives an answer, you’re still wrong. If h cannot exist, then neither can q because h makes up part of q. All this proof does is say that h cannot exist for programs that cannot exist. Big frickin’ deal. It still doesn’t say anything about programs that can exist.
    “You’re asserting that h returns different results given the exact same inputs.”
    No, I’m saying you SHOULD accept answers other than true or false, such as relationships. h could return the same relationship every single time. The only reason other answers aren’t accepted is because of some human definition. This effectively renders the proof meaningless.
    “The halting problem doesn’t have a thing to do with the vast majority of software.”
    I don’t think you understand. The halting problem PROOF (as it’s called) has nothing to do with ANY existing software. The counter-example CANNOT exist. That means that h cannot exist for q’s that do not exist. It’s saying I can’t write a program to process input that doesn’t exist. Big deal. The proof is invalid. It’s completely bogus even as it’s defined.

  37. Cléo Saulnier

    I just wanted to add one point about proofs by contradiction. If you set up a premise for your proof, you must make sure that that premise remains valid throughout. If your proof ends up invalidating the premise of the proof, the proof is also invalidated.
    And this is exactly what’s happening with the halting problem proof. The premise is that a program p exists. But this isn’t where you want a contradiction. If there’s a contradiction here, the proof falls apart. Where you want the contradiction is what the question is asking. If there exists a program h that can decide the outcome of p. But p must remain valid throughout. It’s h and only h that must be invalid. Unfortunately, the counter-example is set up that if h is invalid then so is p. The proof CANNOT succeed under any circumstance.
    Please do not spread these kinds of invalid proofs around. Some people are bound to believe this kind of garbage. One more thing… if you decide to not accept certain kinds of results, don’t be surprised that there aren’t any. I’m sorry, but that’s not a way to obtain a proof. That’s a way to obtain ridicule.

  38. Antendren

    “Of course it does, it includes a recursive dependancy.”
    Please illustrate.
    “Why is h defined as to always return one specific type of answer?”
    Because that’s the assumption that we wish to contradict. We’re trying to prove that there is no program which exactly determines when a program halts, so we assume that there is one and call it h. So by assumption, h returns either true or false.
    “If h cannot exist, then neither can q because h makes up part of q.”
    This statement is both true and irrelevant.
    “All this proof does is say that h cannot exist for programs that cannot exist.”
    There is but one h. h isn’t defined for a specific program; it either exists or it doesn’t. What this proof (correctly) says is that it doesn’t.
    “If your proof ends up invalidating the premise of the proof, the proof is also invalidated.”
    No. Then your proof is a valid proof that the premise is false.
    “The premise is that a program p exists.”
    There’s no such premise here. The only place p is used is in the definition of h, where it is a variable which ranges over all programs.

  39. Cléo Saulnier

    Of course it does, it includes a recursive dependancy.

    Please illustrate.

    What part of recursion do you not understand? h makes up part of q. h takes q as input which has h within its makeup. That’s a recursive dependancy.

    If h cannot exist, then neither can q because h makes up part of q.

    This statement is both true and irrelevant.

    What? No support for your argument? This is the most important part of the proof. It’s where the proof invalidates itself. It says that the counter-example doesn’t exist (hence the proof doesn’t exist). Who cares about proofs that only says something about programs that do not exist. It’ll NEVER come up. EVER! It’s outside the realm of reality.

    There is but one h. h isn’t defined for a specific program; it either exists or it doesn’t. What this proof (correctly) says is that it doesn’t.

    No no no. You have a p that DOESN’T EXIST!!! The proof only says that h doesn’t exist for this ONE kind of p that DOESN’T EXIST. It says nothing about ALL p or all h.
    Of course it’s impossible to tell if programs that don’t exist will terminate or not. THEY DON’T EXIST! That’s why you can’t tell. It says NOTHING about programs that DO exist.

    If your proof ends up invalidating the premise of the proof, the proof is also invalidated.

    No. Then your proof is a valid proof that the premise is false.

    WOW! You want the premise (program q) to remain true. You’ve invalidated the WRONG part which happens to be your proof itself. h is what you want invalidated, NOT q. If q goes, so does your proof.

    The premise is that a program p exists.

    There’s no such premise here. The only place p is used is in the definition of h, where it is a variable which ranges over all programs.

    If there’s no such premise, then there’s no such proof either. Sorry, but you need to rethink your arguments. p is defined to call h, so your statement is false right there. If you wish to claim that p does NOT call h, then I’m all ears. I’d love to see that version of your proof.

  40. Torbjörn Larsson, OM

    h takes q as input which has h within its makeup. That’s a recursive dependancy.

    I don’t pretend to understand this thoroughly, but it seems to me that h doesn’t execute the input. It analyzes its string version and decides if it would halt or not with the given input. If it didn’t, but instead executed it, it wouldn’t halt itself. (And it would be rather pointless to boot.)
    As h is not executing the code it is not making a recursion in any sense that I know of. And I believe Xanthir has said so already:

    The program h analyzes the structure of q to determine the result – it does not actually *run* q,

    I’m not a CS, but I can’t imagine any science where a well-known basic proof that is generally accepted would contain any simple error.

  41. Mark C. Chu-Carroll

    I’ve been avoiding this, because it’s most likely to be pointless; when someone makes the claim that a fundamental proof is invalid, they’re generally unconvinceable. It’s like spending time with people who insist that Cantor’s diagonalization is invalid. What’s the point of arguing?
    The halting problem proof is remarkably simple, and the problems that Cleo is alleging simply don’t exist.
    The proof is based on the supposition that I can create a program H that takes another program Q as input, and returns an answer about whether or not that program will halt. The point of the proof is to show that that supposition inevitably leads to a contradiction.
    Note that the proof does not say that H runs Q. In fact, if it ran Q, then if Q didn’t halt, then H wouldn’t halt, and therefore H wouldn’t be a halting oracle. So if there were a halting oracle H, it by definition could not run Q. The point of a halting oracle is to answer by analysis only whether or not a particular target program Q will halt for a particular input.
    Q does invoke H. But there’s no problem with that. By the fixed point principle, we know that it’s possible to implement self-reproducing programs – we call them Quines – and Q can quine to invoke H.
    The point of the proof is that for any supposed
    halting oracle H, we can construct a program Q for which H must fail. If H returns the wrong answer for anything, then it’s not a halting oracle – and in fact, we can’t trust it for anything, because we know that it doesn’t always return the correct answer.
    It’s a very simple proof by contradiction. We want to prove a statement not-A, where A is “There exists a halting oracle H”. So in absolutely classic traditional form, we suppose A is true, and then show that the truth of A necessarily creates logical contradictions, which means that A cannot be true, and therefore not-A is true.
    Where is there any room for argument in this?

  42. Xanthir, FCD

    Well, everyone else answered for me. Once again, h does not run the program that is passed to it as input. As Mark says directly above, and I said in my original response, h *can’t* run the program that is passed to it, because h is defined to halt and provide an answer.
    Why is h defined to halt? Why is this a valid step? Because we’re looking for a halting oracle: a program that will *always* tell us whether another program will halt. We’re not saying, “Here, I’ve created a function h, and I will not arbitrarily declare that it has properties that I want.” Instead, we’re saying, “Any program that can call itself a halting oracle must have this property. Thus, if we design a function called h and want to say it’s a halting oracle, it must have this property. So, we’ll assume that it does for the moment and see where it leads us.”
    If h actually ran q, it *would* create a recursive relationship, as you note. H would run q which would run h which would run q, and so on. This cannot be true, though. If q never halts, then when h runs q, h doesn’t halt either. Thus, any program which wants to call itself a halting oracle *cannot* run the program passed to it. Since we’re assuming h is a halting oracle, we must then assume that h, as well, doesn’t run q. It merely analyzes q’s structure and returns a result based on that.
    Now, as for q running h, this is also trivial. H is a finite-length program (because it has to halt if it wants to be a halting oracle), and so you can easily insert the code of h into another program, such as q.
    The difficulty only comes when you try to call h(q,i) from within q. Since you pass the source code of the function to h, this means that q has to contain its entire source code within itself. This seems to make q infinite length (q contains q which contains q which contains q…), but luckily the Fixed Point Theorem which Mark referenced fixes this. Essentially, this guarantees that there will *always* exist programs which *can* output their source code as long as the language they are written in is Turing equivalent. Quines are really interesting, you should google them. They’re one of the first things people often ask for when you create a new language. ^_^
    So, q can utilize quining tricks to reproduce its source code from within itself. We’ve rescued q from being unconstructable, so we can use it now, and so the conclusion follows trivially.
    The point of the whole thing is not to show that one particular program isn’t a halting oracle, but rather to show that *any* program which claims to be a halting oracle will have some input for which it fails. Thus, it cannot be an oracle. Since this is true of *any* program, there can be no halting oracles, ever.
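
    For the curious, here’s one standard minimal Python quine (a generic example, not anything specific to the halting construction): a two-line program whose output is exactly its own source, which is the kind of trick q relies on to hand its own code to h.

    ```python
    s = 's = %r\nprint(s %% s)'
    print(s % s)
    ```

    The string holds a template of the program; formatting the template with its own repr() reproduces the first line, and the %% collapses to the % in the second.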

  43. Cléo Saulnier

    Well, so far there are no arguments against what I say. That H doesn’t execute Q is not a valid argument because I never said it does.
    I also don’t accept papers or proofs just because a so-called “giant” wrote it. If it’s flawed, it’s flawed. And this one is clearly flawed big time. It’s rather obvious too.

    The proof is based on the supposition that I can create a program H that takes another program Q as input, and returns an answer about whether or not that program will halt. The point of the proof is to show that that supposition inevitably leads to a contradiction.

    Right, but a contradiction of what exactly? I’m saying it can’t be a contradiction on Q. If Q cannot exist, then you’ve only proven something about Q’s that don’t exist. It says nothing about all P’s. Do you see that?

    Q does invoke H. But there’s no problem with that. By the fixed point principle, we know that it’s possible to implement self-reproducing programs – we call them Quines – and Q can quine to invoke H.

    So what? You’re going to have to find a valid H to include. If you can’t, then Q doesn’t exist. That only speaks about Q, not all P’s. Why is this so difficult to see?

    The point of the proof is that for any supposed
    halting oracle H, we can construct a program Q for which H must fail.

    Right, but the proof doesn’t accomplish that point. The H in the proof is supposed to handle programs that do not exist. Why in the world would H need to take into account programs that don’t exist?
    Don’t you see? In order for Q to exist, you NEED a valid H that can work on that Q. THERE IS NO SUCH PROGRAM where the combination of both Q and H exists at the same time! This renders Q non-existent. So H does not need to take it into account after all.
    This is really simple stuff. Not sure why there is any confusion here. Your “any” must exist!

    It’s a very simple proof by contradiction. We want to prove a statement not-A, where A is “There exists a halting oracle H”. So in absolutely classic traditional form, we suppose A is true, and then show that the truth of A necessarily creates logical contradictions, which means that A cannot be true, and therefore not-A is true.
    Where is there any room for argument in this?

    Easy! Because your not-A only applies to things that do not exist. Why would anyone care about a proof that says something about things that do not exist?
    Answer me this!
    Why must H take into account programs that can never exist? No one will ever be able to write Q. EVER!

    Reply
  45. Cléo Saulnier

    To Xanthir:

    The point of the whole thing is not to show that one particular program isn’t a halting oracle, but rather to show that *any* program which claims to be a halting oracle will have some input for which it fails.

    Yeah, but your “*any* program” must exist. Your proof just ends up saying that it doesn’t. So Q is NOT *any* one such program. Proof = BAD!

    Reply
  46. Mark C. Chu-Carroll

    Cleo:
    Your argument is, basically, that proof by contradiction can’t possibly work for anything involving programs. That’s a damned stupid thing to say.
    The premise of a proof by contradiction is to say “Suppose this thing exists”. That’s what we do in the halting proof. We suppose that the halting oracle exists.
    *If* there is a halting oracle, *then* the program Q can be written. That’s standard proof by contradiction reasoning. We’ve taken the existence of H as a supposition, and we’re showing that it leads to a contradiction. By incredibly simple logic, if any program H exists, then there’s a program Q. If Q can’t exist – then that satisfies the proof by contradiction! Because by the existence of H, we can prove that a corresponding Q *must* exist. If no Q can exist, that can only mean that the halting oracle cannot exist either.
    This isn’t exactly rocket science. This is incredibly simple, basic, straightforward proof by contradiction. If you don’t get it, that’s your problem, not a problem in the proof.

    Reply
  47. Cléo Saulnier

    I never said that proof by contradiction cannot work for anything involving programs. Not sure what would make you think such a thing.
    You must understand that there are two parts here. H and Q. Q needs H. So tell me what part you want to contradict. If you end up assuming that H exists and then show that it cannot exist for Q, you must make sure that Q exists throughout. Otherwise you’ve only shown that H cannot exist for a Q that doesn’t exist. That’s what’s really stupid here.
    I do understand all of this. Yet no one has yet answered why H must take into account a program Q that will never exist. Answer me that if you’re so sure of yourself. I agree with one point. This isn’t rocket science.

    Reply
  48. Mark C. Chu-Carroll

    You keep ignoring the point.
    If H exists, then Q exists. If Q doesn’t exist, then H doesn’t exist. They’re intrinsically connected.
    If H exists, then it’s a program. If a program H exists, then by the fixed point theorem, the program Q is constructable. So the existence of Q is logically implied by the existence of H.
    The reason that I say you’re essentially arguing that proof by contradiction can’t work for programs is that this proof is a canonical example of how we use proof by contradiction to show when something is non-computable: we suppose the existence of a program for something non-computable. Then we show how that leads to a contradiction, by using the supposed program as an element of a construction.

    Reply
  49. Cléo Saulnier

    If H exists, then Q exists. If Q doesn’t exist, then H doesn’t exist. They’re intrinsically connected.

    Your first two statements are both incorrect. There’s no basis for them. Please back this up.
    If H exists, there’s no basis to assume the existence of Q. As for the other statement, if Q doesn’t exist, then H doesn’t need to take Q into account and H can very well exist.

    If H exists, then it’s a program. If a program H exists, then by the fixed point theorem, the program Q is constructable. So the existence of Q is logically implied by the existence of H.

    The program Q is only constructable as long as H works on that Q. If there is no such H for Q, then Q cannot exist and you’ve defeated your argument. This proof only shows that Q cannot exist. It says nothing about H. You’re using circular logic to try and make your argument. I’m sorry, but this does not hold water. Here’s why.
    You’re assuming that H exists, right? Ok, let’s go with that. You want to contradict its existence, correct? Fine. What is the tool you’re going to use to show the contradiction? You’re using Q. But if Q fails to exist, then so does the tool that shows the contradiction. So the contradiction disappears. Please understand this before continuing. If the tool used for showing the contradiction doesn’t exist, there can be no contradiction. Plain as that. It’s the subject matter, not the tool, that you want to contradict.

    Reply
  50. Mark C. Chu-Carroll

    H is a halting oracle: by definition, it works on all programs. A halting oracle is a program, H, which takes another program – any other program – as input, and determines whether or not that input program halts. The other program – the program that H tests for halting – is a *parameter*. H works *for all programs*. If H doesn’t work for all programs, then it’s not a halting oracle. The point of the halting theorem is that you can’t build a halting oracle.
    No one would dispute that for many specific programs, you can write a halting prover for that specific program. The question is, can you write a general one? That’s the whole point of the theorem. Not “Given a program Q, can I write a program H which determines whether Q halts?”, but “Can I write a single program H which for *any* program Q determines whether or not Q halts?” For the earlier question – can I write a program HQ which determines whether or not a specific program Q halts? – it’s true that without knowing Q, you can’t construct HQ. But for the latter question, if the program H exists, then it has to be able to answer correctly for all possible inputs – so you don’t need to know the specific problematic input when you write H. No matter how you write H, no matter what you do, any program that claims to be a halting oracle can be defeated by constructing a program like Q for that halting oracle.
    So, if H is a halting oracle, then we can construct Q very easily, by the fixed point theorem, using exactly the method shown in the post above! Q is a program that incorporates H, and invokes H(Q), and chooses what to do based on what H(Q) predicts it will do. The program for Q is trivial: “If H(Q)=Halt, then loop forever else halt”.
    The only trick in constructing Q is making Q be able to invoke H using Q itself as a parameter. But the fixed point theorem proves that it’s possible. Q is obviously constructable – it’s a trivial if/then statement, with an invocation of H(Q) in the condition.
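    For concreteness, here is a minimal sketch of that construction in Python (all of the names here are illustrative assumptions rather than real library functions: halting_oracle stands for the supposed H, and own_source stands for the self-reference that the fixed point theorem guarantees Q can obtain):

        def halting_oracle(program_source: str) -> bool:
            # The supposed oracle H: by assumption it always halts, returning
            # True ("halts") or False ("doesn't halt") for any program text.
            raise NotImplementedError("the theorem says this can't actually be written")

        def Q(own_source: str) -> None:
            # Ask the supposed oracle about Q itself, then do the opposite of
            # whatever it predicted.
            if halting_oracle(own_source):
                while True:   # H said "Q halts", so Q loops forever
                    pass
            else:
                return        # H said "Q doesn't halt", so Q halts immediately

    Whatever H predicts about Q, Q does the other thing – which is exactly the trivial if/then described above.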

    Reply
  51. Cléo Saulnier

    You’re arguing on the basis that I don’t understand the halting problem. While that’s rather insulting, it doesn’t show how my arguments are false.
    Your last part doesn’t make sense. How can Q invoke H if you say it doesn’t exist for that Q? I understand you keep trying to show that it should work for *ALL* P. But if Q doesn’t exist, it’s NOT part of *ALL* P. That’s why you’re using circular logic. And you completely avoided my argument.
    Look, you initially said that H exists and works on all P, correct? P includes Q as one of the possibilities, right? Then you go and show that an H that works on all P doesn’t exist. But this means that Q cannot exist either because it has nothing to call. You agree? So we must remove Q from the set of P. Hence, you’ve proven NOTHING! This ends up saying nothing about P where Q is not a part of it. In fact, we MUST omit Q from the set of P because Q does not exist.
    You cannot construct the program Q as you claim. You believe it to be a simple matter of if/then, but that’s not true. The input of H (as a program) is invalid to that Q. H can exist for Q if you accept relationships. But the definition of the problem doesn’t allow Q to use this H, so Q cannot call H. It’s an invalid input. The proof doesn’t mean that H doesn’t exist. It again means that Q cannot exist as described. This is what I meant when I said that the humans who wrote up the descriptions are intentionally selecting what answers they accept. Even so, the proof still falls apart.
    However, you’re refusing to see logic. You completely avoided my previous comment. Please consider it and understand it. You’re trying to sweep the Q self-contradiction under the rug.

    Reply
  52. Mark C. Chu-Carroll

    You’re ignoring simple logic.
    As I keep saying: the way that a proof by contradiction works is by assuming the statement you want to disprove, and then showing that the statement leads to a contradiction.
    If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q. It’s a construction from H. As I keep saying: *IF* you have a halting oracle, then by the fixed point theorem, Q is constructable from H. If Q isn’t constructable – then we have our contradiction – because by fixed point plus construction, Q *must* exist. So the proof by contradiction works. If Q *IS* constructable, then it’s a counterexample to H being a halting oracle, and so it’s a contradiction. So either way, it’s a contradiction caused by assuming the existence of a halting oracle. Which means the proof stands.

    Reply
  53. Cléo Saulnier

    As I keep saying: the way that a proof by contradiction works is by assuming the statement you want to disprove, and then showing that the statement leads to a contradiction.

    That’s not entirely correct (and again you ignored my comments). If you disprove the contradiction itself, then there cannot be a contradiction (IOW, you’re contradicting the contradiction). When using proof by contradiction, you have to be VERY careful what it is you are contradicting. You cannot throw caution to the wind and use all conclusions without first looking at what exactly it is you are disproving. By your statements, I’m having serious doubts about whether you understand this part of proofs by contradiction.

    If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q.

    Wrong! Why must Q exist? You still haven’t shown any reason why this must be so. I’m sorry, but I’m not just going to take your word for it. Show me why Q must exist?

    It’s a construction from H.

    An H that assumes P contains Q and H, sure. If P ends up not containing Q or H, you’ve only proven something about an H that worked on an incorrect set of P. Big deal. You didn’t prove anything. Fix P and try again. We’re only interested in P having programs that exist.

    *IF* you have a halting oracle, then by the fixed point theorem, Q is constructable from H.

    This only holds when Q and an H that works on Q (as well as all other P) are part of P. If it’s found that Q (or H that works on Q) is not part of P, then you cannot use the fixed point theorem (for one thing, you cannot enumerate things that don’t exist, much less compose something of them) and your argument falls apart. Like I said, it’s a self-destructing proof. It cannot succeed.
    I don’t think you realise that I don’t need to have a halting oracle for programs that do not exist. You keep tripping yourself up on that point. There is no composability possible for things that do not exist. No fixed point theorem would apply.

    If Q isn’t constructable – then we have our contradiction – because by fixed point plus construction, Q *must* exist.

    Well no. Your premise of constructability is flawed. You built Q out of something that doesn’t exist (and need not exist). No contradiction there.
    BTW, in the definition of the proof, it is not accepted that Q can invoke H in the first place even though it says it should. I don’t think you realise that. There’s more than fixed point. There’s program validity too. You can’t compose programs from incompatible parts.
    You’re only resorting to repeating yourself. Notice that I’ve responded to each and every one of your arguments. But you have not done the same. You completely ignore them. I must conclude that you are using wishful thinking at this point. You seem to be genuinely convinced that there cannot be fault in this proof and therefore you do not even consider it. If you work from this frame of mind, it is no wonder that you do not respond to my arguments. However, I was hoping for a more open discussion and implore you to answer my questions put forth in this discussion. Why must an H work on a program that does not exist?

    Reply
  54. Cléo Saulnier

    Let’s redo your proof, shall we? But I’m going to clarify one particular point so that your proof doesn’t result in a contradiction where even you will agree.
    H is only required to output true or false for valid programs, correct? For invalid programs, H does not need to answer correctly because they are not in the set P of valid programs. Invalid programs neither halt nor not halt. They may run for a while and then crash. But they’re still invalid. So we’re going to have H return function F whenever a program S acts on the result of H(S) unless S is H. In that last case, it’ll return false. Basically, it returns a function F for all invalid programs (which includes all those that try to act on H(self)).
    Now, let’s redo your proof and we’ll use your words.

    If you assume the existence of a halting oracle, H, then that implies the existence of the counterexample, Q. It’s a construction from H. As I keep saying: *IF* you have a halting oracle, then by the fixed point theorem, Q is constructable from H.

    See, that last sentence doesn’t hold up. Q is invalid if it tries to use H because Q doesn’t know what to do with a returned function. And since Q is invalid, H correctly has no requirement to produce a true or false answer. All statements are in accord and that means H remains valid for all P. No contradiction.
    See, the original proof has a fatal flaw. It requires that H produce valid results for invalid programs. There is no such requirement. If you want valid results for invalid programs, then of course you won’t find an H that exists. This is why I say the proof sets up its own results. But such a requirement, as there is in your proof, is unfounded. The H that you assume exists cannot exist as defined in the proof. And that’s ultimately why the proof cannot succeed. Both the premise (invalid H) and the conclusion (invalid H) are the same. But as the above example shows, it’s also possible to define a (possibly) valid H that survives your proof intact without any contradiction just as long as you remove the need to give valid results for invalid programs.

    Reply
  55. Thony C.

    Hey Cléo, instead of wasting your time and energy on us pea brains here at GM/BM why don’t you write up your stroke of genius and send it off to the Journal of Symbolic Logic? I’m sure you’re in the running for at least a Fields if not a Nobel; after all, you have solved a problem that defeated the combined efforts of such pea brains as Turing, Post, Church, Gödel, Kleene, Rosser and Davis. Come on, after you get published the name Saulnier will be up there alongside all the other immortals of meta-mathematics. I’ll even let you autograph my copy of Davis’ The Undecidable!

    Reply
  56. Mark C. Chu-Carroll

    In the theoretical realm where this exists, there’s no such thing as a program that crashes – there’s no such thing as an invalid program!
    In terms of primitive computation theory, “halt” means “completes computation without error”; “doesn’t halt” means that either the program never completes the computation, or encounters an error that renders the computation invalid.
    If you want to think of it in terms of real programs and real computers: on a modern computer, you can take any sequence of bytes you want, and put them into memory, and then perform a jump to the first byte of the sequence. There are a number of things that could happen. You could encounter something that doesn’t form a valid instruction. That’s an error, and so the computation doesn’t complete without error – so it’s considered non-halting. It could be a valid set of instructions, which contains something that produces an error, like dividing by zero, addressing an invalid memory location, etc. That’s an error – so the computation doesn’t complete without error, and doesn’t halt. It could contain a jump to an address that’s not part of the program, and which contains garbage. That’s an error – the computation doesn’t complete without error, so it doesn’t halt. It could be a valid set of instructions that does nothing wrong, but contains an infinite loop. Then the computation never completes, so it doesn’t halt. Or it could be a valid set of instructions that does nothing wrong, and eventually completes execution without error – a halting program.
    That’s the theoretical abstraction of programs. Every program either halts (that is, completes execution without error), or doesn’t halt (which means either never halts, or encounters an error). There’s no such thing as an “invalid” program: there are programs that halt, and programs that don’t. What we would call a buggy crashing program on real hardware is a non-terminating program in the abstract.
    Get that? The definition of “program” in the realm of “effective computing systems”, the domain of this proof, is that every program is valid. *Every* program, by definition, when executed either halts (completes without error) or doesn’t halt (doesn’t complete without error). And a halting oracle *always* halts, and answers “Yes” if its input program halts, and “No” if it doesn’t. Since *every* program halts or doesn’t halt, and the halting oracle must always halt with a correct answer for every possible input, there *is no program* for which the halting oracle doesn’t produce a correct answer.
    Further, the construction of Q given H is trivial. As I’ve repeatedly said, if you have a halting oracle H, generating Q from it is a completely mechanical process. The logic is the trivial conditional that I showed above. And the self-embedding of Q, allowing it to pass itself to H, is guaranteed to be possible – if it’s not, then you’re not using a complete effective computing system, and the proof is only about ECS’s.

    Reply
  57. Xanthir, FCD

    It seems this is the heart of your misunderstanding:

    The program Q is only constructable as long as H works on that Q.

    If H doesn’t work on that Q, then H isn’t a halting oracle. That’s what we were trying to prove in the first place.
    Though, this part might also be it:

    Your last part doesn’t make sense. How can Q invoke H if you say it doesn’t exist for that Q? I understand you keep trying to show that it should work for *ALL* P. But if Q doesn’t exist, it’s NOT part of *ALL* P. That’s why you’re using circular logic. And you completely avoided my argument.

    You seem not to understand hypothetical constructions, and their role in proof by contradiction. When you make a statement like the one above, you are quite literally saying, “I don’t understand how proof by contradiction works.”
    First, you *assume* something. Then you find some logical consequence of that assumption. Finally, you show that the logical consequence is untrue. This proves your assumption false.
    In this case, you assume that a halting oracle exists, called H. The logical consequence of H existing is that it should be able to correctly decide whether or not *any* program halts. Then you construct Q, which H cannot correctly decide on. Thus our assumption is false, and there cannot be a halting oracle.
    The point of the whole thing is that when Q is analyzed, it looks like it does one thing, but when it’s actually run, it does the other.

    Wrong! Why must Q exist? You still haven’t shown any reason why this must be so. I’m sorry, but I’m not just going to take your word for it. Show me why Q must exist?

    Um, because we showed how to construct it? There’s only one tricky step in the construction of Q, and that is inserting Q’s own source code into the call to H. But we can prove that this step is possible. The rest of Q is completely trivial.
    So, that’s our reason why Q must exist. It’s trivially easy to construct, and we showed exactly how to do it. Prove us wrong.

    H is only required to output true or false for valid programs, correct? For invalid programs, H does not need to answer correctly because they are not in the set P of valid programs. Invalid programs neither halt nor not halt. They may run for a while and then crash. But they’re still invalid. So we’re going to have H return function F whenever a program S acts on the result of H(S) unless S is H. In that last case, it’ll return false. Basically, it returns a function F for all invalid programs (which includes all those that try to act on H(self)).

    Wow. If we define H as something other than a halting oracle, then the proof that H isn’t a halting oracle fails! Truly, sir, your command of logic is astonishing.
    What you have defined is not a halting oracle. H must return true or false for *all* programs. If a program is ‘invalid’ (which I’m guessing means that it would cause errors on compilation or execution), it definitely still halts or loops. A crash is simply a halt. It may not be the type of halt you want, but the program definitely stops. Thus, H would say that any incorrectly written programs halt. (Unless they exploit some bug in the compiler/computer architecture causing an infinite loop upon crashing, in which case H will say that the program loops. Because it’s an oracle, and knows these things.)
    If I just feed H my favorite brownie recipe with the quantities of ingredients as input, it’ll crash, because my brownie recipe isn’t syntactically valid Lisp (I’m assuming for the moment that the Oracle is written in Lisp, because both are magical).
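    As a toy illustration of that last point (assuming Python rather than Lisp as the stand-in language, and using the convention above that a crash is just a halt):

        # Feeding a non-program to the machinery doesn't produce some mysterious
        # third outcome: the attempt simply stops with an error, i.e. it halts.
        brownie_recipe = "2 cups chocolate; 1 cup flour; bake until delicious"
        try:
            compile(brownie_recipe, "<recipe>", "exec")
        except SyntaxError:
            print("Not a valid program - the run ends (halts) with an error.")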

    But as the above example shows, it’s also possible to define a (possibly) valid H that survives your proof intact without any contradiction just as long as you remove the need to give valid results for invalid programs.

    Yup, by defining H so that it isn’t a halting oracle anymore.

    Reply
  58. Cléo Saulnier

    Thony: Your defense is that “giants” wrote it. Weak!
    Mark C. Chu-Carroll:

    In the theoretical realm where this exists, there’s no such thing as a program that crashes – there’s no such thing as an invalid program!

    It’s trite to include invalid programs and I will not waste my time with it. If you insist on this, then I’m satisfied that your proof is bogus. We can end it there. The very definition of invalid programs is that you don’t know what happens. Gee! How amazing! We can’t tell what happens to programs defined as being unknowable. BORING! Show me a real proof! Don’t include invalid programs. This one just repeats a definition. Irrelevant.

    the domain of this proof, is that every program is valid.

    Trite! Who cares about programs that don’t work? Wish you’d mentioned this before though. I could have laughed it off and moved on. At least now I can back up the fact that this proof is a joke. I never thought you’d admit the inclusion of invalid programs though. Thank you for being big enough to admit it!
    Xanthir:

    If H doesn’t work on that Q, then H isn’t a halting oracle.

    Of course it is. Q doesn’t exist. So H doesn’t need to work on it. You’ve invalidated the set that H was supposed to work on, not H itself. Your definition ended up being wrong, not the existence of the oracle program.

    Then you find some logical consequence of that assumption. Finally, you show that the logical consequence is untrue. This proves your assumption false.

    You don’t understand how proof by contradiction works. If it ends up that your contradiction is invalid, so is your proof. Do you understand that? Obviously not. It doesn’t matter what conclusion you seem to think your proof comes to if the contradiction doesn’t hold up. Do you not get that? Without a contradiction, you get nothing.

    The point of the whole thing is that when Q is analyzed, it looks like it does one thing, but when it’s actually run, it does the other.

    Again, this can only happen if Q exists. If it doesn’t, then your argument falls apart because there is no contradiction.
    Don’t you see? You’re saying Q is a contradiction, yet Q doesn’t exist to show this contradiction. You can’t do that with proofs by contradiction. This is basic stuff.

    The rest of Q is completely trivial.

    No, this is wrong. Let’s have a set R that includes all programs including Q. Let’s also have a set S that does NOT include Q. Your proof only says that R cannot exist since Q does not exist. So all it says is that H could have never worked on this set in the first place. You used the WRONG SET of programs in your assumption. Big deal. Who cares about proofs that say something about a set of programs that included non-existent programs? Of course you’re going to get a result of undecidability. You choose that result by including a non-existent program in the set of all programs.

    If we define H as something other than a halting oracle, then the proof that H isn’t a halting oracle fails!

    You’ve finally got it! This is what your proof does. It’s your definition that falls apart. NOT H.
    That’s exactly right! This is what I’m saying all along. The way your proof is set up, you don’t know for sure that H is the oracle that works on all P. You’re ASSUMING that P is indeed the full set of programs. But it ends up that P isn’t the P you thought it was. So you contradicted P, NOT H. P is contradicted because Q doesn’t exist within P. This is what I’ve been saying all along. It says NOTHING about H.

    A crash is simply a halt. It may not be the type of halt you want, but the program definitely stops.

    You can’t say this. Invalid opcodes are undefined. You’re making arbitrary decisions on how the machine works. You can’t do that. If you do, you’re making stuff up. Again, this is my point exactly. Thank you for making the demonstration.

    Yup, by defining H so that it isn’t a halting oracle anymore.

    But your assumption is linked to Q. If Q doesn’t hold up, your definition of H is invalid meaning you were incorrect in assuming it was the oracle in the first place. This means your definition of the oracle was wrong. Not that there wasn’t any. You seem to think that just because you declared it the oracle that this is all you need. Not so. You’ve linked this definition of H to P. If P fails, then the H in your wording was not really H after all. It’d be a different story if Q existed throughout because P would not be affected and your assumption would remain valid and there would be a valid contradiction. But this doesn’t happen. You’re invalidating your definition of H, not H itself.
    Unfortunately, I’m not interested in flights of fantasy. So if there is an insistence that invalid programs are part of P, then DUH! it’s undecidable. Damn, I can tell you right now no one can tell what an invalid program will do. It’s invalid. Don’t need a proof for that. That’s the VERY definition of invalid programs. Mark just made my point for me that the proof sets up its own answer of undecidability by forcing a valid answer on invalid programs. Thank you for at least admitting that much.

    Reply
  59. Mark C. Chu-Carroll

    This is exactly why I originally didn’t want to get into this argument. It’s exactly like the other example I mentioned – the old crackpots who try to argue against Cantor’s diagonalization.
    Seriously, if you believe that your refutation is valid, you should go and publish it. This proof is a fundamental proof from the mathematics of the 20th century. *If* you really believe that you’ve got a valid refutation of it, it’s enough to guarantee you millions of dollars of prize money, fame, and a faculty position or job at an institution of your choice.
    Two questions for you, which you have so far refused to answer.
    Why do you think that the program Q doesn’t exist? If you accept the existence of H, and H halts and gives an answer, then why can’t Q be constructed? It’s an incredibly straightforward construction; given H, construct Q.
    Second, proof by contradiction is a whole lot simpler than you make out. The *only* necessity in a PbC is to show that if you accept the premise, and follow valid logical inferences from it, you get a contradiction. The steps just need to be valid logical inferences.
    In this case:
    (1) There exists a halting oracle, H. (The assumption.)
    (2) Properties of effective computing systems:
    (2a) In an effective computing system, given a program A, you can write a program B that invokes A. (A premise; a fundamental property of effective computing systems, sometimes called the recursion principle.)
    (2b) In an effective computing system, given programs A, B, and C, where A is guaranteed to halt, I can construct a program that runs A, then runs B if A answers “yes”, or otherwise runs C. (We’ll take this as a premise; it’s provable from the recursion principle.)
    (3) By 1 and 2a, I can construct a program that invokes H.
    (4) By the fixed point theorem, if I can construct a program that invokes H, I can construct a program that invokes H *on itself*.
    (5) By 3 and 4, I can construct a program that invokes H on itself.
    (6) By 5 and 2b, I can construct exactly the program Q in the original proof.
    What step in this is invalid logic? Where’s the invalid inference that invalidates the proof? If each of these inferences is valid, then the proof is correct.
    Given a program H, I can construct a program Q(H). (A logical inference from the fact that H exists, and that programs can be constructed which invoke any existing program.)
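    Purely as an illustration of steps 3 through 6, here is a minimal sketch in Python (every name here is an assumption for the sketch: h_source is presumed to be the text of the supposed oracle, defining a function H(program_source) that returns True for “halts” and False for “doesn’t halt”; the self-application uses an ordinary quining trick in place of an explicit appeal to the fixed point theorem):

        def make_deceiver(h_source: str) -> str:
            # A total, purely mechanical mapping from one program text (the
            # supposed oracle H) to another program text (the deceiver Q).
            template = (
                "{h_source}\n"
                "template = {template!r}\n"
                "own_source = template.format(h_source={h_source!r}, template=template)\n"
                "if H(own_source):\n"
                "    while True: pass  # H said Q halts, so Q loops forever\n"
                "else:\n"
                "    pass              # H said Q loops, so Q halts at once\n"
            )
            return template.format(h_source=h_source, template=template)

    The point of the sketch is only that make_deceiver is a total, mechanical transformation on program texts: hand it the text of any candidate oracle, and it hands back the text of a program that the candidate must get wrong.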

    Reply
  60. Thony C.

    Thony: Your defense is that “giants” wrote it. Weak!
    I did not defend anything! I said that if you genuinely believe that the proof of the halting problem is false, then publish your results, because if you are right, it makes you the greatest meta-mathematician of the last hundred years and you wouldn’t want to miss out on all the fame and glory that that entails, would you?

    Reply
  61. Cléo Saulnier

    Are you guys really proud to say that H(undecidable) = undecidable? That’s what the proof says. Can someone tell me why anyone would accept a proof that just repeats a definition? You know BEFOREHAND that the program is undecidable. By using that definition, you then go out to prove that it’s undecidable. It’s so dumb, it’s ridiculous.
    Mark, there’s a serious flaw in your understanding of proofs by contradiction. Look at the prime proof.
    The prime proof says that each item in the set P MUST be prime. If at any point during the proof by contradiction it is shown that any of the items were NOT prime, then the proof would fail. This would be so even if you somehow managed to show some other contradiction. You’d say Q is a new prime and thus the set P could not have held ALL primes. But this conclusion wouldn’t hold up if it’s found that one of the p’s is not actually a prime. In that case, the proof would break down.
    This is exactly what’s going on with the halting problem proof. The rules that you set up fail. They break down. If Q ends up not holding, then the set P was incorrectly defined. The items in it were not what you claimed they were.
    Your proof ends up saying that H can’t exist, right? So Q could not possibly invoke it, right? That means neither H nor Q were actually part of P as you thought and could never have been built in the first place. (And if you pass H to Q, then the combination of (Q,I) can’t exist, so whatever way you define your sets, it still fails. I use set P for simplicity where Q has H as part of itself.) So the H you defined wasn’t the oracle after all. Your proof broke down in the same manner the prime proof would break down if the set of P were found out to contain composite or non-existent numbers.
    In your proof, there are many flaws. First, your assumption isn’t that H is the oracle. It’s that H is the oracle if and only if the set P includes H and Q. If H or Q are found to not exist, then you are precluded from coming to any conclusion on the existence of an oracle program because P was incorrectly defined and thus H was not actually the oracle.
    This is a fundamental rule of proof by contradiction. I can’t believe that you’re not mentioning it. You must make sure that the rules used to show the contradiction don’t fall apart. You can’t end up contradicting the contradiction. Proofs by contradiction that do this are invalid.
    Thony: I’ll take that suggestion under consideration.

    Reply
  62. Mark C. Chu-Carroll

    Cleo:
    H(undecidable)=undecidable *is* a profound result. Before Gödel and Turing, it was widely believed that it was possible to produce a mathematical system in which all statements were provably true or false. That would mean that it would be possible to create a device which could, for *any* well-formed logical statement, produce a proof that it was either true or false. That every decision problem was solvable in a finite amount of time. Showing that that could not be done was astonishingly profound, and had an immeasurable impact. (That’s also why Thony and I are saying that if you really believe your “disproof”, you should publish it. You’d be overturning one of the most important mathematical discoveries of the last century. You’d be putting yourself right up there in the ranks of the greatest mathematicians of all time: Gödel, Cantor, Turing, and Saulnier! If you really have discovered a fundamental flaw in this, why are you wasting your time arguing with a bunch of two-bit nobodies on a blog?)
    ———————–
    Shifting gears:
    The assumption *IS* that H is the oracle. You don’t need to specify that it’s a halting oracle that will work for the program Q – it works *for all programs*. If it doesn’t, it’s not a halting oracle. (If it makes you happier, you can make it be a three-state oracle: instead of returning “Halts” or “Doesn’t Halt”, it can return “Halts”, “Doesn’t Halt”, or “Invalid”. But it’s part of the definition that it has to work on all inputs, or it’s not a halting oracle.)
    Given the existence of a halting oracle, H, the existence of Q is proven by the premises + inferences that I listed above. The applicability of H on Q is proven by the fact that to be an oracle, H must work for all possible input programs Q.
    The steps are there. Which one is invalid?
    The supposition is that H is a halting oracle – which means that H is a program, and H works for all possible input programs. The steps in my previous comment show how the existence of H necessarily implies the existence of Q, and
    exactly how to construct Q as a valid program. If you want the three-state oracle, it doesn’t actually change Q *at all* – because the construction, by the very nature of computing systems, guarantees that the construction will produce a valid program Q.

    Reply
  63. Cléo Saulnier

    Your logical steps are flawed.
    First, H is not the oracle unconditionally. It’s assumed to be the oracle based on certain conditions. If these conditions fail, then H was NOT the oracle after all. These conditions are that P contains H (as defined) and Q. If either ceases to exist in this proof, then the proof fails. Got that? You want to contradict H, but not the assumed conditions needed for its existence. If the proof ultimately proves that H or Q does not exist, then the proof fails. Simple as that. It’s not complicated. This is why this proof cannot succeed.

    Reply
  64. Mark C. Chu-Carroll

    Where are the logical steps flawed? You keep saying they are – but you can’t say where the flaw is.
    Note the definition of H in the assumption: H is a halting oracle. Not “H is an oracle *IF* the following conditions are met.” The assumption is, unconditionally, H is a halting oracle for all programs in the effective computing system. No ifs, ands, or buts. A halting oracle is a program that *for all programs* provides a yes or no answer to “Does it halt?”.
    If H fails to be a halting oracle, then the proof *succeeds*. You’re adding conditions that aren’t there. If we assume that H exists, then everything else follows from that – per the proof and construction above. Every bit of the proof logically follows from either “H is a halting oracle”, or from the definitions of an effective computing system.
    It’s a simple chain of logic, which I’ve provided with labelled steps. Exactly what step of the logic do you claim is invalid? And by what rule of logic is it invalid?

    Reply
  65. Cléo Saulnier

    Your definition of H is incompatible with that of P. Your proof can’t succeed. P contains a program that does not exist and H has no requirement to process it. That’s one of the flaws. You think that H must give a result for Q. It does not.
    I know you’re trying to say that we only prove it doesn’t exist afterwards. But that’s where you trip yourself up. We’re talking about existence here. So if something ends up not being in P, H never had to process it and it’s perfectly valid for it to not give an answer for things that do not exist. So you can’t make conclusions on the results that H (as defined in your proof) gives according to the way your proof is laid out. You assume it gives either true or false when it has no obligation to do so.

    Reply
  66. Mark C. Chu-Carroll

    Cleo:
    This is the last time I’m going to reply. This is getting stupid.
    A halting oracle *must* return an answer for *any* program. There’s no wiggle room for “it doesn’t need to answer for *Q*”; if Q is a constructable program, then H *must* answer it.
    Further, if H exists, *then* Q exists. Period. There’s no wiggle room to argue that Q doesn’t exist. The *only* way that Q doesn’t exist is if H doesn’t exist. It’s impossible for there to be a halting oracle H for which you cannot construct a deceiver Q.
    Once again, I’ll say it: there’s a proof up there, formed from a sequence of logical inferences. If the proof is invalid, then one of those inferences must be wrong. Which step is invalid? They’re *labeled* for goodness sake – which one isn’t a valid inference and why?

    Reply
  67. Cléo Saulnier

    Step #1: You assume that H exists. Fine. But that’s not all there is. H must exist for all P. If the actual definition of P ends up being different in actuality than your assumption, then your proof falls apart. Note that a Q that exists and one that doesn’t would create two different P’s. If set X is different than set Y, you would expect that it’d be possible to have two different functions to deal with each one. Function 1 need not deal with items in Y and vice-versa. Now apply this concept to H. This means you MUST make sure P NEVER changes throughout your proof, otherwise H will need to be updated. If H is in P, it must remain throughout and cannot be invalidated without invalidating this proof. See infinite prime proof for an example.
    Step #2A: This is flawed because it’s possible that B cannot invoke A.
    Step #3: You can’t invoke H if you don’t exist or if H doesn’t exist with the assumed P. If you want your proof to hold, both the invoker and H must exist throughout. You can’t have a contradiction on either H or the program invoking it for this step to hold true. Otherwise, you are changing the definition of P and we’d need to start over.
    Step #4-6: As long as H exists, you’re fine. But there cannot be a contradiction here otherwise your proof fails. P must not change!
    So I agree with your steps on the condition that at the conclusion of your proof, you do not invalidate H (though I have issues with the way you compose programs). If you invalidate H, every step of your proof fails. They are all conditional on the existence of H. See the infinite prime proof for an example where the properties of the items within P must remain intact. Any contradiction showing that H is invalid will defeat your proof! All your steps are conditional on this.

    Reply
  68. Mark C. Chu-Carroll

    Gods but you’re dense!
    Any proof of computation starts with something called an *effective computing system* – that is, essentially, a formal mathematical way of saying “A turing complete computer”. The set of all possible programs for an ECS is fixed: it’s a recursively enumerable set, which cannot change, and which contains all possible computable functions. You’re basically insisting on a mutable set of possible programs. *There isn’t*. You can’t change the set of programs. It’s fixed. And the definition of a halting oracle H is that it works *on all programs defined for the ECS.* You can’t change that set. And if you exclude anything from it, you’ve admitted that H isn’t a halting oracle.
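    As a toy picture of what a fixed, recursively enumerable set of programs means, here is a sketch in Python (with the purely illustrative assumption that a “program” is any finite string over a two-letter alphabet):

        from itertools import count, islice, product

        def enumerate_programs():
            # One fixed enumeration of every possible "program", listed in order
            # of length. Nothing is ever added to it or removed from it.
            for length in count(1):
                for letters in product("ab", repeat=length):
                    yield "".join(letters)

        # e.g. list(islice(enumerate_programs(), 5)) == ['a', 'b', 'aa', 'ab', 'ba']

    Any construction rule is then just a map from items of this one fixed enumeration to items of this same fixed enumeration; it never adds anything new to the set.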
    From there, everything else is perfectly straightforward, derived from the *definition* of a computing system.
    Step 2a of my proof is valid by the definition of a computing system. If A is a program, then you can write a program B that invokes A. That’s not up to argument: it’s part of the definition of an effective computing system. There are no exceptions.
    If you reject that, then sure, you can reject this proof. But – and this is a huge but – if you reject 2a, you’re rejecting the recursion principle, and you’re no longer talking about a Turing complete computing system. You’ve downward defined the computing system to something sub-Turing.
    The proof doesn’t hold for computing systems that don’t support the recursion principle. But that’s not a particularly interesting fact: we know that there are sub-Turing computing systems for which the Halting problem is solvable.
    If the recursion principle holds, then you’ve got a Turing complete system, and the proof is interesting. But you can’t go raving about “P” changing. The premise is that H is a program. By the simple logic above, if H is a program, then Q is a program (∃H ⇒ ∃Q). Again, by simple logic, if Q is *not* a program, then H is not a program.
    ((∃H ⇒ ∃Q) ∧ ¬∃Q) ⇒ ¬∃H.
    The only way that your supposed refutation stands is if you reject the fundamental properties of an effective computing system. But the proof is built on the premise that we are talking about an ECS.
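    The propositional skeleton of that last step is nothing more than modus tollens. As a sanity check, here it is verified in Lean (Lean 4 syntax; H and Q here are just propositions standing for “a halting oracle exists” and “a deceiver exists”):

        -- If the existence of H implies the existence of Q, and Q does not
        -- exist, then H does not exist.
        theorem no_oracle {H Q : Prop} (oracle_gives_deceiver : H → Q)
            (no_deceiver : ¬ Q) : ¬ H :=
          fun h => no_deceiver (oracle_gives_deceiver h)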

    Reply
  69. Cléo Saulnier

    I don’t think we need to resort to name calling.

    You’re basically insisting on a mutable set of possible programs. *There isn’t*.

    That’s MY argument. Your proof is changing P and you won’t accept this fact. P cannot change, so I cannot accept any proof that changes P (unless H is the only thing removed from it).
    Let’s assume that Q is fictional. That it never was in P. I know this isn’t a proof or anything, but I just want to show why your proof is bad. Assume Q is fictional. Does H have to process it? No. H does not need to give a valid answer. And if you look at your proof, the HUMAN who wrote it is assuming that H will return true or false for Q. But there is no such requirement on fictional programs. This third option is left out and you must consider it when talking about existence. So when H cannot possibly return a valid answer on something that does not exist, you claim victory. That’s an ugly, ugly argument.
    Or simply take the infinite prime proof. If I end up concluding that one of the items in P is not a prime, the proof would fall apart, correct? Why does this principle not apply to the halting problem proof is all I’m saying. At least ONE thing must hold up in a proof. 100% of everything in the halting problem proof gets contradicted INCLUDING the contradiction itself. There’s a REAL reason why all items in P must be prime throughout the prime proof. Please consider this. Tell me what part holds up in your proof.
    I’m also having doubt that you understand that Q could never be built.

    Reply
  70. Mark C. Chu-Carroll

    Q is fictional *if and only if* H is fictional.
    What you don’t understand is that the existence of Q requires exactly two things: (one) that we have an effective computing system, and (two) that H exists.
    By the fundamental, definitional properties of effective computing systems, the construction of Q follows by simple logic from the existence of H. Q exists IF AND ONLY IF H exists. There is exactly one possible reason why Q wouldn’t exist in an ECS: because H doesn’t exist. *IF* there is a halting oracle, *THEN* the construction is valid and Q exists. *IF* Q doesn’t exist, the *ONLY REASON* that it can’t exist is because H doesn’t exist.
    You *cannot* have a halting oracle H such that a deceiver Q doesn’t exist: the existence of Q *necessarily follows* from the existence of H.
    The construction of Q isn’t *creating* a new program. It’s just *identifying* an existing program. The rules of construction, including the recursion principle, are built on the fact that *every possible program* is already in the enumeration E of programs that are possible inputs to the system. The construction rules are *total functions* from programs in E to programs in E. They don’t *create* new programs; they *select* programs from the infinite enumeration of possible programs. Any program which can be constructed by valid construction rules *already exists* in the enumeration of programs.
    In formal computation theory, we even model programs as natural numbers, where every natural number is a program, and the construction rules are *total functions* from natural numbers to natural numbers. If H is a program, then there exists some natural number h that represents the program H. The construction of Q is a function from a number h to a number q, such that the number q represents the program Q. The construction doesn’t *create* a new number. It just maps a *known* number h – the representation of a program we have in our hands – to another number, which is a desired program based on h. The construction rules are closed total operations on the set of programs. It is *impossible* for there to be an H for which a Q isn’t computable; it’s like saying that there is an integer N for which the function “f(N) = N² + 3N − 2” doesn’t compute an integer.

    Reply
  71. Cléo Saulnier

    This will be my last comment.
    About your construction of Q, what if H returns something that Q can’t handle? Q is a recursive definition. We all know that in a recursive definition, you will get a relationship. Relationships are, by definition, undecidable. So you start with the premise of undecidability because this is the only valid answer H can give. You don’t even need a proof. This is the premise.
    Let’s keep going though. You don’t allow H to return this relationship because the halting problem defines the oracle as only returning true or false, and this effectively tosses out valid answers. But H has no choice but to return a relationship. It’s the only correct answer. So H will return a relationship, but it won’t be the oracle as far as Q is concerned because it doesn’t fit the official definition. So Q ends up being invalid because H can’t be combined with Q. Q, as DEFINED, was never in P to begin with. The oracle still exists. It just doesn’t exist in a way that Q can use it. So this means that the oracle will indeed still return true or false for all P and the proof fails. Q was never in P to begin with, so H never needed to give it a correct answer.
    Only if you allow invalid programs can your proof succeed. Like I said before: undecidable = H(undecidable). Big deal. You don’t prove it’s undecidable. You start off that way.

    Reply
  72. Mark C. Chu-Carroll

    Cleo:
    Most importantly: the entire point of the halting proof is that there *exist* undecidable programs. Before the development of the theory of computation, most mathematicians believed that *all* problems were decidable. If you accept the idea of an undecidable program, the question is, *why*? The halting proof is so important because it provides a simple demonstration that there *are* undecidables.
    Second: the proof is about the halting problem. You don’t get to redefine the halting problem, and then criticize the proof because it doesn’t work for your redefined problem.
    The basic ideas behind the halting problem are incredibly simple.
    If you run a program on a computing device, either the program will eventually stop, or it won’t. There is no grey area. There’s no third choice. There’s no fuzzy relation. Run any computer program, and eventually it will stop, or it won’t. It’s completely deterministic, with no fuzz.
    A halting oracle is just a program that takes an input, and generates a result “Halt” or “Doesn’t halt”. The input is another program. Regardless of what that other program is, the oracle will *always* halt, and will *always* generate an answer. If there exists any input for which the oracle doesn’t return either “Halts” or “Doesn’t halt”, then it’s not a halting oracle. Once again, there is no wiggle room here.
    The oracle is a program. Once again, no wiggle room.
    The construction of an oracle deceiver is a total function from programs to programs. By the very definition of computing systems, there is no such thing as a program for which a valid application of program construction rules does not result in a valid program. Once again – no wiggle room. The construction is total – there is no possible input for which the construction doesn’t generate a valid program. What this means is that *IF* H is a program, there is *no possible way* that Q is not a program, unless you aren’t dealing with a Turing-equivalent computing system. There is no possible way for the construction to generate something outside of the domain of H.
    Think of programs as natural numbers. You can enumerate the set of all possible programs – so assign each program to a natural number. Then the construction is a total function from natural numbers to natural numbers; and the supposed oracle is a natural number. What you are arguing is that somehow, taking a total function from naturals to naturals, and applying it to a natural number, is allowed to produce a result that is not a natural number. Nope, sorry. You can’t do that. There’s no wiggle room: a total function is a total function; it is defined on every natural number; and its range doesn’t include any non-natural numbers. There is *no room* in the definition for the result of the function to be outside the set of natural numbers (that is, program labels) for any of its inputs. *It’s a total function* from naturals to naturals. There is no way for it to produce a result outside of the set of natural numbers.
    That’s exactly what’s going on in the halting proof. Under the supposition, the supposed oracle is a program. Therefore the construction process *must* produce a program – because by the definition of a computing system, the construction is a *total* function from programs to programs. Given *any* program as input, you get a program as output. There is no possible way to produce something outside the set of valid programs using the construction rules from the definition of a computing system.
    The oracle, *by definition*, is total. By definition, it must work on *all programs*. Therefore if the oracle is in fact an oracle, it *must* work on the constructed program – because by the definition of a halting oracle, it must work on all programs; and by the definition of a computing system, if the halting oracle is a valid program, then the deceiver is a program. There is absolutely no wiggle room here.
    There is no such thing as a result from a halting oracle other than “Halts” or “Doesn’t halt”. There is no such thing as a result from valid construction that isn’t a valid program. There is no such thing as a program for which a genuine halting oracle doesn’t generate a result. There is no wiggle room. *If* a halting oracle exists, *then* a deceiver exists. *If* a deceiver does not exist, *then* a halting oracle does not exist.
    The only way around any of that is by either redefining “halting oracle”, or by redefining “computing system”. But then you’re no longer talking about this proof – because the halting proof uses the standard definitions of computing system and halting oracle.

  73. Jonathan Vos Post

    Cléo Saulnier seems uncomfortable with basic definitions about formal computing systems, but not troubled by integers or equations. Hence I suggest reading (from which I give an excerpt):
    A Wonderful Read On Gödel’s Theorem
    http://processalgebra.blogspot.com/2007/11/wonderful-read-on-gdels-theorem.html
    Let us say that a formal system has an arithmetical component if it is possible to interpret some of its statements as statements about the natural numbers, in such a way that the system proves some basic principles of arithmetic having to do with summation and multiplication. Given this, we can produce (using Barkley Rosser’s strengthening of Gödel’s theorem in conjunction with the proof of the Matiyasevich-Davis-Robinson-Putnam theorem about the representability of recursively enumerable sets by Diophantine equations) a particular statement of the form “The Diophantine equation p(x1, …, xn) = 0 has no solution” which is undecidable in the theory, provided it is consistent.
    Franzén remarks that
    …it is mathematically a very striking fact that any sufficiently strong consistent formal system is incomplete with respect to this class of statements…
    However, no famous arithmetical conjecture has been shown to be undecidable in ZFC…..From a logician’s point of view, it would be immensely interesting if some famous arithmetical conjecture turned out to be undecidable in ZFC, but it seems too much to hope for.
    Wikipedia has a short list of mathematical statements that are independent of ZFC on this page. You may find it interesting.
    Saharon Shelah’s result to the effect that the Whitehead problem is undecidable in ZFC is discussed in this paper by Eklof. Shelah has also shown that the problem is independent of ZFC+CH.
    There is also an interesting post on a combinatorial problem that is independent of ZFC on the complexity blog. That post also includes several pointers to pieces that are well worth reading.

  74. Cléo Saulnier

    I wanted to drop this, but for fear that people may actually believe what Mark is saying, I want to clear up a few things.
    First, your undecidable program is undecidable because it’s invalid. Not because there is a deceiver.
    Second, I do get to criticize the halting problem. Open your mind a little. What good does it do to restrict answers? If you do this, then the problem is defining its own answer of undecidability as a premise and has nothing to do with computing systems at all. It’s restricting what answers are allowed. By doing this, you get invalid, and thus undecidable, programs. The undecidability comes from the wording of the problem. NOT from any computing system, however it may be defined.
    But I don’t need to redefine the problem to show that the proof is incorrect.
    You say a program is completely deterministic. I agree. But only for valid programs.

    If there exist any input for which the oracle doesn’t return either “Halts” or “Doesn’t halt”, then it’s not a halting oracle.

    Sure it is. If you don’t allow answers other than “Halts” or “Doesn’t halt”, then you’re refusing an answer of a relationship which is the only correct answer for a program that acts on the result of the oracle. Again, you are the one who is deciding that this problem is undecidable, not the computing system. The only way that a relation is not allowed is if you are dealing with something that is not a formal computing system. This is why I keep pointing back to the flawed definition and why you start off with a premise of undecidability.
    Why is it so difficult to accept to let the oracle return whatever it wants, but if it ends up returning true or false for all programs, then it’s the oracle? Why is this any different? Ah!!! Because this would show why Q is not a valid program. Again, human decisions get into it rather than determining a true property of computing systems. So instead of accepting that Q could never be built, you’re arguing that it’s the oracle instead that can’t be built. The only way this holds up is if you disallow the oracle from returning correct answers. Unfortunately, you don’t seem to realise that if you do this, you’re also contradicting the definition of a computing system.

    By the very definition of computing systems, there is no such thing as a program for which a valid application of program construction rules does not result in a valid program.

    Not so. You must stop repeating this lie. If you write a program that can only handle 0 and 1 but the function can return any natural number, the definition of your program will be invalid. It won’t exist. Your definition is flawed. Forget composability. You can’t build your definition from incompatible parts. Sure, you can create the program ANYWAYS. But it won’t be the program you think it is. Like I said, something about your proof must hold up. If Q ceases to be Q, then your proof falls apart.

    What this means is that *IF* H is a program, there is *no possible way* that Q is not a program, unless you aren’t dealing with a Turing-equivalent computing system.

    No! You’re stuck in assuming that the parts exist before checking if your definition of the program is valid. Invalid programs can’t exist. If what you say it does and what it actually does aren’t the same, then the program is invalid. It exists, but it’s invalid.

    There is no possible way for the construction to generate something outside of the domain of H.

    Nobody said this. You’re the only one pushing this argument. I’m saying the program is invalid. It doesn’t do what you claim it does.

    Think of programs as natural numbers. You can enumerate the set of all possible programs – so assign each program to a natural number. Then the construction is a total function from natural numbers to natural numbers; and the supposed oracle is a natural number. What you are arguing is that somehow, taking a total function from naturals to naturals, and applying it to a natural number, is allowed to produce a result that is not a natural number. Nope, sorry. You can’t do that.

    First, you’re assuming there is a number that does what Q does. There is no such number because the oracle MUST return a relation for any program that acts on the results of the oracle. Again, any program may return any sequence of natural numbers. YOUR DEFINITION! Yet you do not allow this for H? Why not? Ah, the definition! A human restriction. Do you not see that a relationship is the ONLY correct answer? Allow it to return the only possible correct result. If your argument is that it’s a human restriction in the wording of the halting problem, you must accept that the proof and the problem say nothing about computing systems. If you allow the correct result from the oracle, there is no natural number that represents your description for Q. Refusing to accept the correct answer from the oracle is trite. It’s really childish. It means you’re starting with the premise of undecidability since you’re restricting the definition of a computing system.

    There is *no room* in the definition for the result of the function to be outside the set of natural numbers (that is, program labels) for any of its inputs.

    Ah, so you allow the oracle to have the same range for its results? See, you’re arguing on both sides of the fence. I’ve discredited every single one of your arguments. Yet you have not debunked a single one of mine. You keep repeating things you’ve read and assume I can’t be right. That’s an unhealthy frame of mind. Look at what the halting problem REALLY means. Open your mind and look again for the first time without all the baggage you’ve been preconditioned to believe about the halting problem.

    There is no possible way to produce something outside the set of valid programs using the construction rules from the definition of a computing system.

    Of course there is. If your definition is wrong, you won’t find any natural number that represents your program. Like saying that R is a program that returns two boolean values using one bit. Can’t do it. That program is invalid. You could still create one where one of the values is lost. The program exists, but it’s invalid. It doesn’t do what you think it does. That’s what you’re doing with H (and Q). You’re saying the oracle is the oracle, but not the oracle. Well, big deal if you prove a contradiction there.

    There is no such thing as a result from a halting oracle other than “Halts” or “Doesn’t halt”.

    This is only true if no program can call the oracle and act on the results. If a program does act on the result, then “Halt” or “Doesn’t halt” are invalid results. They are incorrect. It doesn’t mean it’s undecidable. It simply means that your definition of the oracle is incompatible with your definition of possible programs.

    There is no such thing as a program for which a genuine halting oracle doesn’t generate a result.

    But you just said you don’t allow valid results (a relation). Don’t you see that if a program acts on the results of the oracle, it MUST be a relation? That this is the only correct answer. If you don’t allow it, then it’s a human decision to reject that answer. It means that the premise is that of undecidability and not a conclusion.
    BTW, why is this an invalid program and not your Q? I hope you say it’s the definition because then you’d have to admit that the wording of the halting problem is incompatible with that of any formal computing system.

    The only way around any of that is by either redefining “halting oracle”, or by redefining “computing system”. But then you’re no longer talking about this proof – because the halting proof uses the standard definitions of computing system and halting oracle.

    You have it backwards. The wording of the halting problem is incompatible with that of a computing system by not allowing the oracle to return the only possible correct value of a relation. You don’t even need Q to do the opposite. It could do exactly what the oracle says and you’d still have a result of undecidability. This is because both true and false are valid. Hence a relation.
    Mark, you really need to take a second look at the halting problem or start using a little logic. Up until now, you’re still convinced I don’t understand the halting problem. That’s your argument. It’s rather insulting, but one that is ultimately incorrect. The wording of the halting problem is incompatible with computing systems. Nothing about the proof holds up. Not even the definition of Q. Have you ever thought that the only reason Q can exist under the so-called proof is because the definition of the oracle is incompatible with that of any formal computing system? Open your mind for once.
    Besides, I’ve answered and rebuked every single one of your arguments. OTOH, you persist on thinking that I don’t understand the problem and only repeat what you’ve read in books (or wherever else) assuming that everything you read is correct at face value. You completely ignore my arguments instead of attacking them. Also note that my conclusions are more consistent than the proof. Under my arguments, the conclusions are the same no matter if a program does exactly what the oracle says or if it does the opposite. The current proof cannot do that.

  75. Mark C. Chu-Carroll

    Cleo:
    A Fields Medal and a million-dollar prize are waiting for you. Why don’t you stop wasting time on this pathetic little blog, and go publish your brilliant results?
    Oh, yeah. It’s because you’re full of shit.
    *If* you redefine the halting problem, *and* you redefine what it means to be a computing system, *then* you are correct. Of course, if you do that, then you’re not actually disproving the halting theorem. You’re disproving “Cleo’s Idiotic Halting Theorem”.
    You can’t redefine a halting oracle to do something other than what it’s defined to do, and then claim that because your definition allows it to do something that a computing system *can’t* do, that a proof based on the standard meaning of the halting problem is invalid.
    If you allow “maybe” as an answer, then you can define a “halting oracle” which is always correct. It’s really simple: always answer “maybe”, and you’re never wrong. But it’s *not* a halting oracle. It doesn’t tell you anything useful. The whole point of the halting problem is to answer a very simple question: if I run a program P, will it ever stop?
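    A tiny sketch of that point, with `maybe_oracle` as a purely illustrative stand-in:

        def maybe_oracle(program, program_input):
            """Never wrong, never useful: it refuses to commit to "Halts" or
            "Doesn't halt", so it is not a halting oracle in the proof's sense."""
            return "maybe"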
    If a computing system is deterministic, then you can simply answer “yes” or “no”. There’s no such thing as a deterministic program for which the answer to that question is anything other than “yes” or “no”. There is no “maybe”. There is no relationship. There is no fuzz. Either it will stop, or it won’t. A halting oracle will either say it halts, or it doesn’t.
    You keep trying to introduce this idea of an “invalid program”. What you don’t seem to be able to understand is that that’s not possible. The rules of program construction describe “how to construct valid programs”. There is *no way* to construct an invalid program by following the construction rules of the computing system.
    The only way that you can get an *invalid* program from the construction rules is if you’re not using an effective computing system. (So, for example, if you’re using a linear-bounded automaton, then some applications of the recursion principle will create programs that can’t run on an LBA.) But if you’re using an ECS – that is, a Turing-equivalent computing system – then there is no such thing as an invalid program generated by construction from ECS rules.
    That was the point of my “natural numbers” example. If you have a total, one-to-one, onto function f from the natural numbers to the natural numbers, then there’s no such thing as a natural number n such that f(n) isn’t a natural number. If both f and g are total, one-to-one, onto functions from natural numbers to natural numbers, then the composition f∘g is a total, one-to-one, onto function, and there is no natural number n such that f∘g(n) is not a natural number.
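    As a tiny, self-contained sketch of that analogy (the two functions below are made-up bijections on the naturals; nothing about them is specific to programs):

        def f(n: int) -> int:
            """A total bijection on the naturals: swaps each even number with
            its successor (0<->1, 2<->3, ...)."""
            return n + 1 if n % 2 == 0 else n - 1

        def g(n: int) -> int:
            """Another total bijection: fixes 0 and swaps the following pairs
            (1<->2, 3<->4, ...)."""
            if n == 0:
                return 0
            return n + 1 if n % 2 == 1 else n - 1

        def f_after_g(n: int) -> int:
            """The composition is again total on the naturals: a natural in,
            a natural out, for every possible input -- never anything else."""
            return f(g(n))

        assert all(f_after_g(n) >= 0 for n in range(1000))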
    Program construction rules are like that. If you have a valid program, P – then invoking P as a subroutine is a valid step in another program. If you have a valid program P that generates a result, then the invocation of P can be used to control a choice between two alternative sub-computations. There is no way around this.
    To be very concrete about it – an example of what a program construction rule says is: if E is a program that computes the value of a boolean expression, and F and G are two different valid programs, then the if/then/else statement “if E then F else G” is a valid program.
    That’s *all* that the halting construction does. If that construction doesn’t produce a valid program, then *by definition* you aren’t talking about an effective computing system.
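    Here is a sketch of that construction step with programs modeled as Python callables; `oracle` stands in for E (a hypothetical boolean-valued program), and the two branches play the roles of F and G. No real oracle exists, so the candidate passed in at the end is deliberately wrong.

        def make_q(oracle):
            """If `oracle` is a program, then the if/then/else below is a
            program too -- which is the entire content of the construction step."""
            def q(i):
                if oracle(q, i):   # E: the oracle's verdict "q halts on input i"
                    while True:    # F: loop forever, defying a "halts" verdict
                        pass
                else:
                    return 0       # G: halt at once, defying "doesn't halt"
            return q

        q = make_q(lambda p, i: False)  # any candidate oracle yields a valid program
        print(q(42))                    # with this (wrong) oracle, q simply halts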
    The only way that your “disproof” is correct is if you redefine halting oracle to mean something different than what mathematicians mean by “halting oracle”, and if you redefine “computing system” to mean something different than what mathematicians mean by “computing system”.

