Dear Bruno,

Thank you very much for this explicit remark. It is very helpful for my research. I have some comments and a question.

On 12/1/2011 11:34 AM, Bruno Marchal wrote:
Hello Stephen,

On 01 Dec 2011, at 13:16, Stephen P. King wrote:

Hi Bruno,

Could you elaborate a bit on how "Computability is the only notion immune to Cantor's diagonalization"?

Cantor proved that infinity is sensitive to diagonalization. Given an infinite set, you can find a bigger set by diagonalization.
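
To make the diagonal move concrete, here is a minimal Python sketch of my own (a toy enumeration stands in for an arbitrary one):

    # Cantor's diagonal argument: given any enumeration of infinite
    # binary sequences (modelled as functions from N to {0, 1}),
    # build a sequence that appears nowhere in the enumeration.

    def enumeration(i):
        # A stand-in enumeration: the i-th sequence reads off the bits of i.
        # (Any purported enumeration would do; this one is just concrete.)
        return lambda n: (i >> n) & 1

    def diagonal(n):
        # Flip the n-th bit of the n-th sequence.
        return 1 - enumeration(n)(n)

    # diagonal differs from the i-th sequence at position i, for every i,
    # so it cannot appear anywhere in the enumeration:
    print(all(diagonal(i) != enumeration(i)(i) for i in range(20)))  # True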

Gödel proved that any effective provability system (theory) is incomplete. Given a theory rich enough to talk about numbers, you can build a richer provability system by diagonalization.

Tarski proved that any system of definitions will lack the expressive power to define some notion, notably its own truth notion. Again, this follows from diagonalization.

Diagonalization is a sort of transcendental operation in mathematics. If you have just about anything pretending to be a universal notion in some domain, you can diagonalize against it.

That is why Stephen C. Kleene was skeptical when Church told him that his lambda-calculus defined a universal notion of computability.

At first, it looks like computability is sensitive, NOT immune, to diagonalization. Imagine that there is a universal language for computability, L. Consider all the computable functions defined on N with values in N, enumerated by their codes in that language:

f_0, f_1, f_2, f_3, f_4, etc.

Let g be defined on N by g(n) = f_n(n) + 1. (g is said to be defined by diagonalization.)

Each f_i is a computable function from N to N, so f_n(n) is well defined, and "+ 1" is obviously computable, so g seems to be computable.

But if L is universal, then g should be in the list. So g = f_k, for some k.

But then

g(n) = f_k(n), given that g = f_k

In particular, with n = k

g(k) = f_k(k)

But by definition of g, we know that

g(k) = f_k(k) + 1

By the Leibniz identity rule

f_k(k) = f_k(k) + 1

And since f_k is a function from N to N, f_k(n) is a number for any n, so f_k(k) is a number, and this number can be subtracted from both sides of the equality above, so

0 = 1.

Church's claim that L is universal seems to be refuted.
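
Here is a toy Python sketch of the construction above (a finite list of total functions stands in for the infinite enumeration f_0, f_1, f_2, ...):

    fs = [
        lambda n: 0,        # f_0
        lambda n: n + 1,    # f_1
        lambda n: 2 * n,    # f_2
    ]

    def g(n):
        # The diagonal function: g(n) = f_n(n) + 1.
        return fs[n](n) + 1

    print([(g(n), fs[n](n)) for n in range(3)])  # [(1, 0), (3, 2), (5, 4)]
    # g differs from each f_n at the argument n, so if the list really
    # contained ALL the computable total functions, g would be a
    # computable total function missing from the list.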

But that proof is wrong. Do you see why?

Take the time to find the mistake by yourself before reading the solution below.

------

It seems that g is not necessarily specified at k; the assumption that such a k exists assumes too much.


What is wrong is that the language L might define more than the computable functions from N to N: it can also define functions from subsets of N to N. In that case, the reasoning above just shows that f_k(k), and hence g(k), is not defined.

This shows that a universal machine can crash (run in a loop without ever giving an output), and necessarily so: it must be able to do this to be universal.
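
One can replay this self-application in Python, which is itself a universal language (a sketch of my own: exec plays the role of the universal machine, and program texts play the role of codes):

    import sys
    sys.setrecursionlimit(100)   # keep the inevitable loop short

    def run(source, arg):
        # A "universal machine" for a fragment of Python: execute the
        # program text, then apply the function f that it defines.
        namespace = {"run": run}
        exec(source, namespace)
        return namespace["f"](arg)

    g = "def f(src): return run(src, src) + 1"

    # Applying g to its own code loops: run(g, g) calls run(g, g) ...
    try:
        run(g, g)
    except RecursionError:
        print("g applied to its own code never returns: g(g) is undefined")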

    Part of the Non-halting result... OK.

Worse, there will be no effective means (and thus no complete theory of the universal machine or language) to decide whether some f_i is defined on all of N or only on a proper subset of N. If there were such a means, we would be able to filter out the functions from proper subsets of N to N, keeping only the functions from N to N, and then the diagonalization above would really lead to 0 = 1.
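
The shape of that argument can be sketched in Python. Here the decider is_total and the enumeration programs are hypothetical parameters; nothing computable can actually be supplied for is_total, and that is exactly the point:

    def diagonal_against(is_total, programs, run, n):
        # Keep only the (purportedly) total programs, then diagonalize.
        total = [src for src in programs if is_total(src)]
        return run(total[n], n) + 1

    # If is_total were computable and correct, diagonal_against would be a
    # total computable function, hence equal to some filtered f_k, giving
    # f_k(k) = f_k(k) + 1, i.e. 0 = 1.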

Let us call a function partial if it is defined either on N or on a proper subset of N, and total if it is defined on all of N. The reasoning above shows that the set of total computable functions is not immune to diagonalization. But the superset of the partial computable functions is, and that is a deep and strong argument for Church's thesis.
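
A concrete example of a properly partial function, in Python:

    def halve(n):
        # Defined on the even numbers only: on odd input it loops forever,
        # so its domain is a proper subset of N -- exactly the kind of f_i
        # that the argument above must allow for.
        while n % 2 == 1:
            pass
        return n // 2

    print(halve(10))  # 5
    # halve(7) would never return: 7 lies outside the domain.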

But it still seems that something is missing in this conclusion about Church's thesis: the assumption that, for computation, all one needs to consider in a model are the maps/functions from N to N and from subsets of N to N. The problem of time that I have complained about is part of the problem that I see here.

It reminds me strongly of the problem with the axiom of choice, where the existence of non-measurable sets cannot be excluded, inducing such things as the Banach-Tarski paradox <http://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox>. The same problem occurs in the result you discuss above: there is a function/object that cannot be exactly defined. How do we get around this impasse? What if there were a way to sequester the pathological parts of the function without having to define the function? (Something like this is discussed here: http://ndpr.nd.edu/news/24157-on-preserving-essays-on-preservationism-and-paraconsistent-logic/)

We see a similar move in physics, where the abstract spaces are defined to be well behaved ab initio. What if this "good behavior" is emergent, like what happens when we couple a large number of chaotic systems to each other? The entrainment between them forces them to behave as if they were nice and linear! (L. M. Pecora and T. L. Carroll, "Synchronization in chaotic systems," Phys. Rev. Lett. *64*, 821 (1990).)


As far as I know, in mathematics only computability and computability-related notions (like their relativizations to oracles) are immune to diagonalization. Gödel, in his Princeton lecture, called that immunity a miracle.

Immune to diagonalization, but not to forcing <http://en.wikipedia.org/wiki/Forcing_%28mathematics%29>. It seems to me, and I admit that this is just an intuition, that we are constraining the notion of computation to too small a box to realize its full potential. Why is the notion of time so scary for computer scientists and logicians?


Everything that I explain depends on this. UDA works thanks to the universality of the UD, which relies on that diagonalization closure, and AUDA uses the fact that self-reference is built on the diagonalization procedure. Unfortunately, or fortunately, all the diagonalizations are hidden in Solovay's theorem on the arithmetical completeness of G and G*.

OK, but I want to go further. We need for logics the equivalent of a variational principle <http://en.wikipedia.org/wiki/Variational_principle>, so that we do not have to resort to the ansatz of Platonic walls to explain where universal computation is implemented. You seem to know something about this idea, given your previous reference to amenable groups <http://en.wikipedia.org/wiki/Amenable_group>, but it is as if you are afraid to look over the edge and stare into the abyss. I think that Tennenbaum's theorem <http://en.wikipedia.org/wiki/Tennenbaum%27s_theorem> offers us a clue. It states that no countable nonstandard model of Peano arithmetic (PA) can be recursive. We need to retain the properties of countability and recursiveness for computation, but we also need the spectrum of possibilities that non-standard models allow, to act as a range of variation. *Is there a version or model of arithmetic that would allow this but retain the expressiveness of PA?*


You might search on "diagonalization" in the archive of this list to find more on this.

Bruno

Onward!

Stephen


PS.

Some related papers:

http://arxiv.org/abs/1110.5456 "Meaning in Classical Mathematics: Is it at Odds with Intuitionism?" A good discussion with examples of consequences.

http://arxiv.org/abs/1109.5886 "Non-Archimedean Whitney-stratifications" Another good example of a similar idea.

http://arxiv.org/abs/1108.5062 "A Non-Standard Semantics for Kahn Networks in Continuous Time" The picture that this paper paints is very close to my intuition.
