Warning: my thinking progressed as I responded below, so please read the whole post before taking the time to respond to or correct my earlier paragraphs.

Bruno Marchal wrote:

> On 15 Jul 2006, at 02:56, Tom Caylor wrote:
>
> ...
>
> > You've written a sort of intuitive code for G above, where you say "generate".
>
> Yes. With Church's thesis fortran *is* universal. Fortran, like Java, Python, Lisp, Diophantine equations, rational unitary matrices or other rich recursively presented groups, etc. Choose your favorite one, once and for all. To fix the idea I take fortran, an old and venerable programming language. Those languages are grammatically well defined. So you *can* write a well-defined, precise (I would even say concrete) fortran program capable of generating all fortran programs computing functions of one variable. It is enough to write a program which generates successively all strings of length n, and which filters out those strings which are not fortran one-variable programs. I gave an intuitive code just so as not to alarm people with pieces of code, but it should be clear that anyone knowing at least one programming language, and knowing the notion of alphabetic order on a finite set of symbols (like the keyboard buttons of a computer), should be able to write, in that programming language, a program generating (just generating) successively all the (one-variable) programs in that language. Then, by coding "finite inputs" by natural numbers, you can think of those programs as computing functions from N to N, or from subsets of N to N.

OK. I'm OK with this if it is "just generating" the programs. I see where I was getting wrapped around the self-referential axle by trying to make the generation program self-aware. But GEN just generates all programs, so when it gets to "itself" it just generates the code and then goes on to generate the next program, totally oblivious to the fact that it has just generated itself.
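To make the generate-and-filter idea concrete, here is a minimal Python sketch of GEN (Python standing in for fortran, with a deliberately tiny, hypothetical alphabet to keep the enumeration fast; Python's own `compile` plays the role of the fortran compiler's syntax check):

```python
import itertools

ALPHABET = "1+dx()"  # hypothetical tiny alphabet standing in for the keyboard

def gen():
    """GEN: yield every string over ALPHABET in length-then-alphabetical
    order, keeping only the strings the compiler accepts."""
    for length in itertools.count(1):
        for chars in itertools.product(sorted(ALPHABET), repeat=length):
            src = "".join(chars)
            try:
                compile(src, "<candidate>", "exec")  # the "filter"
            except SyntaxError:
                continue
            yield src
```

Over this alphabet the first survivors are the one-character programs "1", "d" and "x". Everything GEN does is blind string manipulation, which is exactly why it can generate its own code without noticing.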
> If you agree with this, you agree with the fact that there is a program, let us call it GEN, in fortran, which generates the sequence of codes P1 P2 P3 P4 P5 P6 ... Now the partial computable functions are those functions computed by those programs, and I wrote the sequence of those functions Fi. That is F1 F2 F3 F4 F5 F6 F7 F8 ...

So GEN, as you said, could be implemented as a program that generates all possible character sequences and filters out everything that is not a valid fortran program. Perhaps it would do that by running each character sequence through a fortran compiler (which would be included as part of GEN) and seeing which ones don't end in a fatal compilation error. I will call this GENfilter, for reasons below.

    begin GENfilter
      do for i = 1 to infinity
        generate "character sequence i"
        run "character sequence i" through a fortran compiler
        if the result is valid
          output "character sequence i"
        end if
      end do
    end GENfilter

> Note that GEN is not in the list {P1 P2 P3 P4 P5 P6 ...}, for the uninteresting, contingent fact that GEN is a 0-variable program. But GEN2, defined by GEN2(n) = the code of the nth program in the list, belongs to the list, given that GEN2 is a one-variable program. So GEN2(1) = P1, GEN2(2) = P2, GEN2(3) = P3, etc.

So if you had all of the programs GEN2(n) for n = 1 to infinity, could you implement a GEN equivalent to GENfilter, which I will call GENcall, in the following manner?

    begin GENcall
      do for n = 1 to infinity
        call GEN2(n) and output its result (i.e. Pn)
      end do
    end GENcall

> And now, given that GEN2 is in the list, there is a number k such that GEN2 = Pk. Nothing magic here. True: GEN2(k) = Pk. Nothing paradoxical here. GEN2 computes a total function, that is, GEN2 on any n gives the nth program, and (diagonalization) on its own index k it gives its own code Pk.

OK. Now I agree there's nothing magical about the generation part.

> Now *your* G is just defined by G(n) = GEN2(n).
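As a sanity check, GEN2 and GENcall can be sketched the same way (again in Python, with a hypothetical tiny alphabet, and Python's `compile` standing in for the fortran compiler; GENcall is necessarily truncated, since we cannot actually run to infinity):

```python
import itertools

ALPHABET = "1+dx()"  # hypothetical tiny alphabet

def valid(src):
    """Stand-in for 'run it through a fortran compiler'."""
    try:
        compile(src, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def gen2(n):
    """GEN2(n): enumerate character sequences in order, filter with the
    compiler, and return the nth survivor (1-indexed)."""
    seen = 0
    for length in itertools.count(1):
        for chars in itertools.product(sorted(ALPHABET), repeat=length):
            src = "".join(chars)
            if valid(src):
                seen += 1
                if seen == n:
                    return src

def gencall(limit):
    """GENcall, truncated at 'limit': output P1 ... P_limit via GEN2."""
    return [gen2(n) for n in range(1, limit + 1)]
```

Note how gencall regenerates the whole list up to n on every call, which is wasteful but harmless so long as we are only generating, not executing.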
But doesn't G output a value in the range of one of the set of *all* partial recursive functions, whereas GEN2 outputs the code of a *fortran* program? So shouldn't it be the following, where execute() actually executes the fortran program generated by GEN2(n)?

    G(n) = execute(GEN2(n))

> It will most probably use GEN as a subroutine. I have already sent to this list the code of a GEN2 in LISP.

I guess I was assuming that G would be implemented as follows, similar to the method of GENcall, but as a one-variable program. My G would be

    begin G(n)
      call GEN2(n) and store its output in Pn
      execute(Pn) and output its output
    end G

Perhaps this is where I am going astray. I wondered why you kept saying that, in order to compute G(k), you have to generate *all* of the programs for G(1) to G(k) (and then execute G(k)), whereas if G is defined as above, you would just call G with input k. I guess that in order to have the code for GEN2(k) available to call, you would have to first run GENfilter to find out which valid fortran program corresponds to k, so you would end up generating all of the programs from 1 to k. Actually, it seems we could do this by writing GEN2 to use GEN's "filter" method as follows:

    begin GEN2(n)
      count = 0
      i = 0
      do until count = n
        i = i + 1
        generate "character sequence i"
        run "character sequence i" through a fortran compiler
        if the result is valid
          count = count + 1
        end if
      end do
      output "character sequence i"
    end GEN2

So then, with GEN2 (with the filter method) and G programmed as above (calling GEN2), if we go on to include *execution* of the code Pk needed to compute G, then, as I've said before, what will happen when we execute G on k, if G = Pk? When we "execute(Pk)" we will literally execute the code for G (again, as defined above). This will call GEN2 to generate the code for (all the programs P1 through) Pk, and then *execute* Pk... Hence the infinite loop, without the "+ 1", when you consider the *execution* of Pk, which is required in order to compute G.

> You can prove nothing with it.
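My worry about executing Pk when Pk is G itself can be modeled directly. Here is a toy Python sketch in which the program list P and the index K are hypothetical stand-ins (programs are modeled as Python callables rather than generated source):

```python
K = 3  # hypothetical index at which G's own code turns up in the list

def P(i):
    """Toy stand-in for GEN2: P(K) is G itself; the other programs are
    arbitrary one-variable stubs."""
    if i == K:
        return G
    return lambda x: x * i

def execute(program, n):
    """Toy 'execute': run the program on input n."""
    return program(n)

def G(n):
    """My G (no '+ 1'): generate the nth program, then execute it on n."""
    return execute(P(n), n)

print(G(2))  # an ordinary program in the list: prints 4
try:
    G(K)  # executing P_K means running G on K again, and again, ...
except RecursionError:
    print("G(K) never returns: the self-call loops forever")
```

On every index other than K the computation is unremarkable; on K itself the execution step re-enters G without ever producing a value, which is the infinite loop described above.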
> It's as if an inhabitant of the Knight-Knave Island tells you "I am a Knight". It could be true (Knights always tell the truth) or a lie (Knaves always lie).
>
> *My* function G is defined by G(n) = GEN2(n) + 1.
>
> And given that we have already programmed GEN2, it is child's play to modify the program computing your G into mine: just add the instruction "+ 1" at the right place.
>
> Now, in case G were a total function, we would be in a situation analogous to meeting an inhabitant of the Knight-Knave Island who says "I am a Knave", a total impossibility (if a Knave said it, he would be telling the truth, which a Knave never does, and a Knight will never say the falsity "I am a Knave"). It is the "+ 1" which forces the G function to be wrong in every case where you give her her own index, as we have seen.

I agree that if you add the "+ 1" then you get a logical diagonalization argument that perhaps says something more than without the "+ 1". I don't know yet what the "more" might be, because I haven't yet convinced myself that there isn't an infinite loop (in *execution*, not generation) without the "+ 1".

> > But if G = Fk, then when we go to explicitly write the code for G, when we get to "generate the code for Fk", what do we write?
>
> GEN generates all the codes. Just as, if I count 0, 1, 2, 3, ... without ever stopping (in platonia), sooner or later (actually later!) I will get some correct Godel number describing me at the right level of description. I will not be aware of that, but this is not the point. GEN generates algorithmically, mechanically, "fortranically" all the fortran codes. If you doubt this can be done, the best thing is to write the code for yourself.

If we're talking just generation, then I agree there is no problem.

> > Fk *is* G. So we start generating G from the beginning until we again get to the part "generate the code for Fk", and then we do this forever.
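The effect of the "+ 1" can be seen with a toy enumeration. In the Python sketch below, F is a hypothetical stand-in for the enumeration of (here, deliberately total) functions: whatever the list is, G(n) = Fn(n) + 1 disagrees with every Fk at the diagonal point k, so G cannot equal any Fk.

```python
def F(i):
    """Hypothetical enumeration of total one-variable functions."""
    table = [lambda n: 0, lambda n: n, lambda n: n * n, lambda n: 2 ** n]
    return table[i % len(table)]

def G(n):
    """Bruno's G: the diagonal value, bumped by one."""
    return F(n)(n) + 1

# G differs from F_k exactly at the point k, for every k,
# so G appears nowhere in the list:
for k in range(12):
    assert G(k) != F(k)(k)
```

The "+ 1" is what turns the harmless fixed point GEN2(k) = Pk into an outright impossibility for a total G.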
> > It's sort of like at dinner when I ask my son or daughter what they did today, and they tell me everything they did, starting with "I woke up and made my bed...", and jokingly they finally say, "...and then sat down for dinner and told you, 'I woke up and made my bed...' ..." But of course they finally stop and laugh, instead of going on forever.
>
> Something similar happens with the universal dovetailing. But here we use just the program-generating part of the dovetailing. Your G, like mine, applied to n, computes directly, without dovetailing, the value Fn(n) or Fn(n)+1 (cf. Fn is the function computed by the program Pn).

So you are saying that we are doing just the generation part. But in order to talk about which values of n G is defined on, don't we have to talk about which values of n G crashes on (runs forever on) when it is *executed*? I'm wondering if perhaps an "essential" diagonalization is happening even in self-reference, whether in the dinner-table situation or with G(k) = Fk(k). Perhaps it is because when someone says, "I told you...", the "I" that is saying "I told you" is a *different* "I" from the "I" that "told you". Likewise, the G that is called the first time is a *different* G from the G that is called the second time... Yes, they have the same code, but the first G is the "first G", whereas the second G is the "second G", if you understand what I mean. Perhaps this does something similar to the way classical diagonalization *forces* the G's to be even more different by changing their outputs by "+ 1".

> > This is where I'm stumped. You say that we escape the diagonalization and the program runs forever (because 0 is not equal to 1). I'm trying to get a firm handle on what is actually going on here.
>
> You are right to do so. Understanding the (effective, programmable) diagonalization of the Fi is needed to proceed.
> It shows that IF Church's thesis is correct, so that the Fi contain all the total computable functions (that is, the Pi contain all the codes of the total computable functions), THEN the codes of the total functions are necessarily HIDDEN among the Fi. No algorithmic procedure capable of distinguishing the Pi computing total computable functions from the Pi computing the properly partial ones will ever exist. As I have shown, this quickly gives both insolubility and incompleteness.

> > Is there some intuitive definition that is causing an ambiguity, just like the definition of a function itself, as in my previous post?
>
> I don't think there is anything conceptually ambiguous once you accept CT. What has been proved to be necessarily mathematically ambiguous is the notion of total computable function, in the sense that we have proved that NO algorithm can test, in general, the totality or proper partiality of the function computed by some code.
>
> This gives us two paths in the conquest of the "controllable world". Either from inside, by constructively and transfinitely augmenting the RE *subsets* of the set of codes for total functions; this is mainly the path toward the first person plenitude. Or by accepting the jump toward the partial functions and learning to live with a partially non-controllable reality. That is akin to the third person jump, an act of faith, but it is also the openness toward the others (whom you cannot control).
>
> Tell me if you are convinced that "your" and "my" G are programmable.

They are both programmable, but I think they are both non-*executable* on "k" (if G = Fk), for the same reason: self-reference.

> You would be kind to tell me if you understand my preceding posts after getting that G is programmable. (I thought you had already gotten that GEN was programmable.) You should understand that I have proved rigorously (thanks to CT) the incompleteness of computer science.
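(As an aside, that impossibility can be felt by trying to write the decider. In the Python sketch below, the naive "decider" and the pool of sources are purely hypothetical illustrations: given *any* effective totality test, the same diagonal construction produces a total computable G that is missing from the test's accepted list.)

```python
def is_total_guess(src):
    """A naive candidate 'totality decider': guess total iff the source
    contains no 'while'. Any effective guess like this must fail somewhere."""
    return "while" not in src

POOL = [
    "def f(n): return n",
    "def f(n):\n    while True: pass",  # properly partial: never halts
    "def f(n): return n + n",
]

def accepted():
    """The programs the decider claims are total, in enumeration order."""
    return [s for s in POOL if is_total_guess(s)]

def G(n):
    """Diagonalize against the accepted list: run its nth program on n, + 1."""
    env = {}
    exec(accepted()[n], env)  # define f from the nth accepted source
    return env["f"](n) + 1

print(G(0), G(1))  # prints: 1 3
```

G here is total and computable over the accepted list, yet it differs from every accepted program on that program's own index, so no effective decider's list of "total" codes can ever be complete.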
> To prove the incompleteness of arithmetic/number theory from that, it suffices to show that enough of computer science can be translated into number theory to inherit essential incompleteness. One path is Godel's arithmetization; another one, quite beautiful, is the study of the Diophantine equations, the work of Matiyasevich(*). (RE sets can be characterized by Diophantine sets of numbers. Diophantus was a contemporary of Plotinus, about 200-300 AD.)
>
> This is needed to understand how the many computational histories are implemented in all possible sorts of ways in the realm of numbers, and later how they interfere when seen from inside, and how from inside (from the 1-povs) all the Plotinian hypostases get arithmetical interpretations, including the one describing "matter", which can then be compared to the empirical quantum, to test the comp hypothesis. OK?
>
> Bruno
>
> (*) Yuri V. Matiyasevich (foreword by Martin Davis), Hilbert's Tenth Problem, The MIT Press, third printing 1996 (Russian edition 1993).
>
> http://iridia.ulb.ac.be/~marchal/

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/everything-list