On 30 May 2006, at 03:14, Tom Caylor wrote:


> OK.  I see that so far (above) there's no problem.  (See below for
> where I still have concern(s).)  Here I was taking a fixed N, but G is
> defined as the diagonal, so my comparison is not valid, and so my proof
> that G is infinite for a fixed N is not valid.  I was taking G's
> assignment of an ordinal of omega as being that it is in every way
> larger than all Fi's, but in fact G is larger than all Fi's only when
> comparing G(n) to Fn(n), not when comparing G(Nfixed) to Fi(N) for all
> i's.



Right.





> OK.  I think you are just throwing me off with your notation.  Do you
> have to use transfinite ordinals (omega) to do this?  Couldn't you just
> stay with successively combining and diagonalizing, like below, without
> using omegas?
>
> G(n) = Fn(n)+1
> Gi(n) = G(...G(n)), G taken i times
> Then instead of using more and more additional letters, just add
> subscripts...
> H1(n) = Gn(n)+1
> H1i(n) = H1(...H1(n)), H1 taken i times
> H2(n) = H1n(n)+1
> H2i(n) = H2(...H2(n)), H2 taken i times
>
> Then the subscripts count the number of diagonalizations you've done,
> and every time you do the Ackermann thing, instead of adding an omega
> you add another subscript.   Then it continues ad infinitum.  You can
> do the Ackermann thing with the *number* of subscripts, i.e. do the
> Ackermann thing on the number of times you've done the Ackermann
> thing... etc.
>
> This may just be a technical point, but it doesn't seem precise to do
> very much arithmetic with ordinals, like doing omega [omega] omega,
> because you're just ordering things, and after a while you forget the
> computations that are being performed.  I can see that it works for
> just proving that you can continue to diagonalize and grow, which is
> what you're doing.  I just don't want to be caught off guard and
> suddenly realize you've slipped actual infinities in without me
> realizing it.  I don't think you have.



OK. And you are right, I could have done this without mentioning the 
constructive ordinals. But it is worth mentioning them, even at this 
early stage, because they will reappear again and again.
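
To make that ordinal-free route concrete, here is a minimal sketch in 
Python (rather than Fortran). The base family F and the helper names 
below are only illustrative stand-ins, not the Fi of the thread; any 
programmable sequence of growing functions would do:

def F(i):
    # An illustrative base family: F(0)(n) = n+1,
    # and F(i)(n) = F(i-1) iterated n times on n, for i > 0.
    def f(n):
        if i == 0:
            return n + 1
        result = n
        for _ in range(n):
            result = F(i - 1)(result)
        return result
    return f

def diagonalize(family):
    # G(n) = family(n)(n) + 1 grows faster than every function in the family.
    return lambda n: family(n)(n) + 1

def iterate(g, i):
    # g composed with itself i times, as in Gi(n) = G(...G(n)), G taken i times.
    def gi(n):
        for _ in range(i):
            n = g(n)
        return n
    return gi

G = diagonalize(F)                          # G(n)  = F_n(n) + 1
H1 = diagonalize(lambda i: iterate(G, i))   # H1(n) = G_n(n) + 1, the next diagonal

print(G(2), H1(1))   # 9 4 -- keep the arguments tiny, these functions explode quickly

Adding one more subscript, as you suggest, is just one more call to 
diagonalize; the constructive ordinals only give standard names to how 
many times (and in what transfinite pattern) this has been done.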
Note that all those infinite but constructive ordinals are countable 
(in bijection with N), and even constructively so.  Note also, if you 
haven't already done so, that omega is just N, the set of natural 
numbers. I will soon give a more set-theoretical motivation for those 
ordinals.

Actually there is a cute theorem about constructive ordinals. Indeed 
they correspond exactly to the recursive (programmable) linear 
well-orderings of the natural numbers. Examples:

An order of type omega: the usual order on N (0<1<2<3<4<5<6<...)

An order of type omega+1: just decide that 0 is bigger than every non 
null natural number:

1<2<3<4<5<6< ... <0

It is recursive in the sense that you can write a FORTRAN program (say) 
capable of deciding it. For example, such a program would stop on "yes" 
when asked whether 4<8, and on "no" when asked whether 0<8, etc.
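
Such a decision procedure is short; here is a sketch in Python rather 
than Fortran (the function name is just illustrative):

def less_omega_plus_1(x, y):
    # Decide x < y in the order 1 < 2 < 3 < ... < 0 (type omega+1).
    if x == 0:
        return False     # 0 is the biggest element, below nothing
    if y == 0:
        return True      # every non-null number is below 0
    return x < y         # otherwise, the usual order

print(less_omega_plus_1(4, 8))   # True  ("yes")
print(less_omega_plus_1(0, 8))   # False ("no")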

An order of type omega+omega: just decide that all the odd numbers are 
bigger than the even ones, and take the usual order when the two 
numbers being compared have the same parity:

0<2<4<6<8<10< ... <1<3<5<7<9<...
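
The corresponding decision procedure, again sketched in Python:

def less_omega_plus_omega(x, y):
    # Decide x < y in the order 0<2<4<... < 1<3<5<... (type omega+omega).
    if x % 2 != y % 2:
        return x % 2 == 0    # every even number is below every odd number
    return x < y             # same parity: the usual order

print(less_omega_plus_omega(10, 1))  # True: 10 is even, 1 is odd
print(less_omega_plus_omega(5, 3))   # False: the usual order among the odds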

An order of type omega+omega+omega: just decide that the multiples of 3 
are bigger than the remaining multiples of 2, themselves bigger than 
the remaining numbers (counting the common multiples 0, 6, 12, ... with 
the multiples of 3, so that each number appears exactly once):

1<5<7<11<13<17<...  2<4<8<10<14<16<...  0<3<6<9<12<15<...

Again, it should be easy to write a Fortran program capable of deciding 
that order (that is, deciding for any x and y whether x < y in that 
(unusual) order).
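
For instance, one possible sketch in Python (with the convention above 
that the common multiples of 2 and 3 count as multiples of 3):

def block(x):
    # Which of the three blocks x belongs to:
    # 0 = divisible by neither 2 nor 3, 1 = remaining even numbers, 2 = multiples of 3.
    if x % 3 == 0:
        return 2
    if x % 2 == 0:
        return 1
    return 0

def less_three_omegas(x, y):
    # Decide x < y in the order 1<5<7<... 2<4<8<... 0<3<6<9<... (type omega+omega+omega).
    if block(x) != block(y):
        return block(x) < block(y)
    return x < y             # same block: the usual order

print(less_three_omegas(13, 0))  # True: 13 is in the lowest block, 0 among the multiples of 3
print(less_three_omegas(6, 4))   # False: 6 is a multiple of 3, 4 is only a multiple of 2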

Exercise: could you find an order of type omega*omega? (Hint: use the 
prime numbers).

Those omega names are quite standard.




>
> OK.  So we haven't left the finite behind yet.  It makes intuitive
> sense to me that you can diagonalize till the cows come home, staying
> within countability, and still not be done.  Otherwise infinity
> wouldn't be infinite.
>
> On the tricky question, it also makes intuitive sense that you can
> sequence effectively on all computable growing functions.  This is
> because the larger the growing function gets, the more uncovered space
> (gaps) there is between the computable functions.  Any scheme for
> generating growing functions will also leave behind ever-growing
> uncomputed gaps.  Very unmathematical of me to be so vague, but you've
> already given us the answer, and I know you will fill in the gaps.  :)



I will. Unfortunately this week is full of duties. Meanwhile, I 
would like to ask George and the others if they have a good 
understanding of the present thread, that is, of the fact that the 
growing functions have been well defined, that each sequence of such 
functions is well defined, and that each diagonalisation defines a 
precise programmable growing function (growing faster than the ones in 
the sequence it comes from).
Just a tiny effort more, and I think we will have all we need to go 
into the "heart of the matter", and to understand why comp makes our 
"universe" a godelized one in the Smullyan sense.






> I meant that it makes intuitive sense that you *cannot* sequence
> effectively on all computable growing functions.




You are right. But would that mean we cannot dovetail on all growing 
computable functions? I'll let you ponder this "not so easy" question.

Bruno

PS About Parfit: I have already said some time ago on this list that I 
very much appreciate his book "Reasons and Persons", but in the middle 
of the book he makes the statement that we are "tokens", whereas it 
follows---[easily? not really: you need the movie graph or some strong 
form of Occam]---that comp makes us types, even abstract types. It just 
happens that from a first-person point of view we cannot take ourselves 
as types, because we just cannot distinguish between our many instances 
generated by the Universal Dovetailer. A similar phenomenon occurs 
already with the quantum. But from the point of view of this thread, 
this is an anticipation. The things which look the most like tokens, 
with the comp hyp, are the numbers. This makes the second half of 
Parfit's book rather inconsistent with comp, but, still, his analysis 
of personal identity remains largely genuine. (I also don't like at all 
his use of the name "reductionism" in that context; it's quite 
misleading.)

http://iridia.ulb.ac.be/~marchal/

