Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-06-03 Thread Bruno Marchal


On 30 May 2006, at 19:13, Tom Caylor wrote:


 From what you've said about dovetailing before, you don't have to have
 just a single sequence in order to dovetail.  You can jump among
 multiple sequences.  I have yet to understand how you could dovetail on
 something that is not effective.  That seems to be where you're going.
 I guess we have to get to non-effective diagonalization first.



It is true that we can dovetail on multiple sequences, once we can 
generate their codes, like all the sequences of growing computable 
functions we have visited.
But none of them is universal.
Once I can name a sequence of growing functions, and thus can program an 
enumeration of their codes, I will be able to diagonalize it, showing 
it was not universal.
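Bruno's diagonal move can be made concrete with a short Python sketch (the enumeration F_i(n) = (n+1)^i is an arbitrary stand-in of mine, not from the thread; any programmable enumeration of growing functions would do):

```python
def F(i):
    # A toy programmable enumeration of growing functions (hypothetical).
    return lambda n: (n + 1) ** i

def G(n):
    # Diagonalization: beat the n-th function of the enumeration on input n.
    return F(n)(n) + 1

# G is itself computable, yet differs from every F_i (at the point i),
# so the enumeration was not universal.
for i in range(1, 10):
    assert G(i) != F(i)(i)
```

The same three lines work for any enumeration we can program, which is exactly why no such sequence can be universal.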




---

Quentin Anciaux wrote:

 I think dovetailing is possible because the dovetailer only completes
 sequences at infinity. So when you construct the matrix on which you
 will diagonalize, you are already diagonalizing it at the same time.
 Example: when you have the first number of the first growing function, you
 can also have the first number of the diagonalized function (by adding 1)
 and the first number of the diagonalized*diagonalized function and ... ad
 recursum.
 By dovetailing you execute in fact everything in parallel, but all infinite
 sequences are only completed at infinity.



Same answer as the one for Tom. You can diagonalize only on the 
transfinite sequences up to a nameable ordinal, and clearly this cannot 
be closed for diagonalization. Even in the limit, the transfinite 
construction will fail to name some computable growing function.




---

Hal Finney wrote:

 The dovetailer I know does not seem relevant to this discussion about
 functions.  It generates programs, not functions.



So does our sequence of growing functions. They are given by the 
programmable generation of the code of the growing function. The same 
for the diagonalization.





 For example, it
 generates all 1 bit programs and runs each for one cycle; then 
 generates
 all 2 bit programs and runs each for 2 cycles; then generates all 3
 bit programs and runs each for 3 cycles; and so on indefinitely.  (This
 assumes that the 3 bit programs include all 2- and 1-bit programs, 
 etc.)
 In this way all programs get run with an arbitrary number of cycles.


Close :)



---
Quentin Anciaux comments on Hal Finney:

 In fact it is relevant because of this :
 - Bruno showed us that it is not possible to write a program that will 
 list
 sequentially all growing functions.



...that will list sequentially all computable growing functions. Right.





 - But the dovetailer will not do it either; what it will do instead is
 generate all the programs that list all growing functions.


Mmh...




 So the dovetailer will not list all the growing functions but
 will generate (and execute in dovetailing) the infinite sequence of
 programs that generate all of them.


The shadow of the truth, but what you describe would still be 
diagonalized out. (Or you are very unclear; to be sure, you will tell 
me later ... or you will not tell me later :)




---

Jesse Mazer (on Hal Finney):


 I was being a little sloppy... it's true that a non-halting program would
 not be equivalent to a computable function, but I think we can at least
 say that the set of all computable functions is a *subset* of the set of
 all programs.


Key remark.



 As for the programs taking input or not, if you look at the set of
 all programs operating on finite input strings, each one of these can
 be emulated by a program which has no input string (the input string
 is built into the design of the program).


Actually this is a key remark too, but it will be needed only later (I 
recall that I am trying to explain the missing link between computer 
science and Smullyan's heart of the matter in FU, after a question by 
George Levy). This key remark is related to a simple but basic theorem 
in computer science known as the SMN theorem: S stands for substitution, 
and m and n are parameters. Grosso modo, the theorem says that programs 
can be mechanically parametrized by freezing some of their inputs.
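The "freezing of inputs" can be illustrated with partial application (a Python sketch of mine; the S-m-n theorem proper is stated for indices of programs, but this shows the parametrization idea):

```python
from functools import partial

def add(x, y):
    # A "two-input program".
    return x + y

# Mechanically freeze the first input, producing a one-input program,
# as the SMN theorem guarantees can always be done to program code.
add5 = partial(add, 5)

assert add5(3) == add(5, 3) == 8
```

The theorem says this transformation itself is computable: from the code of `add` and the frozen value, a program can produce the code of `add5`.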




 So for any computable function, there
 should be some member of the set of all halting programs being run by 
 the
 dovetailer that gives the same output as the function, no?


Yes. Do you see or know or guess the many consequences?



---

George Levy wrote:


 To speak only for myself,  I think I have a sufficient understanding of
 the thread. Essentially you have shown that one cannot form a set of 
 all
 numbers/functions because given any set of numbers/functions it is
 always possible, using diagonalization,  to generate new
 numbers/functions: the Plenitude is too large to be a set. This leads 
 to
 a problem with the assumption of the 

Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-31 Thread Quentin Anciaux

On Wednesday 31 May 2006 00:21, Hal Finney wrote:
 The dovetailer I know does not seem relevant to this discussion about
 functions.  It generates programs, not functions.  For example, it
 generates all 1 bit programs and runs each for one cycle; then generates
 all 2 bit programs and runs each for 2 cycles; then generates all 3
 bit programs and runs each for 3 cycles; and so on indefinitely.  (This
 assumes that the 3 bit programs include all 2- and 1-bit programs, etc.)
 In this way all programs get run with an arbitrary number of cycles.

In fact it is relevant because of this :
- Bruno showed us that it is not possible to write a program that will list 
sequentially all growing functions.
- But the dovetailer will not do it either; what it will do instead is 
generate all the programs that list all growing functions.

So it will first generate the program that creates the first sequence, and 
also the program that creates the sequence composed of the diagonalisation 
of the first, and so on... It can, because programs are countable: they are 
mapped to N. So the dovetailer will not list all the growing functions but 
will generate (and execute in dovetailing) the infinite sequence of programs 
that generate all of them.

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-31 Thread Jesse Mazer

Hal Finney wrote:

Jesse Mazer writes:
  The dovetailer is only supposed to generate all *computable* functions
  though, correct? And the diagonalization of the (countable) set of all
  computable functions would not itself be computable.

The dovetailer I know does not seem relevant to this discussion about
functions.  It generates programs, not functions.  For example, it
generates all 1 bit programs and runs each for one cycle; then generates
all 2 bit programs and runs each for 2 cycles; then generates all 3
bit programs and runs each for 3 cycles; and so on indefinitely.  (This
assumes that the 3 bit programs include all 2- and 1-bit programs, etc.)
In this way all programs get run with an arbitrary number of cycles.

These programs differ from functions in two ways.  First, programs may
never halt and hence may produce no fixed output, while functions must
have a well defined result.  And second, these programs take no inputs,
while functions should have at least one input variable.

What do you understand a dovetailer to be, in the context of computable
functions?

Hal Finney

I was being a little sloppy...it's true that a non-halting program would not 
be equivalent to a computable function, but I think we can at least say that 
the set of all computable functions is a *subset* of the set of all 
programs. As for the programs taking input or not, if you look at the set of 
all programs operating on finite input strings, each one of these can be 
emulated by a program which has no input string (the input string is built 
into the design of the program). So for any computable function, there 
should be some member of the set of all halting programs being run by the 
dovetailer that gives the same output as the function, no?

Jesse






Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Bruno Marchal


On 30 May 2006, at 03:14, Tom Caylor wrote:


 OK.  I see that so far (above) there's no problem.  (See below for
 where I still have concern(s).)  Here I was taking a fixed N, but G is
 defined as the diagonal, so my comparison is not valid, and so my proof
 that G is infinite for a fixed N is not valid.  I was taking G's
 assignment of an ordinal of omega as being that it is in every way
 larger than all Fi's, but in fact G is larger than all Fi's only when
 comparing G(n) to Fn(n), not when comparing G(Nfixed) to Fi(N) for all
 i's.



Right.





 OK.  I think you are just throwing me off with your notation.  Do you
 have to use transfinite ordinals (omega) to do this?  Couldn't you just
 stay with successively combining and diagonalizing, like below, without
 using omegas?

 G(n) = Fn(n)+1
 Gi(n) = G(...G(n)), G taken i times
 Then instead of using more and more additional letters, just add
 subscripts...
 H1(n) = Gn(n)+1
 H1i(n) = H1(...H1(n)), H1 taken i times
 H2(n) = H1n(n)+1
 H2i(n) = H2(...H2(n)), H2 taken i times

 Then the subscripts count the number of diagonalizations you've done,
 and every time you do the Ackermann thing, instead of adding an omega
 you add another subscript.   Then it continues ad infinitum.  You can
 do the Ackermann thing with the *number* of subscripts, i.e. do the
 Ackermann thing on the number of times you've done the Ackermann
 thing... etc.

 This may just be a technical point, but it doesn't seem precise to do
 very much arithmetic with ordinals, like doing omega [omega] omega,
 because you're just ordering things, and after a while you forget the
 computations that are being performed.  I can see that it works for
 just proving that you can continue to diagonalize and grow, which is
 what you're doing.  I just don't want to be caught off guard and
 suddenly realize you've slipped actual infinities in without me
 realizing it.  I don't think you have.



OK. And you are right, I could have done this without mentioning the 
constructive ordinals. But it is worth mentioning them, even at this early 
stage, because they will reappear again and again.
Note that all those infinite but constructive ordinals are countable 
(in bijection with N), and even constructively so. Note also, if you 
haven't already done so, that omega is just N, the set of natural 
numbers. I will soon give a more set-theoretical motivation for those 
ordinals.

Actually there is a cute theorem about constructive ordinals. Indeed 
they are equivalent to the recursive (programmable) linear 
well-orderings on the natural numbers. Examples:

An order of type omega: the usual order on N (0 < 1 < 2 < 3 < 4 < 5 < 6 < ...)

An order of type omega+1: just decide that 0 is bigger than any non-null 
natural number:

1 < 2 < 3 < 4 < 5 < 6 < ... < 0

It is recursive in the sense that you can write a FORTRAN program (say) 
capable of deciding it. For example such a program would stop on yes 
when asked if 4 < 8, and on no if you ask 0 < 8, etc.

An order of type omega+omega: just decide that all odd numbers are 
bigger than the even ones, and take the usual order in case the two 
numbers which are compared have the same parity:

0 < 2 < 4 < 6 < 8 < 10 < ... < 1 < 3 < 5 < 7 < 9 < ...

An order of type omega+omega+omega: just decide that multiples of 3 are 
bigger than the multiples of 2, themselves bigger than the remaining 
numbers:

1 < 5 < 7 < 11 < 13 < 17 < ... < 0 < 2 < 4 < 6 < 8 < 10 < ... < 3 < 6 < 9 < 12 < 15 < ...

Again it should be easy to write a FORTRAN program capable of deciding 
that order (that is, to decide for any x and y if x < y with that 
(unusual) order).
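Such deciding programs are indeed short; here is a sketch in Python rather than FORTRAN (the function names are mine):

```python
def less_omega_plus_1(x, y):
    # Order of type omega+1: 0 is the biggest element.
    if x == 0:
        return False       # 0 is the maximum, smaller than nothing
    if y == 0:
        return True        # every non-null number sits below 0
    return x < y           # otherwise, the usual order

def less_omega_plus_omega(x, y):
    # Order of type omega+omega: odd numbers above even ones,
    # usual order within a common parity.
    if x % 2 == y % 2:
        return x < y
    return x % 2 == 0      # x even and y odd: x is below y

assert less_omega_plus_1(4, 8)        # yes to 4 < 8
assert not less_omega_plus_1(0, 8)    # no to 0 < 8: 0 sits on top
assert less_omega_plus_omega(10, 1)   # every even number is below every odd one
```

Each predicate always halts, which is what makes these orderings recursive.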

Exercise: could you find an order of type omega*omega? (Hint: use the 
prime numbers).

Those omega names are quite standard.





 OK.  So we haven't left the finite behind yet.  It makes intuitive
 sense to me that you can diagonalize till the cows come home, staying
 within countability, and still not be done.  Otherwise infinity
 wouldn't be infinite.

 On the tricky question, it also makes intuitive sense that you can
 sequence effectively on all computable growing functions.  This is
 because the larger the growing function gets, the more uncovered space
 (gaps) there are between the computable functions.  Any scheme for
 generating growing functions will also leave behind ever-growing 
 uncomputed gaps.  Very unmathematical of me to be so vague, but you've
 already given us the answer, and I know you will fill in the gaps.  :)



I will. Unfortunately this week is full of duties. Meanwhile, I 
would like to ask George and the others if they have a good 
understanding of the present thread, that is, of the fact that the growing 
functions have been well defined, that each sequence of such functions 
is well defined, and that each diagonalisation defines quite well a precise 
programmable growing function (growing faster than the ones in the 
sequence it comes from).
Just a tiny effort, and I think we will have all we need to go into the 
heart of the matter, and to understand why comp makes our universe 
a godelized one in the Smullyan sense.






 I meant 

Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Tom Caylor

Bruno Marchal wrote:

 OK. And you are right, I could have done this without mentioning the
 constructive ordinals. But it is worth mentioning them, even at this early
 stage, because they will reappear again and again.
 Note that all those infinite but constructive ordinals are countable
 (in bijection with N), and even constructively so. Note also, if you
 haven't already done so, that omega is just N, the set of natural
 numbers. I will soon give a more set-theoretical motivation for those
 ordinals.

OK.  I feel power sets coming.


 Actually there is a cute theorem about constructive ordinals. Indeed
 they are equivalent to the recursive (programmable) linear
 well-orderings on the natural numbers. Examples:

 An order of type omega: the usual order on N (0 < 1 < 2 < 3 < 4 < 5 < 6 < ...)

 An order of type omega+1: just decide that 0 is bigger than any non-null
 natural number:

 1 < 2 < 3 < 4 < 5 < 6 < ... < 0

 It is recursive in the sense that you can write a FORTRAN program (say)
 capable of deciding it. For example such a program would stop on yes
 when asked if 4 < 8, and on no if you ask 0 < 8, etc.

 An order of type omega+omega: just decide that all odd numbers are
 bigger than the even ones, and take the usual order in case the two
 numbers which are compared have the same parity:

 0 < 2 < 4 < 6 < 8 < 10 < ... < 1 < 3 < 5 < 7 < 9 < ...

 An order of type omega+omega+omega: just decide that multiples of 3 are
 bigger than the multiples of 2, themselves bigger than the remaining
 numbers:

 1 < 5 < 7 < 11 < 13 < 17 < ... < 0 < 2 < 4 < 6 < 8 < 10 < ... < 3 < 6 < 9 < 12 < 15 < ...

 Again it should be easy to write a FORTRAN program capable of deciding
 that order (that is, to decide for any x and y if x < y with that
 (unusual) order).

 Exercise: could you find an order of type omega*omega? (Hint: use the
 prime numbers).


You could use the Sieve of Eratosthenes:

2*1 < 2*2 < 2*3 < ...  3*1 < 3*3 < 3*5 < ... (all multiples
of 3 that are not multiples of 2) ...
5*1 < 5*5 < 5*7 < ... (all multiples of 5 that are not multiples of 3 or 2) ...

It sounds like the cute theorem says that you can keep dividing up the
natural numbers like this forever.
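Tom's sieve order can also be decided mechanically. A Python sketch (my own encoding, defined on n >= 2 only, since 0 and 1 are left aside): compare numbers first by their smallest prime factor, then as usual. Each prime's block has order type omega, and the blocks themselves form a sequence of type omega, giving omega*omega.

```python
def smallest_prime_factor(n):
    # Assumes n >= 2.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n is prime

def less(x, y):
    # x below y in the omega*omega order: compare prime blocks first,
    # then use the usual order inside a block (tuples compare lexicographically).
    return (smallest_prime_factor(x), x) < (smallest_prime_factor(y), y)

assert less(1000000, 3)   # any even number is below any odd multiple of 3
assert less(9, 5)         # the block of 3 comes before the block of 5
assert not less(5, 4)     # ... and after the block of 2
```

Grouping by smallest prime factor is exactly the sieve's "multiples of p not divisible by any smaller prime".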

 Those omega names are quite standard.

 
  OK.  So we haven't left the finite behind yet.  It makes intuitive
  sense to me that you can diagonalize till the cows come home, staying
  within countability, and still not be done.  Otherwise infinity
  wouldn't be infinite.
 
  On the tricky question, it also makes intuitive sense that you can
  sequence effectively on all computable growing functions.  This is
  because the larger the growing function gets, the more uncovered space
  (gaps) there are between the computable functions.  Any scheme for
  generating growing functions will also leave behind ever-growing
  uncomputed gaps.  Very unmathematical of me to be so vague, but you've
  already given us the answer, and I know you will fill in the gaps.  :)

 I will. Unfortunately this week is full of duties. Meanwhile, I
 would like to ask George and the others if they have a good
 understanding of the present thread, that is, of the fact that the growing
 functions have been well defined, that each sequence of such functions
 is well defined, and that each diagonalisation defines quite well a precise
 programmable growing function (growing faster than the ones in the
 sequence it comes from).
 Just a tiny effort, and I think we will have all we need to go into the
 heart of the matter, and to understand why comp makes our universe
 a godelized one in the Smullyan sense.

  I meant that it makes intuitive sense that you *cannot* sequence
  effectively on all computable growing functions.

 You are right. But would that mean we cannot dovetail on all growing
 computable functions? I let you ponder this not so easy question.

 Bruno

 PS About Parfit: I have already said some time ago on this list that I
 appreciate very much his Reasons and Persons book, but, in the middle
 of the book, he makes the statement that we are tokens, from which it
 follows---[easily? not really: you need the movie graph or some strong
 form of Occam]---that comp makes us types, even abstract types. It just
 happens that from a first person point of view we cannot take ourselves
 as types because we just cannot distinguish between our many instances
 generated by the Universal Dovetailer. A similar phenomenon occurs
 already with the quantum. But from the point of view of this thread,
 this is an anticipation. The things which look the most like tokens,
 with the comp hyp, are the numbers. This makes the second half of
 Parfit's book rather inconsistent with comp, but, still, his analysis
 of personal identity remains largely genuine. (I don't like at all his
 use of the name reductionism in that context; also, it's quite
 misleading.)

 http://iridia.ulb.ac.be/~marchal/

From what you've said about dovetailing before, you don't have to have
just a single sequence in order to dovetail.  You can jump among
multiple sequences.  I have yet to understand how you could dovetail on
something that is not effective.  That seems to be where 

Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Tom Caylor

Tom Caylor wrote:

 It sounds like the cute theorem says that you can keep dividing up the
 natural numbers like this forever.


Oops.  I slipped in an actual infinity when I said forever.  Perhaps
I should have said indefinitely  ;)

Tom





Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Quentin Anciaux

Hi,

From what you've said about dovetailing before, you don't have to have

 just a single sequence in order to dovetail.  You can jump among
 multiple sequences.  I have yet to understand how you could dovetail on
 something that is not effective.  

I think dovetailing is possible because the dovetailer only completes 
sequences at infinity. So when you construct the matrix on which you 
will diagonalize, you are already diagonalizing it at the same time. 
Example: when you have the first number of the first growing function, you 
can also have the first number of the diagonalized function (by adding 1) and 
the first number of the diagonalized*diagonalized function and ... ad recursum. 
By dovetailing you execute in fact everything in parallel, but all infinite 
sequences are only completed at infinity.

Quentin Anciaux




Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Tom Caylor


Quentin Anciaux wrote:
 Hi,

 From what you've said about dovetailing before, you don't have to have
 
  just a single sequence in order to dovetail.  You can jump among
  multiple sequences.  I have yet to understand how you could dovetail on
  something that is not effective.

 I think dovetailing is possible because the dovetailer only completes
 sequences at infinity. So when you construct the matrix on which you
 will diagonalize, you are already diagonalizing it at the same time.
 Example: when you have the first number of the first growing function, you
 can also have the first number of the diagonalized function (by adding 1) and
 the first number of the diagonalized*diagonalized function and ... ad recursum.
 By dovetailing you execute in fact everything in parallel, but all infinite
 sequences are only completed at infinity.

 Quentin Anciaux

OK.  Thanks.  But so far we have done only effective diagonalization.
I'll follow along as Bruno goes step by step.  Also, it seems to me that
even with non-effective diagonalization there will be another problem
to solve: when we dovetail, how do we know we are getting a sufficient
(which means indefinite) level of substitution in a finite amount of
computation?  (Also, I am waiting for a good explanation of how the
Church Thesis comes into this.)  Again, I'll wait for the step by step
argument.

Tom





Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread George Levy

Bruno Marchal wrote:

Meanwhile, I 
would like to ask George and the others if they have a good 
understanding of the present thread, that is, of the fact that the growing 
functions have been well defined, that each sequence of such functions 
is well defined, and that each diagonalisation defines quite well a precise 
programmable growing function (growing faster than the ones in the 
sequence it comes from).
Just a tiny effort, and I think we will have all we need to go into the 
heart of the matter, and to understand why comp makes our universe 
a godelized one in the Smullyan sense.
  


To speak only for myself,  I think I have a sufficient understanding of 
the thread. Essentially you have shown that one cannot form a set of all 
numbers/functions because given any set of numbers/functions it is 
always possible, using diagonalization,  to generate new 
numbers/functions: the Plenitude is too large to be a set. This leads to 
a problem with the assumption of the existence of a Universal Dovetailer 
whose purpose is to generate all functions. I hope this summary is accurate.

George




Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Jesse Mazer

George Levy wrote:



Bruno Marchal wrote:

 Meanwhile, I
 would like to ask George and the others if they have a good
 understanding of the present thread, that is, of the fact that the growing
 functions have been well defined, that each sequence of such functions
 is well defined, and that each diagonalisation defines quite well a precise
 programmable growing function (growing faster than the ones in the
 sequence it comes from).
 Just a tiny effort, and I think we will have all we need to go into the
 heart of the matter, and to understand why comp makes our universe
 a godelized one in the Smullyan sense.
 
 

To speak only for myself,  I think I have a sufficient understanding of
the thread. Essentially you have shown that one cannot form a set of all
numbers/functions because given any set of numbers/functions it is
always possible, using diagonalization,  to generate new
numbers/functions: the Plenitude is too large to be a set. This leads to
a problem with the assumption of the existence of a Universal Dovetailer
whose purpose is to generate all functions. I hope this summary is 
accurate.

George

The dovetailer is only supposed to generate all *computable* functions 
though, correct? And the diagonalization of the (countable) set of all 
computable functions would not itself be computable.

Jesse






Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-30 Thread Hal Finney

Jesse Mazer writes:
 The dovetailer is only supposed to generate all *computable* functions 
 though, correct? And the diagonalization of the (countable) set of all 
 computable functions would not itself be computable.

The dovetailer I know does not seem relevant to this discussion about
functions.  It generates programs, not functions.  For example, it
generates all 1 bit programs and runs each for one cycle; then generates
all 2 bit programs and runs each for 2 cycles; then generates all 3
bit programs and runs each for 3 cycles; and so on indefinitely.  (This
assumes that the 3 bit programs include all 2- and 1-bit programs, etc.)
In this way all programs get run with an arbitrary number of cycles.
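Hal's schedule can be mocked up in a few lines of Python (a toy of mine: "running" a program here just accrues step counts, since no real interpreter is needed to see the bookkeeping):

```python
# At stage k, every bitstring of length <= k counts as a program code
# and is run for k more cycles, per Hal Finney's schedule.

def dovetail(stages):
    steps = {}  # program code -> total cycles granted so far
    for k in range(1, stages + 1):
        for code in range(2 ** k):  # all programs of at most k bits
            steps[code] = steps.get(code, 0) + k
    return steps

steps = dovetail(5)
assert steps[0] == 1 + 2 + 3 + 4 + 5   # a 1-bit code was run at every stage
assert steps[31] == 5                  # a 5-bit-only code first runs at stage 5
```

Replacing the step counter with an actual interpreter gives the dovetailer itself: every program, halting or not, keeps receiving ever larger cycle budgets.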

These programs differ from functions in two ways.  First, programs may
never halt and hence may produce no fixed output, while functions must
have a well defined result.  And second, these programs take no inputs,
while functions should have at least one input variable.

What do you understand a dovetailer to be, in the context of computable
functions?

Hal Finney




Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-29 Thread Tom Caylor

Bruno Marchal wrote:
 On 26 May 2006, at 19:35, Tom Caylor wrote:
 
  Bruno,
  You are starting to perturb me!  I guess that comes with the territory
  where you're leading us.

 You should not worry too much. I confess I am putting your mind in the
 state of mathematicians before the Babbage Post Markov Turing Church
 discovery. Everything here will be transparently clear.

  But of course being perturbed doesn't
  necessarily imply being correct.  I will summarize my perturbation
  below.  But for now, specifically, you're bringing in transfinite
  cardinals/ordinals.

 Only transfinite ordinals which are all countable, and even nameable,
 for example by names of growing computable functions, as I am
 illustrating.

 Be sure you understand why G is a well defined computable growing
 function, and why it grows faster than each initial Fi. If you know a
 computer programming language, write the program!

   This is where things get perverse and perhaps
  inconsistent.  For instance, couldn't I argue that G is also infinite?

 In which sense? All functions are infinite mathematical objects.
 Factorial is defined by its infinite set of input-output pairs: {(0,1),
 (1,1), (2,2), (3,6), (4,24), (5,120), ...}.

  Take n = some fixed N > 1.  Then F1(N) > 1, F2(N) > 2, F3(N) > 3, ...
  and Fn(N) > n, for all n.  So each member of the whole sequence F1, F2,
  F3 ... G is greater than the corresponding member of the sequence 1, 2,
  3, ... aleph_0 (countable infinity).  Thus, G >= countable infinity,
  even for a fixed n = N > 1.

 You are right but G is a function. Actually it just does what it has
 been programmed to. I don't see any problem here.

OK.  I see that so far (above) there's no problem.  (See below for
where I still have concern(s).)  Here I was taking a fixed N, but G is
defined as the diagonal, so my comparison is not valid, and so my proof
that G is infinite for a fixed N is not valid.  I was taking G's
assignment of an ordinal of omega as being that it is in every way
larger than all Fi's, but in fact G is larger than all Fi's only when
comparing G(n) to Fn(n), not when comparing G(Nfixed) to Fi(N) for all
i's.

  Oh Oh Oh Oh Oh  A new pattern emerges (the Ackermann-Caylor one, at
  a higher level).
 
  F_omega,
  F_omega + omega
  F_omega * omega
  F_omega ^ omega
  F_omega [4] omega (omega tetrated to omega; actually this ordinal got
  famous and is named Epsilon Zero, will say some words on it later)
 
  F_omega [5] omega
  F_omega [6] omega
  F_omega [7] omega
  F_omega [8] omega
  F_omega [9] omega
  F_omega [10] omega
  F_omega [11] omega
 
  ...
 
  In this case they are all obtained by successive diagonalizations, but
  nothing prevents us from diagonalizing on it again to get
 
  F_omega [omega] omega
 
  OK, I think the following finite number is big enough:
 
  F_omega [omega] omega (F_omega [omega] omega (9 [9] 9))
 
 
  Next, we will meet a less constructivist fairy, and take some new kind
  of big leap.
 
  Be sure to be convinced that, despite the transfinite character of the
  F_alpha sequence, we really did define at all steps precise
  computable growing functions ... (if not: ask questions please).
 
 
  It seems to me that you are on very shaky ground if you are citing
  transfinite numbers in your journey to showing us your ultimate
  argument.

 Please Tom, I did stay in the realm of the finitary. Even intuitionists
 can accept and prove correct the way I named what are just big finite
 numbers. I have not until now transgressed the constructive field; I
 have not begun to use Platonism! There is nothing controversial here;
 even finitist mathematicians can accept this. (Not ultra-finitists,
 though, but those reject already 10^100.)

   I also think that if you could keep your arguments totally
  in the finite arena it would be less risky.

 I have. You must (re)analyse the construction carefully and realize I
 have not gone out of the finite arena. Ordinals have just been used as a
 way to put order on the successive effective diagonalizations. Those
 are defined on perfectly well defined and generable sequences of well
 defined functions. I have really just written a program (a little bit
 sketchily, but you should be able to add the details once you know a
 programming language).


OK.  I think you are just throwing me off with your notation.  Do you
have to use transfinite ordinals (omega) to do this?  Couldn't you just
stay with successively combining and diagonalizing, like below, without
using omegas?

G(n) = Fn(n)+1
Gi(n) = G(...G(n)), G taken i times
Then instead of using more and more additional letters, just add
subscripts...
H1(n) = Gn(n)+1
H1i(n) = H1(...H1(n)), H1 taken i times
H2(n) = H1n(n)+1
H2i(n) = H2(...H2(n)), H2 taken i times

Then the subscripts count the number of diagonalizations you've done,
and every time you do the Ackermann thing, instead of adding an omega
you add another subscript.   Then it continues ad infinitum.  You can
do the Ackermann thing with the *number* of subscripts, 
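The subscript scheme above can indeed be written without any omegas. Here is a minimal Python sketch (my own illustration, not anyone's posted code; it assumes factorial as the base F, as in Hal's sequence):

```python
from math import factorial

def iterate(f, i, n):
    """f taken i times: f(f(...f(n)...))."""
    for _ in range(i):
        n = f(n)
    return n

def G(n):
    """G(n) = Fn(n) + 1, where Fn is factorial iterated n times."""
    return iterate(factorial, n, n) + 1

def H(k):
    """The subscripted hierarchy: H0 is G itself, and
    H_{k+1}(n) = (H_k iterated n times)(n) + 1."""
    if k == 0:
        return G
    prev = H(k - 1)
    return lambda n: iterate(prev, n, n) + 1
```

Each new subscript is one more diagonalization; the ordinals in Bruno's notation would just keep count of these nested loops.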

Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-29 Thread Tom Caylor

I meant that it makes intuitive sense that you *cannot* sequence
effectively on all computable growing functions.

Tom


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to everything-list@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list
-~--~~~~--~~--~--~---



Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-27 Thread Bruno Marchal


Le 26-mai-06, à 19:35, Tom Caylor a écrit :




 Bruno Marchal wrote:
 Hi,

 OK, let us try to name the biggest natural (finite) number we can, and
 let us do that transfinite ascension on the growing functions from N 
 to
 N.

 We have already built some well defined sequences of descriptions
 (codes) of growing functions.

 Let us choose the Hal Finney sequence to begin with (but the one by
 Tom Caylor can be used instead).

 F1 F2 F3 F4 F5 ...

 With F1(n) = factorial(n), F2(n) = factorial(factorial(n)), etc.

 Note this: Hal gave us a trick for getting, from a growing function f,
 a new function growing faster, actually the iteration of the function.
 That is, Hal gave us a notion of successor for the growing functions.
 Now the diagonalization of the sequence F1 F2 F3 F4 ..., which is
 given by the new growing function defined by

 G(n) = Fn(n) + 1

 gives us a growing function which grows faster than any Fi from Hal's
 initial sequence. Precisely, G will grow faster than any Fi on *almost
 all* numbers (it could be that some Fi grows faster than G on some
 initial part of N, but beyond some finite value (which one?) G will
 keep growing faster). Technically we must remember to apply our
 growing functions on sufficiently big inputs if we want to benefit
 from the growing phenomenon. We will make a rough evaluation of that
 input later, but let us not be distracted by technical points like
 that. The diagonalization gives an effective way to take the limit of
 the sequence F1, F2, F3, ...

 G grows faster than any Fi. Mathematicians will say that the order
 type of G, in our new sequence F1 F2 F3 ... G, is omega (the Greek
 letter).


 Bruno,
 You are starting to perturb me!  I guess that comes with the territory
 where you're leading us.




You should not worry too much. I confess I am putting your mind in the 
state of mathematicians before the Babbage-Post-Markov-Turing-Church 
discovery. Everything here will be transparently clear.






 But of course being perturbed doesn't
 necessarily imply being correct.  I will summarize my perturbation
 below.  But for now, specifically, you're bringing in transfinite
 cardinals/ordinals.




Only transfinite ordinals, which are all countable, and even nameable, 
for example by names of growing computable functions, as I am 
illustrating.

Be sure you understand why G is a well defined computable growing 
function, and why it grows faster than each initial Fi. If you know a 
computer programming language, write the program!
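For instance, a minimal Python rendition (my own sketch, not Bruno's exact program) of Hal's sequence and its diagonal:

```python
from math import factorial

def F(i, n):
    """Hal Finney's sequence: F_i is factorial iterated i times,
    so F(1, n) = n!, F(2, n) = (n!)!, and so on."""
    for _ in range(i):
        n = factorial(n)
    return n

def G(n):
    """The diagonal G(n) = F_n(n) + 1: a perfectly computable function
    that eventually outgrows every single F_i."""
    return F(n, n) + 1
```

G is total and computable, yet it escapes the whole F_i sequence: for each fixed i, G(n) > F(i, n) once n is large enough.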




  This is where things get perverse and perhaps
 inconsistent.  For instance, couldn't I argue that G is also infinite?





In which sense? All functions are infinite mathematical objects. 
Factorial is defined by its infinite set of input-output pairs: {(0,1), 
(1,1), (2,2), (3,6), (4,24), (5,120), ...}.






 Take n = some fixed N > 1.  Then F1(N) > 1, F2(N) > 2, F3(N) > 3, ...
 and Fn(N) > n, for all n.  So each member of the whole sequence F1, F2,
 F3 ... G is greater than the corresponding member of the sequence 1, 2,
 3, ... aleph_0 (countable infinity).  Thus, G >= countable infinity,
 even for a fixed n = N > 1.




You are right, but G is a function. Actually it just does what it has 
been programmed to do. I don't see any problem here.








 But G is just a well defined computable growing function, and we can
 use the Hal Finney successor again to get a still faster function,
 namely G(G(n)).

 The order type of G(G(n)) is, well, the successor of omega: omega+1

 And, as Hal did initially, we can build the new sequence of growing
 functions (each of which grows faster than the preceding one):

 G(n) G(G(n)) G(G(G(n))) G(G(G(G(n)))) etc.

 which are of order type omega, omega+1, omega+2, omega+3, omega+4,
 etc.

 Now we have obtained a new well defined infinite sequence of growing
 functions, and, writing it as:

 G1, G2, G3, G4, G5, G6, ...  or better, as

 F_omega, F_omega+1, F_omega+2, F_omega+3

 just showing such a sequence can be generated, so that we can again
 diagonalize it, getting

 H(n) = Gn(n) + 1, or better

 H(n) = F_omega+n (n) + 1


 We get a function of order type omega+omega: we can write H =
 F_omega+omega.

 And of course, we can apply Hall's successor again, getting
 F_omega+omega+1
 which is just H(H(n), and so we get a new sequence:

 F_omega+omega+1, F_omega+omega+2, F_omega+omega+3, ...

 Which can be diagonalized again, so we get

 F_omega+omega+omega,

 and then by Hal again, and again ...:

 F_omega+omega+omega+1, F_omega+omega+omega+2, F_omega+omega+omega+3

 ...

 Oh Oh! a new pattern emerges, a new type of sequence of well defined
 growing functions appears:

 F_omega, F_omega+omega, F_omega+omega+omega, 
 F_omega+omega+omega+omega,

 And we can generate it computationally, so we can diagonalize again to
 get:

 F_omega * omega,

 and of course we can apply Hal's successor (or Caylor's, of course)
 again, and again

 

 Oh Oh Oh Oh Oh  A new pattern emerges (the Ackermann-Caylor one, at a
 higher 

Re: Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-26 Thread Tom Caylor

Bruno Marchal wrote:
 Hi,

 OK, let us try to name the biggest natural (finite) number we can, and
 let us do that transfinite ascension on the growing functions from N to
 N.

 We have already built some well defined sequences of descriptions
 (codes) of growing functions.

 Let us choose the Hal Finney sequence to begin with (but the one by
 Tom Caylor can be used instead).

 F1 F2 F3 F4 F5 ...

 With F1(n) = factorial(n), F2(n) = factorial(factorial(n)), etc.

 Note this: Hal gave us a trick for getting, from a growing function f,
 a new function growing faster, actually the iteration of the function.
 That is, Hal gave us a notion of successor for the growing functions.
 Now the diagonalization of the sequence F1 F2 F3 F4 ..., which is
 given by the new growing function defined by

 G(n) = Fn(n) + 1

 gives us a growing function which grows faster than any Fi from Hal's
 initial sequence. Precisely, G will grow faster than any Fi on *almost
 all* numbers (it could be that some Fi grows faster than G on some
 initial part of N, but beyond some finite value (which one?) G will
 keep growing faster). Technically we must remember to apply our
 growing functions on sufficiently big inputs if we want to benefit
 from the growing phenomenon. We will make a rough evaluation of that
 input later, but let us not be distracted by technical points like
 that. The diagonalization gives an effective way to take the limit of
 the sequence F1, F2, F3, ...

 G grows faster than any Fi. Mathematicians will say that the order
 type of G, in our new sequence F1 F2 F3 ... G, is omega (the Greek
 letter).


Bruno,
You are starting to perturb me!  I guess that comes with the territory
where you're leading us.  But of course being perturbed doesn't
necessarily imply being correct.  I will summarize my perturbation
below.  But for now, specifically, you're bringing in transfinite
cardinals/ordinals.  This is where things get perverse and perhaps
inconsistent.  For instance, couldn't I argue that G is also infinite?
Take n = some fixed N > 1.  Then F1(N) > 1, F2(N) > 2, F3(N) > 3, ...
and Fn(N) > n, for all n.  So each member of the whole sequence F1, F2,
F3 ... G is greater than the corresponding member of the sequence 1, 2,
3, ... aleph_0 (countable infinity).  Thus, G >= countable infinity,
even for a fixed n = N > 1.

 But G is just a well defined computable growing function, and we can
 use the Hal Finney successor again to get a still faster function,
 namely G(G(n)).

 The order type of G(G(n)) is, well, the successor of omega: omega+1

 And, as Hal did initially, we can build the new sequence of growing
 functions (each of which grows faster than the preceding one):

 G(n) G(G(n)) G(G(G(n))) G(G(G(G(n)))) etc.

 which are of order type omega, omega+1, omega+2, omega+3, omega+4, etc.

 Now we have obtained a new well defined infinite sequence of growing
 functions, and, writing it as:

 G1, G2, G3, G4, G5, G6, ...  or better, as

 F_omega, F_omega+1, F_omega+2, F_omega+3

 just showing such a sequence can be generated, so that we can again
 diagonalize it, getting

 H(n) = Gn(n) + 1, or better

 H(n) = F_omega+n (n) + 1


 We get a function of order type omega+omega: we can write H =
 F_omega+omega.

 And of course, we can apply Hal's successor again, getting
 F_omega+omega+1,
 which is just H(H(n)), and so we get a new sequence:

 F_omega+omega+1, F_omega+omega+2, F_omega+omega+3, ...

 Which can be diagonalized again, so we get

 F_omega+omega+omega,

 and then by Hal again, and again ...:

 F_omega+omega+omega+1, F_omega+omega+omega+2, F_omega+omega+omega+3

 ...

 Oh Oh! a new pattern emerges, a new type of sequence of well defined
 growing functions appears:

 F_omega, F_omega+omega, F_omega+omega+omega, F_omega+omega+omega+omega,

 And we can generate it computationally, so we can diagonalize again
 to get:

 F_omega * omega,

 and of course we can apply Hal's successor (or Caylor's, of course)
 again, and again

 

 Oh Oh Oh Oh Oh  A new pattern emerges (the Ackermann-Caylor one, at a
 higher level).

 F_omega,
 F_omega + omega
 F_omega * omega
 F_omega ^ omega
 F_omega [4] omega (omega tetrated to omega; actually this ordinal got
 famous and is named epsilon zero; I will say some words on it later)

 F_omega [5] omega
 F_omega [6] omega
 F_omega [7] omega
 F_omega [8] omega
 F_omega [9] omega
 F_omega [10] omega
 F_omega [11] omega

 ...

 In this case they are all obtained by successive diagonalizations, but
 nothing prevents us from diagonalizing on it again to get

 F_omega [omega] omega

 OK, I think the following finite number is big enough:

 F_omega [omega] omega (F_omega [omega] omega (9 [9] 9))


 Next, we will meet a less constructivist fairy, and take some new kind
 of big leap.

 Be sure to be convinced that, despite the transfinite character of the
 F_alpha sequence, we really did define, at every step, precise
 computable growing functions ... (if not: please ask questions).


It seems to me that you are on very 

Ascension (was Re: Smullyan Shmullyan, give me a real example)

2006-05-25 Thread Bruno Marchal

Hi,

OK, let us try to name the biggest natural (finite) number we can, and 
let us do that transfinite ascension on the growing functions from N to 
N.

We have already built some well defined sequences of descriptions 
(codes) of growing functions.

Let us choose the Hal Finney sequence to begin with (but the one by 
Tom Caylor can be used instead).

F1 F2 F3 F4 F5 ...

With F1(n) = factorial(n), F2(n) = factorial(factorial(n)), etc.

Note this: Hal gave us a trick for getting, from a growing function f, a 
new function growing faster, actually the iteration of the function. 
That is, Hal gave us a notion of successor for the growing functions.
Now the diagonalization of the sequence F1 F2 F3 F4 ..., which is given 
by the new growing function defined by

G(n) = Fn(n) + 1

gives us a growing function which grows faster than any Fi from Hal's 
initial sequence. Precisely, G will grow faster than any Fi on *almost 
all* numbers (it could be that some Fi grows faster than G on some 
initial part of N, but beyond some finite value (which one?) G will keep 
growing faster). Technically we must remember to apply our growing 
functions on sufficiently big inputs if we want to benefit from the 
growing phenomenon. We will make a rough evaluation of that input 
later, but let us not be distracted by technical points like that.
The diagonalization gives an effective way to take the limit of the 
sequence F1, F2, F3, ...

G grows faster than any Fi. Mathematicians will say that the order type 
of G, in our new sequence F1 F2 F3 ... G, is omega (the Greek 
letter).

But G is just a well defined computable growing function, and we can use 
the Hal Finney successor again to get a still faster function, namely 
G(G(n)).

The order type of G(G(n)) is, well, the successor of omega: omega+1

And, as Hal did initially, we can build the new sequence of growing 
functions (each of which grows faster than the preceding one):

G(n) G(G(n)) G(G(G(n))) G(G(G(G(n)))) etc.

which are of order type omega, omega+1, omega+2, omega+3, omega+4, etc.

Now we have obtained a new well defined infinite sequence of growing 
functions, and, writing it as:

G1, G2, G3, G4, G5, G6, ...  or better, as

F_omega, F_omega+1, F_omega+2, F_omega+3

just showing such a sequence can be generated, so that we can again 
diagonalize it, getting

H(n) = Gn(n) + 1, or better

H(n) = F_omega+n (n) + 1


We get a function of order type omega+omega: we can write H = 
F_omega+omega.
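The whole construction up to this point fits in one short recursive program. Here is a hedged Python sketch (my own encoding, not the thread's code: the pair (a, b) names the ordinal omega*a + b, and factorial is taken as the base function, a choice the thread leaves open):

```python
from math import factorial

def F(a, b, n):
    """Fast-growing sketch indexed by the ordinal omega*a + b.
      a = 0:  F_b is factorial iterated b times (Hal's base sequence)
      b = 0:  F_{omega*a}(n) = F_{omega*(a-1) + n}(n) + 1  (diagonalization)
      else :  F_{omega*a + b} = previous function composed with itself
              (Hal's successor step)"""
    if a == 0:
        for _ in range(max(b, 1)):
            n = factorial(n)
        return n
    if b == 0:
        return F(a - 1, n, n) + 1
    return F(a, b - 1, F(a, b - 1, n))
```

Under this encoding F(1, 0, n) is G, F(1, 1, n) is G(G(n)), and F(2, 0, n) is H = F_omega+omega: the ordinals only index how many diagonalizations and successor steps the program nests.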

And of course, we can apply Hal's successor again, getting 
F_omega+omega+1,
which is just H(H(n)), and so we get a new sequence:

F_omega+omega+1, F_omega+omega+2, F_omega+omega+3, ...

Which can be diagonalized again, so we get

F_omega+omega+omega,

and then by Hal again, and again ...:

F_omega+omega+omega+1, F_omega+omega+omega+2, F_omega+omega+omega+3

...

Oh Oh! a new pattern emerges, a new type of sequence of well defined 
growing functions appears:

F_omega, F_omega+omega, F_omega+omega+omega, F_omega+omega+omega+omega,

And we can generate it computationally, so we can diagonalize again to 
get:

F_omega * omega,

and of course we can apply Hal's successor (or Caylor's, of course) 
again, and again



Oh Oh Oh Oh Oh  A new pattern emerges (the Ackermann-Caylor one, at a 
higher level).

F_omega,
F_omega + omega
F_omega * omega
F_omega ^ omega
F_omega [4] omega (omega tetrated to omega; actually this ordinal got 
famous and is named epsilon zero; I will say some words on it later)

F_omega [5] omega
F_omega [6] omega
F_omega [7] omega
F_omega [8] omega
F_omega [9] omega
F_omega [10] omega
F_omega [11] omega

...

In this case they are all obtained by successive diagonalizations, but 
nothing prevents us from diagonalizing on it again to get

F_omega [omega] omega

OK, I think the following finite number is big enough:

F_omega [omega] omega (F_omega [omega] omega (9 [9] 9))


Next, we will meet a less constructivist fairy, and take some new kind 
of big leap.

Be sure to be convinced that, despite the transfinite character of the 
F_alpha sequence, we really did define, at every step, precise 
computable growing functions ... (if not: please ask questions).

Tricky problem: is there a sequence to which all computable growing 
functions belong? Is it possible to dovetail on all computable growing 
functions, ...
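The heart of the tricky problem can be shown in a few lines of Python (an illustration of the mechanism, not a solution; the sample enumeration is my own choice): any enumeration we can actually program is defeated by its own diagonal, and the diagonal is itself computable.

```python
from math import factorial

def enumeration(i):
    """A sample effective enumeration of growing functions:
    the i-th function is factorial iterated i times."""
    def f(n):
        for _ in range(i):
            n = factorial(n)
        return n
    return f

def diagonalize(enum):
    """Given any programmable enumeration of growing functions,
    return a computable function that outgrows each one of them."""
    return lambda n: enum(n)(n) + 1

d = diagonalize(enumeration)  # d escapes every function enumeration(i)
```

So no effective sequence can contain all computable growing functions: its diagonal would be a computable growing function missing from it.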

I let you think,

Bruno

http://iridia.ulb.ac.be/~marchal/

