Re: Revised Computing Randomness & the UD

2001-04-21 Thread Marchal

Hi Hal,

>Unfortunately I still miss some of your posts because of the absence of a 
>date stamp; my mailer puts them way at the top of my list, since I save all 
>of some categories of e-mail.

I'm really sorry. I *will* come back to Eudora one day.

>Now I consider myself in favor of the idea that some input from a random 
>oracle is required in a SAS supporting universe so I am posing my arguments 
>in an effort to defeat determinism in such a setting.

Determinism has been defeated in the comp setting through the UDA, IMO.

To be sure, oracles appear too. That is, the UD generates and executes
all programs, including the ones which use (Turing) oracles.

Bruno




Re: Revised Computing Randomness & the UD

2001-04-21 Thread Marchal

Hi Hal,

>These are some of the things I want to explore as a result of formulating 
>my question re the UD.
>
>To start:
>
>1) Are you saying that the UD contains all other computations as data?

No. The UD is a program without data. It generates and executes all
computations. It "dovetails", i.e. it executes all programs in little
pieces, so that it does not get stuck in a non-stopping computation.
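
A minimal sketch of that dovetailing schedule, if it helps (in Python rather
than LISP, with toy "programs" standing in for a genuine enumeration of all
programs; every name below is illustrative only):

from itertools import count

def all_programs():
    # Toy stand-in for an enumeration of all programs: program n is a
    # generator that counts forever.  (A real UD would enumerate the
    # codes of all Turing machines, or all LISP expressions, instead.)
    for n in count():
        def program(n=n):
            step = 0
            while True:               # none of these toy programs halt
                yield (n, step)
                step += 1
        yield program()

def dovetail(stages=5):
    # Stage k: start program k, then advance *every* program started
    # so far by one more step.  No non-stopping program can block the
    # rest, yet each program eventually gets as many steps as you like.
    started = []
    source = all_programs()
    for _ in range(stages):
        started.append(next(source))
        for p in started:
            print(next(p))

dovetail()

With a genuine enumeration of machines the scheduling trick is exactly the
same; only the "programs" change.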


>... But for sake of  covering all bases I thought 
>"compute" = "prove" was established by Turing.

The work of Post, Church, and Turing (and others) shows the contrary. It
shows the abyssal difference between provability and computability.
Godel realised that his own incompleteness theorem protects Church's thesis
and gives the notion of computability a miraculously absolute character.
At the same time it shows that provability is always relative to a
chosen formal system. That is what makes the Universal Dovetailer
really universal. It computes everything computable, but it does not
make proofs. If you look at it as a theorem prover, most formal
descriptions of the UD will give a rather weak theory.

>3)  If in order to work the UD contains all other computations 
>as data then the UD is a highly complex program. ...

I wrote one in LISP. It is more or less 100 lines long. You can write
a very short one in PROLOG. It is not a highly complex program.
But, please, the computations are not the data; they are the output.

>Random strings are complex and I thought "all Theorems" was a very low 
>complexity object.  Or is it random?

It depends on how you code it. "All theorems" is an ambiguous expression.
Never use the word "theorem" without mentioning the theory.
"All computations" is not ambiguous (it does not depend on the formal
system in use). In some sense Chaitin's OMEGA number codes all
computations in a terribly compressed way. It is "random". But then
there exist nice natural ways to encode all computations in a non-random,
but deep (in Bennett's sense), manner.
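
For the record, with U a prefix-free universal machine, the OMEGA number is
usually written

   \Omega_U = \sum_{p : U(p) halts} 2^{-|p|}

and knowing its first n bits would settle the halting problem for every
program of length up to n, which is one way to see both why it codes all
computations and why it is "random".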

>4) Why not run a highly parallel computer rather than a UD? [That is rather 
>like a part of my model.]

First, the meaning of a quasi-geometrical word like "parallel" is
among the things I want to explain. I cannot take such a meaning for granted.
Actually, with some terrestrial sense of "parallel", it will not
change a thing in the limit (see the UDA argument, which explains why).

Also, the UD has a property I never mentioned: it generates by itself
an infinity of variants of itself, each of which on almost all inputs (almost
all = all with a finite number of exceptions) is "quasi-infinitely"
quicker. The UD is self-speedable. But that is not interesting, for the UD
does not need to go quickly. Consciousness and matter are inside views
of UD*, the complete (infinite) work of the UD, belonging to Plato's
heaven.

>5) My meaning for "knowing" is at first order like proving.  Chaitin is 
>actually talking about the complexity of the FAS's theorem checker as the 
>complexity of the FAS.  The theorem  checker "knows" all theorems of the 
>FAS.  If a proof is an elegant proof by default then the theorem checker 
>also knows this proof or it would not know how to check the output for 
>theorem hood.  It can prove this proof is elegant because there is no other 
>choice.

OK. But be careful. In my work knowing is quite different from proving.
The modalities are not the same. This will be discussed in my explanation
to George Levy. In fact I will at some point define "knowing p" as
"proving p, and p is true". Knowing and proving will appear equivalent only
for the guardian angel, but the consistent machine cannot know that,
nor prove it!
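
In modal shorthand (just a sketch of that definition, writing B for the
machine's provability):

   knowing p   :=   Bp & p

For a sound machine, Bp & p and Bp hold of the same p; but by Lob's theorem
the machine cannot prove Bp -> p in general, so it can neither prove nor
know that equivalence.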

Bruno






Re: Revised Computing Randomness & the UD

2001-04-21 Thread Hal Ruhl

Dear Bruno:

Unfortunately I still miss some of your posts because of the absence of a 
date stamp; my mailer puts them way at the top of my list, since I save all 
of some categories of e-mail.

I found one from last week which I will respond to soon.  However, here is 
an addition to my latest response.

6) The UD as I see it is a FAS in its own right.  It is generating a proof 
"in" itself.  There is no "out" or we would be discussing just a piece of 
the Everything.  I would call this proof a deterministic cascade since 
there are no other proofs of any of its outputs [no incoming branches] 
and it is unique at each step [no outgoing branches].

Now I consider myself in favor of the idea that some input from a random 
oracle is required in a SAS supporting universe so I am posing my arguments 
in an effort to defeat determinism in such a setting.

Hal




Re: Revised Computing Randomness & the UD

2001-04-21 Thread Hal Ruhl

Dear Bruno:

These are some of the things I want to explore as a result of formulating 
my question re the UD.

To start:

1) Are you saying that the UD contains all other computations as data?

2) By your comment "You confuse computability and provability." I think you 
mean the UD indeed contains all computations as data.  But for the sake of 
covering all bases I thought "compute" = "prove" was established by 
Turing.  Though I know some want to expand beyond this.  Regardless, IMO 
even if it has all this data it is still an expression plus data plus 
self-delimiter that outputs [proves] a value.  That is the "sense" I am using.

3)  If in order to work the UD contains all other computations as data then 
the UD is a highly complex program.  A small expression with a lot of data 
and a very large self-delimiter.  If you have to put the data string in to 
get it out that is a sign you are working with a random string.  Random 
strings are complex and I thought "all Theorems" was a very low complexity 
object.  Or is it random?

4) Why not run a highly parallel computer rather than a UD? [That is rather 
like a part of my model.]

5) My meaning for "knowing" is at first order like proving.  Chaitin is 
actually talking about the complexity of the FAS's theorem checker as the 
complexity of the FAS.  The theorem  checker "knows" all theorems of the 
FAS.  If a proof is an elegant proof by default then the theorem checker 
also knows this proof or it would not know how to check the output for 
theorem hood.  It can prove this proof is elegant because there is no other 
choice.

Hal




Re: Revised Computing Randomness & the UD

2001-04-21 Thread Marchal

Hal Ruhl wrote:

>1) The UD proof of the object "all theorems" is complex because each step 
>is a unique slice of progress towards some sub component of the target 
>object thus all steps are different and there are a great many of them.
>
>2) The UD knows its proof is complex and since it is the only way it has to 
>the target object it knows it is elegant.


>The UD [a simple FAS] - uses an incredibly complex proof [known by the UD 
>to be elegant and incredibly complex] - to exhibit "all theorems" [an 
>object of extremely low complexity].

You confuse computability and provability. The UD does not generate formal
proofs in a formal system; it executes all possible computations.

There is a sort of absolute sense (with Church's thesis) in saying that the
UD generates all the proofs of \Sigma_1 sentences, though. A \Sigma_1 sentence
is one which is provably equivalent to ExP(x) ("there exists an x such that
P(x)", with P recursive).
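
A standard concrete example of such a sentence, just for illustration:
"program e halts on input n" can be written Ex T(e,n,x), where T(e,n,x)
says "x codes a halting computation of program e on input n" and is
recursive (mechanically checkable); this is essentially Kleene's T
predicate. By running everything, the UD eventually certifies every true
sentence of that form.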

And there are surely a lot of other senses, but then you need to define
the one you are using.

How do you define "knowing"? In what sense does the UD know something?

It can be shown that the set of *all theorems* of a reasonable first order
arithmetic theory is not of such low complexity, for any reasonable
definition of logical complexity.

Bruno





Re: Revised Computing Randomness & the UD

2001-04-20 Thread Hal Ruhl

Dear Russell:

As simple as I can get the idea:

The UD [a simple FAS] - uses an incredibly complex proof [known by the UD 
to be elegant and incredibly complex] - to exhibit "all theorems" [an 
object of extremely low complexity].

This does not work.

Hal




Re: Revised Computing Randomness & the UD

2001-04-20 Thread Hal Ruhl

Dear Russell:

I think you miss what I am saying.

At 4/20/01, you wrote:
>I disagree. The UD will have a particular way of generating (or
>enumerating) the theorems of the FAS, such that it doesn't generate
>the same theorem twice.

The UD is [so it is said] generating all theorems.  Some of these are no 
doubt not elegant.  It does not generate the same proof twice but may 
generate more than one proof of a given object.

All this is not germane to the fact that it is doing all this by the only 
path [proof] - {its "particular way" as you say} that it has 
available.  That makes this path elegant by default.

Thus we have an algorithm - the generating FAS - generating a very simple 
object [so it is said] by an incredibly complex and elegant "particular way".

Incredibly complex elegant proofs do not end in very simple objects.

Hal




Re: Revised Computing Randomness & the UD

2001-04-19 Thread Russell Standish

I disagree. The UD will have a particular way of generating (or
enumerating) the theorems of the FAS, such that it doesn't generate
the same theorem twice. However, that is not to say that the proofs it
generates are elegant, as other proof algorithms exist, which may
generate shorter proofs. I would think that moreover it is impossible
to ascertain in general whether a particular proof output by the UD is
elegant or not, as this appears to be equivalent to the problem of
evaluating the K-complexity of a given program, which is known to be
uncomputable. 
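
To spell that out: in Chaitin's sense a (halting) program p is elegant when
no shorter program produces the same output, i.e.

   p elegant   <=>   |p| = min{ |q| : U(q) = U(p) } = K(U(p))

so a general test for elegance would yield a way of computing K (dovetail
through programs until an elegant one with the desired output turns up),
and K is not computable.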

Cheers

Hal Ruhl wrote:
> 
> Dear Russell:
> 
> To condense the idea the UD is generating the collection of strings by the 
> only path it has so the proof is automatically elegant.
> 
> It is also extremely complex.
> 
> Hal
> 




Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks





Re: Revised Computing Randomness & the UD

2001-04-19 Thread Hal Ruhl

Dear Russell:

To condense the idea: the UD is generating the collection of strings by the 
only path it has, so the proof is automatically elegant.

It is also extremely complex.

Hal




Re: Revised Computing Randomness & the UD

2001-04-19 Thread Hal Ruhl

Dear Russell:

The idea I am trying to exploit in my latest posts is that a deterministic 
cascade means one that uses every ounce of inference in the FAS in every 
possible way on all the current data at every single step of the 
cascade.  Each step is individually elegant, and since "deterministic" is 
taken to mean that no other proof of any string of the cascade exists, I 
called it everywhere elegant.  Thus the cascade is known to be elegant by 
the FAS.

Known elegant proofs can not be more complex than the FAS itself plus a 
constant.

Cascades can not stop.

Thus a contradiction.

The UD, as I understand it from the various sources I have read, applies 
all of its inference in every possible way to the current data of 
each step.  It is a single-track machine.  See below:


At 4/20/01, you wrote:
>But if P(A)=B and P'(B)=C are elegant proofs, it is very unlikely for
>P'(P(A))=C to be an elegant proof.

That is not how the UD is described in what I have read.  It executes one 
subcomponent at each step in a determined sequence.  It has no choice in this 
sequence.  The fact that it may produce C more than once is not an issue, 
since this is not its goal.  Its goal is to produce some collection of 
strings, and that just might contain C more than once, but the overall proof 
of that collection is still the only one available, thus it is elegant.

snip

>But whilst it is necessary for each step of
>an elegant proof to be elegant, it is not sufficient

I add the constraint of "deterministic" cascade.

Hal




Re: Revised Computing Randomness & the UD

2001-04-19 Thread Russell Standish

But if P(A)=B and P'(B)=C are elegant proofs, it is very unlikely for
P'(P(A))=C to be an elegant proof. This is what the dovetailer is
constructing - it is not possible to know whether any particular
proof output by the UD is elegant, only that it must contain elegant
proofs since it is comprehensive.

What exactly do you mean by everywhere elegant proof? That each step
of the proof is elegant? But whilst it is necessary for each step of
an elegant proof to be elegant, it is not sufficient.

Cheers

Hal Ruhl wrote:
> 
> 
> Let me correct one little issue which I think helps to clarify what I am 
> saying.  I add a  comment on the universal dove-tailer.
> 
> 2) A universal dove-tailer generating all strings using a fixed algorithm 
> every part of which applies to all the current data in the same way at each 
> step seems an odd thing.
> 
> A dove-tailer is not directly generating the "whole" ensemble.  What it is 
> doing is selecting by fixed rules a particular string out of the ensemble 
> and adding some quantity of bits, putting that back in and selecting 
> another and adding some quantity of bits to it etc., etc. That is a very 
> selective and complex process on a step by step basis.   You wind up 
> constructing this incredibly complex everywhere elegant proof of what must 
> then be an incredibly complex object that is nevertheless considered to be 
> a very low complexity object.
> 
> If I have the ideas of a UD, elegant proof, and AIT complexity correct the 
> UD appears to be a contradiction.
> 
> The contradiction again is that we have a FAS that constructs a proof it 
> knows is elegant that is nevertheless far more complex than any proof it 
> can know is elegant.
> 
> Hal
> 




Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks





Re: Revised Computing Randomness & the UD

2001-04-19 Thread Hal Ruhl


Let me correct one little issue which I think helps to clarify what I am 
saying.  I add a  comment on the universal dove-tailer.

1) Yesterday I said that the cascade 1+1=2, 2+1=3, 3+1=4, etc. was not 
everywhere elegant.  But I went outside my identification process for 
"determinism" = "everywhere elegant proof" to do so. The error was to slip 
into full number theory and think: "84", for example, one of the strings 
this cascade will eventually reach, has more than one proof. That is true 
because number theory is richer than just "data + 1 =".

But here the working definition of "deterministic" I use is that all of the 
selected set of rules of a cascade act at each step on all the data. You 
should not have within the idea of "deterministic" some of the rules active 
today and some others active tomorrow, some of the data regions exempt from 
some rules today and other data regions partially exempt tomorrow unless 
that was itself in Rj.

Deterministic as I understand it = all the rules of physics always apply to 
the entire state of the universe.

So for the above cascade we have selected "data + 1 =" as the exclusive 
expression - the entire rule set of the operative FAS - for which we are 
seeking the cascade of values given some start data [effectively the axiom 
of that cascade] in this case "1".

The fact that this rule may also belong to a different and richer FAS is 
not germane to the cascade viewed as an attempted deterministic sequence. 
The operative FAS contains just this one rule as its Rj and applies it to 
all the data at each step. That makes this cascade everywhere elegant 
because there is no other proof of any of the output strings available in 
the operative FAS.  Thus it hits the complexity wall established by the 
complexity of the operative FAS.

2) A universal dove-tailer generating all strings using a fixed algorithm 
every part of which applies to all the current data in the same way at each 
step seems an odd thing.

A dove-tailer is not directly generating the "whole" ensemble.  What it is 
doing is selecting by fixed rules a particular string out of the ensemble 
and adding some quantity of bits, putting that back in and selecting 
another and adding some quantity of bits to it etc., etc. That is a very 
selective and complex process on a step by step basis.   You wind up 
constructing this incredibly complex everywhere elegant proof of what must 
then be an incredibly complex object that is nevertheless considered to be 
a very low complexity object.

If I have the ideas of a UD, elegant proof, and AIT complexity correct the 
UD appears to be a contradiction.

The contradiction again is that we have a FAS that constructs a proof it 
knows is elegant that is nevertheless far more complex than any proof it 
can know is elegant.

Hal




Re: Revised Computing Randomness

2001-04-18 Thread Hal Ruhl


More detail:

Start with an axiom Aj, use it as data for a LISP expression Rj.  A program 
that computes the value of this expression "B" in AIT has length: 
Expression + data + self-delimiter or in this case Rj + Aj + 
Self-delimiter.  Like this:

P = {Expression(data) + self-delimiter}  computes [produces value] "B"

If P is the shortest program to do so then it is called elegant and its 
length is the complexity of "B".
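
Spelled out in Chaitin's notation (a sketch; H is program-size complexity
and U the reference machine):

   H(B) = min{ |P| : U(P) = B } <= |Rj| + |Aj| + |self-delimiter|

so the particular program above only gives an upper bound on the complexity
of "B"; it is the shortest such P, the elegant one, whose length equals H(B).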

In an everywhere elegant cascade each P(i) is longer than P(i - 1), because 
the data for P(i) alone has the same complexity as P(i - 1); then you add 
the complexity of the expression and the self-delimiter.

What is left is to 1) identify "everywhere elegant cascade" = deterministic 
evolution of the isomorphism and 2) find an upper bound on the complexity 
of the P(i).

Chaitin did the latter by putting an upper limit on the complexity an N-bit 
FAS could prove an elegant program has, i.e. its own complexity plus a 
constant.  But a deterministic cascade is already known to be an everywhere 
elegant program.
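
The limit being invoked is Chaitin's information-theoretic form of
incompleteness, roughly: there is a constant c such that a formal axiomatic
system of program-size complexity N can prove "P is elegant" only for
programs P with

   |P| <= N + c

even though there are elegant programs of arbitrarily great length.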

A contradiction is established unless the cascade stops, but it cannot stop.

The only way out is a spontaneous increase in the complexity of the FAS.

Hal

 




Re: Revised Computing Randomness

2001-04-18 Thread Hal Ruhl

Dear Russell:

Also, since 1 + 1 = 2, 2 + 1 = 3, etc. is not everywhere elegant, each 
successive string is not necessarily more complex than its 
preimage.  However, in a cascade that is everywhere elegant each successive 
string is more complex than its preimage, by definition of elegant proof.

Hal




Re: Revised Computing Randomness

2001-04-18 Thread Hal Ruhl

Dear Russell:

Here is the cascade rewritten:

Pj(i) = {Rj(Uj(i - 1)) + Self-delimiterj(i)} computes Uj(i)

with the added constraint that the cascade is everywhere elegant, so that 
Uj(i) is more complex than Uj(i - 1).

It is similar to:

1 + 1 = 2, 2 + 1 = 3, ... which eventually produces strings of arbitrary 
complexity despite its mechanical repetition.  The difference is that 1 + 1 
= 2, 2 + 1 = 3, etc. is not everywhere elegant, so it does not meet a 
complexity wall.

Hal




At 4/19/01, you wrote:

>This last statement is surely incorrect. Because the map Rj is a
>mechanical application of rules, the complexity of B is no greater
>than that of Aj - you can only get out what you put in.





Re: Revised Computing Randomness

2001-04-18 Thread Russell Standish

Hal Ruhl wrote:
> 
> Here is a revised version of my comments on this subject.  I think it fixes 
> several aspects of what I have had to say earlier.
> 
> Standalone deterministic evolving universes:
> 
> Such a universe is describable as a concatenation of single output programs 
> of the form:
> 
>   Rj(Aj) -> B; Rj(B) -> C; ... Rj(F) -> G; Rj(G) -> H; Rj(H) -> I; 
> 
> 
> "Rj" is a subset of the rules of an N-bit FAS.  "Aj" is an axiom of that 
> FAS.  There are no potential branches in or out.  It is an elegant proof 
> and defines the complexity of each of its objects B,C,...F,G,H,I ...
> 
> Because the proof is everywhere elegant each successive object is more 
> complex than its preimage.  This is the type of theorem cascade I was 
> trying to identify.
> 

This last statement is surely incorrect. Because the map Rj is a
mechanical application of rules, the complexity of B is no greater
than that of Aj - you can only get out what you put in.

Cheers


Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks