Re: Combined response Re: Computing Randomness

2001-05-01 Thread Hal Ruhl

Dear Bruno:

Thank you for your patience and the excellent response.

> You should try to make your model part of established mathematics.
> Not for the glory, but for making it comprehensible.

That is what I am trying to do here, but since I have too few of the
requisite mathematical skills to do so in isolation, I deeply appreciate
the help.

> > 1a) FAS: A symbol string judge.  It can judge all possible symbol strings
> > either acceptable or not acceptable.
> >
> > 1b) My FAS contains a single symbol string that is given to be acceptable,
> > called its axiom.

> Let us say.

I take it then that the above is ok.


> > 1c) The axiom contains all the allowed symbols - the alphabet.

> Why not put the fridge in the bottle of milk?
>
> Contain in which sense, and why?

Actually, that is an excellent example you came up with.

Take a perfectly insulating closed bottle to define the boundary,

Then

1) add an axiom to the bottle consisting of:

a) milk - mostly the alphabet, with almost no correlation between these
symbols - a nearly uniform distribution of different symbols, meaning:
[milk with some assigned value of each meaningful property],

b) a battery, a thermoelectric chiller, heat sink [more alphabet and 
considerable correlation between some alphabet symbols],

2) add some rules of state succession and we have a little universe.

The successive states of this universe are isomorphic to a succession of
finite symbol strings that are judged acceptable by the mathematical [I
think] FAS we just effectively installed in the bottle.

Change the axiom so that the alphabet contains say tea symbols or apple 
juice symbols in place of the milk symbols and we have more little 
universes.


> > 1d) My FAS contains a set of rules for identifying additional acceptable
> > symbol strings with the axiom as the basis.

> If you give us a precise FAS, then give it to us. If you are defining
> a collection of FAS, then give us an example.

I am defining a collection of FAS.

> > 1e) My FAS contains the rule that any acceptable string contains the
> > encoded FAS as a prefix.

> It looks like your FAS is a self-delimiting UTM. Is that it?

At this point you are far more able to classify it than I am, but I am not 
done yet.  Once I get this much to be agreeable I have some modifications 
to introduce.

> > I believe my FAS meets the requirements to make it a FAS in the accepted
> > sense.

> Not yet. Have you tried to implement your FAS, or its computational part,
> in some programming language?

I am reasonably sure that my extensions of your suggestion could be 
implemented in a deterministic [single valued machine - no branches in or 
out] venue given Toffoli's paper at:

http://www.interjournal.org/cgi-bin/manuscript_abstract.cgi?345678901

I think my examples are single valued machines - single valued FAS - 
leaving out QM etc.

As to the post-modification final version of my FAS collection, I attempt a
complete elimination of a deterministic venue, and do so in a way that I
believe incorporates a part of your UDA plus additional features.

Thank you again.

Hal




Re: Combined response Re: Computing Randomness

2001-04-26 Thread Hal Ruhl

Dear Bruno:

I appreciate the conversation, so I will try to build a common reference so
that each additional step of my model can be built on that base and
individually commented on.  As requested, these are definitions and terms
relevant to my model, not necessarily to established mathematics.

1a) FAS: A symbol string judge.  It can judge all possible symbol strings 
either acceptable or not acceptable.

1b) My FAS contains a single symbol string that is given to be acceptable,
called its axiom.

1c) The axiom contains all the allowed symbols - the alphabet.

1d) My FAS contains a set of rules for identifying additional acceptable 
symbol strings with the axiom as the basis.

1e) My FAS contains the rule that any acceptable string contains the 
encoded FAS as a prefix.

I believe my FAS meets the requirements to make it a FAS in the accepted sense.

Hal





Re: Combined response Re: Computing Randomness

2001-04-25 Thread Hal Ruhl

Dear Bruno:

You wrote:

> Hal Ruhl wrote:
>
> > The assumption leads to a contradiction when String N exceeds the
> > complexity allowed by Chaitin.  More information must be added to the
> > cascade for it to continue.
>
> Why? Only if your FAS produces as output just the string N and then
> stops would there indeed be a contradiction.

That seems to be mostly what I said.  Each cascade is a self-contained
FAS.  Each is a one-trick pony.  Each trick is a universe.  Each step in the
trick is a state of that universe.  It is a very, very big pony show.  The
result is universal computation, including random-history universes.

But cascades of this sort suffer the contradiction.  The FAS has to grow -
the cascade gets an injection of complexity.

Now identify each cascade's current step as actually a particular isomorphism
linked to a particular pattern in an ocean of patterns - my
Superverse.  Each new step is a jump to a new pattern.

The cascade steps are shifts of the link to another pattern.

Hal






Combined response Re: Computing Randomness

2001-04-24 Thread Hal Ruhl

Dear Juergen and Bruno:

Clearly I have a problem when I try to use mathematical terminology in 
which I am not formally trained to explain my approach.

So here is an attempt to explain it in just a few normal words.  My
system, be it a FAS or not, is modeled on the logistic equation process,
not the equation itself.

Here is the cascade:

{Rules(String 0) + SD(0)} -> String 1;
{Rules(String 1) + SD(1)} -> String 2;
{Rules(String 2) + SD(2)} -> String 3;
etc.

String 0 is like an axiom.
The Rules define a cascade [universe] and are the entire rule set.
String 0 contains the entire initial alphabet.
SD(N) is the self-delimiter.
All the Rules apply to each String N.
The cascade is considered to define a universe, as opposed to imposing a
restriction to mathematical structure from outside.
For this reason I prefer Pattern N instead of String N.
The cascade is a self-contained system.  I call it a FAS because I believe
it meets the definition.
The cascade is initially assumed everywhere [each step and overall] single
valued and elegant = deterministic.  As a result each String N is more
complex than String (N-1).
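
For concreteness, here is a minimal Python sketch of such a cascade; the
rule, the self-delimiter encoding, and the starting axiom are illustrative
assumptions, not a fixed part of the model:

# Toy cascade: {Rules(String N) + SD(N)} -> String N+1.

def sd(n: int) -> str:
    """Self-delimiter: a unary length header before the binary body
    (one assumed encoding among many)."""
    body = format(n, 'b')
    return '1' * len(body) + '0' + body

def rules(s: str) -> str:
    """One assumed deterministic rule of state succession:
    append the parity of the current state."""
    return s + str(s.count('1') % 2)

state = '10'  # String 0, the axiom: contains the whole alphabet {'0','1'}
for n in range(1, 6):
    state = rules(state)
    program = sd(len(state)) + state  # SD prefixed to the encoded state
    print(f"String {n}: {state}  program length {len(program)}")

Each step strictly lengthens the string, which is the sense in which
String N is "more complex" than String (N-1) here.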

The assumption leads to a contradiction when String N exceeds the
complexity allowed by Chaitin.  More information must be added to the
cascade for it to continue.
Add to this Godelian incompleteness and a touch of just plain "do not care"
as possible aspects of the Rules.  The result is a succession of fresh
String 0 initiations of the cascade.  Some cascades are sufficiently well
behaved to support SAS.

The cascade is modified by considering it to be an isomorphism and its
association with a particular pattern to be an isomorphic link.  All
patterns are considered to exist simultaneously in infinite repetition in a
Superverse.  The Rules act as a comparator mediating isomorphic link
shifts between successor patterns.

The necessary gradients within the Superverse are provided and stirred by 
the Superverse/Nothing alternation which is historyless and driven by an 
incompleteness in both the Superverse and the Nothing.

I will expand my reading in logic to help my communication, but I believe
the above total Superverse to be an infinite collection of FAS of all
complexities, including those where the Rules are completely "do not care".

Hal

   




Re: Late response to Bruno Re: Computing Randomness

2001-04-24 Thread Marchal

Hal Ruhl wrote:

> > In what sense "4+1=" is a proof chain? A proof must be a sequence of
> > formulas, each of which is either an axiom instance or a theorem.

> IMO it is ...

Definitions are not a matter of opinion, but of conventional consensus.

>    ... a sequence of:
>
> 1) A formula, which in this case is of the form ("data string" +); it is a
> rule of inference acting on an initializing theorem.

"data string +" can hardly be considered a formula. You should describe
the formal system you are using (if any).
Then you tell us it is a rule of inference, as if it were possible for a
formula to *be* a rule of inference. I don't understand.

> 2) The initializing theorem is "data string".  Let us ignore the fact that
> the strings above are so short as to be individual alphabet elements.

Not an a priori problem, but not very helpful without a presentation of
your formal system.

> The next step in the sequence is the axiom "1".

"1" is not an axiom. It is a symbol denoting the number 1. It is neither
an axiom nor a formula.

> The value of this formula is designated by "=".

I see what you mean. This will not help us.

> > If you interpret the theorem 4+1=5 as "5 is a number", how will you
> > interpret 3+2=5?

> It is another proof of the theorem "string 5 is a number or WFF" [again
> ignoring that this string is so short as to be a single alphabet
> element].  There can be more than one proof of a theorem.

Of course you can build a FAS in which the number will be considered a
theorem. But then it is a very ad hoc special FAS which has nothing
to do with number theory, and Juergen's remark remains valid.


In another post Hal Ruhl wrote:

> OK, so computation is more than proof, but proof is computation - is that
> the idea?

If you consider an axiomatisable theory, you can consider
proof as a (partial) computable function defined on a set of
sentences and with values in {0,1}. If the theory is decidable then
the function is total.
Even a theory as weak as Robinson Arithmetic can be used to
compute all partial recursive functions (programmable functions): in
that sense proving is more than computation.
Well, you can compare proof and computation in a lot of ways, and
depending on the criteria you choose, proof or computation will
be more or less than computation or proof.

A good book is Introduction to Mathematical Logic by Elliott
Mendelson (third edition), Wadsworth & Brooks, California, 1987.

> Here it is in one line:
>
>   N - N -> S;  S + N -> N;  N - N -> S
>
> N is the Nothing and S is the Superverse.
> All information = no information.
> N is no information.
> N - N is the smallest possible computable perturbation, brought on by a need
> of N to test its own stability.
> S is all information absent the N.  It is as close to the N as can be.
> S also needs to respond to the stability question via test because it
> contains almost all information, which is almost no information.  Like the
> UD it can not prove anything, it just generates stuff.  The smallest
> perturbation is to add back the N, which results in all information, and S
> becomes N.  Both events destroy history.
>
> It is your scanner-duplicator acting at the lowest level.
>
> My difference is that when S is manifest it is not a UD but is like a vast
> collection of individual computers running in parallel.  A great
> dovetailed structure.  Each is isomorphic to a universe.
>
> I also introduce two additional arguments re each computer as to why
> determinism does not apply.  1) Deterministic cascades hit a complexity
> wall.  2) The random oracle is in S anyway - it gets used or it is a
> selection - it would be information unused by an exception.  These plus the
> lowest level scanner-duplicator mean there is some degree of random oracle
> present in each one, including those that have SAS or attempt deterministic
> cascades.  There is an even distribution of universes since each computer
> is present an infinite number of times, again to avoid any absolute or
> relative information in S.
>
> Since N and S can not prove anything it is good that they can nevertheless
> compute the respective perturbations that run the scanner-duplicator
> alternation.

I'm willing to believe you are trying to say something, but
I don't understand a bit. I'm sorry.

> After looking at all of your recent posts re my questions I conclude that
> my Superverse idea and your UDA idea are cousins.  See also my just
> previous post re "Some progress I think".

The UDA is used to prove something, mainly that physics is 
ultimately a branch of machine psychology.

> My argument carries a rationale for a lowest level repeating
> scanner-duplicator type of behavior [i.e. stability], my Superverse is
> generated immediately and repeatedly at one end of that behavior, the
> Superverse acts as data food for all my parallel computers which are
> actually comparators orchestrating isomorphic link shifts between all the
> patterns in the Superverse [link shifts = evolving universes], and it takes
> no selected information to construct all these 

Late response to Bruno Re: Computing Randomness

2001-04-21 Thread Hal Ruhl

Dear Bruno:

Sorry I missed this.  Here is my response.

You wrote:

> Hal Ruhl wrote:
>
> > Juergen: Hal, here is an infinite chain of provable unique theorems:
> >  1+1=2, 2+1=3, 3+1=4, 4+1=5, ...
> >
> > First these are not theorems; they are proof chains ending in theorems.
>
> If you reinterpret Juergen's words then you can tell him anything.
> In all presentations of arithmetic I have seen, propositions like 4+1 = 5
> are theorems.
>
> > "4 + 1 =" is a proof chain and the theorem proved is: "5 is a number".
>
> In what sense "4+1=" is a proof chain? A proof must be a sequence of
> formulas, each of which is either an axiom instance or a theorem.

IMO it is a sequence of:

1) A formula, which in this case is of the form ("data string" +); it is a
rule of inference acting on an initializing theorem.

2) The initializing theorem is "data string".  Let us ignore the fact that
the strings above are so short as to be individual alphabet elements.

3) The next step in the sequence is the axiom "1".

4) The value of this formula is designated by "=".

5) The value is the output string, i.e. the theorem ("string is a WFF")
which was just proven.


> If you interpret the theorem 4+1=5 as "5 is a number", how will you
> interpret 3+2=5?

It is another proof of the theorem "string 5 is a number or WFF" [again
ignoring that this string is so short as to be a single alphabet
element].  There can be more than one proof of a theorem.

> > Since most strings of length L are not
> > compressible and have a complexity on the order of L + H(L), eventually the
> > cascade will encounter a base theorem string that makes the proof chain
> > itself too complex to fit into a number theory of a given finite
> > complexity.
>
> Jacques Mallah answered this one. Actually it can be shown that there are
> arbitrarily complex and lengthy proofs in all axiomatisations of
> elementary arithmetic.

Yes, as I explained in a later post, I was here making a mistake, so I
explicitly added in the constraint I was using on a subconscious level:
the comment is true of single valued programs or deterministic cascades.


> > If a FAS is consistent and finite and doing arithmetic it is incomplete?
>
> That is true. You can even weaken "finite" to "countable".
>
> > So Godel is already a corollary of Turing and perhaps of Chaitin as well.
>
> Godel's first incompleteness theorem is indeed easy to derive from
> Turing's work on non-halting machines.
>
> Although Chaitin presents his work as a generalisation of Godel, I
> would say it is half true. Godel gives a constructive proof of the
> existence of an undecidable statement in (all) axiomatisable formal
> systems. Chaitin gives a non-constructive proof of the existence
> of an infinity of undecidable statements (linked with the notion
> of complexity).

Well I think I fixed that as above.

> I think that those who reason with formal systems should take
> a standard one and give the precise presentation of it, or point
> toward it through, perhaps, some bibliographical links.
>
> Bruno

That would be helpful.

Hal




Re: Computing Randomness

2001-04-13 Thread Hal Ruhl

Dear Hal

Here is the second quote. It is from Chaitin's The Limits of Mathematics 
page 90.

"The first of these theorems states that an N-bit formal axiomatic system
cannot enable one to exhibit any specific object with a program-size
complexity greater than N + c."

Hal




Re: Computing Randomness

2001-04-13 Thread hal

Hal writes:
 Here is a direct quote from page 24 of Chaitin's The Unknowable:
 "The general flavor of my work is like this.  You compare the complexity of
 the axioms to the complexity of the result you're trying to derive, and if
 the result is more complex than the axioms, then you can not get it from
 those axioms."

I think Chaitin is paraphrasing his results here.  As he says, this just
gives the flavor of it.

 Here is the second quote. It is from Chaitin's The Limits of Mathematics
 page 90.

 "The first of these theorems states that an N-bit formal axiomatic system
 cannot enable one to exhibit any specific object with a program-size
 complexity greater than N + c."

When you look at this in detail, he means that the FAS cannot exhibit a
specific object that it can PROVE to have large complexity.  This article
is online at http://www.cs.auckland.ac.nz/CDMTCS/chaitin/rov.html and
the statement above is found in the third paragraph.  But when we read
farther down we find a more formal statement:

Now consider a formal axiomatic system A of complexity N, i.e.,
with a set of theorems T_A that considered as an r.e. set as above has
self-delimiting program-size complexity H(T_A) = N. We show that A cannot
enable us to exhibit a specific S-expression s with self-delimiting
complexity H(s) greater than N+c. Here c = 4872. See godel2.r.

And if we follow the link to godel2.r, which is the lisp program that
establishes the result, it says,

"Show that a formal system of complexity N can't prove that a specific
object has complexity > N + 4872."

That's what is actually proven.  The FAS cannot PROVE that a specific
object has high complexity.  The earlier statements were attempts to
paraphrase this result and were perhaps somewhat misleading.  When he
said the FAS would not enable us to exhibit a specific s-expression with
high complexity, he meant that the FAS was not able to supply us with
a proof of this fact.

The whole thrust of Chaitin's result, when you follow it closely and
study the LISP code as I did for several hours last night, is that the
FAS is limited in the complexity it can prove, not in the complexity of
what it can produce.

Hal




Re: Computing Randomness

2001-04-13 Thread Marchal

Hal Ruhl wrote:

> Juergen: Hal, here is an infinite chain of provable unique theorems:
>  1+1=2, 2+1=3, 3+1=4, 4+1=5, ...
>
> First these are not theorems; they are proof chains ending in theorems.

If you reinterpret Juergen's words then you can tell him anything.
In all presentations of arithmetic I have seen, propositions like 4+1 = 5
are theorems.

> "4 + 1 =" is a proof chain and the theorem proved is: "5 is a number".

In what sense "4+1=" is a proof chain? A proof must be a sequence of
formulas, each of which is either an axiom instance or a theorem.

If you interpret the theorem 4+1=5 as "5 is a number", how will you
interpret 3+2=5?

> Since most strings of length L are not
> compressible and have a complexity on the order of L + H(L), eventually the
> cascade will encounter a base theorem string that makes the proof chain
> itself too complex to fit into a number theory of a given finite
> complexity.

Jacques Mallah answered this one. Actually it can be shown that there are
arbitrarily complex and lengthy proofs in all axiomatisations of
elementary arithmetic.

> If a FAS is consistent and finite and doing arithmetic it is incomplete?

That is true. You can even weaken "finite" to "countable".

> So Godel is already a corollary of Turing and perhaps of Chaitin as well.


Godel's first incompleteness theorem is indeed easy to derive from
Turing's work on non-halting machines.

Although Chaitin presents his work as a generalisation of Godel, I
would say it is half true. Godel gives a constructive proof of the
existence of an undecidable statement in (all) axiomatisable formal
systems. Chaitin gives a non-constructive proof of the existence
of an infinity of undecidable statements (linked with the notion
of complexity).

I think that those who reason with formal systems should take
a standard one and give the precise presentation of it, or point
toward it through, perhaps, some bibliographical links.

Bruno


 




Re: Computing Randomness

2001-04-13 Thread Hal Ruhl

Dear Hal

Since I was previously convinced by another that side-bar discussions
should be avoided, I will respond to this on the list.

At 4/12/01, you wrote:
> Hal writes:
> > You are writing programs and they have a complexity.  Chaitin limits this
> > complexity to no more than the complexity of the FAS plus a constant.  Thus
> > there can not be an infinite number of such constructions.
>
> This is not quite right.

Well, let's see if I can better explain my view and eliminate the unspoken parts.

> Chaitin limits how much complexity these programs
> can be *proven* to have using the FAS.  The FAS is unable to show that
> an object has more complexity than what is in the FAS itself (plus a
> constant).

Yes, but then the object is not fully describable by the FAS, since one of
its properties remains a mystery.  If a FAS is to prove a specific object
is a well-formed formula then the FAS should be able to fully describe it.

Here is a more careful description of my view.

See pages 24 and 25 of Chaitin's The Unknowable.

This is my interpretation:

Paraphrasing the middle of page 24:

Chaitin claims that his results demonstrate an incompleteness.  That there 
is an ocean of true but unprovable assertions.

How does he get this?

Definition from page 24:

"Let's call a program elegant if no smaller program written in the same
programming language has the same output."

He claims that his approach demonstrates that [quote from page 25]:

"If a formal axiomatic system A has LISP program-size complexity N, then
you can't use A to prove that any LISP expression more than N + 356
characters long is elegant.  So A can only prove that finitely many
expressions are elegant!"
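
As an aside, "elegant" is easy to define but hard to certify: a brute-force
check must run every shorter program, and some of those never halt.  Here is
a minimal Python sketch; the step bound standing in for the halting problem,
the pluggable interpreter, and the toy language are all illustrative
assumptions:

import itertools

def is_elegant(program, run, alphabet, step_bound):
    """True if no strictly shorter program produces the same output.
    `run(p, step_bound)` is an assumed interpreter hook returning the
    output of p, or None if p has not halted within step_bound steps;
    the bound is what makes this a heuristic, not a decision procedure."""
    target = run(program, step_bound)
    for length in range(len(program)):
        for candidate in itertools.product(alphabet, repeat=length):
            if run(''.join(candidate), step_bound) == target:
                return False  # a shorter program matches: not elegant
    return True

# A toy language for demonstration: a "program" is a digit string and its
# "output" is the sum of its digits (so halting is trivial here).
def toy_run(p, step_bound):
    return sum(int(c) for c in p)

print(is_elegant('9', toy_run, '0123456789', 1000))   # True
print(is_elegant('18', toy_run, '0123456789', 1000))  # False: '9' matches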

How does this get you an ocean of true but unprovable assertions?

Well any assertion [object] with a LISP elegant program size greater than N 
+ 356 can not be fully described by A since you can not identify its 
elegant program with A.

Now Chaitin says on page 24 that he can not exhibit specific true, 
unprovable assertions.

But can we indeed point to some?

I consider evolving universes - at first examination - to be theorem 
cascades in finite FAS.

So let us look at Juergen's example:

1+1=2, 2+1=3, 3+1=4, 4+1=5, ...2873465+1=2873466, 

As the iterations proceed, numerous added steps to the computing program
unavoidably contain longer strings [most strings are incompressible] than
those in any previous iteration, and thus these steps have more
program-length complexity.  This complexity will eventually exceed any
finite limit.  We will have arrived at strings whose properties are not
fully describable by any finite FAS.

What happens to the cascade when it reaches unprovable assertions [strings]?
Cascades do not have internal stop rules.

IMO the issue is cured by the FAS itself becoming more complex.  A bit of 
incompleteness is resolved.

I apply this to examine evolving universes.

It might be wrong but that is the current state of the idea.

Hal 




Re: Computing Randomness

2001-04-13 Thread hal

Hal writes:
 Well any assertion [object] with a LISP elegant program size greater than N 
 + 356 can not be fully described by A since you can not identify its 
 elegant program with A.

Agreed.

 Now Chaitin says on page 24 that he can not exhibit specific true, 
 unprovable assertions.

 But can we indeed point to some?

No, I don't think you have done so.  Keep in mind that Chaitin's
"assertions" in this context means potential theorems of the FAS (Formal
Axiomatic System, right?), that is, well-formed strings which might turn
out to be true or false.  "Being fully described" is not a well-formed
string.  It is not a potential theorem of the FAS.  So pointing at a
number and saying that it is not fully described (because the FAS
can't numerically determine its complexity) does not represent a true,
unprovable assertion in the FAS.

 I consider evolving universes - at first examination - to be theorem 
 cascades in finite FAS.

 So let us look at Juergen's example:

 1+1=2, 2+1=3, 3+1=4, 4+1=5, ...2873465+1=2873466, 

 As the iterations proceed, numerous added steps to the computing program
 unavoidably contain longer strings [most strings are incompressible] than
 those in any previous iteration, and thus these steps have more
 program-length complexity.  This complexity will eventually exceed any
 finite limit.  We will have arrived at strings whose properties are not
 fully describable by any finite FAS.

Sure.  Most of the strings almost certainly have greater complexity
than that of the FAS.  We may even be able to prove that they do so,
outside the system, by using assumptions which are stronger than the FAS.
But the FAS itself will not be able to prove that the complexity of
any string is greater than itself (plus a constant).  This is shown by
a trivial program which reaches a contradiction if it were possible.
Basically you would have a small program which outputs a string of large
complexity, which violates the definition of complexity (the size of
the smallest program which outputs it).
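
A hedged sketch of that trivial program in Python: `extract` below is a
hypothetical hook into the formal system (not a real API) that recognizes
when a proof establishes H(s) > bound for some string s.  If the FAS could
prove complexity beyond N + c, this short searcher would output such a
string, and its own small size would contradict the proven complexity:

from typing import Callable, Iterable, Optional, Tuple

def first_provably_complex_string(
    proofs: Iterable[object],   # an enumeration of all proofs of the FAS
    extract: Callable[[object], Optional[Tuple[str, int]]],
    k: int,
) -> str:
    """Scan the proofs; extract(proof) returns (s, bound) when the proof
    establishes H(s) > bound, else None.  Returns the first s with
    bound >= k.  Loops forever if no such proof exists."""
    for proof in proofs:
        result = extract(proof)
        if result is not None:
            s, bound = result
            if bound >= k:
                return s  # s is proven to have complexity greater than k
    raise RuntimeError("finite proof enumeration exhausted")

# The contradiction: for k = N + c this program plus the FAS is only about
# N + O(1) bits, yet it would output a string of proven complexity > N + c,
# violating the definition of H as shortest-program size.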

 What happens to the cascade when it reaches unprovable assertions [strings]?
 Cascades do not have internal stop rules.

It doesn't reach unprovable assertions [strings]; the
assertions/theorems are just as provable at this size as they were at
the smaller size.  Rather, the FAS can no longer compute how complex
the strings are.  That's the only difference.

I wonder if you are confusing the idea that the strings have unprovable
complexity with the idea that the strings themselves, as theorems,
are unprovable.  These are two different matters.

An FAS with a given complexity CAN and DOES produce theorems with
greater complexity.  You should not misread Chaitin to think that he
says otherwise.  There is NO LIMIT on the complexity of a theorem which
can be produced using an FAS, any more than there is a limit to the
complexity of a string that can be produced with a finite alphabet.

What Chaitin says is that the FAS can't PROVE that a string has greater
complexity than what the FAS itself starts with.  It is a limitation on
the power of the FAS to PROVE complexity, not on the power of the FAS
to GENERATE complexity.

 IMO the issue is cured by the FAS itself becoming more complex.  A bit of 
 incompleteness is resolved.

Why is there a need for a cure?

Hal (the other Hal)




Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Juergen:

In case what I tried to say was not clear, the idea is that there are no
more than 2^(N + c) shortest possible unique proofs in an N-bit FAS.  How
can number theory, if it is a finite FAS, contain an infinite number of
unique theorems?
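
The counting behind the 2^(N + c) figure is just the number of short
programs; as a sketch in standard AIT notation (this is the usual counting
argument, not a quotation from Chaitin):

\[
\#\{\,p : |p| \le N + c\,\} \;=\; \sum_{i=0}^{N+c} 2^{i} \;=\; 2^{N+c+1} - 1,
\]

so fewer than 2^(N+c+1) distinct objects can each have a shortest
description - and, on this reading, a shortest proof - of at most N + c bits.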

Hal





Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Jacques:

At 4/12/01, you wrote:

 Maybe Hal, Russell and Juergen should take this discussion to email and
 just let us know how it turns out, because I get enough junk mail already.

I have run into those who do not like the side bar approach.  I tend to 
agree that it cuts all the others out of the chance to participate.


 I don't want to get involved in this, but let me just remind you that
 the complexity of an infinite set of objects is often much lower than the
 complexity of a typical member drawn from that set.  I would think no one
 on this list forgets that.

Indeed, but the debate is really about a possible fundamental source of true
noise in some selected members of the Everything.

Hal





Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Juergen:

You demonstrate my point.

At 4/12/01, you wrote:

 Hal, here is an infinite chain of provable unique theorems:
 1+1=2, 2+1=3, 3+1=4, 4+1=5, ...

First these are not theorems; they are proof chains ending in theorems.
For example:

"4 + 1 =" is a proof chain and the theorem proved is: "5 is a number".

The base or data for the proof chain is "4 is a number", which has just been
proven by the previous step in the cascade.

Now notice that the data or base theorem [a well-formed base-10 string] for
each proof chain in the cascade counts up in value.  Therefore it must
inexorably increase in length.  Since most strings of length L are not
compressible and have a complexity on the order of L + H(L), eventually the
cascade will encounter a base theorem string that makes the proof chain
itself too complex to fit into a number theory of a given finite
complexity.  The cascade must then stop or number theory must become more
complex.  Cascades do not have an internal stop rule.  Here we have a
contradiction.  To cure it the applicable FAS must become more complex.

Below I say the same thing in a more formal way.


Let us first arbitrarily call this theorem cascade #27 in number theory. So
an individual theorem in cascade #27 is identified as T27(i).

Let us look at, say, T27(4) in this cascade. Its decompressed proof program
looks like:

(1) P27(4) = {RULES27(T27(3)) + Self-delimiter27(4)} computes T27(4) [which
is the base ten string "5"]

Where:
RULES27 = Add 1 to the base ten string that is the current data
T27(3) = the data for the proof program of T27(4) [which is just the
base ten string "4"]

The most compressed version of its proof program would be:

(2) P27'(4) = {RULES27(P27'(3)) + Self-delimiter27'(4)} computes T27(4)
[which is the base ten string "5"]

Where:
P27'(3) = the max compression of T27(3)

Now let cascade #27 continue. Eventually the complexity of the data P27'(i-1),
plus the complexity of RULES27, plus the complexity of
Self-delimiter27'(i), when combined into P27'(i), makes P27'(i) exceed the
complexity of number theory, if number theory has a finite complexity.

Then what happens?

This cascade has no internal stop, but it must stop [there is no rule of the
cascade saying it can hop over unacceptably complex T27(i), and even so the
hop computation would become more and more complex as the base string gets
more complex] or the FAS must get more complex.

If the FAS gets more complex where does the added information come from?

Does that not look a lot like:

If a FAS is consistent and finite and doing arithmetic it is incomplete?

So Godel is already a corollary of Turing and perhaps of Chaitin as well.

Going a step further when looking at universes in general I currently see 
no need for the new information to preserve a FAS. It just gets a running 
cascade to the next iteration at which point more new information may be 
needed.

So perhaps an essential dash of random oracle finishes the entree we call 
our universe.

Hal






Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Russell:

Yes we did indeed have a similar debate some time ago.
At that time I was still trying to express this point of view correctly and 
admittedly made a number of mistakes back then [and still do].
Our debate helped me considerably and I thank you.

In response:

I just posted a response to Juergen that may help.  But here are a few 
comments.

At 4/13/01, you wrote:

> Basically, Hal believes a finite FAS by definition implies that each
> theorem is constrained to be no more than N-bits in length.

Well, more precisely, that the shortest possible proof chain of any theorem
of a finite FAS can not be more complex than the limit which Chaitin
provides.  Is this not Chaitin's position?  Given that, there are only
a finite number of theorems that satisfy this limit on the complexity of
their shortest proof chains.

> So by his
> definition, number theory is not a finite FAS.

That, of course, is one possible view if we allow theorem cascades.  However,
I prefer the more conservative view that this is an unresolved incompleteness
which theorem cascades force towards completeness.

> By contrast, almost everyone else believes finiteness in FASes refers
> to a finite number of axioms, not that the theorems are bounded in any
> fashion.

Actually I believe that all the components of a finite FAS - axioms, rules,
alphabet, and number of theorems - must be finite.

> Whilst I can appreciate diversity of viewpoints, I fail to see how
> Hal's position actually yields a useful mathematical object. In
> Juergen's chain below, what is the use of a system where a+1=b
> (let's say) is a valid theorem, but b+1=c (where c=b+1=a+2) is an
> invalid theorem because of an arbitrary cutoff rule?

As I tried to show in my latest response to Juergen, it seems to introduce a
need for a random oracle, which I believe you feel is an essential
ingredient of SAS-supporting universes.

Hal




Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Russell:

You wrote:

> Why bound the proof?

It was not my idea.  Chaitin equated complexity with a computing program's
length, and a proof chain is a computing program according to Turing.

[rearranging your post]

> 1+1=2, 2+1=3, 3+1=4 ...
>
> are all distinct theorems.

My view:

Again, as in my response to Juergen, these are not theorems but proof chains
leading to theorems.

"3 + 1 =" is a proof chain using a theorem as its base.  It leads to the
theorem "4 is a number".

"1 + 1 =" is the cascade-founding proof chain and its base is the axiom "1
is a number". It leads to the theorem "2 is a number".

> There can only ever be a finite number of independent theorems, in
> fact the number is equal to the number of axioms.

Since, as I understand it, overall proof chains start with an axiom, I agree.


> However, one can
> easily construct an infinite chain of theorems through logical
> operations:
>
> If T1, T2, T3 etc are theorems, then
>
> T4=T1 and T2, T5=T1 or T2, T6=T1 and T2 or T3, are also theorems.
>
> We can construct an infinite variety of these theorems.

You are writing programs and they have a complexity.  Chaitin limits this 
complexity to no more than the complexity of the FAS plus a constant.  Thus 
there can not be an infinite number of such constructions.

Hal


> Of course
> there will be many tautological relationships between the theorems,
> but they're still distinct theorems.

And finite in number.

Hal




Re: Computing Randomness

2001-04-12 Thread Hal Ruhl

Dear Russell:

At 4/13/01, you wrote:

> Bounded complexity does not imply bounded length. Examples include an
> infinite string of '0's, and the string '1234...9101112...'

That was part of the old debate and one of my initial mistakes.  I am not 
now talking about the length of theorems but the length of their proofs.


> It must be true that the set of all theorems derivable from a finite
> set of axioms contains no more information (or complexity) than is
> contained in the set of axioms itself. However, as pointed out, this
> doesn't imply the theorems are bounded in length, merely that their
> complexity is bounded.
>
> Does this shed light on this issue?

With this I agree.  There are, however, only a finite number of theorems
below any finite complexity bound.  So number theory is either finite in
theorem count or infinite in complexity.

Hal





Re: Computing Randomness

2001-04-12 Thread Russell Standish

Hal Ruhl wrote:
>
> Dear Russell:
>
> At 4/13/01, you wrote:
> > Bounded complexity does not imply bounded length. Examples include an
> > infinite string of '0's, and the string '1234...9101112...'
>
> That was part of the old debate and one of my initial mistakes.  I am not
> now talking about the length of theorems but the length of their proofs.

Why bound the proof?

> > It must be true that the set of all theorems derivable from a finite
> > set of axioms contains no more information (or complexity) than is
> > contained in the set of axioms itself. However, as pointed out, this
> > doesn't imply the theorems are bounded in length, merely that their
> > complexity is bounded.
> >
> > Does this shed light on this issue?
>
> With this I agree.  There are, however, only a finite number of theorems
> below any finite complexity bound.  So number theory is either finite in
> theorem count or infinite in complexity.

There can only ever be a finite number of independent theorems, in
fact the number is equal to the number of axioms. However, one can
easily construct an infinite chain of theorems through logical
operations:

If T1, T2, T3 etc are theorems, then

T4=T1 and T2, T5=T1 or T2, T6=T1 and T2 or T3, are also theorems. 

We can construct an infinite variety of these theorems. Of course
there will be many tautological relationships between the theorems,
but they're still distinct theorems, just as 

1+1=2, 2+1=3, 3+1=4 ...

are all distinct theorems.
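
(A throwaway Python illustration of that construction; the seed theorems
are placeholders, and conjunction is the only connective used:)

from itertools import count, product

seeds = ['T1', 'T2', 'T3']   # placeholder independent theorems

def derived_theorems():
    """Yield an unbounded stream of distinct theorem strings built by
    conjunction; their complexity stays bounded by the seeds plus the rule,
    even though the stream never ends."""
    for n in count(1):
        for combo in product(seeds, repeat=n):
            yield ' and '.join(combo)

# e.g. 'T1', 'T2', 'T3', 'T1 and T1', 'T1 and T2', ... (tautological
# repeats included, as noted above)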

 
> Hal
 
 




Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks





Re: Computing Randomness

2001-04-11 Thread Hal Ruhl

Dear Juergen:

At 4/11/01, you wrote:

> Hal, Chaitin just says you cannot prove 20 pound theorems with 10 pound
> axioms.

Please refer to Chaitin's The Unknowable generally and page 25, Chapter 
V, and note 10 at the bottom of page 97 in particular.

> But the infinite cascade of all provable theorems of number
> theory collectively does not convey more information than the axioms.

Do you consider the following as being a reasonable AIT expression of a 
theorem cascade j?

1)  Pj(i) = {Rules(Pj(i-1)) + Self-delimiter(i)}  is the shortest AIT
program that computes Tj(i)

where:

Tj(i) = the ith theorem in the cascade
Self-delimiter(i) = the required AIT self-contained length of Pj(i)
Pj(i-1) = the compressed form of Tj(i-1)
Rules(Pj(i-1)) = the rules of the FAS acting on the previous theorem as data

If so, and the cascade is the only way to arrive at any Tj(i) [which I think
is inherent in the idea of a theorem cascade], then each theorem Tj(i) in the
cascade is more complex than its preimage Tj(i-1), since Pj(i) contains
Pj(i-1), and thus Tj(i-1), as data, plus its own Self-delimiter's bit
string, making it longer than Pj(i-1).  Actually the increase in complexity
with each step just accelerates because the Self-delimiter has to grow in
length.  With the right class of isomorphism this can be interpreted as a
universe with an accelerating expansion.
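
A toy calculation of that growth, assuming an Elias-gamma-style
self-delimiter (about 2*log2(n) + 1 bits to delimit a length-n program) and
a fixed Rules size; both choices are illustrative assumptions, not part of
the argument:

import math

def sd_bits(n: int) -> int:
    """Elias-gamma-style cost, in bits, of self-delimiting a length n."""
    return 2 * int(math.log2(n)) + 1 if n > 0 else 1

RULES_BITS = 64       # assumed fixed size of the Rules part
length = 8            # assumed size of the compressed axiom Pj(0)
for i in range(1, 9):
    # Pj(i) embeds Pj(i-1) plus Rules plus its own self-delimiter, so its
    # length strictly grows, and the delimiter term itself keeps growing.
    length += RULES_BITS + sd_bits(length + RULES_BITS)
    print(f"step {i}: |Pj({i})| ~ {length} bits, delimiter ~ {sd_bits(length)} bits")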

> > But what if you declare theorems which say the same thing to be the
> > _same theorem_, rewritten?
> > Duraid
>
> Since a theorem is a symbol string, different strings cannot be the same
> theorem.

Yes, and to quote Chaitin's note on page 97 referenced above: "More
precisely, the number of N-bit strings X such that H(X) < [N + H(N) - K]
is less than 2^(N - K + c)."

> Perhaps you mean you can easily derive certain theorems from
> others stating the same thing, rewritten. But this is vague - you can
> derive all theorems from the axioms stating the same thing, rewritten.

Then perhaps you will agree that we need not add to the above those strings
shorter than N, since they are just the same thing, rewritten without a
long boring prefix.

Hal






Re: Computing Randomness

2001-04-10 Thread juergen

Hal, you wrote:

 I believe that attempting an extensive detailed formal description of
 the Everything is the wrong approach. IMO - at least in this case - the
 more information used to describe, the smaller the thing described.

I was not able to follow this, but informal and vague descriptions of
everything have been around for millennia.  Now it is time to go beyond
this. One cannot scientifically discuss nonformal statements or theories.

You did refer to formal describability when you repeatedly wrote that
if a universe is a theorem cascade in an N-bit formal axiomatic system
then Chaitin's incompleteness dictates that the cascade must stop,
and there are no more than 2^[N + c] theorems.  For some reason this
statement even made it into your FAQ list.

Counter example: number theory is describable by finitely many bits of
axioms yielding an infinite cascade of provable theorems.

JS




Re: Computing Randomness

2001-03-27 Thread Russell Standish

I appreciate Juergen's view. In essence he is assuming a nonuniform
distribution on the ensemble of descriptions, as though the
ensemble of descriptions were produced by the FAST algorithm. This is
perhaps the same as assuming a concrete universe.

In my approach (which didn't have this technical nicety), the ensemble
of descriptions obeyed a uniform distribution.

The ultimate observed distribution is obtained by equivalencing
distibutions according to the observer's interpretation. If the
observer is a universal Turing machine, the Universal Prior results.

Now humans appear to equivalence-class random strings in a most
un-Turing-machine-like way (i.e. random (incompressible) strings
usually contain no information). This may or may not be true of
conscious beings in general.

Occam's razor still follows as a kind of theorem regardless of whether
the initial distribution is uniform or has the character of being
generated by FAST. However the FAST distribution would weight
pseudo-random descriptions far higher than truly random strings, assuming
the observer had a magical way of distinguishing them, which is a
different result from assuming an initial uniform distribution.

Aside from the philosophical issues of whether one likes a concrete
universe (a la Schmidhuber) or not, there is a vague possibility of
testing the issue through the prediction Schmidhuber makes about
random sequences being found to be pseudorandom in nature.

I suppose I raise the banner for the opposing camp - uniform
distribution over the ensemble, no concrete universe, and true
randomness required for consciousness and free will.

Cheers

[EMAIL PROTECTED] wrote:
 
 Where does all the randomness come from?
 
...
 
 Is there an optimally efficient way of computing all the randomness in
 all the describable (possibly infinite) universe histories?  Yes, there
 is. There exists a machine-independent ensemble-generating algorithm
 called FAST that computes any history essentially as quickly as this
 history's fastest algorithm. Somewhat surprisingly, FAST is not slowed
 down much by the simultaneous computation of all the other histories.
 
 It turns out, however, that for any given history there is a speed
 limit which greatly depends on the history's degree of randomness.
 Highly random histories are extremely hard to compute, even by the optimal
 algorithm FAST. Each new bit of a truly random history requires at least
 twice the time required for computing the entire previous history.
 
 As history size grows, the speed of highly random histories (and most
 histories are random indeed) vanishes very quickly, no matter which
 computer we use (side note: infinite random histories would even require
 uncountable time, which does not make any sense). On the other hand,
 FAST keeps generating numerous nonrandom histories very quickly; the
 fastest ones come out at a rate of a constant number of bits per fixed
 time interval.
 
 Now consider an observer evolving in some universe history. He does not
 know in which, but as history size increases it becomes less and less
 likely that he is located in one of the slowly computable, highly random
 universes: after sufficient time most long histories involving him will
 be fast ones.
 
 Some consequences are discussed in
 http://www.idsia.ch/~juergen/toesv2/node39.html
 
 Juergen Schmidhuber
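
To make the FAST scheme quoted above concrete: a minimal Levin-search-style
dovetailer in the spirit of Schmidhuber's description (phase i gives each
program p of length at most i a budget of 2^(i - len(p)) steps).  The toy
program semantics below are an assumption for illustration, not his
algorithm's actual machine model:

import itertools

def toy_run(program: str, steps: int) -> str:
    """Assumed toy semantics: the program emits its own bits cyclically,
    one bit per step; it stands in for a real universal machine."""
    return ''.join(program[t % len(program)] for t in range(steps))

def fast_phases(alphabet='01', max_phase=12):
    """Phase i of FAST: run every program p with len(p) <= i for
    2**(i - len(p)) steps, so short programs get exponentially more
    time and fast, simple histories appear early."""
    for i in range(1, max_phase + 1):
        for length in range(1, i + 1):
            budget = 2 ** (i - length)
            for p in itertools.product(alphabet, repeat=length):
                program = ''.join(p)
                yield i, program, toy_run(program, budget)

# Peek at the first few (phase, program, output-so-far) triples.
for triple in itertools.islice(fast_phases(max_phase=3), 8):
    print(triple)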
 




Dr. Russell Standish                      Director
High Performance Computing Support Unit,  Phone 9385 6967
UNSW SYDNEY 2052                          Fax   9385 6965
Australia                                 [EMAIL PROTECTED]
Room 2075, Red Centre                     http://parallel.hpc.unsw.edu.au/rks





RE: Computing Randomness

2001-03-22 Thread Hal Ruhl

Dear Juergen:

In reply:

> Where does all the randomness come from?
>
> Many physicists would be content with a statistical theory of everything
> (TOE) based on simple probabilistic physical laws allowing for stochastic
> predictions such as "We do not know where this particular electron will
> be in the next nanosecond, but with probability 0.04 we will find it in a
> certain volume V."
>
> Any source of randomness, however, leaves us with an unsatisfactory
> TOE, since randomness does not have a compact or simple explanation, by
> definition.

That seems incorrect. Randomness can have a simple explanation. However, a 
particular packet of random information is considered to be incompressible.

Here is what I consider to be a simple explanation for randomness in the 
Everything:

The very essence of the Everything is that it contains no information. In 
that case it can not internally resolve the question of its own stability 
because that would be information. There is another polar expression of no
information, and that is the "Great Empty" or the "Nothing". The Nothing is
not in the Everything because it is not a pattern.  [See next
paragraph]  It also cannot internally resolve the question of its own 
stability for the same reason. However, the question of stability is not 
avoidable. The only way to attempt to resolve it is for each to make the 
only test available which is the minimum available perturbation - a 
conversion into its polar opposite. The two of these must then alternate 
existences. The alternation can not inject any information into either pole 
so it can not have a selected particular pattern. [Neither pole can contain 
a history of the process since that too would be information thus each 
alternation is an independent event resulting in no pattern to the 
alternation.]

Consider each manifestation of the Everything to be a particular pattern of
all patterns, each repeated an infinite number of times. The Everything can
not be a fixed pattern of patterns. And it is not, because each
Everything/Nothing alternation conveys no history, so the particular pattern
of patterns manifest at any alternation is a random selection from all
possible patterns of patterns.

Each pattern internal to the Everything has associated with it all possible
interpretations of pattern. Each [pattern,interpretation] pair could be
isomorphic to a particular possible/actual state of a particular
universe. Write these isomorphic links as:
{[pattern,interpretation]/[possible/actual state of a universe]}.

Again, the possible/actual pattern of these links within the Everything can
not be static or it is information. So there needs to be a
no-information-generating shifting of these isomorphic links from possible
to actual and back again.

The only essential aspect is that the current pattern of all these links 
can not endure.  The historic active states are in no way essential to the 
link shifting process.  Only the active state and its possible states are 
essential.

Each universe is defined by its rules of link shift.  Those rules that 
insist on even a partial consideration of the set of that universe's 
historic links in addition to the current link are more complex than those 
that do not.  If we live in a universe with simple rules it is unlikely our 
universe's history is an issue in determining its future - only its current 
state is germane.

Rules that have a "do not care" component can be even simpler than fully
deterministic rules.

The rules need just compare all nearby links vs the current one in an 
increasing region of the current Everything pattern until a match is found 
then the shift takes place.  This process is interrupted by the 
Everything/Nothing alternation which results in a new pattern of patterns 
at the next manifestation of the Everything.


> Where does the enormous information conveyed by a particular
> history of random events come from? A TOE that cannot explain this
> is incomplete.

No such data is essential.

> The in hindsight obvious solution is an ensemble TOE which covers all
> possible universe histories.  The ensemble conveys less information than
> most particular histories - one main motivation of this mailing list.

Since both the Everything and the Nothing have no information and thus no
history, any particular selected historic information would be additional
information.


> Which are the possible histories?  Let us focus on well-defined ensembles
> only, and ignore those that cannot be sufficiently specified to permit
> reconstruction through a formally describable computer. In particular,
> we may ignore uncountable ensembles such as continua, or other ensembles
> including histories without finite descriptions.

Well you are focusing on excessively complex rules of universe evolution 
IMO.  Also see further comments below.

> Is there an optimally efficient way of computing all the randomness in
> all the describable (possibly infinite) universe histories?  Yes, there
> is. There exists a