Re: White Rabbit vs. Tegmark

2005-06-15 Thread jamikes
Dear Russell and Hal:
thanks for the compassionate, speedy replies. I would be happy to comply
with Hal's advice; alas, I have no browser working yet(?), only the mailbox,
cluttered with garbage. I didn't believe a friend who had similar troubles
when installing Symantec; I lost my cyberways to the 2005 edition. And excuse me
for keeping that irrelevant subject line. It is quite unfair to the rabbit.

I have no difficulty with 'plain' HTML; OE6 can handle it - or my
Word viewer. It was the frightening 270 posts among the accumulated 1300 -
1500 spam messages that I first sorted out into an 'everything' folder, for
simplicity's sake
(I did the same with the other 7 listposts as well.)
*
Could anybody still remind me WHAT it originally was that "we could just
simply accept..." - in that former (other) (stupid) Subject line?

I always try to "read only" but sometimes I cannot keep my mouse shut. Will
catch up, hopefully soon.
Best wishes

John M

- Original Message -
From: ""Hal Finney"" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, June 14, 2005 7:41 PM
Subject: Re: White Rabbit vs. Tegmark




Re: White Rabbit vs. Tegmark

2005-06-14 Thread "Hal Finney"
John Mikes wrote:
> ... Those posts were accessible (for me) that started with a
> statement of the writer and not a lot of copies with some reply-lines
> interjected. I know (and like to use) to copy the phrases to reply to but
> even in a 2-week archiving it turns sour. After the first 30-40 post-reviews
> I got dizzy. This pertains to the regular listpost.

And with regard to Russell's posts:
> Your posts are one degree worse:
> You had 32 attachment-convoluted posts in the 270 of the 20 day everything
> list. The procedure
> to glance at your text (and I like to read what you wrote) I have to select
> the attachment, then call it up, then select open, then read it, and - If I
> want to save it: copy or cut it to file in 3-5 more clicks.

The list has certainly been active lately!

John, you might want to read the list via the archive at
http://www.escribe.com/science/theory/ .  It's very readable and it does
not choke on Russell's posts, which are actually perfectly legal email
formats and are legible on many mail readers.  Plus you won't be bothered
by email interruptions.  You can also use the thread index there which
will help you to follow conversations.

However I would like to echo John's complaint about excessive quoting.
There is no need to quote the complete content of another message before
replying.  Especially lately, people are conversing quickly and the
previous message was typically sent only a few hours or a day earlier.
We can all remember what was said, or go back in the thread and see it.
Just a brief quote to set the stage should be enough.

If you want to reply point by point, fine, in that case it does make sense
to quote each point before replying.  But quoting the whole message is
almost never necessary.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-06-14 Thread Russell Standish
I appreciate your difficulty - I have the same problem whenever someone
sends a pure HTML email - I have to navigate down several layers of
menus, and the result is a barely human readable message.

I also understand Microsoft Outlook has trouble understanding RFC
compliant signed emails, hence I have turned off autosigning for list
posts. I'll try to look at messages sent back from the everything list
email server to see if they're still being signed.

Have you tried reading the messages on the escribe archive? I do know
that those messages are plain text, and threaded, so this may be more
convenient for you.

Also, some email threads started off on the FoR list, which I
unsubscribed from because I got too irritated with its censorship
policy. Consequently, I missed some of the thread beginnings too.

Cheers

On Tue, Jun 14, 2005 at 05:59:06PM -0400, jamikes wrote:
> Dear Russell and list:
> this is a personal problem due to my extremely feeble skills in computering.
> 
> I had (optimistically in past tense) problems with my internet e-mail
> connection and could not get/send e-mail since the date of this post. Then 2
> times I was lucky and got hundreds of email at a time, (the browser is still
> closed from my usage). To separate chaff from seed, I extracted the
> list-post before deleting the "slumscumspam". From May 24 to June 13 I
> accumulated 270 listposts -absolutely impossible to scan for topical and
> content interest. Those posts were accessible (for me) that started with a
> statement of the writer and not a lot of copies with some reply-lines
> interjected. I know (and like to use) to copy the phrases to reply to but
> even in a 2-week archiving it turns sour. After the first 30-40 post-reviews
> I got dizzy. This pertains to the regular listpost.
> 
> Your posts are one degree worse:
> You had 32 attachment-convoluted posts in the 270 of the 20 day everything
> list. The procedure
> to glance at your text (and I like to read what you wrote) I have to select
> the attachment, then call it up, then select open, then read it, and - If I
> want to save it: copy or cut it to file in 3-5 more clicks.
> A regular e-mail requires ONE click to read and ONE to save (in my Outlook
> Express).
> 
> I don't want to "reformulate" this list, just tried to communicate a point
> that may make MY life easier.
> 
> Apologies for whining
> 
> John Mikes
> 
> PS: I lost the 'original' post of the Subject:
> "RE:What do you lose if you simply accept..."
> and reading the posts I could not find out WHAT to accept? the posts were
> all over the Library of Congress, nothing to the original subject to
> explain. Just curious.  JM
> ==
> 
> ----- Original Message -
> From: "Russell Standish" <[EMAIL PROTECTED]>
> To: "Hal Finney" <[EMAIL PROTECTED]>
> Cc: 
> Sent: Tuesday, May 24, 2005 2:45 AM
> Subject: Re: White Rabbit vs. Tegmark
> 
> 
> 

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish          Phone 8308 3119 (mobile)
Mathematics                      0425 253119 (")
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02




Re: White Rabbit vs. Tegmark

2005-06-14 Thread jamikes
Dear Russell and list:
this is a personal problem due to my extremely feeble computing skills.

I had (optimistically in past tense) problems with my internet e-mail
connection and could not get/send e-mail since the date of this post. Then
twice I was lucky and got hundreds of emails at a time (the browser is still
closed to my usage). To separate chaff from seed, I extracted the
list posts before deleting the "slumscumspam". From May 24 to June 13 I
accumulated 270 listposts - absolutely impossible to scan for topical and
content interest. The posts that were accessible (for me) were those that
started with a statement by the writer and not a lot of copies with some
reply-lines interjected. I know (and like to use) the practice of copying the
phrases one replies to, but even over a 2-week archive it turns sour. After
the first 30-40 post-reviews I got dizzy. This pertains to the regular listposts.

Your posts are one degree worse:
you had 32 attachment-convoluted posts among the 270 of the 20-day everything
list. To glance at your text (and I like to read what you wrote) I have to
select the attachment, then call it up, then select open, then read it,
and - if I want to save it - copy or cut it to a file in 3-5 more clicks.
A regular e-mail requires ONE click to read and ONE to save (in my Outlook
Express).

I don't want to "reformulate" this list; I just tried to communicate a point
that may make MY life easier.

Apologies for whining.

John Mikes

PS: I lost the 'original' post of the Subject:
"RE: What do you lose if you simply accept..."
and reading the posts I could not find out WHAT to accept. The posts were
all over the Library of Congress, with nothing on the original subject to
explain it. Just curious.  JM
==

- Original Message -
From: "Russell Standish" <[EMAIL PROTECTED]>
To: "Hal Finney" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, May 24, 2005 2:45 AM
Subject: Re: White Rabbit vs. Tegmark






RE: White Rabbit vs. Tegmark

2005-05-29 Thread "Hal Finney"
Jonathan Colvin writes:
> That's rather the million-dollar question, isn't it? But isn't the
> multiverse limited in what axioms or predicates can be assumed? For
> instance, can't we assume that in no universe in Platonia can (P AND ~P) be
> an axiom or predicate?

No, I'd say that you could indeed have a mathematical object which had P
AND ~P as one of its axioms.  The problem is that from a contradiction,
you can deduce any proposition (from P infer P OR Q; with ~P, Q then
follows).  Therefore this mathematical system can prove all well-formed
strings as theorems.

As Russell mentioned the other day, "everything" is just the other side of
the coin from "nothing".  A system that proves everything is essentially
equivalent to a system that proves nothing.  So any system based on P AND
~P has essentially no internal structure and is essentially equivalent
to the empty set.

One point is that we need to distinguish the tools we use to analyze
and understand the structure of a Tegmarkian multiverse from the
mathematical objects which are said to make up the multiverse itself.
We use logic and other tools to understand the nature of mathematics;
but mathematical objects themselves can be based on nonstandard "logics"
and have exotic structures.  There are an infinite number of formal
systems that have nothing to do with logic at all.  Formal systems are
just string-manipulation engines and only a few of them have the axioms
of logic as a basis.  Yet they can all be considered mathematical objects
and some of them might be said to be universes containing observers.
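
To make that concrete, here is a purely illustrative Python sketch (not from
the original post): a toy formal system treated as nothing but a
string-rewriting engine, with an arbitrary axiom and two arbitrary rules that
carry no logical interpretation at all; its "theorems" are just the strings
reachable from the axiom.

def successors(s, rules):
    """All strings reachable from s by one application of one rule."""
    out = set()
    for lhs, rhs in rules:
        for i in range(len(s)):
            if s.startswith(lhs, i):
                out.add(s[:i] + rhs + s[i + len(lhs):])
    return out

rules = [("A", "AB"), ("BB", "A")]      # arbitrary axiom "A" and two rewrite rules
theorems, frontier = {"A"}, {"A"}
for _ in range(4):                      # four rounds of derivation
    frontier = set().union(*[successors(s, rules) for s in frontier]) - theorems
    theorems |= frontier
print(sorted(theorems, key=len))        # the "theorems" are just the reachable strings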

Hal Finney



RE: White Rabbit vs. Tegmark

2005-05-28 Thread Jonathan Colvin

>   Stephen:  Should we not expect Platonia to be Complete?

I'd like to think that it should not be (by Gödel?); or that it is not
completely self-computable in finite meta-time. Or some such. But that's
more of a faith than a theory.

Jonathan Colvin


> >>Brent:  I doubt that the concept of "logically possible" has any
> >> absolute meaning.  It is relative to which axioms and
> >> predicates are assumed.
> >
> > That's rather the million-dollar question, isn't it? But isn't the
> > multiverse limited in what axioms or predicates can be assumed? For
> > instance, can't we assume that in no universe in Platonia can (P AND ~P)
> > be an axiom or predicate?
> >
> >>Not long ago the quantum weirdness
> >> of Bell's theorem, or special relativity would have been
> >> declared "logically impossible".
> >
> > That declaration would simply have been mistaken.
> >
> >> Is it logically possible
> >> that Hamlet doesn't kill Polonius?
> >
> > Certainly. I'm sure there are people named "Hamlet" who have not killed
> > a person named "Polonius".
> >
> >> Is it logically possible
> >> that a surface be both red and green?
> >
> > If you are asking whether it is logically possibly that a surface that
> > can reflect *only* light at a wavelength of 680 nm can reflect a
> > wavelength of 510 nm, the answer would seem to be "no".
> >
> > Jonathan Colvin



Re: White Rabbit vs. Tegmark

2005-05-28 Thread Stephen Paul King

Hi Jonathan,

   Should we not expect Platonia to be Complete?

Stephen

- Original Message - 
From: "Jonathan Colvin" <[EMAIL PROTECTED]>

To: "'Everything-List'" 
Sent: Saturday, May 28, 2005 1:30 PM
Subject: RE: White Rabbit vs. Tegmark





Brent:  I doubt that the concept of "logically possible" has any
absolute meaning.  It is relative to which axioms and
predicates are assumed.


That's rather the million-dollar question, isn't it? But isn't the
multiverse limited in what axioms or predicates can be assumed? For
instance, can't we assume that in no universe in Platonia can (P AND ~P) 
be

an axiom or predicate?


Not long ago the quantum weirdness
of Bell's theorem, or special relativity would have been
declared "logically impossible".


That declaration would simply have been mistaken.

 Is it logically possible

that Hamlet doesn't kill Polonius?


Certainly. I'm sure there are people named "Hamlet" who have not killed a
person named "Polonius".


Is it logically possible
that a surface be both red and green?


If you are asking whether it is logically possibly that a surface that can
reflect *only* light at a wavelength of 680 nm can reflect a wavelength of
510 nm, the answer would seem to be "no".

Jonathan Colvin






RE: White Rabbit vs. Tegmark

2005-05-28 Thread Jonathan Colvin

>Brent:  I doubt that the concept of "logically possible" has any 
> absolute meaning.  It is relative to which axioms and 
> predicates are assumed.  

That's rather the million-dollar question, isn't it? But isn't the
multiverse limited in what axioms or predicates can be assumed? For
instance, can't we assume that in no universe in Platonia can (P AND ~P) be
an axiom or predicate?

>Not long ago the quantum weirdness 
> of Bell's theorem, or special relativity would have been 
> declared "logically impossible".

That declaration would simply have been mistaken.

> Is it logically possible
> that Hamlet doesn't kill Polonius?

Certainly. I'm sure there are people named "Hamlet" who have not killed a
person named "Polonius".

> Is it logically possible 
> that a surface be both red and green?

If you are asking whether it is logically possible that a surface that can
reflect *only* light at a wavelength of 680 nm can reflect a wavelength of
510 nm, the answer would seem to be "no". 

Jonathan Colvin





RE: White Rabbit vs. Tegmark

2005-05-28 Thread Jonathan Colvin
Hal: 
> To summarize, logic is not a property of universes.  It is a 
> tool that our minds use to understand the world, including 
> possible universes.
> We may fail to think clearly or consistently or logically 
> about what can and cannot exist, but that doesn't change the 
> world out there.
> 
> Rather than expressing the AUH as the theory that all 
> "logically possible"
> universes exist, I would just say that all universes exist.  
> And of course as we try to understand the nature of such a 
> multiverse, we will attempt to be logically consistent in our 
> reasoning.  That's where logic comes in.

But "all universes exist" is merely a tautology. To say anything meaningful,
one is then faced with attempting to define what one means by "universe", or
"exist". The concept of "logically possible" seems to me to be useful in
this latter endeavor.

Jonathan Colvin



Re: White Rabbit vs. Tegmark

2005-05-28 Thread Bruno Marchal


On 27 May 2005, at 20:19, Hal Finney wrote:


Brent Meeker writes:
I doubt that the concept of "logically possible" has any absolute
meaning.  It is relative to which axioms and predicates are assumed.


I agree, but that is the reason why, if we want to talk *about* or to
find a measure *on* logical possibilities, we should make clear our
axioms and inference rules.

(And this is, I think, compatible with Hal Finney's comment.)


BTW Hal, sorry for having misinterpreted your use of "UD" (Universal
Distribution versus Universal Dovetailer).



Bruno



http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-28 Thread Alastair Malcolm
- Original Message -
From: Alastair Malcolm <[EMAIL PROTECTED]>
To: Patrick Leahy <[EMAIL PROTECTED]>
Cc: Everything-List 
Sent: 27 May 2005 09:52
Subject: Re: White Rabbit vs. Tegmark
.
.
> But I agree that going any further with particular predictions,
particularly
> after factoring in Tegmark's other levels, is likely to prove, shall we
say,
> at least a little way off ;)
>
> Alastair

Perhaps I should have added, less pessimistically, that the simplicity
requirement itself could conceivably lead to in-principle verifiable
predictions (e.g. that we are almost certainly in the simplest possible
universe containing properly self-aware structures (Yes, I know that idea
itself leads to all sorts of reference class considerations)). These kinds
of predictions may not distinguish certain different candidate measure
bases, but if fulfilled would boost the credibility of AUH ideas generally.

Alastair




Re: White Rabbit vs. Tegmark

2005-05-28 Thread Alastair Malcolm
- Original Message -
From: "Hal Finney" <[EMAIL PROTECTED]>
To: 
Sent: 27 May 2005 19:19
Subject: RE: White Rabbit vs. Tegmark
.
.
> To summarize, logic is not a property of universes.  It is a tool that
> our minds use to understand the world, including possible universes.
> We may fail to think clearly or consistently or logically about what
> can and cannot exist, but that doesn't change the world out there.
>
> Rather than expressing the AUH as the theory that all "logically possible"
> universes exist, I would just say that all universes exist.  And of course
> as we try to understand the nature of such a multiverse, we will attempt
> to be logically consistent in our reasoning.  That's where logic comes in.

I basically agree with this (and with Brent's similar comment). In
philosophy we tend to use 'logically possible' to distinguish it from
'physically possible': the first usually means 'not violating deductive
logic', the second 'not violating physical laws'. So no 'round squares' in
the case of 'logically possible' - but, as you imply, that should really be
thought of as a feature of descriptions, not universes themselves. (I make
that very point in my paper.)

Alastair



Re: White Rabbit vs. Tegmark

2005-05-27 Thread Benjamin Udell
I don't see what practical difference there is between saying "all universes 
exist and we need to think with logical consistency about the subject" and "all 
logically possible universes exist."

Whatever the case, a related point might be worth considering:  whether logic 
is the only such "invariant" involved.

Level IV is supposed to involve variations of mathematical structure. But would 
there be various universes for various structures in deductive mathematical 
theories of optimization, probability, information, or logic? If probability 
theory supplied various different structures for different universes, it could 
certainly complicate the measure problem -- one could be contending with 
various theories of measure itself as sources of differences between various 
universes.
If, on the other hand, all these mathematics (and they amount together to a 
great deal of mathematics, maths often regarded as applied, but also often 
regarded as mathematically deep) place no additional constraints, nor broaden 
the variety of universes, then they seem correlated not to Level IV, but 
perhaps to Level III, since Level III (if I understand and phrase it correctly) 
neither adds constraints nor broadens the variety of universes on top of what 
one already has at Level II, and also since they seem to be mathematics of the 
structures of the kinds of alternatives which MWI says are all actualized.

Best, Ben Udell

Hal Finney writes:

>>Brent Meeker writes:
>>I doubt that the concept of "logically possible" has any absolute meaning.  
>>It is relative to which axioms and predicates are assumed.  Not long ago the 
>>quantum weirdness of Bell's theorem, or special relativity would have been 
>>declared "logically impossible".  Is it logically possible that Hamlet 
>>doesn't kill Polonius?  Is it logically possible that a surface be both red 
>>and green?
>
>I agree.  We went around on this "logically possible" stuff a few weeks ago.  
>A universe is not constrained by logical possibility. Our understanding of 
>what is or is not a possible universe is constrained by our mental abilities, 
>which include logic as one of their components.
>
>If I say a universe exists where glirp glorp glurp, that is not meaningful.  
>But it doesn't constrain or limit any universe, it is simply a non-meaningful 
>description.  It is a problem in my mind and my understanding, not a problem 
>in the nature of the multiverse.
>
>If I say a universe exists where p and not p, that has similar problems. It it 
>is not a meaningful description.
>
>Similarly if I say a universe exists where pi = 3.  Saying this demonstrates 
>an inconsistency in my mathematical logic.  It doesn't limit any universes.
>
>More complex descriptions, like whether green can be red, come down to our 
>definitions and what we mean.  Maybe we are inconsistent in our minds and 
>failing to describe a meaningful universe; maybe not. But again it does not 
>limit what universes exist.
>
>To summarize, logic is not a property of universes.  It is a tool that our 
>minds use to understand the world, including possible universes. We may fail 
>to think clearly or consistently or logically about what can and cannot exist, 
>but that doesn't change the world out there.
>
>Rather than expressing the AUH as the theory that all "logically possible" 
>universes exist, I would just say that all universes exist.  And of course as 
>we try to understand the nature of such a multiverse, we will attempt to be 
>logically consistent in our reasoning.  That's where logic comes in.
>
>Hal Finney




RE: White Rabbit vs. Tegmark

2005-05-27 Thread "Hal Finney"
Brent Meeker writes:
> I doubt that the concept of "logically possible" has any absolute meaning.  It
> is relative to which axioms and predicates are assumed.  Not long ago the
> quantum weirdness of Bell's theorem, or special relativity would have been
> declared "logically impossible".  Is it logically possible that Hamlet doesn't
> kill Polonius?  Is it logically possible that a surface be both red and green?

I agree.  We went around on this "logically possible" stuff a few
weeks ago.  A universe is not constrained by logical possibility.
Our understanding of what is or is not a possible universe is constrained
by our mental abilities, which include logic as one of their components.

If I say a universe exists where glirp glorp glurp, that is not
meaningful.  But it doesn't constrain or limit any universe, it is
simply a non-meaningful description.  It is a problem in my mind and my
understanding, not a problem in the nature of the multiverse.

If I say a universe exists where p and not p, that has similar problems.
It is not a meaningful description.

Similarly if I say a universe exists where pi = 3.  Saying this
demonstrates an inconsistency in my mathematical logic.  It doesn't
limit any universes.

More complex descriptions, like whether green can be red, come down
to our definitions and what we mean.  Maybe we are inconsistent in
our minds and failing to describe a meaningful universe; maybe not.
But again it does not limit what universes exist.

To summarize, logic is not a property of universes.  It is a tool that
our minds use to understand the world, including possible universes.
We may fail to think clearly or consistently or logically about what
can and cannot exist, but that doesn't change the world out there.

Rather than expressing the AUH as the theory that all "logically possible"
universes exist, I would just say that all universes exist.  And of course
as we try to understand the nature of such a multiverse, we will attempt
to be logically consistent in our reasoning.  That's where logic comes in.

Hal Finney



RE: White Rabbit vs. Tegmark

2005-05-27 Thread Brent Meeker


>-Original Message-
>From: Alastair Malcolm [mailto:[EMAIL PROTECTED]
>Sent: Friday, May 27, 2005 8:53 AM
>To: Patrick Leahy
>Cc: Everything-List
>Subject: Re: White Rabbit vs. Tegmark
>
>
>- Original Message -
>From: Patrick Leahy <[EMAIL PROTECTED]>
>To: Brent Meeker <[EMAIL PROTECTED]>
>Cc: Everything-List 
>Sent: 26 May 2005 19:54
>Subject: RE: White Rabbit vs. Tegmark
>.
>.
>.
>> * But the arbitrariness of the measure itself becomes the main argument
>> against the everything thesis, since the main claimed benefit of the
>> thesis is that it removes arbitrary choices in defining reality.
>
>I don't think we can reject the thesis that all logically possible
>universes exist, just because its analysis is proving tricky (or even if it
>were to prove beyond us), and certainly not if we have reasonable candidates
>for a measure basis.

I doubt that the concept of "logically possible" has any absolute meaning.  It
is relative to which axioms and predicates are assumed.  Not long ago the
quantum weirdness of Bell's theorem, or special relativity would have been
declared "logically impossible".  Is it logically possible that Hamlet doesn't
kill Polonius?  Is it logically possible that a surface be both red and green?

Brent Meeker



Re: White Rabbit vs. Tegmark

2005-05-27 Thread "Hal Finney"
Bruno Marchal writes:
> On 26 May 2005, at 18:03, Hal Finney wrote:
>
> > One problem with the UD is that the probability that an integer is even
> > is not 1/2, and that it is prime is not zero.  Probabilities in general
> > will not equal those defined based on limits as in the earlier 
> > paragraph.
> > It's not clear which is the correct one to use.
>
> It seems to me that the UDA showed that the (relative) measure on a 
> computational state is determined by the (absolute?) measure on the 
> infinite computational histories going through that states. There is a 
> continuum of such histories, from the first person person point of view 
> which can not be aware of any delay in the computations emulated by the 
> DU (= all with Church's thesis), the first persons must bet on the 
> infinite union of infinite histories.

Sorry, I was not clear: by UD I meant the Universal Distribution, aka
the Universal Prior, not the Universal Dovetailer, which I think you
are talking about.  The UDA is the Universal Dovetailer Argument, your
thesis about the nature of first and third person experience.

I simply meant to use the Universal Distribution as an example probability
measure over the integers, where, given a particular Universal Turing
Machine, the measure for integer k equals the sum of 1/2^n where n is
the length of each program that outputs integer k.  Of course this is an
uncomputable measure.  A much simpler measure is 1/2^k for all positive
integers k.
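
As a purely illustrative sketch (Python, not part of the original post), the
simpler 1/2^k measure can be checked directly: it sums to 1, yet it gives
"even" a probability of 1/3 rather than 1/2, and "prime" a probability of
roughly 0.41 rather than 0, which is the mismatch with limiting frequencies
quoted above.

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

N = 200                                      # truncation; tail mass beyond 200 is negligible
w = {k: 2.0 ** -k for k in range(1, N + 1)}  # the simple measure 1/2^k

print(sum(w.values()))                               # ~1.0  (a genuine probability measure)
print(sum(p for k, p in w.items() if k % 2 == 0))    # ~0.333 (not 1/2)
print(sum(p for k, p in w.items() if is_prime(k)))   # ~0.415 (not 0)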

I don't know whether there would be any probability measure over the
integers such that the probability of every event E equals the limit, as n
approaches infinity, of the fraction of integers less than n that lie in E.

Actually, on further thought, it's clear that the answer is no.  Consider
the set Ex = {all integers less than x}.  Clearly the fraction of integers
less than n that lie in Ex goes to zero as n goes to infinity, so Ex would
have to receive probability 0.  But the only way a probability measure can
give 0 to a set is to give 0 to every element of the set.  That means the
measure of every integer less than x must be zero, for any x.  And that
implies the measure is 0 for every integer, which rules out any meaningful
measure.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-27 Thread Bruno Marchal


On 26 May 2005, at 18:03, Hal Finney wrote:


One problem with the UD is that the probability that an integer is even
is not 1/2, and that it is prime is not zero.  Probabilities in general
will not equal those defined based on limits as in the earlier 
paragraph.

It's not clear which is the correct one to use.


It seems to me that the UDA showed that the (relative) measure on a
computational state is determined by the (absolute?) measure on the
infinite computational histories going through that state. There is a
continuum of such histories; from the first person point of view,
which cannot be aware of any delay in the computations emulated by the
DU (= all of them, with Church's thesis), the first persons must bet on the
infinite union of infinite histories.


Bruno


http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-27 Thread Alastair Malcolm
- Original Message -
From: Patrick Leahy <[EMAIL PROTECTED]>
To: Brent Meeker <[EMAIL PROTECTED]>
Cc: Everything-List 
Sent: 26 May 2005 19:54
Subject: RE: White Rabbit vs. Tegmark
.
.
.
> * But the arbitrariness of the measure itself becomes the main argument
> against the everything thesis, since the main claimed benefit of the
> thesis is that it removes arbitrary choices in defining reality.

I don't think we can reject the thesis that all logically possible
universes exist, just because its analysis is proving tricky (or even if it
were to prove beyond us), and certainly not if we have reasonable candidates
for a measure basis.

I use fundamental philosophical principles effectively to infer the
existence of all possible distinguishable entities (which would include both
individual, and clusters of, physical universes), much if not all of which
should be modellable by formal systems. In the context of SAS-containing
universes, this seems to me a reasonable starting point for an in-principle
measure basis, particularly when consideration of 'compressed' entities (not
logically disbarred) leads to a prediction of the predominance of simpler
universes.

But I agree that going any further with particular predictions, particularly
after factoring in Tegmark's other levels, is likely to prove, shall we say,
at least a little way off ;)

Alastair
Philosophy paper at: http://www.physica.freeserve.co.uk/pa01.htm



Re: White Rabbit vs. Tegmark

2005-05-26 Thread Russell Standish
On Thu, May 26, 2005 at 07:54:03PM +0100, Patrick Leahy wrote:
> 
> * Since the White Rabbit^** argument implicitly assumes a measure, as it 
> stands it can't be definitive.
> 
> * But the arbitrariness of the measure itself becomes the main argument 
> against the everything thesis, since the main claimed benefit of the 
> thesis is that it removes arbitrary choices in defining reality.
> 
> Paddy Leahy
> 
> 
> ^**  "This is a song about Alice, remember?"  --- Arlo Guthrie
> 

This measure is not arbitrary, but defined by the observer
itself. Every such observer-based measure satisfies an Occam's razor
theorem *in the framework of the observer*.

I discuss why the arbitrariness of the choice of UTM does not matter
in my "Why Occam's Razor" paper. What I didn't show in that paper (I
wrote it nearly 5 years ago!) was that any mapping of descriptions to
meaning suffices (i.e. any observer, computational or not) - Turing
completeness is not needed. Turing completeness only gives a guarantee
that the complexities seen by different observers differ by at most a
constant independent of the description.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish          Phone 8308 3119 (mobile)
Mathematics                      0425 253119 (")
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02



pgpxVYFrlvA1C.pgp
Description: PGP signature


Re: White Rabbit vs. Tegmark

2005-05-26 Thread Alastair Malcolm

- Original Message -
From: Patrick Leahy <[EMAIL PROTECTED]>
To: Alastair Malcolm <[EMAIL PROTECTED]>
Cc: EverythingList 
Sent: 26 May 2005 11:20
Subject: Re: White Rabbit vs. Tegmark
>
> On Thu, 26 May 2005, Alastair Malcolm wrote:
>
> > An example occurs which might be of help. Let us say that the physics of
> > the universe is such that in the Milky Way galaxy, carbon-based SAS's
> > outnumber silicon-based SAS's by a trillion to one. Wouldn't we say that
> > the inhabitants of that galaxy are more likely to find themselves as
> > carbon-based? Now extrapolate this to any large, finite number of
> > galaxies. The same likelihood will pertain. Now surely all the
> > statistics don't just go out of the window if the universe happens to be
> > infinite rather than large and finite?
> >
> > Alastair
>
> Well, it just does, for countable sets.  This is what Cantor showed, and
> Lewis explains in his book. Cantor defines "same size" as a 1-to-1
> pairing. Hence as there are infinite primes and infinite non-primes there
> are the same number (cardinality) of them:
>
> (1,3), (2,4), (3,6), (5,8), (7,9), (11,12), (13,14), (17,15), (19,16) etc
> and so ad infinitum
>
> You might say there are obviously "more" non-primes. This means that if
> you list the numbers in numerical sequence, you get fewer primes than
> non-primes in any finite sequence except a few very short ones. But in
> another sequence the answer is different:
>
> (1,2,4) (3,5,6) (7,11,8) (13,17,9) etc ad infinitum.
>
> In this infinite sequence, each triple has two primes and only one
> non-prime. Hence there seem to be more primes than non-primes!

I've got no problem with this, as far as it goes. The point I was trying to
make - talking in terms of prime numbers if you prefer - is that in the
circumstance I am referring to we equivalently *are* in the situation of a
particular pre-defined sequence - and so relative frequency becomes
relevant.

Alastair




RE: White Rabbit vs. Tegmark

2005-05-26 Thread Patrick Leahy


On Thu, 26 May 2005, Brent Meeker wrote:

I agree with all you say.  But note that the case of finite sets is not 
really any different.  You still have to define a measure.  It may seem 
that there is one, compelling, natural measure - but that's just 
Laplace's principle of indifference applied to integers.  The is no more 
justification for it in finite sets than infinite ones.  That there are 
fewer primes than non-primes in set of natural numbers less than 100 
doesn't make the probability of a prime smaller *unless* you assign the 
same measure to each number.


Brent Meeker


I'll answer both Brent and Hal (m6556) here.

Yup, I hadn't thought through the measure issue properly. Several 
conclusions from this discussion:


* As Brent says, you always have to assume a measure. Sometimes a measure 
seems so "natural" you forget you're doing it, as above. Another example 
is an infinite homogeneous universe, where a uniform measure likewise 
seems natural, and limiting frequencies over large volumes are also well 
defined, as per Hal's message.


* As Hal points out, it *is* possible to assign probability measures to 
countably-infinite sets.


* Alternatively, you can assign a non-normalizable measure (presumably 
uniform) and take limiting frequencies. But then as per Cantor the answer 
does depend on your ordering, which is something extra you are adding to 
the definition of the set (even for numerical sequences).


* Different lines of argument can easily lead to different "natural" 
measures for the same set, e.g. Hal's "Universal Distribution" vs. 
Laplacian indifference for the integers.


* For me, the only way to connect a measure with a probability in the 
context of an "everything" theory is for the measure to represent the 
density of universes (or observers or observer-moments if you factor that 
in as well).


* Since the White Rabbit^** argument implicitly assumes a measure, as it 
stands it can't be definitive.


* But the arbitrariness of the measure itself becomes the main argument 
against the everything thesis, since the main claimed benefit of the 
thesis is that it removes arbitrary choices in defining reality.


Paddy Leahy


^**  "This is a song about Alice, remember?"  --- Arlo Guthrie




RE: White Rabbit vs. Tegmark

2005-05-26 Thread Brent Meeker


>-Original Message-
>From: Patrick Leahy [mailto:[EMAIL PROTECTED]
>Sent: Thursday, May 26, 2005 10:21 AM
>To: Alastair Malcolm
>Cc: EverythingList
>Subject: Re: White Rabbit vs. Tegmark
>
>
>
>On Thu, 26 May 2005, Alastair Malcolm wrote:
>
>> An example occurs which might be of help. Let us say that the physics of
>> the universe is such that in the Milky Way galaxy, carbon-based SAS's
>> outnumber silicon-based SAS's by a trillion to one. Wouldn't we say that
>> the inhabitants of that galaxy are more likely to find themselves as
>> carbon-based? Now extrapolate this to any large, finite number of
>> galaxies. The same likelihood will pertain. Now surely all the
>> statistics don't just go out of the window if the universe happens to be
>> infinite rather than large and finite?
>>
>> Alastair
>
>Well, it just does, for countable sets.  This is what Cantor showed, and
>Lewis explains in his book. Cantor defines "same size" as a 1-to-1
>pairing. Hence as there are infinite primes and infinite non-primes there
>are the same number (cardinality) of them:
>
>(1,3), (2,4), (3,6), (5,8), (7,9), (11,12), (13,14), (17,15), (19,16) etc
>and so ad infinitum
>
>You might say there are obviously "more" non-primes. This means that if
>you list the numbers in numerical sequence, you get fewer primes than
>non-primes in any finite sequence except a few very short ones. But in
>another sequence the answer is different:
>
>(1,2,4) (3,5,6) (7,11,8) (13,17,9) etc ad infinitum.
>
>In this infinite sequence, each triple has two primes and only one
>non-prime. Hence there seem to be more primes than non-primes!
>
>For the continuum you can restore order by specifying a measure which just
>*defines* what fraction of real numbers between 0 & 1 you consider to lie
>in any interval. For instance the obvious uniform measure is that there
>are the same number between 0.1 and 0.2 as between 0.8 and 0.9 etc.
>Why pick any other measure? Well, suppose y = x^2. Then y is also between
>0 and 1. But if we pick a uniform measure for x, the measure on y is
>non-uniform (y is more likely to be less than 0.5). If you pick a uniform
>measure on y, then x = sqrt(y) also has a non-uniform measure (more likely
>to be > 0.5).
>
>A measure like this works for the continuum but not for the naturals
>because you can map the continuum onto a finite segment of the real line.
>In m6511 Russell Standish describes how a measure can be applied to the
>naturals which can't be converted into a probability. I must say, I'm not
>completely sure what that would be good for.
>
>Paddy Leahy

I agree with all you say.  But note that the case of finite sets is not really
any different.  You still have to define a measure.  It may seem that there is
one, compelling, natural measure - but that's just Laplace's principle of
indifference applied to integers.  There is no more justification for it in
finite sets than infinite ones.  That there are fewer primes than non-primes in
the set of natural numbers less than 100 doesn't make the probability of a prime
smaller *unless* you assign the same measure to each number.

Brent Meeker



Re: White Rabbit vs. Tegmark

2005-05-26 Thread "Hal Finney"
Paddy Leahy writes:
> For the continuum you can restore order by specifying a measure which just 
> *defines* what fraction of real numbers between 0 & 1 you consider to lie 
> in any interval. For instance the obvious uniform measure is that there 
> are the same number between 0.1 and 0.2 as between 0.8 and 0.9 etc. 
> Why pick any other measure? Well, suppose y = x^2. Then y is also between 
> 0 and 1. But if we pick a uniform measure for x, the measure on y is 
> non-uniform (y is more likely to be less than 0.5). If you pick a uniform 
> measure on y, then x = sqrt(y) also has a non-uniform measure (more likely 
> to be > 0.5).
>
> A measure like this works for the continuum but not for the naturals 
> because you can map the continuum onto a finite segment of the real line.
> In m6511 Russell Standish describes how a measure can be applied to the 
> naturals which can't be converted into a probability. I must say, I'm not 
> completely sure what that would be good for.

I think it still makes sense to take limits over the integers.
The fraction of integers less than n that is prime has a limit of 0
as n goes to infinity.  The fraction that are even has a limit of 1/2.
And so on.
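
As a quick numerical illustration (an added sketch, not part of Hal's
message): counting the integers below n, the even fraction stays near 1/2
while the prime fraction keeps shrinking, roughly like 1/ln(n).

def prime_count(n):
    """Count the primes below n with a simple sieve of Eratosthenes."""
    sieve = [False, False] + [True] * (n - 2)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

for n in (100, 10_000, 1_000_000):
    print(n, prime_count(n) / n, ((n - 1) // 2) / n)
# prime fraction: 0.25, 0.1229, 0.078498; even fraction: ~0.5 throughout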

When you apply a measure to the whole real line, it has to be non-uniform
and has to go asymptotically to zero as you go out to infinity.
This happens implicitly when you map it to (0,1) even before you put
a measure on that segment.  The same thing can be done to integers.
The universal distribution assigns probability to every integer such
that they all add to 1.  The probability of an integer is based on the
length of the smallest program in a given Universal Turing Machine which
outputs that integer.  Specifically it equals the sum of 1/2^l where
l is the length of each program that outputs the integer in question.
Generally this will give higher measure to smaller numbers, although a few
big numbers will have relatively high measure if they have small programs
(i.e. if they are "simple").  Of course this measure is non-uniform and
goes asymptotically to zero, as any probability measure must.

One problem with the UD is that the probability that an integer is even
is not 1/2, and that it is prime is not zero.  Probabilities in general
will not equal those defined based on limits as in the earlier paragraph.
It's not clear which is the correct one to use.

Going back to Alastair's example, suppose we lived in a spatially infinite
universe, Tegmark's "level 1" multiverse.  Of course our entire Hubble
bubble is replicated an infinite number of times, to any desired degree
of precision.  Hence we have an infinite number of counterparts.

Do you see a problem in drawing probabilistic conclusions from this?
Would it make a difference if physics were ultimately discrete and all
spatial positions could be described as integers, versus ultimately
continuous, requiring real numbers to describe positions?

Note that in this case we can't really use the UD or a line-segment
measure because there is no natural starting point which distinguishes the
"origin" of space.  We can't have a non-uniform measure in a homogeneous
space, unless we just pick an origin arbitrarily.  So in this case the
probability-limit concept seems most appropriate.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-26 Thread Russell Standish
On Thu, May 26, 2005 at 11:20:35AM +0100, Patrick Leahy wrote:
> 
> A measure like this works for the continuum but not for the naturals 
> because you can map the continuum onto a finite segment of the real line.
> In m6511 Russell Standish describes how a measure can be applied to the 
> naturals which can't be converted into a probability. I must say, I'm not 
> completely sure what that would be good for.
> 
> Paddy Leahy

Umm, doomsday argument reasoning when the total number of people who
ever will have lived is infinite (but countable). Explain more later,
if interested.

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish          Phone 8308 3119 (mobile)
Mathematics                      0425 253119 (")
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02



pgpShn8j0UGVW.pgp
Description: PGP signature


Re: White Rabbit vs. Tegmark

2005-05-26 Thread Patrick Leahy


On Thu, 26 May 2005, Alastair Malcolm wrote:

An example occurs which might be of help. Let us say that the physics of 
the universe is such that in the Milky Way galaxy, carbon-based SAS's 
outnumber silicon-based SAS's by a trillion to one. Wouldn't we say that 
the inhabitants of that galaxy are more likely to find themselves as 
carbon-based? Now extrapolate this to any large, finite number of 
galaxies. The same likelihood will pertain. Now surely all the 
statistics don't just go out of the window if the universe happens to be 
infinite rather than large and finite?


Alastair


Well, it just does, for countable sets.  This is what Cantor showed, and 
Lewis explains in his book. Cantor defines "same size" as a 1-to-1 
pairing. Hence as there are infinite primes and infinite non-primes there 
are the same number (cardinality) of them:


(1,3), (2,4), (3,6), (5,8), (7,9), (11,12), (13,14), (17,15), (19,16) etc 
and so ad infinitum


You might say there are obviously "more" non-primes. This means that if 
you list the numbers in numerical sequence, you get fewer primes than 
non-primes in any finite sequence except a few very short ones. But in 
another sequence the answer is different:


(1,2,4) (3,5,6) (7,11,8) (13,17,9) etc ad infinitum.

In this infinite sequence, each triple has two primes and only one 
non-prime. Hence there seem to be more primes than non-primes!
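
(An added illustrative sketch, not Paddy's own: the two orderings can be
generated mechanically, showing that the apparent abundance of primes versus
non-primes depends entirely on the interleaving chosen.)

from itertools import count, islice

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def primes():
    return (k for k in count(1) if is_prime(k))

def non_primes():
    return (k for k in count(1) if not is_prime(k))

# Ordering 1: the nth prime paired with the nth non-prime -- a 1-to-1 pairing,
# which is Cantor's criterion for the two sets having the same cardinality.
print(list(islice(zip(primes(), non_primes()), 8)))

# Ordering 2: two primes grouped with each non-prime -- in every finite prefix
# of this arrangement, primes "outnumber" non-primes two to one.
p, q = primes(), non_primes()
print([(next(p), next(p), next(q)) for _ in range(8)])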


For the continuum you can restore order by specifying a measure which just 
*defines* what fraction of real numbers between 0 & 1 you consider to lie 
in any interval. For instance the obvious uniform measure is that there 
are the same number between 0.1 and 0.2 as between 0.8 and 0.9 etc. 
Why pick any other measure? Well, suppose y = x^2. Then y is also between 
0 and 1. But if we pick a uniform measure for x, the measure on y is 
non-uniform (y is more likely to be less than 0.5). If you pick a uniform 
measure on y, then x = sqrt(y) also has a non-uniform measure (more likely 
to be > 0.5).
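
(Again a quick numerical check, added here rather than Paddy's own: drawing x
uniformly from (0,1), y = x^2 is indeed non-uniform, with
P(y < 0.5) = P(x < sqrt(0.5)), about 0.707.)

import random

ys = [random.random() ** 2 for _ in range(100_000)]
print(sum(y < 0.5 for y in ys) / len(ys))   # ~0.707, so y is not uniform on (0,1)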


A measure like this works for the continuum but not for the naturals 
because you can map the continuum onto a finite segment of the real line.
In m6511 Russell Standish describes how a measure can be applied to the 
naturals which can't be converted into a probability. I must say, I'm not 
completely sure what that would be good for.


Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-26 Thread Alastair Malcolm
> - Original Message -
> From: Patrick Leahy <[EMAIL PROTECTED]>
> To: Alastair Malcolm <[EMAIL PROTECTED]>
> Cc: EverythingList 
> Sent: 24 May 2005 22:10
> Subject: Re: White Rabbit vs. Tegmark
> .
>[Patrick:]
> > This is very reminiscent of Lewis' argument. Have you read his book?
IIRC
> > he claims that you can't actually put a measure (he probably said: you
> > can't define probabilities) on a countably infinite set, precisely
because
> > of Cantor's pairing arguments. Which seems plausible to me.
> [Alastair:]
> It seems to depend on whether one can find an intrinsic ordering (or
> something similar), such that relative frequency comes into play (so prime
> numbers *would* be less likely to be hit).

An example occurs which might be of help. Let us say that the physics of the
universe is such that in the Milky Way galaxy, carbon-based SAS's outnumber
silicon-based SAS's by a trillion to one. Wouldn't we say that the
inhabitants of that galaxy are more likely to find themselves as
carbon-based? Now extrapolate this to any large, finite number of galaxies.
The same likelihood will pertain. Now surely all the statistics don't just
go out of the window if the universe happens to be infinite rather than
large and finite?

Alastair




RE: White Rabbit vs. Tegmark

2005-05-25 Thread Lee Corbin
Paddy writes

> Stathis Papaioannou wrote:
> 
> > (b) In the multiverse, those worlds in which it is a frequent occurrence 
> > that the laws of physics are temporarily suspended so that, for example, 
> > talking white rabbits materialise out of thin air, may greatly 
> > predominate. However, it is no surprise that we live in the orderly 
> > world that we do. For in those other worlds, although observers very 
> > much like us may evolve, they will certainly not spend their time 
> > puzzling over the curious absence of white rabbit type phenomena. The 
> > mere fact that we are having this discussion therefore necessitates that 
> > we live in a world where physical laws are never violated, however 
> > unlikely such a world may at first seem. This is the *extreme* anthropic 
> > principle at work.

Might it not also seem more probable (upon some of the hypotheses
being entertained here) that observers *just* like us evolved, and
then just today white rabbits began to appear out of nowhere?

> Good point, this is a fundamental weakness of the AP. If you take it to 
> extremes, we should not be surprised by *anything* because the entire 
> history of our past light-cone to date, down to specific microscopic 
> quantum events, is required in order to account for the fact that you and 
> I are having this particular exchange.

But is "the entire history... down to quantum events"  really
necessary to account for this exchange?  I say this because the
set of all the observer moments *this* seems to require covers
so many possibilities, including schizophrenia, etc. But...
this being so tricky, how may I have misunderstood?

Lee

> To give the AP force, you have to 
> work on the most general possible level (hence it was a big mistake for 
> Barrow & Tipler to restrict it to "carbon-based life forms" in their book, 
> certainly not in line with Brandon Carter's original thought).
> 
> Paddy Leahy



RE: White Rabbit vs. Tegmark

2005-05-25 Thread Jonathan Colvin

>Stathis:  I don't know if you can make a sharp distinction between the 
> really weird universes where observers never evolve and the 
> slightly weird ones where talking white rabbits appear now 
> and then. Consider these two parallel arguments using a 
> version of the anthropic principle:
> 
> (a) In the multiverse, those worlds which have physical laws 
> and constants very different to what we are used to may 
> greatly predominate. However, it is no surprise that we live 
> in the world we do. For in those other worlds, conditions are 
> such that stars and planets could never form, and so 
> observers who are even remotely like us would never have 
> evolved. The mere fact that we are having this discussion 
> therefore necessitates that we live in a world where the 
> physical laws and constants are very close to their present 
> values, however unlikely such a world may at first seem. This 
> is the anthropic principle at work.
> 
> (b) In the multiverse, those worlds in which it is a frequent 
> occurence that the laws of physics are temporarily suspended 
> so that, for example, talking white rabbits materialise out 
> of thin air, may greatly predominate. However, it is no 
> surprise that we live in the orderly world that we do. For in 
> those other worlds, although observers very much like us may 
> evolve, they will certainly not spend their time puzzling 
> over the curious absence of white rabbit type phenomena. The 
> mere fact that we are having this discussion therefore 
> necessitates that we live in a world where physical laws are 
> never violated, however unlikely such a world may at first 
> seem. This is the
> *extreme* anthropic principle at work.
> 
> If there is something wrong with (b), why isn't there also 
> something wrong with (a)?

This is the problem of determining the appropriate "class" of observer we
should count ourselves as being a random selection on. There might indeed be
something wrong with (a); replace "The mere fact that we are having *this*
discussion" with, "The mere fact that we are having *a* discussion" to
obtain a dramatically different observer class. Your formulation of (a)
(*this* discussion) essentially restricts us to being a random selection on
the class of observers with access to internet and email, discoursing on the
"everything" list. Replacing "this" with "a" broadens the class to include
any intelligent entity capable of (and having) a discussion. 

The problem of determining the appropriate class seems a rather intractable
one. Choosing too broad a class can lead to unpleasant consequences such as
the doomsday argument; too narrow a class leads to (b). Mondays, Wednesdays
and Fridays, I believe that my appropriate reference class can be only one:
"Jonathan Colvin" in this particular branch of the MW, since "I" could not
have been anyone else. Weekends, Tuesdays and Thursdays I believe I'm a
random observer on the class of "observers".

Jonathan Colvin




Re: White Rabbit vs. Tegmark

2005-05-25 Thread Alastair Malcolm
- Original Message -
From: Patrick Leahy <[EMAIL PROTECTED]>
To: Alastair Malcolm <[EMAIL PROTECTED]>
Cc: EverythingList 
Sent: 24 May 2005 22:10
Subject: Re: White Rabbit vs. Tegmark
.
.
> This is very reminiscent of Lewis' argument. Have you read his book? IIRC
> he claims that you can't actually put a measure (he probably said: you
> can't define probabilities) on a countably infinite set, precisely because
> of Cantor's pairing arguments. Which seems plausible to me.

It seems to depend on whether one can find an intrinsic ordering (or
something similar), such that relative frequency comes into play (so prime
numbers *would* be less likely to be hit). As implied by my paper this would
suggest a solution to the WR problem, but even if no ordering is possible or
is irrelevant - the simple Cantorian situation - then there would be no WR
problem anyway. (I have read what are hopefully the relevant passages in 'On the
Plurality of Worlds' - I would think you are mainly referring to section
2.5; he doesn't actually mention either 'measure' or 'probability' here as
far as I can see - more like 'outnumber', 'abundance', etc.)

> Lewis also distinguishes between inductive failure and rubbish universes
> as two different objections to his model. I notice that in your articles
> both you and Russell Standish more or less run these together.

Lewis' approach to the inductive failure objection is slightly different,
with the result that he can deploy a separate argument against it. Where he
says

"Why should the reason everyone has to distrust induction seem more
formidable when the risk of error is understood my way: as the existence of
other worlds wherein our counterparts are deceived? It should not." [p117]

... he is basically saying that from a deductive-logic point of view we have
some degree of mistrust of induction anyway, and this will not be affected
whether we consider the possible worlds (where induction fails) to be real
or imaginary.

However, it is the (for me) straightforward 'induction failure' objection -
that the world should in all likelihood become unpredictable from the next
moment on - that I address in my paper (which in many ways more
closely links to Lewis's 'rubbish universe' objection); my mentioning of
'rubbish' is in the different context of *invisible* universes, which is in
the appendix argument concerning the predominance of simpler universes.

Alastair




RE: White Rabbit vs. Tegmark

2005-05-25 Thread Patrick Leahy


On Wed, 25 May 2005, Stathis Papaioannou wrote:


Consider these two parallel arguments using a version of the anthropic 
principle:


(a) In the multiverse, those worlds which have physical laws and 
constants very different to what we are used to may greatly predominate. 
However, it is no surprise that we live in the world we do. For in those 
other worlds, conditions are such that stars and planets could never 
form, and so observers who are even remotely like us would never have 
evolved. The mere fact that we are having this discussion therefore 
necessitates that we live in a world where the physical laws and 
constants are very close to their present values, however unlikely such 
a world may at first seem. This is the anthropic principle at work.


(b) In the multiverse, those worlds in which it is a frequent occurence 
that the laws of physics are temporarily suspended so that, for example, 
talking white rabbits materialise out of thin air, may greatly 
predominate. However, it is no surprise that we live in the orderly 
world that we do. For in those other worlds, although observers very 
much like us may evolve, they will certainly not spend their time 
puzzling over the curious absence of white rabbit type phenomena. The 
mere fact that we are having this discussion therefore necessitates that 
we live in a world where physical laws are never violated, however 
unlikely such a world may at first seem. This is the *extreme* anthropic 
principle at work.


If there is something wrong with (b), why isn't there also something 
wrong with (a)?


--Stathis Papaioannou


Good point, this is a fundamental weakness of the AP. If you take it to 
extremes, we should not be surprised by *anything* because the entire 
history of our past light-cone to date, down to specific microscopic 
quantum events, is required in order to account for the fact that you and 
I are having this particular exchange. To give the AP force, you have to 
work on the most general possible level (hence it was a big mistake for 
Barrow & Tipler to restrict it to "carbon-based life forms" in their book, 
certainly not in line with Brandon Carter's original thought).


Paddy Leahy



RE: White Rabbit vs. Tegmark

2005-05-25 Thread Stathis Papaioannou

Paddy Leahy writes:

Sure enough, you came up with my objection years ago, in the form of the 
"White Rabbit" paradox. Since usage is a bit vague, I'll briefly re-state 
it here. The problem is that worlds which are "law-like", that is which 
behave roughly as if there are physical laws but not exactly, seem to 
vastly outnumber worlds which are strictly "lawful". Hence we would expect 
to see numerous departures from laws of nature of a non-life-threatening 
kind.


This is a different objection to the prediction of a complete failure of 
induction... it's true that stochastic universes with no laws at all (or 
where laws abruptly cease to function) should be vastly more common still, 
but they are not observed due to anthropic selection.


I don't know if you can make a sharp distinction between the really weird 
universes where observers never evolve and the slightly weird ones where 
talking white rabbits appear now and then. Consider these two parallel 
arguments using a version of the anthropic principle:


(a) In the multiverse, those worlds which have physical laws and constants 
very different to what we are used to may greatly predominate. However, it 
is no surprise that we live in the world we do. For in those other worlds, 
conditions are such that stars and planets could never form, and so 
observers who are even remotely like us would never have evolved. The mere 
fact that we are having this discussion therefore necessitates that we live 
in a world where the physical laws and constants are very close to their 
present values, however unlikely such a world may at first seem. This is the 
anthropic principle at work.


(b) In the multiverse, those worlds in which it is a frequent occurrence that 
the laws of physics are temporarily suspended so that, for example, talking 
white rabbits materialise out of thin air, may greatly predominate. However, 
it is no surprise that we live in the orderly world that we do. For in those 
other worlds, although observers very much like us may evolve, they will 
certainly not spend their time puzzling over the curious absence of white 
rabbit type phenomena. The mere fact that we are having this discussion 
therefore necessitates that we live in a world where physical laws are never 
violated, however unlikely such a world may at first seem. This is the 
*extreme* anthropic principle at work.


If there is something wrong with (b), why isn't there also something wrong 
with (a)?


--Stathis Papaioannou





Re: White Rabbit vs. Tegmark

2005-05-24 Thread Patrick Leahy


On Tue, 24 May 2005, Alastair Malcolm wrote:


Perhaps I can throw in a few thoughts here, partly in the hope I may learn
something from possible replies (or lack thereof!).

- Original Message -
From: Patrick Leahy <[EMAIL PROTECTED]>
Sent: 23 May 2005 00:03
.





This is not a defense which Tegmark can make, since he does
require a measure (to give his thesis some anthropic content).


I don't understand this last sentence - why couldn't he use the 'Lewisian
defence' if he wanted - it is the Anthropic Principle (or just logic) that
necessitates SAS's (in a many worlds context): our existence in a world that
is suitable for us is independent of the uncountability or otherwise of the
sets of suitable and unsuitable worlds, it seems to me. (Granted he does use
the 'm' word in talking about level 4 (and other level) universes, but I am
asking why he needs it to provide 'anthropic content'.)


You have to ask what motivates a physicist like Tegmark to propose this 
concept. OK, there are deep metaphysical reasons which favour it, but they 
aren't going to get your paper published in a physics journal. The 
main motive is the Anthropic Principle explanation for alleged fine tuning 
of the fundamental parameters. As Brandon Carter remarks in the original 
AP paper, this implies the existence of an ensemble. Meaning that fine 
tuning only ceases to be a surprise if there are lots of universes, at 
least some of which are congenial/cognizable. But this bare statement is 
not enough to do physics with. Suppose, however, that you can estimate the fraction 
of cognizable worlds with, say the cosmological constant Lambda less than 
its current value. If Lambda is an arbitrary real variable, there are 
continuously many such worlds, so you need a measure to do this. This 
allows a real test of the hypothesis: if Lambda is very much lower than it 
has to be anthropically, there is probably some non-anthropic reason for 
its low value.


(Actually Lambda does seem to be unnecessarily low, but only by one or two 
orders of magnitude).


The point is, without a measure there is no way to make such predictions 
and the AP loses its precarious claim to be scientific.



There are hints that it may be worth exploring fundamentally different
approaches to the White Rabbit problem when we consider that for Cantor the
set of all integers is the same 'size' as that of all the evens (not too
good on its own for deciding whether a randomly selected integer is likely
to come out odd or even); similarly for comparing the set of all reals
between 0 and 1000, and between 0 and 1. The standard response to this is
that one *cannot* select a real (or integer) in such circumstances - but in
the case of many worlds we *do* have a selection (the one we are in now), so
maybe there is more to be said than that of applying the Cantor approach to
real worlds, and also on random selection.


This is very reminiscent of Lewis' argument. Have you read his book? IIRC 
he claims that you can't actually put a measure (he probably said: you 
can't define probabilities) on a countably infinite set, precisely because 
of Cantor's pairing arguments. Which seems plausible to me.


Lewis also distinguishes between inductive failure and rubbish universes 
as two different objections to his model. I notice that in your articles 
both you and Russell Standish more or less run these together.





A final musing on finite formal systems: I have always
considered formal systems to be a provisional 'best guess' (or *maybe* 2nd
best after the informational approach) for exploring the plenitude - but it
occurs to me that non-finitary formal systems (which could inter alia
encompass the reals) may match (say SAS-relevant) finite formal systems in
simplicity terms, if the (infinite-length) axioms themselves could be
algorithmically generated. This would lead to a kind of 'meta-formal-system'
approach. Just a passing thought...

I think this is the kind of trouble you get into with the "mathematical 
structure" = formal system approach. If you just take the structure as 
mathematical objects, you are in much better shape. For instance, although 
there are aleph-null theorems in integer arithmetic, and a higher order of 
unprovable statements, you can just generate the integers with a program a 
few bits long. And the integers are the complete set of objects in the 
field of integer arithmetic. Similarly for the real numbers: if you just 
want to generate them all, draw a line (or postulate the complete set of 
infinite-length bitstrings). No need to worry about whether individual 
ones are computable or not.


Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-24 Thread "Hal Finney"
Alastair Malcolm writes:
> I don't understand this last sentence - why couldn't he use the 'Lewisian
> defence' if he wanted - it is the Anthropic Principle (or just logic) that
> necessitates SAS's (in a many worlds context): our existence in a world that
> is suitable for us is independent of the uncountability or otherwise of the
> sets of suitable and unsuitable worlds, it seems to me. (Granted he does use
> the 'm' word in talking about level 4 (and other level) universes, but I am
> asking why he needs it to provide 'anthropic content'.)

I think the problem is that we can exist even in worlds that are not
particularly suitable, certainly worlds that are not as simple and
regular as our own.  Imagine a world in which a human being just pops
into existence but all around him is chaos.  There would be many more
such worlds than worlds where the human evolved in a sensible way.

Likewise we can't derive Occam's Razor or even the reasonableness of
induction (that the future will be like the past) without some kind of
measure that favors simple universes.  We could imagine a world in which
rabbits suddenly start to fly; or one in which horses suddenly start
to fly; or one in which trees suddenly start to fly; etc.  There are
apparently many more such worlds than ones which go on as they always
have.  So again without a measure that somehow favors the predictable
ones, we can't explain why induction continues to work.


> There are hints that it may be worth exploring fundamentally different
> approaches to the White Rabbit problem when we consider that for Cantor the
> set of all integers is the same 'size' as that of all the evens (not too
> good on its own for deciding whether a randomly selected integer is likely
> to come out odd or even); similarly for comparing the set of all reals
> between 0 and 1000, and between 0 and 1. The standard response to this is
> that one *cannot* select a real (or integer) in such circumstances - but in
> the case of many worlds we *do* have a selection (the one we are in now), so
> maybe there is more to be said than that of applying the Cantor approach to
> real worlds, and also on random selection.

I agree that it is hard to see how we could be experiencing one world
out of an infinite number of them, if they all had a uniform measure,
just as we can't pick a truly random integer.  Two solutions are, first,
to assume that worlds have non-uniform measure, so that each possible
world has a non-infinitesimal measure.  Or second, even if worlds do have
infinitesimal measure, observers can be considered as finite machines
which each span an infinite number of worlds.  Since an observer is only
finite, he can occupy a fraction of the multiverse which has non-zero
measure even if the worlds themselves have zero measure.
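
To pin down this last point with a toy formalisation of my own (not anything from the original post): take worlds to be infinite bit strings and mu the uniform 'coin-flipping' measure on them; then a fully specified world has measure zero, while the set of worlds compatible with a finite observer description has strictly positive measure.

```latex
% Sketch under assumed conventions: worlds = infinite bit strings,
% \mu = the uniform ("coin-flipping") measure on them.
\[
  \mu(\{x\}) \;=\; \lim_{n\to\infty} 2^{-n} \;=\; 0
  \qquad \text{(a single, fully specified world has measure zero)}
\]
\[
  \mu\bigl([s]\bigr) \;=\; 2^{-|s|} \;>\; 0,
  \qquad [s] \;=\; \{\, x : x \text{ extends the finite string } s \,\}
\]
% A finite observer fixes only some finite description s, so the set of
% worlds containing that observer has non-zero measure even though each
% individual world has measure zero.
```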


> I use the simple 'limit to infinity' approach to provide a potential
> solution to the WR problem (see appendix of
> http://www.physica.freeserve.co.uk/pa01.htm) - Russell's paper is not
> too dissimilar in this area, I think. This approach seems to cover at least
> the 'countable' region (in Cantorian terms), and also addresses the above
> problems (ie odd/even type questions etc). The key point in my philosophy
> paper is that it is mathematics (and/or information theory) that is more
> likely to map the objective distribution of types of worlds, compared to the
> particular anthropic intuition that is implied by the WR challenge.

I should note that I wrote the above comments before reading your paper,
and I see that you are familiar with the problems of chaotic universes and
induction failure that I mentioned.

Your solution is essentially the one we often discuss here, which is that
universes with shorter descriptions should be of higher measure than
those with longer descriptions, simply because a higher percentage of
infinite strings have a particular short-string prefix than a particular
long-string prefix.  There are various potential objections to this but
it is an attractive principle and seems to get us a long way towards
where we want to go.
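
As a quick check on the arithmetic behind that prefix argument, here is a minimal sketch (the function and example strings are my own, not anything from the thread), assuming the uniform measure on infinite bit strings: the set of strings starting with a given prefix has measure 2^-length, so a shorter description carries exponentially more weight.

```python
# Minimal sketch (assumed setup, not from the original posts): under the
# uniform measure on infinite bit strings, the set of strings beginning
# with a given prefix p has measure 2**-len(p).

def prefix_measure(prefix: str) -> float:
    """Measure of the set of infinite bit strings starting with `prefix`."""
    assert set(prefix) <= {"0", "1"}
    return 2.0 ** -len(prefix)

if __name__ == "__main__":
    short, long_ = "0110", "01101110010110101101"
    print(prefix_measure(short))   # 0.0625
    print(prefix_measure(long_))   # ~9.5e-07
    # The ratio shows how strongly the measure favours the shorter
    # description: 2**(len(long_) - len(short)) = 65536.
    print(prefix_measure(short) / prefix_measure(long_))
```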


> A final musing on finite formal systems: I have always
> considered formal systems to be a provisional 'best guess' (or *maybe* 2nd
> best after the informational approach) for exploring the plenitude - but it
> occurs to me that non-finitary formal systems (which could inter alia
> encompass the reals) may match (say SAS-relevant) finite formal systems in
> simplicity terms, if the (infinite-length) axioms themselves could be
> algorithmically generated. This would lead to a kind of 'meta-formal-system'
> approach. Just a passing thought...

My guess is that any such system, where infinite axioms could be generated
from a finite starting point, would be equivalent to just using a finite
axiom set.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-24 Thread Alastair Malcolm
Perhaps I can throw in a few thoughts here, partly in the hope I may learn
something from possible replies (or lack thereof!).

- Original Message -
From: Patrick Leahy <[EMAIL PROTECTED]>
Sent: 23 May 2005 00:03
.
.
> A very similar argument ("rubbish universes") was put forward long ago
> against David Lewis's modal realism, and is discussed in his "On the
> plurality of worlds". As I understand it, Lewis's defence was that there
> is no "measure" in his concept of "possible worlds", so it is not
> meaningful to make statements about which kinds of universe are "more
> likely" (given that there is an infinity of both lawful and law-like
> worlds). This is not a defense which Tegmark can make, since he does
> require a measure (to give his thesis some anthropic content).

I don't understand this last sentence - why couldn't he use the 'Lewisian
defence' if he wanted - it is the Anthropic Principle (or just logic) that
necessitates SAS's (in a many worlds context): our existence in a world that
is suitable for us is independent of the uncountability or otherwise of the
sets of suitable and unsuitable worlds, it seems to me. (Granted he does use
the 'm' word in talking about level 4 (and other level) universes, but I am
asking why he needs it to provide 'anthropic content'.)

There are hints that it may be worth exploring fundamentally different
approaches to the White Rabbit problem when we consider that for Cantor the
set of all integers is the same 'size' as that of all the evens (not too
good on its own for deciding whether a randomly selected integer is likely
to come out odd or even); similarly for comparing the set of all reals
between 0 and 1000, and between 0 and 1. The standard response to this is
that one *cannot* select a real (or integer) in such circumstances - but in
the case of many worlds we *do* have a selection (the one we are in now), so
maybe there is more to be said than that of applying the Cantor approach to
real worlds, and also on random selection.

I use the simple 'limit to infinity' approach to provide a potential
solution to the WR problem (see appendix of
http://www.physica.freeserve.co.uk/pa01.htm) - Russell's paper is not
too dissimilar in this area, I think. This approach seems to cover at least
the 'countable' region (in Cantorian terms), and also addresses the above
problems (ie odd/even type questions etc). The key point in my philosophy
paper is that it is mathematics (and/or information theory) that is more
likely to map the objective distribution of types of worlds, compared to the
particular anthropic intuition that is implied by the WR challenge.

A final musing on finite formal systems: I have always
considered formal systems to be a provisional 'best guess' (or *maybe* 2nd
best after the informational approach) for exploring the plenitude - but it
occurs to me that non-finitary formal systems (which could inter alia
encompass the reals) may match (say SAS-relevant) finite formal systems in
simplicity terms, if the (infinite-length) axioms themselves could be
algorithmically generated. This would lead to a kind of 'meta-formal-system'
approach. Just a passing thought...

Alastair



RE: White Rabbit vs. Tegmark

2005-05-24 Thread "Hal Finney"
Lee Corbin writes:
> Russell writes
> > You've got me digging out my copy of Kreyszig "Intro to Functional
> > Analysis". It turns out that the set of continuous functions on an
> > interval C[a,b] form a vector space. By application of Zorn's lemma
> > (or equivalently the axiom of choice), every vector space has what is
> > called a Hamel basis, namely a linearly independent countable set B
> > such that every element in the vector space can be expressed as a
> > finite linear combination of elements drawn from the Hamel basis
>
> I can't follow your math, but are you saying the following
> in effect?
>
> Any continuous function on R or C, as we know, can be
> specified by countably many reals R1, R2, R3, ... But
> by a certain mapping trick, I think that I can see how
> this could be reduced to *one* real.  It depends for its 
> functioning---as I think your result above depends---
> on the fact that each real encodes infinite information.

I don't think that is exactly how the result Russell describes works, but
certainly Lee's construction makes his result somewhat less paradoxical.
Indeed, a real number can include the information from any countable
set of reals.

Nevertheless I'd be curious to see an example of this Hamel basis
construction.  Let's consider a simple Euclidean space.  A two dimensional
space is just the Euclidean plane, where every point corresponds to
a pair of real numbers (x, y).

We can generalize this to any number of dimensions, including a countably
infinite number of dimensions.  In that form each point can be expressed
as (x0, x1, x2, x3, ...).  The standard orthonormal basis for this vector
space is b0=(1,0,0,0,...), b1=(0,1,0,0,...), b2=(0,0,1,0,...), and so on.

With such a basis the point I showed can be expressed as x0*b0 + x1*b1 + x2*b2 + ...
I gather from Russell's result that we can create a different, countable
basis such that an arbitrary point can be expressed as only a finite
number of terms.  That is pretty surprising.

I have searched online for such a construction without any luck.
The Wikipedia article, http://en.wikipedia.org/wiki/Hamel_basis has an
example of using a Fourier basis to span functions, which requires an
infinite combination of basis vectors and is therefore not a Hamel basis.
They then remark, "Every Hamel basis of this space is much bigger than
this merely countably infinite set of functions."  That would seem to
imply, contrary to what Russell writes above, that the Hamel basis is
uncountably infinite in size.

In that case the Hamel basis for the infinite dimensional Euclidean space
can simply be the set of all points in the space, so then each point
can be represented as 1 * the appropriate basis vector.  That would be
a disappointingly trivial result.  And it would not shed light on the
original question of proving that an arbitrary continuous function can
be represented by a countably infinite number of bits.

Hal



RE: White Rabbit vs. Tegmark

2005-05-24 Thread Lee Corbin
Russell writes

> You've got me digging out my copy of Kreyszig "Intro to Functional
> Analysis". It turns out that the set of continuous functions on an
> interval C[a,b] form a vector space. By application of Zorn's lemma
> (or equivalently the axiom of choice), every vector space has what is
> called a Hamel basis, namely a linearly independent countable set B
> such that every element in the vector space can be expressed as a
> finite linear combination of elements drawn from the Hamel basis: ie
> 
> \forall x\in V, \exists n\in N, b_i\in B, a_i\in F, i=1, ... n :
>  x = \sum_i^n a_ib_i
> 
> where F is the field (eg real numbers), V the vector space (eg C[a,b]) and B
> the Hamel basis.
> 
> Only a finite number of reals is needed to specify an arbitrary
> continuous function!

I can't follow your math, but are you saying the following
in effect?

Any continuous function on R or C, as we know, can be
specified by countably many reals R1, R2, R3, ... But
by a certain mapping trick, I think that I can see how
this could be reduced to *one* real.  It depends for its 
functioning---as I think your result above depends---
on the fact that each real encodes infinite information.

Suppose that I have a continuous function f that I wish
to encode using one real. I use the trick that shows
that a countable union of countably infinite sets is countable (you
know the one: running back and forth along the diagonals).

Take the digits of R1, and place them in positions
1, 3, 6, 10, 15, 21, ... of the MasterReal, and R2 in positions
2, 4, 7, 11, 16, 22, ... of the MasterReal, R3's digits at
5, 8, 12, 17, 23, ... of the MasterReal, and so on, using
the first free integer position of the gaps that are left
after specification of the positions of the real R(N-1).

So it seems that countably many reals have been packed into
just one. (A slightly more involved example could be produced
for the Complex field.)
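
For what it's worth, here is a small, self-contained illustration of the same back-and-forth interleaving idea; the code is mine rather than Lee's exact construction, it uses the Cantor pairing function for the position bookkeeping (slightly different positions than 1, 3, 6, 10, ..., but the same diagonal trick), and it necessarily works with finite digit prefixes only.

```python
# Illustration (my own, using assumed digit prefixes of pi, e and sqrt(2)):
# digits of several reals are interleaved into one master digit sequence via
# the Cantor pairing function, and recovered again by reusing that function.

def cantor_pair(k: int, n: int) -> int:
    """Position in the master sequence of digit n of real number k."""
    return (k + n) * (k + n + 1) // 2 + k

def pack(digit_strings, length):
    """Interleave the given digit prefixes into a master string."""
    master = {}
    for k, digits in enumerate(digit_strings):
        for n, d in enumerate(digits):
            master[cantor_pair(k, n)] = d
    # Positions reserved for reals not in our finite list print as '?'.
    return "".join(master.get(i, "?") for i in range(length))

def unpack(master, k, n_digits):
    """Recover the first n_digits digits of real number k from the master."""
    return "".join(master[cantor_pair(k, n)] for n in range(n_digits))

if __name__ == "__main__":
    reals = ["1415926535", "7182818284", "4142135623"]
    m = pack(reals, 40)
    print(m)
    print(unpack(m, 0, 4), unpack(m, 1, 4), unpack(m, 2, 4))  # 1415 7182 4142
```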

Lee

> Actually the theory of Fourier series will tell you how to generate
> any Lebesgue integrable function almost everywhere from a countable
> series of cosine functions.
> 
> Cheers
> 
> -- 
> *PS: A number of people ask me about the attachment to my email, which
> is of type "application/pgp-signature". Don't worry, it is not a
> virus. It is an electronic signature, that may be used to verify this
> email came from me if you have PGP or GPG installed. Otherwise, you
> may safely ignore this attachment.
> 
> 
> A/Prof Russell Standish  Phone 8308 3119 (mobile)
> Mathematics  0425 253119 (")
> UNSW SYDNEY 2052   [EMAIL PROTECTED] 
> Australiahttp://parallel.hpc.unsw.edu.au/rks
> International prefix  +612, Interstate prefix 02
> 
> 



Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal

On 24 May 2005, at 00:17, Patrick Leahy wrote:

On Mon, 23 May 2005, Bruno Marchal wrote:



Concerning the white rabbits, I don't see how Tegmark could even address the problem given that it is a measure problem with respect to the many computational histories. I don't even remember if Tegmark is aware of any measure relating the 1-person and 3-person points of view.

Not sure why you say *computational* wrt Tegmark's theory. Nor do I understand exactly what you mean by a measure relating 1-person & 3-person.  

This is not easy to sum up, and is related to my PhD thesis, which is summarized in english in the following papers:

http://iridia.ulb.ac.be/~marchal/publications/CC&Q.pdf
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHAL.pdf

or in links to this list. You can find them in my webpage (URL below).



Tegmark is certainly aware of the need for a measure to allow statements about the probability of finding oneself (1-person pov, OK?) in a universe with certain properties. This is listed in astro-ph/0302131 as a "horrendous" problem to which he tentatively offers what looks suspiciously like Schmidhuber's (or whoever's) Universal Prior as a solution.

It could be a promising heuristic, but it is deeply wrong. I mean that I am myself very suspicious of the claim that the Universal Prior can be used as an explanation per se.

(Of course, this means he tacitly accepts the restriction to computable functions).


You cannot really be tacit about this, if only because you can give them a basic role in more than one way. Tegmark is unclear, at least.


So I don't agree that the problem can't be addressed by Tegmark, although it hasn't been. Unless by "addressed" you mean "solved", in which case I agree!

To address the problem you need to be ontologically clear.

Let's suppose with Wei Dai that a measure can be applied to Tegmark's everything. It certainly can to the set of UTM programs as per Schmidhuber and related proposals.  

Most such proposals are done by people not aware of the 1-3 distinction. In the approach I have developed that difference is crucial.


Obviously it is possible to assign a measure which solves the White Rabbit problem, such as the UP.  But to me this procedure is very suspicious. 

I agree. You can search my discussion with Schmidhuber on this list (search on the name "marchal", not "bruno marchal": it is an old discussion we had some years ago).


We can get whatever answer we like by picking the right measure.  

I mainly agree.


While the UP and similar are presented by their proponents as "natural", my strong suspicion is that if we lived in a universe that was obviously algorithmically very complex, we would see papers arguing for "natural" measures that reward algorithmic complexity. In fact the White Rabbit argument is basically an assertion that such measures *are* natural.  Why one measure rather than another? By the logic of Tegmark's original thesis, we should consider the set of all possible measures over everything. But then we need a measure on the measures, and so ad infinitum.

I mainly agree.

One self-consistent approach is Lewis', i.e. to abandon all talk of measure, all anthropic predictions, and just to speak of possibilities rather than probabilities.  This suited Lewis fine, but greatly undermines the attractiveness of the everything thesis for physicists.

With comp the measure *is* on the *possibilities*, themselves captured by the "maximal consistent extensions" in the sense of the logicians.
I do not have the time to give details now, but in July or August I can give you all the details in case you are interested.



more or less recently in Scientific American. I'm sure Tegmark's approach, which a priori does not presuppose the comp hyp, would benefit from category theory: this one puts structure on the possible sets of mathematical structures. Lawvere rediscovered the Grothendieck toposes by trying (without success) to get the category of all categories. Toposes (or Topoi) are categories formalizing first person universes of mathematical structures. There is a North-Holland book on "Topoi" by Goldblatt which is an excellent introduction to toposes for ... logicians (mhhh ...).

Hope that helps,

Bruno

Not really. I know category theory is a potential route into this, but I haven't seen any definitive statements and from what I've read on this list I don't expect to any time soon. I'm certainly not going to learn category theory myself!


At least you don't need it for reading my work. I have suppressed all need for it because it is a difficult theory for those who do not have a sufficiently "algebraic mind". In the long run I believe it will be inescapable though, if only for learning knot theory, which I have reason to believe is very fundamental for extracting geometry from the UTM introspection (as comp forces us to believe, unless my thesis is wrong somewhere ...).


You overlooked a couple of direct queries to you in my posting:

* You still haven't expl

Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal


On 24 May 2005, at 01:10, Patrick Leahy wrote:



On Mon, 23 May 2005, Hal Finney wrote:

I've overlooked until now the fact that mathematical physics restricts
itself to (almost-everywhere) differentiable functions of the continuum.
What is the cardinality of the set of such functions? I rather suspect
that they are denumerable, hence exactly representable by UTM programs.
Perhaps this is what Russell Standish meant.


The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable, functions
have cardinality no more than c.



Oops, mea culpa. I said that wrong. What I meant was, what is the 
cardinality of the data needed to specify *one* continuous function of 
the continuum. E.g. for constant functions it is blatantly aleph-null. 
Similarly for any function expressible as a finite-length formula in 
which some terms stand for reals.






You reassure me a little bit ;)

PS I will answer your other post asap.

bruno

http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-24 Thread Bruno Marchal
Remember that Wolfram assumes a 1-1 correspondence between 
consciousness and physical activity, which, like you, I have refuted (or 
claim to have refuted, if you prefer).
The comp hyp predicts that physical laws must be as complex as the solution 
of the measure problem. In that sense, the apparent "simplicity" of the 
currently known physical laws is mysterious and needs to be explained 
(except that QM does also predict some non-computational observations, 
like the spin-up of particles in superposition states up + down).


Bruno


On 23 May 2005, at 23:59, Hal Finney wrote:


Besides, it's not all that clear that our own universe is as simple as
it should be.  CA systems like Conway's Life allow for computation and
might even allow for the evolution of intelligence, but our universe's
rules are apparently far more complex.  Wolfram studied a variety of
simple computational systems and estimated that from 1/100 to 1/10 of
them were able to maintain stable structures with interesting behavior
(like Life).  These tentative results suggest that it shouldn't take
all that much law to create life, not as much as we see in this universe.


I take from this a prediction of the all-universe hypothesis to be that
it will turn out either that our universe is a lot simpler than we think,
or else that these very simple universes actually won't allow the creation
of stable, living beings.  That's not vacuous, although it's not clear
how long it will be before we are in a position to refute it.


I've overlooked until now the fact that mathematical physics restricts
itself to (almost-everywhere) differentiable functions of the continuum.
What is the cardinality of the set of such functions? I rather suspect
that they are denumerable, hence exactly representable by UTM programs.
Perhaps this is what Russell Standish meant.


The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable, functions
have cardinality no more than c.

I must insist though, that there exist mathematical objects in platonia
which require c bits to describe (and some which require more), and hence
can't be represented either by a UTM program or by the output of a UTM.
Hence Tegmark's original everything is bigger than Schmidhuber's.  But
these structures are so arbitrary it is hard to imagine SAS in them, so
maybe it makes no anthropic difference.


Whether Tegmark had those structures in mind or not, we can certainly
consider such an ensemble - the name is not important.  I posted last
Monday a summary of a paper by Frank Tipler which proposed that in fact
our universe's laws do require c bits to describe them, and a lot of
other crazy ideas as well,
http://www.iop.org/EJ/abstract/0034-4885/68/4/R04 .  I don't think it
was particularly convincing, but it did offer a way of thinking about
infinitely complicated natural laws.  One simple example would be the fine
structure constant, which might turn out to be an uncomputable number.
That wouldn't be inconsistent with our existence, but it is hard to see
how our being here could depend on such a property.

http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-24 Thread Russell Standish
On Mon, May 23, 2005 at 06:03:32PM -0700, "Hal Finney" wrote:
> Paddy Leahy writes:
> > Oops, mea culpa. I said that wrong. What I meant was, what is the 
> > cardinality of the data needed to specify *one* continuous function of the 
> > continuum. E.g. for constant functions it is blatantly aleph-null. 
> > Similarly for any function expressible as a finite-length formula in which 
> > some terms stand for reals.
> 
> I think it's somewhat nonstandard to ask for the "cardinality of the
> data" needed to specify an object.  Usually we ask for the cardinality
> of some set of objects.
> 
> The cardinality of the reals is c.  But the cardinality of the data
> needed to specify a particular real is no more than aleph-null (and
> possibly quite a bit less!).
> 
> In the same way, the cardinality of the set of continuous functions
> is c.  But the cardinality of the data to specify a particular
> continuous function is no more than aleph null.  At least for infinitely
> differentiable ones, you can do as Russell suggests and represent it as
> a Taylor series, which is a countable set of real numbers and can be
> expressed via a countable number of bits.  I'm not sure how to extend
> this result to continuous but non-differentiable functions but I'm pretty
> sure the same thing applies.
> 
> Hal Finney

You've got me digging out my copy of Kreyszig "Intro to Functional
Analysis". It turns out that the set of continuous functions on an
interval C[a,b] form a vector space. By application of Zorn's lemma
(or equivalently the axiom of choice), every vector space has what is
called a Hamel basis, namely a linearly independent countable set B
such that every element in the vector space can be expressed as a
finite linear combination of elements drawn from the Hamel basis: ie

\forall x\in V, \exists n\in N, b_i\in B, a_i\in F, i=1, ... n :
 x = \sum_i^n a_ib_i

where F is the field (eg real numbers), V the vector space (eg C[a,b]) and B
the Hamel basis.

Only a finite number of reals is needed to specify an arbitrary
continuous function!

Actually the theory of Fourier series will tell you how to generate
any Lebesgue integrable function almost everywhere from a countable
series of cosine functions.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish  Phone 8308 3119 (mobile)
Mathematics0425 253119 (")
UNSW SYDNEY 2052 [EMAIL PROTECTED] 
Australiahttp://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: White Rabbit vs. Tegmark

2005-05-23 Thread "Hal Finney"
Paddy Leahy writes:
> Oops, mea culpa. I said that wrong. What I meant was, what is the 
> cardinality of the data needed to specify *one* continuous function of the 
> continuum. E.g. for constant functions it is blatantly aleph-null. 
> Similarly for any function expressible as a finite-length formula in which 
> some terms stand for reals.

I think it's somewhat nonstandard to ask for the "cardinality of the
data" needed to specify an object.  Usually we ask for the cardinality
of some set of objects.

The cardinality of the reals is c.  But the cardinality of the data
needed to specify a particular real is no more than aleph-null (and
possibly quite a bit less!).

In the same way, the cardinality of the set of continuous functions
is c.  But the cardinality of the data to specify a particular
continuous function is no more than aleph null.  At least for infinitely
differentiable ones, you can do as Russell suggests and represent it as
a Taylor series, which is a countable set of real numbers and can be
expressed via a countable number of bits.  I'm not sure how to extend
this result to continuous but non-differentiable functions but I'm pretty
sure the same thing applies.
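
A tiny sketch of that countable-data point, under assumptions of my own choosing (exp(x) expanded about 0, a function and expansion point not taken from the thread): the countable list of Taylor coefficients determines the function, in the sense that partial sums built from ever more of that list converge to its values.

```python
# Sketch (my own example, exp about 0): a countable list of Taylor
# coefficients is enough data to recover the function's values to any
# desired precision via partial sums.

import math

def taylor_coefficients_exp(n_terms):
    """First n_terms Taylor coefficients of exp(x) about 0, i.e. 1/k!."""
    return [1.0 / math.factorial(k) for k in range(n_terms)]

def evaluate(coeffs, x):
    """Evaluate the truncated Taylor series sum_k coeffs[k] * x**k."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

if __name__ == "__main__":
    x = 1.5
    for n in (5, 10, 20):
        approx = evaluate(taylor_coefficients_exp(n), x)
        print(n, approx, abs(approx - math.exp(x)))
    # The error shrinks towards zero as more of the (countable) coefficient
    # list is used, so that countable data pins the function down.
```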

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Russell Standish
On Mon, May 23, 2005 at 11:17:04PM +0100, Patrick Leahy wrote:
> And another mathematical query for you or anyone on the list:
> 
> I've overlooked until now the fact that mathematical physics restricts 
> itself to (almost-everywhere) differentiable functions of the continuum. 
> What is the cardinality of the set of such functions? I rather suspect 
> that they are denumerable, hence exactly representable by UTM programs.
> Perhaps this is what Russell Standish meant.
> 

I noticed a couple of people have already responded to this with the
obvious, but you can consider the set of analytic functions (which
tend to be popular with physicists), which are representable as a
Taylor series, so can be represented by a countable set of real
coefficients. If you further restrict the coefficients to be
computable (namely the limit of some well defined series) then the
cardinality of your set is countable.

Not sure if this observation translates to the bigger set of almost
everywhere differentiable functions. Also, can one create a consistent
theory of differential calculus based on a field of computable numbers
(it must be a field, because it is bigger than the set of rationals).

The reason I persist in this notion is that the set of all
descriptions has the interesting property of having zero
information. This set does have cardinality c, however.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish  Phone 8308 3119 (mobile)
Mathematics0425 253119 (")
UNSW SYDNEY 2052 [EMAIL PROTECTED] 
Australiahttp://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: White Rabbit vs. Tegmark

2005-05-23 Thread Patrick Leahy


On Mon, 23 May 2005, Hal Finney wrote:


I've overlooked until now the fact that mathematical physics restricts
itself to (almost-everywhere) differentiable functions of the continuum.
What is the cardinality of the set of such functions? I rather suspect
that they are denumerable, hence exactly representable by UTM programs.
Perhaps this is what Russell Standish meant.


The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable, functions
have cardinality no more than c.



Oops, mea culpa. I said that wrong. What I meant was, what is the 
cardinality of the data needed to specify *one* continuous function of the 
continuum. E.g. for constant functions it is blatantly aleph-null. 
Similarly for any function expressible as a finite-length formula in which 
some terms stand for reals.




Re: White Rabbit vs. Tegmark

2005-05-23 Thread "Hal Finney"
Paddy Leahy writes:
> Let's suppose with Wei Dai that a measure can be applied to Tegmark's 
> everything. It certainly can to the set of UTM programs as per Schmidhuber 
> and related proposals.  Obviously it is possible to assign a measure which 
> solves the White Rabbit problem, such as the UP.  But to me this procedure 
> is very suspicious.  We can get whatever answer we like by picking the 
> right measure.  While the UP and similar are presented by their proponents 
> as "natural", my strong suspicion is that if we lived in a universe that 
> was obviously algorithmically very complex, we would see papers arguing 
> for "natural" measures that reward algorithmic complexity. In fact the 
> White Rabbit argument is basically an assertion that such measures *are* 
> natural.  Why one measure rather than another? By the logic of Tegmark's 
> original thesis, we should consider the set of all possible measures over 
> everything. But then we need a measure on the measures, and so ad 
> infinitum.

I agree that this is a potential problem and an area where more work
is needed.  We do know that the universal distribution has certain
nice properties that make it stand out, that algorithmic complexity
is asymptotically unique up to a constant, and similar results which
suggest that we are not totally off base in granting these measures the
power to determine which physical realities we are likely to experience.
But certainly the argument is far from iron-clad and it's not clear how
well the whole thing is grounded.  I hope that in the future we will
have a better understanding of these issues.

I don't agree however that we are attracted to simplicity-favoring
measures merely by virtue of our particular circumstances.  The universal
distribution was invented decades ago as a mathematical object of study,
and Chaitin's work on algorithmic complexity is likewise an example of
pure math.  These results can be used (loosely) to explain and justify
the success of Occam's Razor, and with more difficulty to explain why
the universe is as we see it, but that's not where they came from.

Besides, it's not all that clear that our own universe is as simple as
it should be.  CA systems like Conway's Life allow for computation and
might even allow for the evolution of intelligence, but our universe's
rules are apparently far more complex.  Wolfram studied a variety of
simple computational systems and estimated that from 1/100 to 1/10 of
them were able to maintain stable structures with interesting behavior
(like Life).  These tentative results suggest that it shouldn't take
all that much law to create life, not as much as we see in this universe.
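
As a rough illustration of how little 'law' such a system needs (the grid size and starting pattern below are arbitrary choices of mine, not anything from Wolfram's survey), a single update step of Conway's Life fits in a few lines:

```python
# Minimal sketch: one generation of Conway's Life on a small toroidal grid.
# The glider pattern and grid size are illustrative choices only.

def life_step(grid):
    """Apply the Life rule once; grid is a list of lists of 0/1."""
    rows, cols = len(grid), len(grid[0])
    def neighbours(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and neighbours(r, c) in (2, 3))
             or (not grid[r][c] and neighbours(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

if __name__ == "__main__":
    g = [[0] * 8 for _ in range(8)]
    for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:  # a glider
        g[r][c] = 1
    for _ in range(4):
        g = life_step(g)   # after 4 steps the glider has moved one cell diagonally
    print("\n".join("".join(".#"[v] for v in row) for row in g))
```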

I take from this a prediction of the all-universe hypothesis to be that
it will turn out either that our universe is a lot simpler than we think,
or else that these very simple universes actually won't allow the creation
of stable, living beings.  That's not vacuous, although it's not clear
how long it will be before we are in a position to refute it.

> I've overlooked until now the fact that mathematical physics restricts 
> itself to (almost-everywhere) differentiable functions of the continuum. 
> What is the cardinality of the set of such functions? I rather suspect 
> that they are denumerable, hence exactly representable by UTM programs.
> Perhaps this is what Russell Standish meant.

The cardinality of such functions is c, the same as the continuum.
The existence of the constant functions alone shows that it is at least c,
and my understanding is that continuous, let alone differentiable, functions
have cardinality no more than c.

> I must insist though, that there exist mathematical objects in platonia 
> which require c bits to describe (and some which require more), and hence 
> can't be represented either by a UTM program or by the output of a UTM.
> Hence Tegmark's original everything is bigger than Schmidhuber's.  But 
> these structures are so arbitrary it is hard to imagine SAS in them, so 
> maybe it makes no anthropic difference.

Whether Tegmark had those structures in mind or not, we can certainly
consider such an ensemble - the name is not important.  I posted last
Monday a summary of a paper by Frank Tipler which proposed that in fact
our universe's laws do require c bits to describe them, and a lot of
other crazy ideas as well,
http://www.iop.org/EJ/abstract/0034-4885/68/4/R04 .  I don't think it
was particularly convincing, but it did offer a way of thinking about
infinitely complicated natural laws.  One simple example would be the fine
structure constant, which might turn out to be an uncomputable number.
That wouldn't be inconsistent with our existence, but it is hard to see
how our being here could depend on such a property.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Patrick Leahy


On Mon, 23 May 2005, Bruno Marchal wrote:



Concerning the white rabbits, I don't see how Tegmark could even address the 
problem given that it is a measure problem with respect to the many 
computational histories. I don't even remember if Tegmark is aware of any 
measure relating the 1-person and 3-person points of view.


Not sure why you say *computational* wrt Tegmark's theory. Nor do I 
understand exactly what you mean by a measure relating 1-person & 
3-person.  Tegmark is certainly aware of the need for a measure to allow 
statements about the probability of finding oneself (1-person pov, OK?) in 
a universe with certain properties. This is listed in astro-ph/0302131 as 
a "horrendous" problem to which he tentatively offers what looks 
suspiciously like Schmidhuber's (or whoever's) Universal Prior as a 
solution.


(Of course, this means he tacitly accepts the restriction to computable 
functions).


So I don't agree that the problem can't be addressed by Tegmark, although 
it hasn't been. Unless by "addressed" you mean "solved", in which case I 
agree!


Let's suppose with Wei Dai that a measure can be applied to Tegmark's 
everything. It certainly can to the set of UTM programs as per Schmidhuber 
and related proposals.  Obviously it is possible to assign a measure which 
solves the White Rabbit problem, such as the UP.  But to me this procedure 
is very suspicious.  We can get whatever answer we like by picking the 
right measure.  While the UP and similar are presented by their proponents 
as "natural", my strong suspicion is that if we lived in a universe that 
was obviously algorithmically very complex, we would see papers arguing 
for "natural" measures that reward algorithmic complexity. In fact the 
White Rabbit argument is basically an assertion that such measures *are* 
natural.  Why one measure rather than another? By the logic of Tegmark's 
original thesis, we should consider the set of all possible measures over 
everything. But then we need a measure on the measures, and so ad 
infinitum.


One self-consistent approach is Lewis', i.e. to abandon all talk of 
measure, all anthropic predictions, and just to speak of possibilities 
rather than probabilities.  This suited Lewis fine, but greatly undermines 
the attractiveness of the everything thesis for physicists.



more or less recently in Scientific American. I'm sure Tegmark's 
approach, which a priori does not presuppose the comp hyp, would benefit from 
category theory: this one puts structure on the possible sets of mathematical 
structures. Lawvere rediscovered the Grothendieck toposes by trying (without 
success) to get the category of all categories. Toposes (or Topoi) are 
categories formalizing first person universes of mathematical structures. 
There is a North-Holland book on "Topoi" by Goldblatt which is an excellent 
introduction to toposes for ... logicians (mhhh ...).


Hope that helps,

Bruno


Not really. I know category theory is a potential route into this, but I 
haven't seen any definitive statements and from what I've read on this list 
I don't expect to any time soon. I'm certainly not going to learn category 
theory myself!


You overlooked a couple of direct queries to you in my posting:

* You still haven't explained why you say his system is "too big 
(inconsistent)". Especially the inconsistent bit. I'm sure a level of 
explanation is possible which doesn't take the whole of category theory 
for granted.  Also, if you have a proof that his system is inconsistent, 
you should publish it.


* Is it correct to say that category theory cannot define "the whole" 
because it is outside the hierarchy of the cardinals?


And another mathematical query for you or anyone on the list:

I've overlooked until now the fact that mathematical physics restricts 
itself to (almost-everywhere) differentiable functions of the continuum. 
What is the cardinality of the set of such functions? I rather suspect 
that they are denumerable, hence exactly representable by UTM programs.

Perhaps this is what Russell Standish meant.

I must insist though, that there exist mathematical objects in platonia 
which require c bits to describe (and some which require more), and hence 
can't be represented either by a UTM program or by the output of a UTM.
Hence Tegmark's original everything is bigger than Schmidhuber's.  But 
these structures are so arbitrary it is hard to imagine SAS in them, so 
maybe it makes no anthropic difference.


Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Bruno Marchal

Hi Patrick,

Sorry for having been short, especially on those notions for which some 
background in logic is needed.
Unfortunately I do not really have the time to explain with all the 
nuances needed.


Nevertheless the fact that reals are simpler to axiomatize than natural 
numbers should be a natural idea on the everything list, given that the 
"everything" basic idea is that "taking all objects" is simpler 
than taking some subpart of it. Now, concerning the [natural numbers 
versus real numbers], this has been somewhat formally captured by a 
beautiful theorem by Tarski, which I guessed should be on the net; let me 
see, "google: tarski reals", ok, it gives Wolfram's MathWorld, so look 
here for the theorem I was alluding to:


http://mathworld.wolfram.com/TarskisTheorem.html

It is not so astonishing: the reals were invented to make math easier.


Concerning the white rabbits, I don't see how Tegmark could even 
address the problem given that it is a measure problem with respect to 
the many computational histories. I don't even remember if Tegmark is 
aware of any measure relating the 1-person and 3-person points of view.


Of course I very much like Tegmark's idea that physicalness is a 
special case of mathematicalness, but on the latter he is a little 
naive, as physicists often are when they talk about math. Even 
Einstein was, and that's normal. More common, but also more 
annoying, is that he seems unaware of the mind-body problem.  John 
Archibald Wheeler's "law without law" is quite good too. My favorite 
paper by Tegmark is the one he wrote with Wheeler on Everett more or 
less recently in Scientific American. I'm sure Tegmark's approach, 
which a priori does not presuppose the comp hyp, would benefit from 
category theory: this one puts structure on the possible sets of 
mathematical structures. Lawvere rediscovered the Grothendieck toposes 
by trying (without success) to get the category of all categories. 
Toposes (or Topoi) are categories formalizing first person universes of 
mathematical structures.  There is a North-Holland book on "Topoi" by 
Goldblatt which is an excellent introduction to toposes for ... 
logicians (mhhh ...).


Hope that helps,

Bruno




On 23 May 2005, at 12:51, Patrick Leahy wrote:



Now I'm really confused!

I took Russell to mean that real numbers are excluded from his system 
because they require an infinite number of axioms. In which case his 
system is really quite different from Tegmark's.


But if Bruno is correct and reals only need a finite number of axioms,
then surely Russell is wrong to imply that real-number universes are 
covered by his system.


Sure, they can be modelled to any finite degree of precision, but that 
is not the same thing as actually being included (which requires 
infinite precision). For instance, Duhem pointed out that you can 
devise a Newtonian dynamical system where a particle will go to 
infinity if its starting point is an irrational number, but execute 
closed orbits if its starting point is rational.


On Mon, 23 May 2005, Bruno Marchal wrote (among other things):



On 23 May 2005, at 06:09, Russell Standish wrote:


 Hence my interpretation of Tegmark's assertion is of finite
axiomatic systems, not all mathematical things.



I don't think Tegmark would agree. I agree with you that "the whole 
math" is much too big (inconsistent).


Since Tegmark defines "mathematical structures" as existing if 
self-consistent (following Hilbert), how can his concept be 
inconsistent?
But there may be an inconsistency in (i) asserting the identity of 
isomorphic systems and (ii) claiming that a measure exists, especially 
if you try both at once.




It is mainly from a logician's point of view that Tegmark can hardly be 
convincing. As I have often said, physical reality cannot be a 
mathematical reality *among others*. The relation is more subtle, both 
with and without the comp hyp. I discussed it at length a long 
time ago on this list.
Category theory and logic provide tools for defining big structures, 
but not the whole.


As I understand it, this is because "the whole" is unquantifiably big, 
i.e. outside even the hierarchy of cardinals. Correct?


The David Lewis problem mentioned recently is not even expressible 
in Tegmark's framework.


It might be illuminating if you could explain why not. On the face of 
it, it fits in perfectly well, viz: for any given lawful universe, 
there are infinitely many others in which as well as the observable 
phenomena there exist non-observable "epiphenomenal rubbish". The only 
difference from the White Rabbit problem is the specification that the 
rubbish be strictly non-observable. As a physicist, my reaction is 
that it is then irrelevant so who cares? But this can be fixed by 
making the rubbish perceptible but mostly harmless, i.e. "White 
Rabbits".


Paddy Leahy


http://iridia.ulb.ac.be/~marchal/




Re: White Rabbit vs. Tegmark

2005-05-23 Thread Patrick Leahy


On Sun, 22 May 2005, Hal Finney wrote:


Regarding the nature of Tegmark's mathematical objects, I found some
old discussion on the list, a debate between me and Russell Standish,
in which Russell argued that Tegmark's objects should be understood as
formal systems, while I claimed that they should be seen more as pure
Platonic objects which can only be approximated via axiomatization.





I see that Russell is right, and that Tegmark does identify mathematical
structures with formal systems.  His chart at the first link above shows
"Formal Systems" as the foundation for all mathematical structures.
And the discussion in his paper is entirely in terms of formal systems
and their properties.  He does not seem to consider the implications, if
any, of Godel's theorem.


Actually he does both. Most of the time he implies that universes are 
formal systems, e.g. formal systems can be self-consistent but objects 
just exist: only statements about objects (e.g. theorems) have to be 
consistent. But he also says that *our* universe corresponds to the 
solution of a set of equations, i.e. a mathematical object in Platonia. 
This is definitely muddled thinking.


Specifying a universe by a set of statements about it seems to be a highly 
redundant way to go about things, whether you go for all theorems or the 
much larger class of true statements. (Of course, many statements, 
including nearly all the ones interesting to mathematicians, apply to 
whole classes of objects). Hence despite all the guff about formal 
systems, I think any attempt to make sense of Tegmark's thesis has to
go with the object = universe.

Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-23 Thread Patrick Leahy


Now I'm really confused!

I took Russell to mean that real numbers are excluded from his system 
because they require an infinite number of axioms. In which case his 
system is really quite different from Tegmark's.


But if Bruno is correct and reals only need a finite number of axioms,
then surely Russell is wrong to imply that real-number universes are 
covered by his system.


Sure, they can be modelled to any finite degree of precision, but that is 
not the same thing as actually being included (which requires infinite 
precision). For instance, Duhem pointed out that you can devise a 
Newtonian dynamical system where a particle will go to infinity if its 
starting point is an irrational number, but execute closed orbits if its 
starting point is rational.


On Mon, 23 May 2005, Bruno Marchal wrote (among other things):



On 23 May 2005, at 06:09, Russell Standish wrote:


 Hence my interpretation of Tegmark's assertion is of finite
axiomatic systems, not all mathematical things.



I don't think Tegmark would agree. I agree with you that "the whole math" is 
much too big (inconsistent).


Since Tegmark defines "mathematical structures" as existing if 
self-consistent (following Hilbert), how can his concept be inconsistent?
But there may be an inconsistency in (i) asserting the identity of 
isomorphic systems and (ii) claiming that a measure exists, especially if 
you try both at once.




It is mainly from a logician's point of view that Tegmark can hardly be 
convincing. As I have often said, physical reality cannot be a mathematical 
reality *among others*. The relation is more subtle, both with and without the 
comp hyp. I discussed it at length a long time ago on this list.
Category theory and logic provide tools for defining big structures, but not 
the whole.


As I understand it, this is because "the whole" is unquantifiably big, 
i.e. outside even the hierarchy of cardinals. Correct?


The David Lewis problem mentioned recently is not even 
expressible in Tegmark's framework.


It might be illuminating if you could explain why not. On the face of it, 
it fits in perfectly well, viz: for any given lawful universe, there are 
infinitely many others in which as well as the observable phenomena there 
exist non-observable "epiphenomenal rubbish". The only difference from the 
White Rabbit problem is the specification that the rubbish be strictly 
non-observable. As a physicist, my reaction is that it is then irrelevant 
so who cares? But this can be fixed by making the rubbish perceptible but 
mostly harmless, i.e. "White Rabbits".


Paddy Leahy


Re: White Rabbit vs. Tegmark

2005-05-23 Thread Bruno Marchal


On 23 May 2005, at 06:09, Russell Standish wrote:


On Mon, May 23, 2005 at 04:00:39AM +0100, Patrick Leahy wrote:



Hmm, my lack of a pure maths background may be getting me into trouble 
here. What about real numbers? Do you need an infinite axiomatic system to 
handle them? Because it seems to me that your ensemble of digital strings 
is too small (wrong cardinality?) to handle the set of functions of real 
variables over the continuum.  Certainly this is explicit in Schmidhuber's 
1998 paper.  Not that I would insist that our universe really does involve 
real numbers, but I'm pretty sure that Tegmark would not be happy to 
exclude them from his "all of mathematics".



A finite set of axioms describing the reals does not completely
specify the real numbers, unless they are inconsistent. I'm sure you've
heard it before.


I guess you mean "natural numbers". By a theorem of Tarski there is a 
sense in which the real numbers are far simpler than the 
natural numbers (for example, it took 300 years to prove Fermat's 
formula when the variables refer to natural numbers, but the same 
formula is an easy exercise when the variables refer to real numbers).
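
To make the contrast concrete (standard facts, added purely as 
illustration): over the reals the Fermat equation always has solutions, 
since for any real x, y > 0 and any n one can take

    z = (x^n + y^n)^{1/n},

and by Tarski's theorem the first-order theory of the real ordered field is 
decidable, whereas the first-order theory of the natural numbers with + and 
* is undecidable (Godel, Church).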




The system the axioms do describe can be modelled by a
countable system as well.


Sure.


You could say that it describes describable
functions over describable numbers.



Then you get only the constructive real numbers. This is equivalent to 
the set of total (defined everywhere) computable functions from N to N, 
which is not a recursively enumerable set. People can read the 
diagonalization posts to the everything-list at my URL to understand 
better what happens here.

http://www.escribe.com/science/theory/m3079.html
http://www.escribe.com/science/theory/m3344.html




It may even be the case that you
only have computable functions over computable numbers, and that
describable, uncomputable things cannot be captured by finite
axiomatic systems, but I'm not sure. Juergen Schmidhuber knows more
about such things.


It is a bit too ambiguous.




What I would argue is what use are undescribable things in the
plenitude?


I don't think we can escape them; provably so once the comp hyp is 
assumed.




 Hence my interpretation of Tegmark's assertion is of finite
axiomatic systems, not all mathematical things.



I don't think Tegmark would agree. I agree with you that "the whole 
math" is much too big (inconsistent).


The bigger problem with Tegmark is that he associates the first person with 
its third person description in a 1-1 way (like most Aristotelians). 
But then he should postulate non-comp, and explain the nature of that 
association with a suitable theory of mind (which he does not really 
discuss).


It is mainly from a logician's point of view that Tegmark can hardly be 
convincing. As I have often said, physical reality cannot be a mathematical 
reality *among others*. The relation is more subtle, both with and without 
the comp hyp. I discussed it at length a long time ago on this 
list.
Category theory and logic provide tools for defining big structures, 
but not the whole. The David Lewis problem mentioned recently is not 
even expressible in Tegmark's framework. Schmidhuber takes the right 
ontology, but then completely messes up the "mind-body" problem by 
abstracting completely from the 1/3 distinction.
Tegmark does make a sort of 1/3 distinction (the frog/bird views) but does 
not take it sufficiently seriously.


Bruno




http://iridia.ulb.ac.be/~marchal/





Re: White Rabbit vs. Tegmark

2005-05-22 Thread Russell Standish
On Mon, May 23, 2005 at 04:00:39AM +0100, Patrick Leahy wrote:
> 
> 
> Hmm, my lack of a pure maths background may be getting me into trouble 
> here. What about real numbers? Do you need an infinite axiomatic system to 
> handle them? Because it seems to me that your ensemble of digital strings 
> is too small (wrong cardinality?) to handle the set of functions of real 
> variables over the continuum.  Certainly this is explicit in Schmidhuber's 
> 1998 paper.  Not that I would insist that our universe really does involve 
> real numbers, but I'm pretty sure that Tegmark would not be happy to 
> exclude them from his "all of mathematics".
> 

A finite set of axioms describing the reals does not completely
specify the real numbers, unless they are inconsistent. I'm sure you've
heard it before. The system the axioms do describe can be modelled by a
countable system as well. You could say that it describes describable
functions over describable numbers. It may even be the case that you
only have computable functions over computable numbers, and that
describable, uncomputable things cannot be captured by finite
axiomatic systems, but I'm not sure. Juergen Schmidhuber knows more
about such things.
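
The standard result behind "can be modelled by a countable system as well" 
is the downward Lowenheim-Skolem theorem, stated here only as background:

    T a first-order theory in a countable language with an infinite model
      =>  T also has a countable model.

So no finite (or even recursively enumerable) set of first-order axioms for 
the reals can rule out countable models; the axioms never pin down the 
uncountable continuum itself.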

What I would argue is what use are undescribable things in the
plenitude? Hence my interpretation of Tegmark's assertion is of finite
axiomatic systems, not all mathematical things.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 (")
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: White Rabbit vs. Tegmark

2005-05-22 Thread "Hal Finney"
Regarding the nature of Tegmark's mathematical objects, I found some
old discussion on the list, a debate between me and Russell Standish,
in which Russell argued that Tegmark's objects should be understood as
formal systems, while I claimed that they should be seen more as pure
Platonic objects which can only be approximated via axiomatization.

The discussions can generally be found at
http://www.escribe.com/science/theory/index.html?by=Date&n=25 under
the title "Tegmark's TOE & Cantor's Absolute Infinity".  In particular

http://www.escribe.com/science/theory/m4034.html
http://www.escribe.com/science/theory/m4045.html
http://www.escribe.com/science/theory/m4048.html

In this last message I write:

> I have gone back to Tegmark's paper, which is discussed informally
> at http://www.hep.upenn.edu/~max/toe.html
> and linked from
>
> http://arXiv.org/abs/gr-qc/9704009.
>
> I see that Russell is right, and that Tegmark does identify mathematical
> structures with formal systems.  His chart at the first link above shows
> "Formal Systems" as the foundation for all mathematical structures.
> And the discussion in his paper is entirely in terms of formal systems
> and their properties.  He does not seem to consider the implications if
> any of Godel's theorem.

Note, Tegmark's paper has moved to
http://space.mit.edu/home/tegmark/toe_frames.html .

See also http://www.escribe.com/science/theory/m4038.html where Wei Dai
argues that even unaxiomatizable mathematical objects, even infinite
objects that are too big to be sets, can have a measure meaningfully
applied to them.  However I do not know enough math to fully understand
his proposal.

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-22 Thread "Hal Finney"
Patrick Leahy writes:
> Sure enough, you came up with my objection years ago, in the form of the 
> "White Rabbit" paradox. Since usage is a bit vague, I'll briefly re-state 
> it here. The problem is that worlds which are "law-like", that is which 
> behave roughly as if there are physical laws but not exactly, seem to 
> vastly outnumber worlds which are strictly "lawful". Hence we would expect 
> to see numerous departures from laws of nature of a non-life-threatening 
> kind.

I think the question is whether we can assume the existence of a measure
over the mathematical objects which compose Tegmark's ensemble.  If so
then we can use the same argument as we do for Schmidhuber, namely that
the simpler objects would have greater measure.  Hence we would predict
that our laws of nature would be among the simplest possible in order
to allow for life to exist.

If we assume that a "mathematical object" (never clear what that meant)
corresponds to a formal axiomatic system, then we could use a measure
based on the size of the axiomatic description.  I don't remember now
whether Tegmark considered his mathematical objects to be the same as
formal systems or not.
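
One way to make "a measure based on the size of the axiomatic description" 
concrete is a Schmidhuber-style length prior: weight each candidate system 
by 2^(-L), where L is the bit-length of some fixed encoding of its axioms. 
The Python sketch below is purely illustrative; the encoding and the toy 
systems are assumptions for the example, not anything taken from Tegmark's 
paper.

    # Toy length-based measure over finite axiom systems.
    def description_length_bits(axioms):
        # Naive encoding: total UTF-8 length of the axiom strings, in bits.
        return 8 * sum(len(a.encode("utf-8")) for a in axioms)

    def weight(axioms):
        return 2.0 ** (-description_length_bits(axioms))

    systems = {
        "short system":  ["x+y=y+x"],
        "longer system": ["x+y=y+x", "(x+y)+z=x+(y+z)", "x*1=x"],
    }

    total = sum(weight(a) for a in systems.values())
    for name, axioms in systems.items():
        print(name, weight(axioms) / total)

Under any such weighting almost all of the measure sits on the shortest 
descriptions, which is what drives the prediction that the observed laws 
should be among the simplest compatible with observers.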

Hal Finney



Re: White Rabbit vs. Tegmark

2005-05-22 Thread Patrick Leahy


On Mon, 23 May 2005, Russell Standish wrote:



I think most of us concluded that Tegmark's thesis is somewhat
ambiguous. One "interpretation" of it that both myself and Bruno tend
to make is that it is the set of finite axiomatic systems (finite sets
of axioms, and recursively enumerated theorems). Thus, for example, the
system where the continuum hypothesis is true is a distinct
mathematical system from one where it is false.

Such a collection can be shown to be a subset of the set of
descriptions (what I call the Schmidhuber ensemble in my paper), and
has some fairly natural measures associated with it. As such, the
arguments I make in "Why Occam's razor paper" apply just as much to
Tegmark's ensemble as Schmidhuber's.


Hmm, my lack of a pure maths background may be getting me into trouble 
here. What about real numbers? Do you need an infinite axiomatic system to 
handle them? Because it seems to me that your ensemble of digital strings 
is too small (wrong cardinality?) to handle the set of functions of real 
variables over the continuum.  Certainly this is explicit in Schmidhuber's 
1998 paper.  Not that I would insist that our universe really does involve 
real numbers, but I'm pretty sure that Tegmark would not be happy to 
exclude them from his "all of mathematics".
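
The cardinality worry can be made precise with standard facts (nothing here 
is specific to Russell's or Schmidhuber's construction):

    |{finite bit strings}| = \aleph_0
    |{infinite bit strings}| = 2^{\aleph_0} = \mathfrak{c}
    |{functions R -> R}| = \mathfrak{c}^{\mathfrak{c}} = 2^{\mathfrak{c}} > \mathfrak{c}

so no ensemble of descriptions, finite or countably infinite, can be put in 
one-to-one correspondence with the full set of functions of real variables 
over the continuum.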




Conversely, if you wish to stand on the phrase "all of mathematics
exists" then you will have trouble defining exactly what that means,
let alone defining a measure.



I don't wish to, but this concept has been repeated by Tegmark in several 
well publicised articles (e.g. the Scientific American one).  Again, lack 
of mathematical background forbids me from making definitive claims, but I 
suspect that it could be proved impossible even to define a measure over 
*all* self-consistent mathematical concepts. In which case Lewis was right 
and Tegmark's "level 4 multiverse" is essentially content-free, from the 
point of view of a physicist (as opposed to a logician).



Paddy Leahy



Re: White Rabbit vs. Tegmark

2005-05-22 Thread aet.radal ssg
I would agree with Russell, here. That's what I meant when I said that I didn't 
like Tegmark's mathematical model but I could tolerate it. In the end, it gives 
me what I need in that it supports parallel universes and doesn't threaten E/W, 
etc. At the same time, I don't have a dog in every fight, so until I see 
something about his theory that is simply untenable, I'll let it slide.

- Original Message - 
From: "Russell Standish" 
To: "Patrick Leahy" 
Subject: Re: White Rabbit vs. Tegmark 
Date: Mon, 23 May 2005 09:47:22 +1000 

> 
> On Mon, May 23, 2005 at 12:03:55AM +0100, Patrick Leahy wrote: 
> 
> ... 
> 
> > A very similar argument ("rubbish universes") was put forward 
> > long ago against David Lewis's modal realism, and is discussed in 
> > his "On the plurality of worlds". As I understand it, Lewis's 
> > defence was that there is no "measure" in his concept of 
> > "possible worlds", so it is not meaningful to make statements 
> > about which kinds of universe are "more likely" (given that there 
> > is an infinity of both lawful and law-like worlds). This is not a 
> > defense which Tegmark can make, since he does require a measure 
> > (to give his thesis some anthropic content). 
> > 
> > It seems to me that discussion on this list back in 1999 more or 
> > less concluded that this was a fatal objection to Tegmark's 
> > version of the thesis, although not to some alternatives based 
> > exclusively on UTM programs (e.g. Russell Standish's Occam's 
> > Razor paper). 
> > 
> > Is this a fair summary, or is anyone here prepared to defend 
> > Tegmark's thesis? 
> > 
> > Paddy Leahy 
> > 
> 
> I think most of us concluded that Tegmark's thesis is somewhat 
> ambiguous. One "interpretation" of it that both myself and Bruno tend 
> to make is that it is the set of finite axiomatic systems (finite sets 
> of axioms, and recursively enumerated theorems). Thus, for example, the 
> system where the continuum hypothesis is true is a distinct 
> mathematical system from one where it is false. 
> 
> Such a collection can be shown to be a subset of the set of 
> descriptions (what I call the Schmidhuber ensemble in my paper), and 
> has some fairly natural measures associated with it. As such, the 
> arguments I make in "Why Occam's razor paper" apply just as much to 
> Tegmark's ensemble as Schmidhuber's. 
> 
> Conversely, if you wish to stand on the phrase "all of mathematics 
> exists" then you will have trouble defining exactly what that means, 
> let alone defining a measure. 
> 
> Cheers 
> 
> -- 
> *PS: A number of people ask me about the attachment to my email, which 
> is of type "application/pgp-signature". Don't worry, it is not a 
> virus. It is an electronic signature, that may be used to verify this 
> email came from me if you have PGP or GPG installed. Otherwise, you 
> may safely ignore this attachment. 
> 
>  
> A/Prof Russell Standish Phone 8308 3119 (mobile) 
> Mathematics 0425 253119 (") 
> UNSW SYDNEY 2052 [EMAIL PROTECTED] 
> Australia http://parallel.hpc.unsw.edu.au/rks 
> International prefix +612, Interstate prefix 02 
>  





Re: White Rabbit vs. Tegmark

2005-05-22 Thread Russell Standish
On Mon, May 23, 2005 at 12:03:55AM +0100, Patrick Leahy wrote:

...

> A very similar argument ("rubbish universes") was put forward long ago 
> against David Lewis's modal realism, and is discussed in his "On the 
> plurality of worlds". As I understand it, Lewis's defence was that there 
> is no "measure" in his concept of "possible worlds", so it is not 
> meaningful to make statements about which kinds of universe are "more 
> likely" (given that there is an infinity of both lawful and law-like 
> worlds). This is not a defense which Tegmark can make, since he does 
> require a measure (to give his thesis some anthropic content).
> 
> It seems to me that discussion on this list back in 1999 more or less 
> concluded that this was a fatal objection to Tegmark's version of the 
> thesis, although not to some alternatives based exclusively on UTM 
> programs (e.g. Russell Standish's Occam's Razor paper).
> 
> Is this a fair summary, or is anyone here prepared to defend Tegmark's 
> thesis?
> 
> Paddy Leahy
> 

I think most of us concluded that Tegmark's thesis is somewhat
ambiguous. One "interpretation" of it that both myself and Bruno tend
to make is that it is the set of finite axiomatic systems (finite sets
of axioms, and recursively enumerated theorems). Thus, for example, the
system where the continuum hypothesis is true is a distinct
mathematical system from one where it is false.

Such a collection can be shown to be a subset of the set of
descriptions (what I call the Schmidhuber ensemble in my paper), and
has some fairly natural measures associated with it. As such, the
arguments I make in "Why Occam's razor paper" apply just as much to
Tegmark's ensemble as Schmidhuber's.

Conversely, if you wish to stand on the phrase "all of mathematics
exists" then you will have trouble defining exactly what that means,
let alone defining a measure.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 (")
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: White Rabbit vs. Tegmark

2005-05-22 Thread aet.radal ssg
Without getting into a long harangue, I think that Tegmark is correct, at least 
in part. Briefly, there has to be a reason why these alternate worlds exist. 
I'm referring to the Everett-Wheeler hypothesis and not just wishful thinking. 
Granted, if I remember correctly, Tegmark does deal with the whole issue in 
terms of mathematical models, which I don't really care for, however, I can 
tolerate it. 

As for the issue of measurement, all possible worlds can be viewed as still 
existing, without measurement, in a superpositional state, until they are 
observed (or "measured"). So you still get a "plurality of worlds". 
My question is: how does anyone know whether "law-like" worlds vastly outnumber 
"lawful" ones?

If I have to, I can go look up Tegmark's Scientific American article on it all, 
to get a refresher on it. I'm just hoping I don't have to.

I met Tegmark about a year ago or so. Nice guy. I stumbled into his office, 
since the door was open, and introduced myself, mentioning that I was also an 
"Everett man". "Everett-man?" he replied puzzled. "Yeah, you know - Hugh 
Everett, Everett/Wheeler hypothesis?" at which point he excitedly jumped out of 
his seat and shook my hand, laughing.




----- Original Message - 
From: "Patrick Leahy" 
To: EverythingList 
Subject: White Rabbit vs. Tegmark 
Date: Mon, 23 May 2005 00:03:55 +0100 (BST) 

> 
> 
> I looked into this mailing list because I thought I'd come up with 
> a fairly cogent objection to Max Tegmark's version of the 
> "everything" thesis, i.e. that there is no distinction between 
> physical and mathematical reality... our multiverse is one 
> particular solution to a set of differential equations, not 
> privileged in any way over other solutions to the same equations, 
> solutions to other equations, and indeed any other mathematical 
> construct whatsoever (e.g. outputs of UTMs). 
> 
> Sure enough, you came up with my objection years ago, in the form 
> of the "White Rabbit" paradox. Since usage is a bit vague, I'll 
> briefly re-state it here. The problem is that worlds which are 
> "law-like", that is which behave roughly as if there are physical 
> laws but not exactly, seem to vastly outnumber worlds which are 
> strictly "lawful". Hence we would expect to see numerous departures 
> from laws of nature of a non-life-threating kind. 
> 
> This is a different objection to the prediction of a complete 
> failure of induction... it's true that stochastic universes with no 
> laws at all (or where laws abruptly cease to function) should be 
> vastly more common still, but they are not observed due to 
> anthropic selection. 
> 
> A very similar argument ("rubbish universes") was put forward long 
> ago against David Lewis's modal realism, and is discussed in his 
> "On the plurality of worlds". As I understand it, Lewis's defence 
> was that there is no "measure" in his concept of "possible worlds", 
> so it is not meaningful to make statements about which kinds of 
> universe are "more likely" (given that there is an infinity of both 
> lawful and law-like worlds). This is not a defense which Tegmark 
> can make, since he does require a measure (to give his thesis some 
> anthropic content). 
> 
> It seems to me that discussion on this list back in 1999 more or 
> less concluded that this was a fatal objection to Tegmark's version 
> of the thesis, although not to some alternatives based exclusively 
> on UTM programs (e.g. Russell Standish's Occam's Razor paper). 
> 
> Is this a fair summary, or is anyone here prepared to defend Tegmark's 
> thesis? 
> 
> Paddy Leahy 
> 
> == 
> Dr J. P. Leahy, University of Manchester, 
> Jodrell Bank Observatory, School of Physics & Astronomy, 
> Macclesfield, Cheshire SK11 9DL, UK 
> Tel - +44 1477 572636, Fax - +44 1477 571618 





White Rabbit vs. Tegmark

2005-05-22 Thread Patrick Leahy


I looked into this mailing list because I thought I'd come up with a 
fairly cogent objection to Max Tegmark's version of the "everything" 
thesis, i.e. that there is no distinction between physical and 
mathematical reality... our multiverse is one particular solution to a set 
of differential equations, not privileged in any way over other solutions 
to the same equations, solutions to other equations, and indeed any other 
mathematical construct whatsoever (e.g. outputs of UTMs).


Sure enough, you came up with my objection years ago, in the form of the 
"White Rabbit" paradox. Since usage is a bit vague, I'll briefly re-state 
it here. The problem is that worlds which are "law-like", that is which 
behave roughly as if there are physical laws but not exactly, seem to 
vastly outnumber worlds which are strictly "lawful". Hence we would expect 
to see numerous departures from laws of nature of a non-life-threatening 
kind.


This is a different objection to the prediction of a complete failure of 
induction... it's true that stochastic universes with no laws at all (or 
where laws abruptly cease to function) should be vastly more common still, 
but they are not observed due to anthropic selection.


A very similar argument ("rubbish universes") was put forward long ago 
against David Lewis's modal realism, and is discussed in his "On the 
plurality of worlds". As I understand it, Lewis's defence was that there 
is no "measure" in his concept of "possible worlds", so it is not 
meaningful to make statements about which kinds of universe are "more 
likely" (given that there is an infinity of both lawful and law-like 
worlds). This is not a defense which Tegmark can make, since he does 
require a measure (to give his thesis some anthropic content).


It seems to me that discussion on this list back in 1999 more or less 
concluded that this was a fatal objection to Tegmark's version of the 
thesis, although not to some alternatives based exclusively on UTM 
programs (e.g. Russell Standish's Occam's Razor paper).


Is this a fair summary, or is anyone here prepared to defend Tegmark's 
thesis?


Paddy Leahy

==
Dr J. P. Leahy, University of Manchester,
Jodrell Bank Observatory, School of Physics & Astronomy,
Macclesfield, Cheshire SK11 9DL, UK
Tel - +44 1477 572636, Fax - +44 1477 571618