Re: Free Will Theorem

2005-04-22 Thread Russell Standish
Ah John, if only I could understand what you're saying...

On Fri, Apr 22, 2005 at 11:45:22AM -0400, John M wrote:
 
 - Original Message -
 From: Russell Standish [EMAIL PROTECTED]
 To: John M [EMAIL PROTECTED]
 Cc: Stathis Papaioannou [EMAIL PROTECTED];
 [EMAIL PROTECTED]; everything-list@eskimo.com
 Sent: Tuesday, April 19, 2005 8:09 PM
 Subject: Re: Free Will Theorem
 
 Russell S. writes, in his convoluted way that must be dug out from attachments:
 
 Laplace's daemon is a hypothetical creature that knows the exact state
 of every particle in the universe. In a deterministic universe, the
 daemon could compute the future exactly. Of course the daemon cannot
 possibly exist, any more than omniscient beings. In a quantum world,
 or a Multiverse, such daemons are laughable fantasies. Nevertheless,
 they're often deployed in reductio ad absurdum type arguments to do
 with determinism.
 
 Again the stubborn anthropomorphic one-way thinking about the idea of a
 total determinism. Everything calculated 'in' there is only ONE outcome
 in the world - the essence of the one-way universe's own determinism.
 This was the spirit that made the total greater than the sum of its
 components - the Aris-total of the epistemic level 2500 years ago.
 It is an age-old technique to invent a faulty hypothesis (thought
 experiment, etc.) and on that basis show the 'ad absurdity' of something.
 
 Determinism as I would like 'to speak about it' is the idea that whatever
 happens (the world as process?) originates in happenings - (beware: not a
 cause as in a limited model, but) in unlimited ensembles of happenings all
 over, not limited to the topical etc. boundaries we erect for our chosen
 observations. The happenings include the 'ideational' part of the
 world, which is 'choice-accepting' - consequently not fully predictable.
 As in: endogenously impredicative complexities.
 Anticipatory is not necessarily predictable and (my) deterministic points to
 the other side: not where it goes TO, but comes FROM. Even there it is more
 than we can today encompass (compute?) in full.
 This may be a world-wide applicable principle of the spirit that made its
 minuscule example into QM as the 'uncertainty'.
 Or the cat, or a complementarity.
 Alas, I cannot 'speak about it', because we are not up to such a level. Not
 me, not you, not even the materialistic daemon. We all are rooted in the
 materialistic reductionist models that our neuronal brain can handle - in a
 world of unlimited interconnectedness.
 
 John Mikes
 
 
 

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: follow-up on Holographic principle and MWI

2005-04-22 Thread Russell Standish
On Thu, Apr 21, 2005 at 11:02:12PM -0400, danny mayes wrote:
 Well, as described in the FOR think of the multiverse as a block, made 
 up of different stacks of pictures that comprise individual universes as 
 they move through time.  Now try to adjust that to what is really going 
 on:  space time is expanding out from the Big Bang.  If you could remove 
 yourself from the multiverse and watch it, time would be expanding at an 
 increasing area, just as the spatial dimensions are.  The reason 
 information storage capacity would equal the surface area of a given 
 object is that any object or area is actually existing in all these 
 overlapping timelines, or virtually identical universes.  Therefore,
 if you assume the time-area is expanding at a proportional rate to the
 spatial volume, you would need to divide a cube 10^300 Planck units on a
 side by 10^100 to take out the information that is moving into the

This is very sloppy - if time-area were proportional to volume, then
the divisor would be 10^300. Perhaps you meant proportional to length,
but then I do not see why this should be.
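
As a quick sanity check on the powers of ten here, a minimal sketch in
Python, tracking only exponents, and assuming Danny's later correction
that 10^300 is the cube's volume rather than its side:

side_exp = 100               # side length: 10^100 Planck lengths
volume_exp = 3 * side_exp    # volume:      10^300 Planck volumes
face_exp = 2 * side_exp      # one face:    10^200 Planck areas

# Dividing the volume by the side LENGTH (divisor 10^100) leaves the
# face area 10^200 - the holographic-looking answer Danny is after.
assert volume_exp - side_exp == face_exp

# If "time-area" were instead proportional to the VOLUME, as Danny's
# wording suggests, the divisor would be 10^300, leaving 10^0 = 1.
assert volume_exp - volume_exp == 0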

 volume or area of time, since we lose this information as we are stuck 
 on a solitary time line and losing the multiverse information to 
 decoherence.  This is simply another way of saying we lose the 
 information to the other universes, I'm just explaining why it would be 
 the amount it is through the mental imagery of time expanding to fill a 
 space equivalent to the spatial dimensions.
 

But decoherence increases information; it doesn't lose it.

 Taking a bird's eye view, and watching the cube moving through the 
 multiverse, all the overlapping universes the cube comprises, the cube 
 could store 10^300 bits of information - equal to its volume.  However,
 if you measure the information in any individual universe, you have to
 divide the cube over all the overlapping universes it comprises, or an
 area of time equal to the area of one of its sides (again
 assuming the expansion of time is proportional to the expansion of the
 spatial dimensions.)  This leaves information storage capacity equal to
 the surface area of the object.
 
 I am basically taking the block view of the multiverse seriously, and 
 dividing the information storage capacity by the area of all the stacks 
 of pictures the cube exists on, because we can only measure the 
 information on the one stack that is our universe.  The area of the 
 different stacks can be thought of as an area of time, and would equal 
 one of the spatial areas that comprise the cube if time expansion is 
 proportional to spatial expansion.
 
 This makes sense to me, but then again I am an attorney
 
 Danny Mayes

The only thing that makes sense to me is that maximal decoherence
occurs by arranging observers around the full 4\pi solid angle of the
volume in question. Thus the maximum decoherence rate is proportional to the
surface area of the volume. Also, we know that linear spatial dimensions are
increasing linearly in flat space-time, so combining the two implies
that maximal decoherence will occur quadratically as a function of
time.
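
A toy rendering of that scaling argument, in Python (a sketch; the
constants k and r0 are arbitrary placeholders, not from the thread):

import math

def max_decoherence_rate(t, k=1.0, r0=1.0):
    """Assumes the linear size of a region grows linearly in flat
    space-time, r(t) = r0 * t, and that the maximal decoherence rate
    scales with the surface area 4*pi*r(t)**2, per the argument above."""
    return k * 4.0 * math.pi * (r0 * t) ** 2

# Doubling t quadruples the rate: quadratic growth as a function of time.
assert math.isclose(max_decoherence_rate(2.0), 4.0 * max_decoherence_rate(1.0))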

Does this give us the holographic principle? Hmm..

Also, what happens if space-time is not so flat - say spatial expansion
starts to accelerate like it's doing now?

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: Free Will Theorem

2005-04-22 Thread John M

- Original Message -
From: Russell Standish [EMAIL PROTECTED]
To: John M [EMAIL PROTECTED]
Cc: Stathis Papaioannou [EMAIL PROTECTED];
[EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Tuesday, April 19, 2005 8:09 PM
Subject: Re: Free Will Theorem

Russell S. writes, in his convoluted way that must be dug out from attachments:

Laplace's daemon is a hypothetical creature that knows the exact state
of every particle in the universe. In a deterministic universe, the
daemon could compute the future exactly. Of course the daemon cannot
possibly exist, any more than omniscient beings. In a quantum world,
or a Multiverse, such daemons are laughable fantasies. Nevertheless,
they're often deployed in reductio ad absurdum type arguments to do
with determinism.

Again the stubborn anthropomorphic one-way thinking about the idea of a
total determinism. Everything calculated 'in' there is only ONE outcome
in the world - the essence of the one-way universe's own determinism.
This was the spirit that made the total greater than the sum of its
components - the Aris-total of the epistemic level 2500 years ago.
It is an age-old technique to invent a faulty hypothesis (thought
experiment, etc.) and on that basis show the 'ad absurdity' of something.

Determinism as I would like 'to speak about it' is the idea that whatever
happens (the world as process?) originates in happenings - (beware: not a
cause as in a limited model, but) in unlimited ensembles of happenings all
over, not limited to the topical etc. boundaries we erect for our chosen
observations. The happenings include the 'ideational' part of the
world, which is 'choice-accepting' - consequently not fully predictable.
As in: endogenously impredicative complexities.
Anticipatory is not necessarily predictable and (my) deterministic points to
the other side: not where it goes TO, but comes FROM. Even there it is more
than we can today encompass (compute?) in full.
This may be a world-wide applicable principle of the spirit that made its
minuscule example into QM as the 'uncertainty'.
Or the cat, or a complementarity.
Alas, I cannot 'speak about it', because we are not up to such a level. Not
me, not you, not even the materialistic daemon. We all are rooted in the
materialistic reductionist models that our neuronal brain can handle - in a
world of unlimited interconnectedness.

John Mikes






Re: follow-up on Holographic principle and MWI

2005-04-22 Thread danny mayes

Russell Standish wrote:

  On Thu, Apr 21, 2005 at 11:02:12PM -0400, danny mayes wrote:
Well, as described in the FOR think of the multiverse as a block, made 
up of different stacks of pictures that comprise individual universes as 
they move through time.  Now try to adjust that to what is really going 
on:  space time is expanding out from the Big Bang.  If you could remove 
yourself from the multiverse and watch it, time would be expanding at an 
increasing area, just as the spatial dimensions are.  The reason 
information storage capacity would equal the surface area of a given 
object is that any object or area is actually existing in all these 
overlapping timelines, or virtually identical universes.  Therefore,
if you assume the "time-area" is expanding at a proportional rate to the 
spatial volume, you would need to divide a cube 10^300 Planck units on a 
side by 10^100 to take out the information that is moving into the

This is very sloppy - if "time-area" were proportional to volume, then
the divisor would be 10^300. Perhaps you meant proportional to length,
but then I do not see why this should be.

 You are correct. This is very sloppy. First, I made a
typo in referring to the cube as 10^300 on a side when I intended to
say 10^300 in volume. Also, the time area would be proportional to the
other spatial dimensions (a side) of the cube, not the volume. My
apologies. Again, the "time area" should equal a side if it is
considered equivalent to a spatial dimension.

volume or area of time, since we lose this information as we are stuck 
on a solitary time line and losing the multiverse information to 
decoherence.  This is simply another way of saying we lose the 
information to the other universes, I'm just explaining why it would be 
the amount it is through the mental imagery of time expanding to fill a 
space equivalent to the spatial dimensions.

But decoherence increases information; it doesn't lose it.

 It increases the information we have in this universe, by
removing the interference of all the information from all the
alternative outcomes. We gain the information of one possible
outcome. From the multiverse view, there is no gain or loss of
information, but from our perspective we gain one bit of information
and the rest ends up in the alternative outcomes.

Taking a bird's eye view, and watching the cube moving through the 
multiverse, all the overlapping universes the cube comprises, the cube 
could store 10^300 bits of information - equal to its volume.  However,
if you measure the information in any individual universe, you have to
divide the cube over all the overlapping universes it comprises, or an
"area" of time equal to the area of one of its sides (again
assuming the expansion of time is proportional to the expansion of the
spatial dimensions.)  This leaves information storage capacity equal to
the surface area of the object.

I am basically taking the block view of the multiverse seriously, and 
dividing the information storage capacity by the area of all the stacks 
of pictures the cube exists on, because we can only measure the 
information on the one stack that is our universe.  The area of the 
different stacks can be thought of as an area of time, and would equal 
one of the spatial areas that comprise the cube if time expansion is 
proportional to spatial expansion.

This makes sense to me, but then again I am an attorney

Danny Mayes

The only thing that makes sense to me is that maximal decoherence
occurs by arranging observers around the full 4\pi solid angle of the
volume in question. Thus the maximum decoherence rate is proportional to the
surface area of the volume. Also, we know that linear spatial dimensions are
increasing linearly in flat space-time, so combining the two implies
that maximal decoherence will occur quadratically as a function of
time.

Does this give us the holographic principle? Hmm..

Also, what happens if space-time is not so flat - say spatial expansion
starts to accelerate like it's doing now?

 With regard to your last, time-area expansion would
accelerate with spatial acceleration. This means the
stacks/outcomes become more numerous. With spatial collapse the
time-area would decrease (stacks/outcomes decrease). (??)




RE: many worlds theory of immortality

2005-04-22 Thread Stathis Papaioannou
Jesse,
Stathis Papaioannou wrote:
Now, look at p(n) again. This time, let's say it is not k, but a random 
real number greater than zero, smaller than 1, with k being the mean of 
the distribution. At first glance, it may appear that not much has 
changed, since the probabilities will on average be the same, over a 
long time period. However, this is not correct. In the above product, p(n) 
can go arbitrarily close to 1 for an arbitrarily long run of n, thus 
reducing the product value arbitrarily close to zero up to that point, 
which cannot subsequently be made up by a compensating fall of p(n) 
close to zero, since the factor 1-p(n)^(2^n) can never be greater than 1. 
(Sorry I haven't put this very elegantly.)
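
[For reference, the product under discussion - a reconstruction from the
description above, since the message that introduced it is not quoted
here; the population is assumed to double to 2^n at generation n:

    P(never wiped out) = \prod_{n=1}^{\infty} (1 - p(n)^{2^n})

Each factor is the chance that not all 2^n members die at step n.]
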
p(n) *can* go arbitrarily close to 1 for an arbitrarily long period of 
time, but you're not taking into account the fact that the larger the
population already is, the more arbitrarily close to 1 p(n) would have to 
get to wipe out the population completely--and the more arbitrarily close a 
value to 1 you pick, the less probable it is that p(n) will be greater than 
or equal to this value in a given generation. So it's still true that the 
probability of the population being wiped out is continually decreasing as 
the population gets larger, which means it's still plausible there could be 
a nonzero probability the population would never be wiped out--you'd have 
to do the math to test this (and you might get different answers depending 
on what probability distribution you pick for p(n)).
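
A rough Monte Carlo sketch of that test, in Python (hedged: the doubling
population 2^n, the Uniform(0,1) choice for p(n), and the 200-generation
cutoff are all assumptions; the neglected tail is tiny because
p(n)^(2^n) shrinks doubly exponentially):

import math, random

def survival_probability(generations=200, trials=20000, seed=42):
    """Estimate E[ prod_n (1 - p(n)**(2**n)) ], the probability the
    population is never wiped out, with p(n) ~ Uniform(0,1) drawn
    independently each generation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        log_surv = 0.0
        for n in range(1, generations + 1):
            p = rng.random()                     # this generation's p(n)
            if p == 0.0:
                continue                         # wipe-out chance is 0
            log_wipe = (2.0 ** n) * math.log(p)  # log of p**(2**n)
            wipe = math.exp(log_wipe) if log_wipe > -745.0 else 0.0
            log_surv += math.log1p(-wipe)        # multiply by (1 - wipe)
        total += math.exp(log_surv)
    return total / trials

print(survival_probability())  # stays well above zero for this distribution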

It also seems unrealistic to say that in a given generation, all 2^n 
members will have the *same* probability p(n) of being erased--if you're 
going to have random variations in p(n), wouldn't it make more sense for 
each individual to independently pick a value of p(n) from the probability 
distribution you're using? And if you do that, then the larger the 
population is, the smaller the average deviation from the expected mean 
value of p(n) given by that distribution.

The conclusion is therefore that if p(n) is allowed to vary randomly, Real 
Death becomes a certainty over time, even with continuous exponential 
growth forever.
I don't think you have any basis for being sure that Real Death becomes a
certainty over time in the model you suggest (or the modified version I 
suggested above), not unless you've actually done the math, which would 
likely be pretty hairy.

Jesse
Jesse,
It would be stubborn of me not to admit at this point that you have defended 
your position better than I have mine. I'm still not quite convinced that 
what I have called p(n) won't ultimately ruin the model you have proposed, 
and I'm still not quite convinced that, even if it works, this model will 
not constitute a smaller and smaller proportion of worlds where you remain 
alive, over time; but as you say, I would have to do the maths before making 
such claims. I may try out some of these ideas with Mathematica, but I 
expect that the maths is beyond me. Anyway, thank you for a most interesting
and edifying discussion!

--Stathis Papaioannou



Re: follow-up on Holographic principle and MWI

2005-04-22 Thread danny mayes
Russell Standish wrote:
the divisor would be 10^300. Perhaps you meant proportional to length,
but then I do not see why this should be.

 Don't know if I directly answered this in my first reply.  If the
time-area equals an equivalent spatial area, we use length as the
divisor to represent the fact that we have access to the information
in one universe/one time line.  We, of course, do not have access to
the information in the time area, which is all the possible outcomes.