RE: more torture

2005-06-15 Thread Stathis Papaioannou

Jesse Mazer wrote:

[quoting Stathis]
You are one of 10 copies who are being tortured. The copies are all being 
run in lockstep with each other, as would occur if 10 identical computers 
were running 10 identical sentient programs. Assume that the torture is so 
bad that death is preferable, and so bad that escaping it with your life 
is only marginally preferable to escaping it by dying (eg., given the 
option of a 50% chance of dying or a 49% chance of escaping the torture 
and living, you would take the 50%). The torture will continue for a year, 
but you are allowed one of 3 choices as to how things will proceed:


(a) 9 of the 10 copies will be chosen at random and painlessly killed, 
while the remaining copy will continue to be tortured.


(b) For one minute, the torture will cease and the number of copies will 
increase to 10^100. Once the minute is up, the number of copies will be 
reduced to 10 again and the torture will resume as before.


(c) the torture will be stopped for 8 randomly chosen copies, and continue 
for the other 2.


Which would you choose? To me, it seems clear that there is an 80% chance 
of escaping the torture if you pick (c), while with (a) it is certain that 
the torture will continue, and with (b) it is certain that the torture 
will continue with only one minute of respite.


If you impose the condition I discussed earlier that absolute probabilities 
don't change over time, or in terms of my analogy, that the water levels in 
each tank don't change because the total inflow rate to each tank always 
matches the total outflow rate, then I don't think it's possible to make 
sense of the notion that the observer-moments in that torture-free minute 
would have 10^100 times greater absolute measure. If there's 10^100 times 
more water in the tanks corresponding to OMs during that minute, where does 
all this water go after the tank corresponding to the last OM in this 
minute, and where is it flowing in from to the tank corresponding to the 
first OM in this minute?


As I understood your model, the tanks have constant volume over time 
(because net inflow matches net outflow), but you never said they all had 
the same volume. If they did, every OM would have the same absolute measure, 
so why bother with the idea of absolute measure at all?


It appears that we both believe that any individual's consciousness will 
continue indefinitely, or, as you say in a later post in the current thread, 
death only exists from a third person perspective. However, I don't really 
understand the mechanism whereby you believe this will happen. Perhaps you 
could tell me where we differ:


My understanding of observer moments is that, unlike the water molecules in 
your tanks, they are *always* created and destroyed. The observer's 
experience of continuity of consciousness over time results from the 
stringing together of OM's which are related in the following way: at a 
particular OM in an observer's stream of consciousness, the next moment, 
or successor OM, can be any OM which identifies itself with that observer, 
shares the observer's memories up to that point, and fits in as a 
continuation of the previous OM's thoughts. (These criteria are necessarily 
somewhat loose, accounting for situations such as waking up with retrograde 
amnesia after a head injury.)


Death (from the first person perspective) can be defined as occurring when 
there is no successor OM, anywhere or ever. As long as there remains even 
one successor OM, be it in another Galaxy, a parallel universe, or whatever, 
the stream of consciousness will continue indefinitely. In the multiverse 
(or larger mathematical structure containing the multiverse), there will 
always be a successor OM; hence, the quantum immortality idea.


You may agree with at least some of the above, but it looks like you may 
have a problem with my 10^100 copies, which I propose are created, live for 
a minute, then are destroyed. Didn't I just say death can't happen from a 
first person perspective? Going by the definition of death above, if the 
copies are to really die, there would have to be no successor OM anywhere or 
ever (which in this case means anywhere within the self-contained model 
universe of the thought experiment). But clearly, there *is* a successor OM. As the end of 
the minute approaches, the copies know that the torture is going to start 
again. The fact that there is a mismatch between the number of 
instantiations during the minute and after (10^100 - 10) doesn't make any 
difference. This was the purpose of the thought experiment: to show 
that the absolute measure, which is proportional to the number of 
instantiations of a given OM, cannot make any first person difference. If it 
could, then option (c) would be the worst choice, reducing the measure of 
tortured OM's by 80%, while (a) would reduce it by 90% and (b) by almost 
100%. If you chose (a) or (b) on this basis, you would be guaranteeing that 
you will experience a 

Re: more torture

2005-06-15 Thread Saibal Mitra

- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Tuesday, June 14, 2005 05:26 PM
Subject: Re: more torture



 Saibal Mitra writes:

 Because no such thing as free will exists, one has to consider three
 different universes in which the three different choices are made. The
 three universes will have comparable measures. The anthropic factor of
 10^100 will then dominate and will cause the observer to find himself
 having made choice b) as one of the 10^100 copies in the minute without
 torture.

 But what will happen to the observer when the minute is up?

 --Stathis

 Pretending that these three universes are all that exists, what will
 happen is that the OM will find himself being another one of the 10^100
 copies. The copy survives with memory loss.

 Saibal

 In what sense can the copy (or anything) become another copy with memory
 loss? It is almost as if you are postulating a soul, which flies from one
 body to another, and somehow contains the original person's identity so
 that it survives memory loss. What is required for an observer moment OM_1
 at time t1 to become the next observer moment at time t2 is that at least
 one successor OM exist with time stamp t2, a belief that he is the same
 person as OM_1, and memories of OM_1 up to time t2. If several such OM's
 exist {OM_2.1, OM_2.2, OM_2.3...} then either one may be the successor,
 with probability determined by the measure of OM_2.n relative to the
 measure of the whole set. Amazingly, being completely swamped with other
 OM's of various types and vintages, more or less closely related to OM_1,
 makes absolutely no difference to the process, because the OM's don't need
 to find each other and lock arms, all they need to do is *exist*, anywhere
 in the multiverse, related in the way I have described. This is somewhat
 analogous to the fact that the integer 56 is always followed by the
 integer 57, even though there are lots and lots of other integers
 everywhere amongst which these two could get lost.

 --Stathis Papaioannou


I'm certainly not postulating a soul. All I'm saying is that all OMs are
real and there is no preference for one over another. Each OM will feel that
he is the successor of a previous one. If an OM checks if he is a typical
creature in the universe, he will find with large probability that this is
indeed the case.

Your proposal about time evolution ignores memory loss. How do you assign
probabilities to OM_2.1, OM_2.2, etc. if they don't remember everything
about OM_1? Real people's memories are not perfect, so you would have to
admit memory loss to make your proposal work in practice. And unless you
believe that QTI makes you immune to Alzheimer's, you would have to admit
an arbitrarily large amount of memory loss.


So, to me the notion of a successor doesn't make sense in general. You can
always define a set of successors of OM_1 irrespective of measure by saying
that members of that set remember being OM_1. But then there also exist
successors of me with perfect memory but with very small measures. I could,
e.g., arise accidentally in a simulation performed by aliens, and that
simulation could be a more perfect continuation (memory-wise) of my present
OM.



These considerations have led me to believe that one should abandon any
fundamental idea of successors altogether. OMs just exist and each OM has a
memory of ''previous'' experiences. So, each OM remembers being another OM.
There exists a probability distribution over the set of all OMs which is
fixed by the laws of physics. OMs thus ''always'' exist and this is a form
of immortality. In your example of 10^100 copies almost all OMs are one of
these copies. What happens to such an OM when the minute is up? Nothing
really happens. All the OMs are ''static'' mathematical entities.


Saibal







RE: more torture

2005-06-15 Thread Jesse Mazer

Stathis Papaioannou wrote:


Jesse Mazer wrote:

If you impose the condition I discussed earlier that absolute 
probabilities don't change over time, or in terms of my analogy, that the 
water levels in each tank don't change because the total inflow rate to 
each tank always matches the total outflow rate, then I don't think it's 
possible to make sense of the notion that the observer-moments in that 
torture-free minute would have 10^100 times greater absolute measure. If 
there's 10^100 times more water in the tanks corresponding to OMs during 
that minute, where does all this water go after the tank corresponding to 
the last OM in this minute, and where is it flowing in from to the tank 
corresponding to the first OM in this minute?


As I understood your model, the tanks have constant volume over time 
(because net inflow matches net outflow), but you never said they all had 
the same volume. If they did, every OM would have the same absolute 
measure, so why bother with the idea of absolute measure at all?


No, I don't think they don't all have to have the same volume, but I thought 
you were assuming that the ASSA would force us to conclude there's a 10^100 
times greater chance of finding ourselves as an OM during this minute, an idea 
that would only be true if the OMs during that minute *did* have the same 
absolute probability/water volume as OMs at other times. It's true that it's 
possible to make this example work in terms of the water model if you have 
each tank during that minute contain only 1/10^100 the amount of water 
that's in tanks before that minute, but in that case your absolute 
probability of experiencing an OM in that minute is no higher than at any 
other time. So if my interpretation of your argument is right, I think 
you're arguing against a strawman version of the ASSA here.




It appears that we both believe that any individual's consciousness will 
continue indefinitely, or, as you say in a later post in the current 
thread, death only exists from a third person perspective. However, I 
don't really understand the mechanism whereby you believe this will happen. 
Perhaps you could tell me where we differ:


My understanding of observer moments is that, unlike the water molecules in 
your tanks, they are *always* created and destroyed. The observer's 
experience of continuity of consciousness over time results from the 
stringing together of OM's which are related in the following way: at a 
particular OM in an observer's stream of consciousness, the next moment, 
or successor OM, can be any OM which identifies itself with that observer, 
shares the observer's memories up to that point, and fits in as a 
continuation of the previous OM's thoughts. (These criteria are necessarily 
somewhat loose, accounting for situations such as waking up with retrograde 
amnesia after a head injury.)


If you want to have an objective notion of continuity of consciousness and 
conditional probabilities, then it can't just be a matter of us subjectively 
evaluating how much one observer-moment's memories seem to match the 
experiences and memories of an earlier one. Instead, you'd need some sort of 
theory of consciousness to give you a well-defined, objective procedure for 
deciding this--this is what I've been calling the "similarity function". If 
we assume such a thing exists, there are two ways we could think of the 
water molecules. One is to say they represent observers who persist 
indefinitely, while observer-moments just represent what these observers 
*experience* at any given moment, not what they are. These observers would 
have no qualities of their own beyond what they are experiencing at a given 
moment, a bit like the "pure witnessing consciousness" thought to be our 
"true self" in certain eastern philosophies. If this seems too close to the 
dualistic idea of a soul, another option is just to say the water 
molecules represent a convenient way to think about conditional and absolute 
probabilities in frequentist terms, since it's generally more intuitive to 
think about any kind of probability in a frequentist way (rather than, say, 
a Bayesian way or a decision-theory way). So in this case the water 
molecules would just be a sort of intuition-pump, they wouldn't have any 
deeper significance.
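
To make that frequentist reading concrete, here is a minimal sketch in
Python. The network of OMs, the flow fractions and the trajectory length are
all invented for illustration; the point is only that visit frequencies play
the role of absolute measure and transition frequencies the role of
conditional probability.

import random
from collections import Counter

# Hypothetical toy network of observer-moments (OMs).  The nodes, the "flow
# fractions" on each edge and the trajectory length are made-up numbers
# chosen only to illustrate the frequentist reading of the water model.
flows = {
    "OM_A": {"OM_B": 0.7, "OM_C": 0.3},
    "OM_B": {"OM_D": 1.0},
    "OM_C": {"OM_D": 1.0},
    "OM_D": {"OM_A": 1.0},   # loop back so the trajectory never ends
}

def next_om(om):
    # Successor chosen with probability proportional to the outgoing flow.
    successors = list(flows[om])
    weights = [flows[om][s] for s in successors]
    return random.choices(successors, weights=weights)[0]

# Follow one long trajectory, counting visits and transitions.
visits = Counter()
transitions = Counter()
current = "OM_A"
for _ in range(100_000):
    nxt = next_om(current)
    visits[current] += 1
    transitions[(current, nxt)] += 1
    current = nxt

# Conditional probability as a relative transition frequency ...
print("P(OM_B | OM_A) ~", transitions[("OM_A", "OM_B")] / visits["OM_A"])
# ... and absolute measure as a relative visit frequency.
total = sum(visits.values())
for name in sorted(visits):
    print(name, "absolute measure ~", visits[name] / total)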




Death (from the first person perspective) can be defined as occurring when 
there is no successor OM, anywhere or ever. As long as there remains even 
one successor OM, be it in another Galaxy, a parallel universe, or 
whatever, the stream of consciousness will continue indefinitely. In the 
multiverse (or larger mathematical structure containing the multiverse), 
there will always be a successor OM; hence, the quantum immortality idea.


You may agree with at least some of the above, but it looks like you may 
have a problem with my 10^100 copies, which I propose are created, live for 
a minute, then are destroyed. Didn't I just say death can't happen from a 
first person 

RE: more torture

2005-06-15 Thread Jesse Mazer

I wrote:


No, I don't think they don't all have to have the same volume,


Whoops, weird double negative here...that should read "I don't think they 
all have to have the same volume."


Jesse




RE: more torture

2005-06-15 Thread rmiller

At 11:03 AM 6/15/2005, Jesse Mazer wrote:

I wrote:


No, I don't think they don't all have to have the same volume,


Whoops, weird double negative here...that should read "I don't think they 
all have to have the same volume."


Jesse



must have
should have
are required to have



RM 





Re: more torture

2005-06-14 Thread Stathis Papaioannou

Saibal Mitra writes:


Because no such thing as free will exists, one has to consider three
different universes in which the three different choices are made. The
three universes will have comparable measures. The anthropic factor of
10^100 will then dominate and will cause the observer to find himself
having made choice b) as one of the 10^100 copies in the minute without
torture.


But what will happen to the observer when the minute is up?

--Stathis












Re: more torture

2005-06-14 Thread Saibal Mitra

- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Tuesday, June 14, 2005 08:06 AM
Subject: Re: more torture


 Saibal Mitra writes:

 Because no such thing as free will exists, one has to consider three
 different universes in which the three different choices are made. The
 three universes will have comparable measures. The anthropic factor of
 10^100 will then dominate and will cause the observer to find himself
 having made choice b) as one of the 10^100 copies in the minute without
 torture.

 But what will happen to the observer when the minute is up?

 --Stathis


Pretending that these three universes are all that exists, what will happen
is that the OM will find himself being another one of the 10^100 copies. The
copy survives with memory loss.


Saibal



Re: more torture

2005-06-14 Thread Stathis Papaioannou

Hal Finney writes:


Let us consider these flavors of altruism in the case of Stathis' puzzle:

 You are one of 10 copies who are being tortured. The copies are all being
 run in lockstep with each other, as would occur if 10 identical computers
 were running 10 identical sentient programs. Assume that the torture is so
 bad that death is preferable, and so bad that escaping it with your life is
 only marginally preferable to escaping it by dying (eg., given the option of
 a 50% chance of dying or a 49% chance of escaping the torture and living,
 you would take the 50%). The torture will continue for a year, but you are
 allowed one of 3 choices as to how things will proceed:

 (a) 9 of the 10 copies will be chosen at random and painlessly killed, while
 the remaining copy will continue to be tortured.

 (b) For one minute, the torture will cease and the number of copies will
 increase to 10^100. Once the minute is up, the number of copies will be
 reduced to 10 again and the torture will resume as before.

 (c) the torture will be stopped for 8 randomly chosen copies, and continue
 for the other 2.

 Which would you choose?

For the averagist, doing (a) will not change average happiness.  Doing
(b) will improve it, but not that much.  The echoes of the torture and
anticipation of future torture will make that one minute of respite
not particularly pleasant.  Doing (c) would seem to be the best choice,
as 8 out of the 10 avoid a year of torture.  (I'm not sure why Stathis
seemed to say that the people would not want to escape their torture,
given that it was so bad.  That doesn't seem right to me; the worse it
is, the more they would want to escape it.)

For the totalist, since death is preferable to the torture, each
person's life has a negative impact on total happiness.  Hence (a)
would be an improvement as it removes these negatives from the universe.
Doing (b) is unclear: during that one minute, would the 10^100 copies
kill themselves if possible?  If so, their existence is negative and
so doing (b) would make the universe much worse due to the addition
of so many negatively happy OMs.  Doing (c) would seem to be better,
assuming that the 8 out of 10 would eventually find that their lives
were positive during that year without torture.

So it appears that each one would choose (c), although they would differ
about whether (a) is an improvement over the status quo.

(b) is deprecated because that one minute will not be pleasant due to
the echoes of the torture.  If the person could have his memory wiped
for that one minute and neither remember nor anticipate future torture,
that would make (b) the best choice for both kinds of altruists.  Adding
10^100 pleasant observer-moments would increase both total and average
happiness and would more than compensate for a year of suffering for 10
people.  10^100 is a really enormous number.


This analysis would be fine were it not for the fact that we are discussing 
*exact copies* running in lockstep with each other. You have to take into 
account the special way observers construct their identity as a unique 
individual persisting through time, which you admitted in a recent post is 
a purely contingent, artificial, manufactured set of beliefs and attitudes 
which have been programmed into us in order to help our genes survive. With 
choice (a), although it seems like a good idea to end the suffering of 9/10 
copies, it doesn't make the slightest bit of difference. In order to end a 
person's suffering at a particular observer moment, you have to either 
ensure that there will be no successor OM's ever again (i.e., death), or 
provide a successor OM which does not involve suffering. As long as at least 
one copy remains alive, that copy will always provide a successor OM for any 
of the other copies which are killed. Subjectively, it will be impossible 
for any of the copies to notice that anything has changed when they are 
killed. This reasoning applies whether you consider the selfish interests of 
one of the copies or the altruistic interests of all of them.


You might argue, as you have with your example of increased measure on 
alternate days of the week, that it is still better to try to reduce the 
total number of unpleasant experiences in the world, even if we cannot see 
any change that may result. Perhaps that would be OK, all else being equal. 
However, I provided choice (c) to show how this sort of reasoning can lead 
to unfortunate outcomes. In (c), unlike (a), alternative successor OM's to 
the torture exist. The result is that at the moment the choice is made, each 
copy is looking at a 20% chance that the torture will continue and an 80% 
chance that it will stop. At first glance, this doesn't look quite as good 
as choice (a), if you follow the "try to reduce the number of unpleasant 
OM's in the world" rule. But as shown above, it would be a terrible mistake 
to choose (a), as you would be ensuring that the torture will 

Re: more torture

2005-06-14 Thread Bruno Marchal


On 13 June 2005, at 21:06, Jesse Mazer wrote:


Hal Finney wrote:


Jesse Mazer writes:
 If you impose the condition I discussed earlier that absolute probabilities
 don't change over time, or in terms of my analogy, that the water levels in
 each tank don't change because the total inflow rate to each tank always
 matches the total outflow rate, then I don't think it's possible to make
 sense of the notion that the observer-moments in that torture-free minute
 would have 10^100 times greater absolute measure. If there's 10^100 times
 more water in the tanks corresponding to OMs during that minute, where does
 all this water go after the tank corresponding to the last OM in this
 minute, and where is it flowing in from to the tank corresponding to the
 first OM in this minute?

I would propose to implement the effect by duplicating the guy 10^100 times
during that minute, then terminating all the duplicates after that time.

What happens in your model when someone dies in some fraction of the
multiverse?  His absolute measure decreases, but where does the now-excess
water go?


In my model, death only exists from a third-person perspective, but 
from a first-person perspective I'm subscribing to the QTI, so 
consciousness will always continue in some form (even if my memories 
don't last or I am reduced to an amoeba-level consciousness)--the 
water molecules are never created or destroyed.



I agree. This is even related to my "NO KESTRELS, NO STARLINGS" rough 
summary of physics (see the end of my first combinators post, "the 
chemistry of combinators":
http://www.escribe.com/science/theory/m5913.html ).
I intend to come back to this.



For what would happen when an observer is duplicated from a 
third-person perspective, it might help to consider the example I 
discussed on the 'Last-minute vs. anticipatory quantum 
immortality' thread at 
http://www.escribe.com/science/theory/m4841.html , where a person is 
initially duplicated before a presidential election, and then 
depending on the results of the election, one duplicate is later 
copied 999 times. All else being equal, I'd speculate that the initial 
2-split would anticipate the later 999-split, so that 999 out of 
1000 water molecules of the first observer would split off into the 
copy that is later going to be split 999 times, so before this second 
split, OMs of this copy would have 999 times the absolute measure of 
the copy that isn't going to be split again.



I essentially agree. Stathis should not agree, or else I have misunderstood 
Stathis in his last posts; perhaps he can correct me.




 I'm not absolutely sure that this would be a consequence of the idea 
about finding a unique self-consistent set of absolute and conditional 
probabilities based only on a similarity matrix and the condition of 
absolute probabilities not changing with time, but it seems intuitive 
to me that it would.


I agree, except for a question of vocabulary. It's not important (at this 
stage).



At some point I'm going to try to test this idea with mathematica or 
something, creating a finite set of OMs and deciding what the possible 
successors to each one are in order to construct something like a 
similarity matrix, then finding the unique vector of absolute 
probabilities that, when multiplied by this matrix, gives a unit 
vector (the procedure I discussed in my last post to you at 
http://www.escribe.com/science/theory/m6855.html ). Hopefully the 
absolute probabilities would indeed tend to anticipate future splits 
in the way I'm describing.


Nice test. I'm curious to see the result. Not sure there is a unique 
vector. Not sure it is important that there is one. I may be wrong.




So if this anticipatory idea works, then any copy that's very unlikely 
to survive long from a third-person perspective is going to undergo 
fewer future splits from a multiverse perspective (there will always 
be a few branches where this copy survives, though), so your conditional 
probability of becoming such a copy would be low, meaning that not 
much of your water would flow into that copy, and it will have a 
smaller absolute measure than copies that are likely to survive in 
more branches.


Let us see ...

Bruno

http://iridia.ulb.ac.be/~marchal/




Re: more torture

2005-06-14 Thread Stathis Papaioannou



 Saibal Mitra writes:

 Because no such thing as free will exists, one has to consider three
 different universes in which the three different choices are made. The
 three universes will have comparable measures. The anthropic factor of
 10^100 will then dominate and will cause the observer to find himself
 having made choice b) as one of the 10^100 copies in the minute without
 torture.

 But what will happen to the observer when the minute is up?

 --Stathis

Pretending that these three universes are all that exists, what will happen
is that the OM will find himself being another one of the 10^100 copies.
The copy survives with memory loss.


Saibal


In what sense can the copy (or anything) become another copy with memory 
loss? It is almost as if you are postulating a soul, which flies from one 
body to another, and somehow contains the original person's identity so that 
it survives memory loss. What is required for an observer moment OM_1 at 
time t1 to become the next observer moment at time t2 is that at least one 
successor OM exist with time stamp t2, a belief that he is the same person 
as OM_1, and memories of OM_1 up to time t2. If several such OM's exist 
{OM_2.1, OM_2.2, OM_2.3...} then either one may be the successor, with 
probability determined by the measure of OM_2.n relative to the measure of 
the whole set. Amazingly, being completely swamped with other OM's of 
various types and vintages, more or less closely related to OM_1, makes 
absolutely no difference to the process, because the OM's don't need to 
find each other and lock arms, all they need to do is *exist*, anywhere in 
the multiverse, related in the way I have described. This is somewhat 
analogous to the fact that the integer 56 is always followed by the integer 
57, even though there are lots and lots of other integers everywhere amongst 
which these two could get lost.
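
A minimal sketch of the selection rule just described, in Python; the
candidate OM names and the measure values are illustrative assumptions, not
values from the thread. The successor is drawn with probability equal to its
measure divided by the measure of the whole candidate set.

import random

# Hypothetical successor candidates for OM_1 and their (made-up) measures.
candidate_measures = {"OM_2.1": 5.0, "OM_2.2": 3.0, "OM_2.3": 2.0}

def pick_successor(measures):
    # Probability of each candidate = its measure / measure of the whole set.
    total = sum(measures.values())
    r = random.uniform(0.0, total)
    running = 0.0
    for om, m in measures.items():
        running += m
        if r <= running:
            return om
    return om  # guard against floating-point rounding at the upper edge

# OM_2.1 should be picked roughly half the time (5 out of a total measure of 10).
counts = {om: 0 for om in candidate_measures}
for _ in range(10_000):
    counts[pick_successor(candidate_measures)] += 1
print(counts)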


--Stathis Papaioannou





Re: more torture

2005-06-13 Thread Bruno Marchal
I agree with everything you say in this post, but I am not sure that 
settles the issue. It does not change my mind on the preceding post 
where we were disagreeing; which was that IF I must choose between


A) split between 1 finite hells and 1 infinite paradise
B) split between 1 infinite hell and 1 finite paradises

where "finite" and "infinite" refer to the number of computational 
steps simulating the stories of those hells and paradises, THEN I 
should choose A.
This is because all finite stories have a measure 0. Infinite 
stories, by their natural DU multiplications, will have a measure of one.


But we are on the verge of inconsistency, because in practice there is 
no way to guarantee anything like the finiteness of any computation 
going through our states (this is akin to the insolubility of the 
self-stopping problem for a sufficiently rich (lobian) Turing machine).


The idea that I am trying to convey is that if I am in state S1, the 
probability of some next state S2 depends on the proportion, among the 
infinite stories going through S1, of those *infinite* stories that also 
go through S2. And all finite stories must be discounted.
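
Stated compactly (my restatement of the rule just described, with \mu
standing for whatever measure on infinite stories is assumed to exist, and
finite stories receiving measure zero):

\[
P(S_2 \mid S_1) \;=\; \frac{\mu(\{\, h : h \text{ an infinite story through both } S_1 \text{ and } S_2 \,\})}{\mu(\{\, h : h \text{ an infinite story through } S_1 \,\})}
\]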


(It is not necessary that I remain personally immortal in those infinite 
stories; the measure is given by the stories going through my states 
even if I have a finite 3-life-time in all of those stories.)


(btw, this also entails that comp implies at least an infinite past and/or 
future for any universe supporting our present story).


[Note that here I am going far ahead of what I can ask of the lobian 
machine, because our talk involves quantifiers on stories and that's 
very complex to handle. Well, to be sure, I have until now only been able 
to translate the case of probability one in machine terms; but it is 
enough to extract non-trivial information on the logic of observable 
propositions.]




Bruno
 



On 13 June 2005, at 13:00, Stathis Papaioannou wrote:

I have been arguing in recent posts that the absolute measure of an 
observer moment (or observer, if you prefer) makes no possible 
difference at the first person level. A counterargument has been that, 
even if an observer cannot know how many instantiations of him are 
being run, it is still important in principle to take the absolute 
measure into account, for example when considering the total amount of 
suffering in the world. The following thought experiment shows how, 
counterintuitively, sticking to this principle may actually be doing 
the victims a disservice:


You are one of 10 copies who are being tortured. The copies are all 
being run in lockstep with each other, as would occur if 10 identical 
computers were running 10 identical sentient programs. Assume that the 
torture is so bad that death is preferable, and so bad that escaping 
it with your life is only marginally preferable to escaping it by 
dying (eg., given the option of a 50% chance of dying or a 49% chance 
of escaping the torture and living, you would take the 50%). The 
torture will continue for a year, but you are allowed one of 3 choices 
as to how things will proceed:


(a) 9 of the 10 copies will be chosen at random and painlessly killed, 
while the remaining copy will continue to be tortured.


(b) For one minute, the torture will cease and the number of copies 
will increase to 10^100. Once the minute is up, the number of copies 
will be reduced to 10 again and the torture will resume as before.


(c) the torture will be stopped for 8 randomly chosen copies, and 
continue for the other 2.


Which would you choose? To me, it seems clear that there is an 80% 
chance of escaping the torture if you pick (c), while with (a) it is 
certain that the torture will continue, and with (b) it is certain 
that the torture will continue with only one minute of respite.


Are there other ways to look at the choices? It might be argued that 
in (a) there is a 90% chance that you will be one of the copies who is 
killed, and thus a 90% chance that you will escape the torture, better 
than your chances in (c). However, even if you are one of the ones 
killed, this does not help you at all. If there is a successor 
observer moment at the moment of death, subjectively, your 
consciousness will continue. The successor OM in this case comes from 
the one remaining copy who is being tortured, hence guaranteeing that 
you will continue to suffer.


What about looking at it from an altruistic rather than selfish 
viewpoint: isn't it is better to decrease the total suffering in the 
world by 90% as in (a) rather than by 80% as in (c)? Before making 
plans to decrease suffering, ask the victims. All 10 copies will plead 
with you to choose (c).


What about (b)? ASSA enthusiasts might argue that with this choice, an 
OM sampled randomly from the set of all possible OM's will almost 
certainly be from the one minute torture-free interval. What would 
this mean for the victims? If you interview each of the 10 copies 
before the minute starts, 

Re: more torture

2005-06-13 Thread Bruno Marchal

Hi Quentin,


concerning the finite/infinite number of steps, it seems to me that it is
always possible to have a computation that will take an infinite number
of steps to arrive at a particular state, since for any state there
exists an infinity of computational histories which go through it, so it
seems to me that some of them need infinite steps... Am I missing
something?



Yes. My fault. I was not clear enough. First, it is obvious that any 
state in the execution of the DU has only a finite 3-computation. 
But, as you say, any state belongs to an infinite set of computations, 
and this justifies that from the first person point of view we can have 
an infinite past. I should have mentioned the 1-3 difference. Apologies. (But 
I would not say that some state *needs* an infinite 3-computation; that 
one would not even be generated by the DU.)


Regards,
Bruno





Re: more torture

2005-06-13 Thread rmiller

At 06:00 AM 6/13/2005, Stathis Papaioannou wrote:
I have been arguing in recent posts that the absolute measure of an 
observer moment (or observer, if you prefer) makes no possible difference 
at the first person level. A counterargument has been that, even if an 
observer cannot know how many instantiations of him are being run, it is 
still important in principle to take the absolute measure into account, 
for example when considering the total amount of suffering in the world. 
The following thought experiment shows how, counterintuitively, sticking 
to this principle may actually be doing the victims a disservice:


You are one of 10 copies who are being tortured. The copies are all being 
run in lockstep with each other, as would occur if 10 identical computers 
were running 10 identical sentient programs. Assume that the torture is so 
bad that death is preferable, and so bad that escaping it with your life 
is only marginally preferable to escaping it by dying (eg., given the 
option of a 50% chance of dying or a 49% chance of escaping the torture 
and living, you would take the 50%). The torture will continue for a year, 
but you are allowed one of 3 choices as to how things will proceed:


(a) 9 of the 10 copies will be chosen at random and painlessly killed, 
while the remaining copy will continue to be tortured.


(b) For one minute, the torture will cease and the number of copies will 
increase to 10^100. Once the minute is up, the number of copies will be 
reduced to 10 again and the torture will resume as before.


(c) the torture will be stopped for 8 randomly chosen copies, and continue 
for the other 2.


Which would you choose? To me, it seems clear that there is an 80% chance 
of escaping the torture if you pick (c), while with (a) it is certain that 
the torture will continue, and with (b) it is certain that the torture 
will continue with only one minute of respite.

RM writes. . .
Here is my criterion:  There are those who suggest that there is only one 
electron in the universe, but that it travels forward and backward in time, 
thus making multiple copies of itself.  If the individual percipient would 
eventually have to experience the pain and suffering of all whom he had 
affected--or caused to experience pain and suffering--then the most 
selfish, altruistic *and* sensible choice would be (c).


Rich Miller




RE: more torture

2005-06-13 Thread Hal Finney
Jesse Mazer writes:
 If you impose the condition I discussed earlier that absolute probabilities 
 don't change over time, or in terms of my analogy, that the water levels in 
 each tank don't change because the total inflow rate to each tank always 
 matches the total outflow rate, then I don't think it's possible to make 
 sense of the notion that the observer-moments in that torture-free minute 
 would have 10^100 times greater absolute measure. If there's 10^100 times 
 more water in the tanks corresponding to OMs during that minute, where does 
 all this water go after the tank corresponding to the last OM in this 
 minute, and where is it flowing in from to the tank corresponding to the 
 first OM in this minute?

I would propose to implement the effect by duplicating the guy 10^100 times
during that minute, then terminating all the duplicates after that time.

What happens in your model when someone dies in some fraction of the
multiverse?  His absolute measure decreases, but where does the now-excess
water go?

Hal Finney



RE: more torture

2005-06-13 Thread Jesse Mazer

Hal Finney wrote:


Jesse Mazer writes:
 If you impose the condition I discussed earlier that absolute probabilities
 don't change over time, or in terms of my analogy, that the water levels in
 each tank don't change because the total inflow rate to each tank always
 matches the total outflow rate, then I don't think it's possible to make
 sense of the notion that the observer-moments in that torture-free minute
 would have 10^100 times greater absolute measure. If there's 10^100 times
 more water in the tanks corresponding to OMs during that minute, where does
 all this water go after the tank corresponding to the last OM in this
 minute, and where is it flowing in from to the tank corresponding to the
 first OM in this minute?

I would propose to implement the effect by duplicating the guy 10^100 times
during that minute, then terminating all the duplicates after that time.

What happens in your model when someone dies in some fraction of the
multiverse?  His absolute measure decreases, but where does the now-excess
water go?


In my model, death only exists from a third-person perspective, but from a 
first-person perspective I'm subscribing to the QTI, so consciousness will 
always continue in some form (even if my memories don't last or I am reduced 
to an amoeba-level consciousness)--the water molecules are never created 
or destroyed. For what would happen when an observer is duplicated from a 
third-person perspective, it might help to consider the example I discussed 
on the 'Last-minute vs. anticipatory quantum immortality' thread at 
http://www.escribe.com/science/theory/m4841.html , where a person is 
initially duplicated before a presidential election, and then depending on 
the results of the election, one duplicate is later copied 999 times. All 
else being equal, I'd speculate that the initial 2-split would anticipate 
the later 999-split, so that 999 out of 1000 water molecules of the first 
observer would split off into the copy that is later going to be split 999 
times, so before this second split, OMs of this copy would have 999 times 
the absolute measure of the copy that isn't going to be split again. I'm not 
absolutely sure that this would be a consequence of the idea about finding a 
unique self-consistent set of absolute and conditional probabilities based 
only on a similarity matrix and the condition of absolute probabilities 
not changing with time, but it seems intuitive to me that it would. At some 
point I'm going to try to test this idea with mathematica or something, 
creating a finite set of OMs and deciding what the possible successors to 
each one are in order to construct something like a similarity matrix, 
then finding the unique vector of absolute probabilities that, when 
multiplied by this matrix, gives a unit vector (the procedure I discussed in 
my last post to you at http://www.escribe.com/science/theory/m6855.html ). 
Hopefully the absolute probabilities would indeed tend to anticipate 
future splits in the way I'm describing.
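
One way to sketch that test in Python, reading "the unique vector of
absolute probabilities that, when multiplied by this matrix, gives a unit
vector" as a stationary distribution of the row-normalized similarity
matrix; that reading, and the toy matrix below, are my assumptions rather
than a fixed part of the proposal.

import numpy as np

# Toy "similarity matrix" over a hypothetical finite set of four OMs; S[i, j]
# is a made-up weight saying that OM j qualifies as a successor of OM i.
S = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 0.0],
], dtype=float)

# Row-normalize so each row reads as conditional probabilities
# P(next OM = j | current OM = i).
P = S / S.sum(axis=1, keepdims=True)

# Absolute probabilities that do not change over time form a left eigenvector
# of P with eigenvalue 1 (a stationary distribution): pi = pi @ P.
eigvals, eigvecs = np.linalg.eig(P.T)
k = int(np.argmin(np.abs(eigvals - 1.0)))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()

print("stationary absolute probabilities:", pi)
print("unchanged after one step:", pi @ P)

With this particular toy matrix the first OM splits into two successors
that later merge again, and the stationary vector gives each of the two
intermediate copies half the measure of the OM they branch from.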


So if this anticipatory idea works, then any copy that's very unlikely to 
survive long from a third-person perspective is going to undergo fewer 
future splits from a multiverse perspective (there will always be a few 
branches where this copy survives, though), so your conditional probability 
of becoming such a copy would be low, meaning that not much of your water 
would flow into that copy, and it will have a smaller absolute measure than 
copies that are likely to survive in more branches.


Jesse




Re: more torture

2005-06-13 Thread Saibal Mitra
Because no such thing as free will exists, one has to consider three
different universes in which the three different choices are made. The three
universes will have comparable measures. The anthropic factor of 10^100 will
then dominate and will cause the observer to find himself having made choice
b) as one of the 10^100 copies in the minute without torture.


Saibal








- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Monday, June 13, 2005 01:00 PM
Subject: more torture


 I have been arguing in recent posts that the absolute measure of an
 observer moment (or observer, if you prefer) makes no possible difference
 at the first person level. A counterargument has been that, even if an
 observer cannot know how many instantiations of him are being run, it is
 still important in principle to take the absolute measure into account,
 for example when considering the total amount of suffering in the world.
 The following thought experiment shows how, counterintuitively, sticking
 to this principle may actually be doing the victims a disservice:

 You are one of 10 copies who are being tortured. The copies are all being
 run in lockstep with each other, as would occur if 10 identical computers
 were running 10 identical sentient programs. Assume that the torture is so
 bad that death is preferable, and so bad that escaping it with your life is
 only marginally preferable to escaping it by dying (eg., given the option
 of a 50% chance of dying or a 49% chance of escaping the torture and
 living, you would take the 50%). The torture will continue for a year, but
 you are allowed one of 3 choices as to how things will proceed:

 (a) 9 of the 10 copies will be chosen at random and painlessly killed,
 while the remaining copy will continue to be tortured.

 (b) For one minute, the torture will cease and the number of copies will
 increase to 10^100. Once the minute is up, the number of copies will be
 reduced to 10 again and the torture will resume as before.

 (c) the torture will be stopped for 8 randomly chosen copies, and continue
 for the other 2.

 Which would you choose? To me, it seems clear that there is an 80% chance
 of escaping the torture if you pick (c), while with (a) it is certain that
 the torture will continue, and with (b) it is certain that the torture will
 continue with only one minute of respite.

 Are there other ways to look at the choices? It might be argued that in (a)
 there is a 90% chance that you will be one of the copies who is killed, and
 thus a 90% chance that you will escape the torture, better than your
 chances in (c). However, even if you are one of the ones killed, this does
 not help you at all. If there is a successor observer moment at the moment
 of death, subjectively, your consciousness will continue. The successor OM
 in this case comes from the one remaining copy who is being tortured, hence
 guaranteeing that you will continue to suffer.

 What about looking at it from an altruistic rather than selfish viewpoint:
 isn't it better to decrease the total suffering in the world by 90% as in
 (a) rather than by 80% as in (c)? Before making plans to decrease
 suffering, ask the victims. All 10 copies will plead with you to choose (c).

 What about (b)? ASSA enthusiasts might argue that with this choice, an OM
 sampled randomly from the set of all possible OM's will almost certainly be
 from the one minute torture-free interval. What would this mean for the
 victims? If you interview each of the 10 copies before the minute starts,
 they will tell you that they are currently being tortured and they expect
 that they will get one minute respite, then start suffering again, so they
 wish the choice had been (c). Next, if you interview each of the 10^100
 copies they will tell you that the torture has stopped for exactly one
 minute by the torture chamber's clock, but they know that it is going to
 start again and they wish you had chosen (c). Finally, if you interview
 each of the 10 copies for whom the torture has recommenced, they will
 report that they remember the minute of respite, but that's no good to them
 now, and they wish you had chosen (c).

 --Stathis Papaioannou





Re: more torture

2005-06-13 Thread Hal Finney
IMO belief in the ASSA is tantamount to altruism.  The ASSA would imply
taking action based on its positive impact on the whole multiverse of
observer-moments (OMs).

We have had some discussion here and on the extropy-chat (transhumanist)
mailing list about two different possible flavors of altruism.  These are
sometimes called averagist vs totalist.

The averagist wants to maximize the average happiness of humanity.
He opposes measures that will add more people at the expense of decreasing
their average happiness.  This is a pretty common element among green
political movements.

The totalist wants to maximize the total happiness of humanity.
He believes that people are good and more people are better.  This
philosophy is less common but is sometimes associated with libertarian
or radical right wing politics.

These two ideas can be applied to observer-moments as well.  But both
of these approaches have problems if taken to the extreme.

For the extreme averagist, half the OMs are below average.  If they were
eliminated, the average would rise.  But again, half of the remaining
OMs would be below (the new, higher) average.  So again half should be
eliminated.  In the end you are left with the one OM with the highest
average happiness.  Eliminating almost every ounce of intelligence in
the universe hardly seems altruistic.

For the extreme totalist, the problem is that he will support adding
OMs as long as their quality of life is just barely above that which
would lead to suicide.  More OMs generally will decrease the quality
of life of others, due to competition for resources, so the result is
a massively overpopulated universe with everyone leading terrible lives.
This again seems inconsistent with the goals of altruism.

In practice it seems that some middle ground must be found.  Adding
more OMs is good, up to a point.  I don't know if anyone has a good,
objective measure that can be maximized for an effective approach to
altruism.

Let us consider these flavors of altruism in the case of Stathis' puzzle:

 You are one of 10 copies who are being tortured. The copies are all being 
 run in lockstep with each other, as would occur if 10 identical computers 
 were running 10 identical sentient programs. Assume that the torture is so 
 bad that death is preferable, and so bad that escaping it with your life is 
 only marginally preferable to escaping it by dying (eg., given the option of 
 a 50% chance of dying or a 49% chance of escaping the torture and living, 
 you would take the 50%). The torture will continue for a year, but you are 
 allowed one of 3 choices as to how things will proceed:

 (a) 9 of the 10 copies will be chosen at random and painlessly killed, while 
 the remaining copy will continue to be tortured.

 (b) For one minute, the torture will cease and the number of copies will 
 increase to 10^100. Once the minute is up, the number of copies will be 
 reduced to 10 again and the torture will resume as before.

 (c) the torture will be stopped for 8 randomly chosen copies, and continue 
 for the other 2.

 Which would you choose?

For the averagist, doing (a) will not change average happiness.  Doing
(b) will improve it, but not that much.  The echoes of the torture and
anticipation of future torture will make that one minute of respite
not particularly pleasant.  Doing (c) would seem to be the best choice,
as 8 out of the 10 avoid a year of torture.  (I'm not sure why Stathis
seemed to say that the people would not want to escape their torture,
given that it was so bad.  That doesn't seem right to me; the worse it
is, the more they would want to escape it.)

For the totalist, since death is preferable to the torture, each
person's life has a negative impact on total happiness.  Hence (a)
would be an improvement as it removes these negatives from the universe.
Doing (b) is unclear: during that one minute, would the 10^100 copies
kill themselves if possible?  If so, their existence is negative and
so doing (b) would make the universe much worse due to the addition
of so many negatively happy OMs.  Doing (c) would seem to be better,
assuming that the 8 out of 10 would eventually find that their lives
were positive during that year without torture.

So it appears that each one would choose (c), although they would differ
about whether (a) is an improvement over the status quo.

(b) is deprecated because that one minute will not be pleasant due to
the echoes of the torture.  If the person could have his memory wiped
for that one minute and neither remember nor anticipate future torture,
that would make (b) the best choice for both kinds of altruists.  Adding
10^100 pleasant observer-moments would increase both total and average
happiness and would more than compensate for a year of suffering for 10
people.  10^100 is a really enormous number.
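
A toy numerical rendering of the two altruisms in Python; every utility
value and population size below is an assumption invented so that the
output matches the qualitative ranking argued above (10**6 stands in for
the unmanageable 10^100), and different numbers would change the ranking.

TORTURE = -100.0   # a year of torture, per copy (assumed)
RESPITE = -20.0    # the not-so-pleasant minute of respite (assumed)
FREE    = 50.0     # a year free of torture (assumed)

options = {
    "status quo": [TORTURE] * 10,
    "(a) kill 9": [TORTURE] * 1,                       # the 9 dead drop out
    "(b) 10^100": [TORTURE] * 10 + [RESPITE] * 10**6,
    "(c) stop 8": [TORTURE] * 2 + [FREE] * 8,
}

for name, utils in options.items():
    avg = sum(utils) / len(utils)    # what the averagist maximizes
    tot = sum(utils)                 # what the totalist maximizes
    print(f"{name:12s} average = {avg:10.2f}   total = {tot:14.1f}")

With these numbers the averagist sees no change under (a), an improvement
under (b), and the best average under (c); the totalist sees (a) as an
improvement, (b) as a catastrophe, and (c) as best.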

Hal Finney