Re: Many Pasts? Not according to QM...

2005-06-13 Thread Bruno Marchal


On 12 June 2005, at 06:30, Jesse Mazer wrote:

My speculation is that p(y -> x) would depend on a combination of some 
function that depends only on intrinsic features of the description of 
x and y--how similar x is to y, basically, the details to be 
determined by some formal theory of consciousness (or 'theory of 
observer-moments', perhaps)--and the absolute probability of x, since 
if two possible future OMs x and x' are equally similar to my 
current OM y, then I'd expect that if x had a higher absolute measure 
than x' (perhaps x' involves an experience of a 'white rabbit' event), 
then p(y -> x) would be larger than p(y -> x').


To Jesse: You apparently completely separate the probability of x and 
x' from the similarity of x and x'.

I am not sure that makes sense to me.
In particular, how could x and x' be similar if x', but not x, involves 
a 'white rabbit' event?


To Hal: I don't understand how an OM could write a letter. Writing a 
letter, it seems to me, involves many OMs. Evolution, and more 
generally computational history, *is* what gives sense to any of the 
OMs. What evolves is more real than the many (subjective or objective) 
bricks on which evolution proceeds and on which histories locally rely.


To Russell: I don't understand what you mean by a 'conscious 
description'. Even the expression 'conscious machine' can be misleading 
at some point in the reasoning. It is really some person, who can be 
(with comp) associated relatively to a machine/machine-history, who can 
be conscious.
Imo, only a person can be conscious. Even the notion of OM, as it is 
used in most of the recent posts, seems to me to be a construction of 
the mind of some person. It is personhood which makes it possible to 
attribute some sense to our many living 1-person OMs.


Bruno


http://iridia.ulb.ac.be/~marchal/




Re: Many Pasts? Not according to QM...

2005-06-13 Thread Bruno Marchal


On 12 June 2005, at 14:48, Stathis Papaioannou wrote:



Bruno Marchal writes:


But the basic idea is simple perhaps: Suppose I must choose between

a) I am 3-multiplied into ten exemplars. One will get an orange juice 
and 9 will be tortured.


b) I am 3-multiplied into ten exemplars. One will be tortured, and 9 
will get a glass of orange juice instead.


OK. Now, with comp, strictly speaking the 1-uncertainty is 
ill-defined, indeed, because the uncertainty bears on the maximal 
histories. Without more precision I would choose b.
But if you tell me in advance that all the 9 guys in b who got the 
orange juice will merge (after artificial amnesia of the details 
which differ in their experiences), and/or if you tell me also that 
the one who will be tortured will be 3-multiplied by 1000 after the 
torture, this changes the number of relative histories going through 
the 1-state 'orange-juice' or 'tortured' in such a way that it would 
be better for me to choose a. Obviously other multiplication events in 
the future could also change this, so that to know the 'real' 
probabilities, in principle you must evaluate the whole histories 
going through the states.
To be sure, the reasoning of Stathis is still 100% correct with comp 
for what he wants to illustrate, but such a probability calculus should 
not be considered as a means of evaluating 'real' probabilities. When 
you look at the math, this can be described as a conflict between local 
information and global information. It is anything but simple. Today I 
have only solved the probability-1 case, and it is enough for 
seeing how quantum probability could be justified by comp alone. But 
even this case leads to open math questions. It is tricky in QM too.


I was with you until you proposed that the tortured copy in (a) be 
multiplied 1000-fold or the 9 orange juice copies in (b) be merged. I 
would *still* choose (a) in these situations. I look at it in two 
steps. The first step is exactly the same as without the 
multiplying/merging, so at this point (a) is better. If you had then 
proposed something like 'the orange juice copies will then be 
tortured', then that would have made a difference to my choice. What 
you in fact proposed is that the absolute measure of the tortured 
copies be subsequently increased or the absolute measure of the orange 
juice copies be subsequently decreased. I would argue that changing 
the absolute measure in this way can make no possible first person 
difference; or, equivalently, that multiplying or reducing the number 
of instantiations of an observer moment makes no possible first person 
difference - it's all the one observer moment.


Yes, but this leads to paradoxes. It can be shown that all OMs have the 
same measure in the running of the UD, or that there is no measure at 
all. The relative measure of OM2 relative to OM1 will be given by the 
density of computations going from OM1 to OM2.
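A toy way to picture this 'density of computations' idea (my 
illustration only: the three states, the transition graph, and the 
uniform weighting of finite histories are all invented; nothing here 
is the actual UD construction):

GRAPH = {'OM1': ['OM2', 'OM3'],
         'OM2': ['OM1', 'OM3'],
         'OM3': ['OM1']}

def histories(start, length):
    """Enumerate every transition history of the given length."""
    paths = [[start]]
    for _ in range(length):
        paths = [p + [nxt] for p in paths for nxt in GRAPH[p[-1]]]
    return paths

# "Density of computations going from OM1 to OM2": the fraction of
# histories through OM1 that pass through OM2 afterwards.
hists = histories('OM1', 8)
reach_om2 = [h for h in hists if 'OM2' in h[1:]]
print(len(reach_om2) / len(hists))

With comp the histories would be the UD's computations rather than 
walks on a finite graph, so this only gestures at the definition.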




What does make a difference is the *relative* measure of candidate 
successor OM's, and it is crucial that this refers to the transition 
from one OM to the next.


Strictly speaking I agree, but then I am taking the opportunity of the 
ambiguous nature of your statement.



This is simply because that is how our minds perceive the passage of 
time and construct the illusion of a single individual who maintains 
his identity over time.



I agree intuitively, but here I have a problem: for technical reasons I 
disbelieve in intuition at this stage. At this stage I cannot *not* 
take into account the reversal which makes the passage of time 
secondary to the way the 1-computations (the web of arithmetical 
dreams) are coherent. Of course this is probably highly 
counter-intuitive, and that's why I turn to the math. I have said this 
recurrently on the list. The thought experiments are good for making us 
doubt many prejudices. But to build a theory, at some point it is 
necessary to be utterly clear on what we assume or not, and to be open 
to the possibility that the consequences of the theory are in 
contradiction with what intuition told us. After all, this has already 
happened more than once with modern physics. I am not sure at all I can 
follow you when you describe how our minds perceive the passage of 
time. I have learned to accept that the notion of single individual is 
less an illusion than time, space and all physical modalities. But I 
know it is counter-intuitive, and that is the reason I have eventually 
decided to interview the lobian machine, to take into account the 
Lob-Godel incompleteness (which is counter-intuitive at its roots). 
Sorry if this looks a little bit like an argument from authority, but I 
can explain all the details if you are willing to cure your math 
anxiety ...
As I said once, common sense is the only tool we have to go a little 
bit beyond ... common sense.


Bruno

http://iridia.ulb.ac.be/~marchal/




more torture

2005-06-13 Thread Stathis Papaioannou
I have been arguing in recent posts that the absolute measure of an observer 
moment (or observer, if you prefer) makes no possible difference at the 
first person level. A counterargument has been that, even if an observer 
cannot know how many instantiations of him are being run, it is still 
important in principle to take the absolute measure into account, for 
example when considering the total amount of suffering in the world. The 
following thought experiment shows how, counterintuitively, sticking to this 
principle may actually be doing the victims a disservice:


You are one of 10 copies who are being tortured. The copies are all being 
run in lockstep with each other, as would occur if 10 identical computers 
were running 10 identical sentient programs. Assume that the torture is so 
bad that death is preferable, and so bad that escaping it with your life is 
only marginally preferable to escaping it by dying (e.g., given the option of 
a 50% chance of dying or a 49% chance of escaping the torture and living, 
you would take the 50%). The torture will continue for a year, but you are 
allowed one of 3 choices as to how things will proceed:


(a) 9 of the 10 copies will be chosen at random and painlessly killed, while 
the remaining copy will continue to be tortured.


(b) For one minute, the torture will cease and the number of copies will 
increase to 10^100. Once the minute is up, the number of copies will be 
reduced to 10 again and the torture will resume as before.


(c) the torture will be stopped for 8 randomly chosen copies, and continue 
for the other 2.


Which would you choose? To me, it seems clear that there is an 80% chance of 
escaping the torture if you pick (c), while with (a) it is certain that the 
torture will continue, and with (b) it is certain that the torture will 
continue with only one minute of respite.
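To make the disagreement explicit, here is a toy ledger of the three 
choices (all numbers are invented for illustration: one tortured 
copy-minute counts as one unit of suffering, respite counts zero, and 
the reading that killed copies subjectively continue as the survivor 
is taken from the argument below):

# Toy bookkeeping for the three choices over a one-year horizon.
YEAR_MIN = 365 * 24 * 60  # minutes in the year of torture

choices = {
    'a': {'first_person_escape': 0.0,   # killed copies continue as the
                                        # survivor (QTI reading below)
          'tortured_copy_minutes': 1 * YEAR_MIN},
    'b': {'first_person_escape': 0.0,   # all 10 are tortured again; the
                                        # 10^100 respite copies add zero
          'tortured_copy_minutes': 10 * (YEAR_MIN - 1)},
    'c': {'first_person_escape': 0.8,   # 8 of 10 copies walk free
          'tortured_copy_minutes': 2 * YEAR_MIN},
}

for name, d in sorted(choices.items()):
    print(f"({name}) first-person escape chance "
          f"{d['first_person_escape']:.0%}, total suffering "
          f"{d['tortured_copy_minutes']:,} copy-minutes")

On these numbers the total-suffering ledger favours (a), while every 
victim's first-person expectation favours (c), which is exactly the 
tension this post is pointing at.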


Are there other ways to look at the choices? It might be argued that in (a) 
there is a 90% chance that you will be one of the copies who is killed, and 
thus a 90% chance that you will escape the torture, better than your chances 
in (c). However, even if you are one of the ones killed, this does not help 
you at all. If there is a successor observer moment at the moment of death, 
subjectively, your consciousness will continue. The successor OM in this 
case comes from the one remaining copy who is being tortured, hence 
guaranteeing that you will continue to suffer.


What about looking at it from an altruistic rather than selfish viewpoint: 
isn't it better to decrease the total suffering in the world by 90% as in 
(a) rather than by 80% as in (c)? Before making plans to decrease suffering, 
ask the victims. All 10 copies will plead with you to choose (c).


What about (b)? ASSA enthusiasts might argue that with this choice, an OM 
sampled randomly from the set of all possible OM's will almost certainly be 
from the one minute torture-free interval. What would this mean for the 
victims? If you interview each of the 10 copies before the minute starts, 
they will tell you that they are currently being tortured and they expect 
that they will get one minute respite, then start suffering again, so they 
wish the choice had been (c). Next, if you interview each of the 10^100 
copies they will tell you that the torture has stopped for exactly one 
minute by the torture chamber's clock, but they know that it is going to 
start again and they wish you had chosen (c). Finally, if you interview each 
of the 10 copies for whom the torture has recommenced, they will report that 
they remember the minute of respite, but that's no good to them now, and 
they wish you had chosen (c).


--Stathis Papaioannou





Re: more torture

2005-06-13 Thread Bruno Marchal
I agree with everything you say in this post, but I am not sure that 
settles the issue. It does not change my mind on the preceding post 
where we were disagreeing, which was that IF I must choose between


A) being split between 9 finite hells and 1 infinite paradise
B) being split between 1 infinite hell and 9 finite paradises

where 'finite' and 'infinite' refer to the number of computational 
steps simulating the stories of those hells and paradises, THEN I 
should choose A.
This is because all finite stories have a measure 0. Infinite 
stories, by their natural DU multiplications, will have a measure one.

But we are on the verge of inconsistency, because in practice there is 
no way to guarantee anything like the finiteness of any computation 
going through our states (this is akin to the insolubility of the 
self-stopping problem for a sufficiently rich (lobian) turing machine).


The idea that I try to convey is that if I am in state S1, the 
probability of some next state S2 depends on the proportion, among the 
infinite stories going through S1, of those *infinite* stories going 
also through S2. And all finite stories must be discounted.


(It is not necessary that I remain personally immortal in those 
infinite stories; the measure is given by the stories going through my 
states even if I have a finite 3-life-time in all of those stories.)
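A crude numerical picture of 'all finite stories must be discounted' 
(my toy: the states, the per-step halting chance, and the finite 
horizon standing in for infinity are all invented):

import random

random.seed(0)
STATES = ['S1', 'S2', 'S3']
P_HALT = 0.02    # invented per-step chance that a story halts (stays finite)
HORIZON = 40     # finite stand-in for "infinite" stories

def story(start):
    """One random story through the states; returns (visited, halted)."""
    current, visited = start, {start}
    for _ in range(HORIZON):
        if random.random() < P_HALT:
            return visited, True       # a finite story: to be discounted
        current = random.choice(STATES)
        visited.add(current)
    return visited, False

survivors = [v for v, halted in (story('S1') for _ in range(10_000))
             if not halted]
print("P(S2 | S1) among surviving stories:",
      sum('S2' in v for v in survivors) / len(survivors))

The conditional probability is computed only over the stories that 
never halt; the halting ones contribute nothing, which is the 
measure-0 discounting in miniature.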


(btw, this entails also that comp implies at least an infinite past 
and/or future for any universe supporting our present story).


[Note that here I am going far ahead of what I can ask of the lobian 
machine, because our talk involves quantifiers on stories and that's 
very complex to handle. Well, to be sure, I have till now only been 
able to translate the case of probability one in machine terms; but it 
is enough to extract non-trivial information on the logic of observable 
propositions.]




Bruno
 



On 13 June 2005, at 13:00, Stathis Papaioannou wrote:

[Stathis' 'more torture' post quoted in full; see above.]

Re: Many Pasts? Not according to QM...

2005-06-13 Thread Jesse Mazer

Bruno Marchal:


To Jesse: You apparently completely separate the probability of x and x' 
from the similarity of x and x'.

I am not sure that makes sense to me.
In particular, how could x and x' be similar if x', but not x, involves 
a 'white rabbit' event?


It's not completely separable, but I'd think that similarity would mostly 
be a function of memories, personality, etc...even if I experience something 
very weird, I can still have basically the same mind. For example, a hoaxer 
could create a realistic animatronic talking white rabbit, and temporarily I 
might experience an observer-moment identical to what I'd experience if I 
saw a genuine white talking rabbit, so the similarity between my current 
experience and what I'd experience in a white rabbit universe would be the 
same as the similarity between my current experience and what I'd 
experience in a universe where someone creates a realistic hoax. I don't 
think the first-person probabilities of experiencing hoaxes are somehow kept 
lower than what you'd expect from a third-person perspective, do you?


Jesse




Re: Observer-Moment Measure from Universe Measure

2005-06-13 Thread Bruno Marchal

Hi Brent,

You didn't answer my last post where I explained that Bp is different 
from Bp & p.
I hope you were not too much disturbed by my 'teacher's tone' (which 
can be enervating, I imagine). Or is it because you don't recognize the 
modal form of Godel's theorem:


~Bf -> ~B(~Bf),

which is equivalent to B(Bf -> f) -> Bf, by simple contraposition 
(p -> q is equivalent to ~q -> ~p), and using also that ~p is 
equivalent to p -> f, where f is put for 'false'.
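Spelled out step by step (this just unpacks the two moves named above, 
in classical modal logic):

\begin{align*}
\neg B f \rightarrow \neg B(\neg B f)
  &\iff B(\neg B f) \rightarrow B f
  && \text{(contraposition: } p \rightarrow q \iff \neg q \rightarrow \neg p\text{)}\\
  &\iff B(B f \rightarrow f) \rightarrow B f
  && \text{(replace } \neg B f \text{ by } B f \rightarrow f\text{)}
\end{align*}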


This shows that for a consistent (~Bf) machine, although Bf -> f is 
true, it cannot be proved by the machine. Now (Bf & f) -> f is provable 
trivially. So Bf and Bf & f are not equivalent for the machine

(although they are for the guardian angel).


Bruno


http://iridia.ulb.ac.be/~marchal/



Re: Many Pasts? Not according to QM...

2005-06-13 Thread Jesse Mazer

Bruno Marchal wrote:



[The earlier Bruno/Jesse exchange quoted in full; see above.]


Perhaps I misunderstood you, but it seems to me that in case you ask me 
to compute P(x -> y) (your notation), it could and even should change 
that prediction result. In particular, if the rabbit has been generated 
by a genuine hoaxer I would predict the white rabbit will stay in y, 
and if the hoaxer is not genuine, then I would still consider x and x' 
as rather very dissimilar. What do you think? This follows *also* from 
a relativisation of Hal Finney's theory based on Kolmogorov complexity: 
a stable white rabbit is expensive in information resource. No?


Well, note that following Hal's notation, I was actually assuming y came 
before x (or x'), and I was calculating P(y -> x). And your terminology 
is confusing to me here--when you say 'if the hoaxer is not genuine', do 
you mean that the white rabbit wasn't a hoax but was a genuine talking 
rabbit (in which case no hoaxer is involved at all), or do you mean that 
the white rabbit *was* a hoax? If the latter, then what do you mean when 
you say 'if the rabbit had been generated by a genuine hoaxer'--is the 
white rabbit real, or is it a hoax in this case? Also, when you say 
you'd consider x and x' as very dissimilar, do you mean from each other 
or from y? Remember that 'similar' is just the word I use for continuity 
of personal identity, how much two successive experiences make sense as 
being successive OMs of the same person; it doesn't refer to whether the 
two sensory experiences are themselves similar or dissimilar. If I'm 
here looking at my computer, then suddenly close my eyes, the two 
successive experiences will be quite dissimilar in terms of the sensory 
information I'm taking in, but they'll still be similar in terms of my 
background memories, personality, etc., so they make sense as successive 
OMs of the same person. On the other hand, if I'm sitting at my computer 
and suddenly my brain is replaced with the brain of George W. Bush, 
there will be very little continuity of identity despite the fact that 
the sensory experiences of both OMs would be pretty similar, so in my 
terminology there would be very little similarity between these two OMs.


As for the cost of simulating a white rabbit universe, I agree it's more 
expensive than simulating a non-white rabbit universe, but I don't see how 
this relates to continuity of identity when experiencing white rabbits vs. 
not experiencing them.
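As an aside, that expense can be illustrated crudely (my toy; zlib 
compression is used as a computable stand-in for Kolmogorov complexity, 
which is itself uncomputable):

import os
import zlib

# A "lawful" history: a simple repeating regularity.
lawful = b"the field is empty and stays empty. " * 30

# A "white rabbit" history: the same law, plus a sustained anomaly
# obeying no rule of the background (random bytes as a stand-in).
rabbit = lawful[:500] + os.urandom(200) + lawful[500:]

for name, data in [("lawful history", lawful),
                   ("white rabbit history", rabbit)]:
    print(name, len(data), "bytes ->", len(zlib.compress(data)), "compressed")

The anomalous stretch is incompressible, so the rabbit history needs a 
longer description for the same length, which is roughly what 
'expensive in information resource' cashes out to.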


Jesse




Re: more torture

2005-06-13 Thread Bruno Marchal

Hi Quentin,


Concerning the finite/infinite number of steps, it seems to me that it 
is always possible to have a computation that will take an infinite 
number of steps to arrive at a particular state, since for any state 
there exists an infinity of computational histories which go through 
it, so it seems to me that some of them need infinite steps... Do I 
miss something?



Yes. My fault. I was not clear enough. First, it is obvious that any 
state in the execution of the DU is reached by only a finite 
3-computation. But, as you say, any state belongs to an infinite set of 
computations, and this justifies that from the first person point of 
view we can have an infinite past. I should have mentioned the 1-3 
difference. Apologies. (But I would not say that some state *needs* an 
infinite 3-computation; that one would not even be generated by the DU.)
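To make the 3-finiteness concrete, here is a minimal dovetailer sketch 
(my toy: the 'programs' are fake stand-ins that just emit numbered 
states, since only the scheduling matters here):

import itertools

def program(i):
    """Toy stand-in for the i-th program: an endless stream of states."""
    for n in itertools.count():
        yield (i, n)        # the n-th state of program i

def dovetail():
    """UD-style schedule: step program 0; then 0-1; then 0-2; ...
    State (i, n) is emitted in round i + n, i.e. after finitely many
    scheduler steps, although no program ever halts."""
    running = []
    for k in itertools.count():
        running.append(program(k))
        for g in running:
            yield next(g)

for state in itertools.islice(dovetail(), 15):
    print(state)

Each emitted state is reached 3-finitely; the first-person infinity 
comes only from the unboundedly many computations passing through a 
given state, which this sketch does not try to show.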


Regards,
Bruno




[The rest of Quentin's message, which quoted Bruno's 'Re: more torture' 
post and Stathis' original post in full, is trimmed; see above.]

Re: Many Pasts? Not according to QM...

2005-06-13 Thread Bruno Marchal
Oops, sorry. I did misunderstand you. Thanks for the clarification. I 
agree with your preceding post to Hal now.


Bruno


On 13 June 2005, at 16:23, Jesse Mazer wrote:

[Jesse's message quoted in full; see above.]




http://iridia.ulb.ac.be/~marchal/




Re: more torture

2005-06-13 Thread rmiller

At 06:00 AM 6/13/2005, Stathis Papaioannou wrote:
[Stathis' 'more torture' post quoted in full; see above.]

RM writes. . .
Here is my criterion: there are those who suggest that there is only 
one electron in the universe, but that it travels forward and backward 
in time, thus making multiple copies of itself. If the individual 
percipient would eventually have to experience the pain and suffering 
of all whom he had affected--or caused to experience pain and 
suffering--then the most selfish, altruistic *and* sensible choice 
would be (c).


Rich Miller




RE: more torture

2005-06-13 Thread Hal Finney
Jesse Mazer writes:
 If you impose the condition I discussed earlier that absolute probabilities 
 don't change over time, or in terms of my analogy, that the water levels in 
 each tank don't change because the total inflow rate to each tank always 
 matches the total outflow rate, then I don't think it's possible to make 
 sense of the notion that the observer-moments in that torture-free minute 
 would have 10^100 times greater absolute measure. If there's 10^100 times 
 more water in the tanks corresponding to OMs during that minute, where does 
 all this water go after the tank corresponding to the last OM in this 
 minute, and where is it flowing in from to the tank corresponding to the 
 first OM in this minute?

I would propose to implement the effect by duplicating the guy 10^100 times
during that minute, then terminating all the duplicates after that time.

What happens in your model when someone dies in some fraction of the
multiverse?  His absolute measure decreases, but where does the now-excess
water go?

Hal Finney



RE: more torture

2005-06-13 Thread Jesse Mazer

Hal Finney wrote:


Jesse Mazer writes:

[Jesse's earlier 'water tank' paragraph quoted; see above.]

I would propose to implement the effect by duplicating the guy 10^100 times
during that minute, then terminating all the duplicates after that time.

What happens in your model when someone dies in some fraction of the
multiverse?  His absolute measure decreases, but where does the now-excess
water go?


In my model, death only exists from a third-person perspective, but from 
a first-person perspective I'm subscribing to the QTI, so consciousness 
will always continue in some form (even if my memories don't last or I 
am reduced to an amoeba-level consciousness)--the 'water molecules' are 
never created or destroyed. For what would happen when an observer is 
duplicated from a third-person perspective, it might help to consider 
the example I discussed on the 'Last-minute vs. anticipatory quantum 
immortality' thread at http://www.escribe.com/science/theory/m4841.html 
, where a person is initially duplicated before a presidential election, 
and then depending on the results of the election, one duplicate is 
later copied 999 times. All else being equal, I'd speculate that the 
initial 2-split would anticipate the later 999-split, so that 999 out of 
1000 'water molecules' of the first observer would split off into the 
copy that is later going to be split 999 times; so before this second 
split, OMs of this copy would have 999 times the absolute measure of the 
copy that isn't going to be split again. I'm not absolutely sure that 
this would be a consequence of the idea about finding a unique 
self-consistent set of absolute and conditional probabilities based only 
on a 'similarity matrix' and the condition of absolute probabilities not 
changing with time, but it seems intuitive to me that it would. At some 
point I'm going to try to test this idea with mathematica or something, 
creating a finite set of OMs and deciding what the possible successors 
to each one are in order to construct something like a 'similarity 
matrix', then finding the unique vector of absolute probabilities that, 
when multiplied by this matrix, gives a unit vector (the procedure I 
discussed in my last post to you at 
http://www.escribe.com/science/theory/m6855.html ). Hopefully the 
absolute probabilities would indeed tend to anticipate future splits in 
the way I'm describing.
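Here is a sketch of that numerical test in Python rather than 
Mathematica (the three OMs and their transition weights are invented, 
and treating the conditional probabilities as a Markov transition 
matrix whose stationary vector gives the absolute measures is one 
concrete reading of the procedure, not the only one):

import numpy as np

# Invented toy: 3 OMs; row i gives the conditional probabilities of
# each successor OM, so rows sum to 1.
P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.5, 0.4, 0.1]])

# Absolute measures that don't change with time: the stationary
# vector pi with pi @ P = pi (inflow to each "tank" = outflow).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print("absolute measures:", pi.round(3))
print("unchanged by one step:", np.allclose(pi @ P, pi))

Perturbing a row so that one OM acquires many successors, and watching 
the stationary vector shift toward that branch, would be the direct 
test of the anticipation effect.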


So if this anticipatory idea works, then any copy that's very unlikely 
to survive long from a third-person perspective is going to undergo 
fewer future splits from a multiverse perspective (there will always be 
a few branches where this copy survives, though), so your conditional 
probability of becoming such a copy would be low, meaning that not much 
of your 'water' would flow into that copy, and it will have a smaller 
absolute measure than copies that are likely to survive in more branches.


Jesse




Re: more torture

2005-06-13 Thread Saibal Mitra
Because no such thing as free will exists, one has to consider three
different universes in which the three different choices are made. The 
three universes will have comparable measures. The anthropic factor of 
10^100 will then dominate and will cause the observer to find himself 
having made choice (b), as one of the 10^100 copies in the minute 
without torture.
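A back-of-envelope version of that dominance (my arithmetic, on the 
assumption that an OM's weight is simply number of copies times 
duration):

\[
P(\text{respite OM} \mid b) \approx
\frac{10^{100} \times 1\,\text{min}}
     {10^{100} \times 1\,\text{min} + 10 \times 525{,}600\,\text{min}}
\approx 1 - 5.3 \times 10^{-94}
\]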


Saibal








- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Monday, June 13, 2005 01:00 PM
Subject: more torture


[Stathis' 'more torture' post quoted in full; see above.]




No torture

2005-06-13 Thread aet.radal ssg
"No tortue".
Now, sit and contemplate if you felt a difference when, after reading message after message with the opposite words in it, and then suddenly you see "No tortue". 





Re: Observer-Moment Measure from Universe Measure

2005-06-13 Thread George Levy

Bruno Marchal wrote:


Godel's theorem:
~Bf -> ~B(~Bf),

which is equivalent to B(Bf -> f) -> Bf,




Just a little aside a la Descartes + Godel (assume that 'think' and 
'believe' are synonymous and that f = 'you are'):


B(Bf -> f) -> Bf can be rendered as:
'If you believe that if you think that you are, therefore you are, then 
you think you are.'


That's what Descartes thought!

:-) George



Re: more torture

2005-06-13 Thread Hal Finney
IMO belief in the ASSA is tantamount to altruism.  The ASSA would imply
taking action based on its positive impact on the whole multiverse of
observer-moments (OMs).

We have had some discussion here and on the extropy-chat (transhumanist)
mailing list about two different possible flavors of altruism.  These 
are sometimes called 'averagist' vs. 'totalist'.

The averagist wants to maximize the average happiness of humanity.
He opposes measures that will add more people at the expense of decreasing
their average happiness.  This is a pretty common element among green
political movements.

The totalist wants to maximize the total happiness of humanity.
He believes that people are good and more people are better.  This
philosophy is less common but is sometimes associated with libertarian
or radical right wing politics.

These two ideas can be applied to observer-moments as well.  But both
of these approaches have problems if taken to the extreme.

For the extreme averagist, half the OMs are below average.  If they were
eliminated, the average would rise.  But again, half of the remaining
OMs would be below (the new, higher) average.  So again half should be
eliminated.  In the end you are left with the single happiest OM.
Eliminating almost every ounce of intelligence in the universe hardly
seems altruistic.

For the extreme totalist, the problem is that he will support adding
OMs as long as their quality of life is just barely above that which
would lead to suicide.  More OMs generally will decrease the quality
of life of others, due to competition for resources, so the result is
a massively overpopulated universe with everyone leading terrible lives.
This again seems inconsistent with the goals of altruism.

In practice it seems that some middle ground must be found.  Adding
more OMs is good, up to a point.  I don't know if anyone has a good,
objective measure that can be maximized for an effective approach to
altruism.
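A toy rendering of the two maximands (every number is invented: five
OMs with happiness scores, a suicide threshold at 0, and a candidate
OM barely above it):

# Invented happiness scores for a tiny population of OMs.
oms = [5.0, 3.0, 1.0, -2.0, 0.5]
candidate = 0.1   # a life just barely worth living

def average_happiness(pop):
    return sum(pop) / len(pop)

def total_happiness(pop):
    return sum(pop)

for name, metric in [("averagist", average_happiness),
                     ("totalist", total_happiness)]:
    before, after = metric(oms), metric(oms + [candidate])
    verdict = "add the OM" if after > before else "don't add it"
    print(f"{name}: {before:.2f} -> {after:.2f}, so {verdict}")

The same mechanics drive both pathologies above: the averagist metric
rises whenever a below-average OM is deleted, and the totalist metric
rises for any OM above the suicide threshold, however slightly.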

Let us consider these flavors of altruism in the case of Stathis' puzzle:

 You are one of 10 copies who are being tortured. The copies are all being 
 run in lockstep with each other, as would occur if 10 identical computers 
 were running 10 identical sentient programs. Assume that the torture is so 
 bad that death is preferable, and so bad that escaping it with your life is 
 only marginally preferable to escaping it by dying (e.g., given the option of 
 a 50% chance of dying or a 49% chance of escaping the torture and living, 
 you would take the 50%). The torture will continue for a year, but you are 
 allowed one of 3 choices as to how things will proceed:

 (a) 9 of the 10 copies will be chosen at random and painlessly killed, while 
 the remaining copy will continue to be tortured.

 (b) For one minute, the torture will cease and the number of copies will 
 increase to 10^100. Once the minute is up, the number of copies will be 
 reduced to 10 again and the torture will resume as before.

 (c) the torture will be stopped for 8 randomly chosen copies, and continue 
 for the other 2.

 Which would you choose?

For the averagist, doing (a) will not change average happiness.  Doing
(b) will improve it, but not that much.  The echoes of the torture and
anticipation of future torture will make that one minute of respite
not particularly pleasant.  Doing (c) would seem to be the best choice,
as 8 out of the 10 avoid a year of torture.  (I'm not sure why Stathis
seemed to say that the people would not want to escape their torture,
given that it was so bad.  That doesn't seem right to me; the worse it
is, the more they would want to escape it.)

For the totalist, since death is preferable to the torture, each
person's life has a negative impact on total happiness.  Hence (a)
would be an improvement as it removes these negatives from the universe.
Doing (b) is unclear: during that one minute, would the 10^100 copies
kill themselves if possible?  If so, their existence is negative and
so doing (b) would make the universe much worse due to the addition
of so many negatively happy OMs.  Doing (c) would seem to be better,
assuming that the 8 out of 10 would eventually find that their lives
were positive during that year without torture.

So it appears that each one would choose (c), although they would differ
about whether (a) is an improvement over the status quo.

(b) is deprecated because that one minute will not be pleasant due to
the echoes of the torture.  If the person could have his memory wiped
for that one minute and neither remember nor anticipate future torture,
that would make (b) the best choice for both kinds of altruists.  Adding
10^100 pleasant observer-moments would increase both total and average
happiness and would more than compensate for a year of suffering for 10
people.  10^100 is a really enormous number.

Hal Finney



Re: Many Pasts? Not according to QM...

2005-06-13 Thread Russell Standish
On Mon, Jun 13, 2005 at 11:45:52AM +0200, Bruno Marchal wrote:
 
 To Russell: I don't understand what you mean by a 'conscious 
 description'. Even the expression 'conscious machine' can be misleading 
 at some point in the reasoning. 

A description could be conscious in the same way that, with
computationalism, a program might be conscious. With computationalism,
a certain program is considered conscious when run on an appropriate
UTM. However, as you showed in chapter 4 of your thesis, it is not
necessary to actually run the program on a physical machine. The
Church-Turing thesis and arithmetical platonism (my 'all description
strings' condition fulfills a similar role to arithmetical platonism)
are enough. Furthermore, if the conscious program _is_ a UTM in its own
right, it can run on itself (actually this is pretty much what my
reading of the Church-Turing thesis is). This obviates having to fix
the UTM. Perhaps this is the route into the anthropic principle.

This is a model of a conscious description, under the assumption of
computationalism. Perhaps this model can be extended to
non-computationalism, where a description is conscious if it is able
to interpret itself as conscious. I do not have a problem with observers
being capable of universal computation as a necessary precondition
here, should it be necessary.

Finally, there is the possibility that a concrete observer (the
noumenon) exists somewhere, and that conscious descriptions are
merely the anthropic shadow of the observer being observed by itself.


 It is really some person, who can be 
 (with comp) associated relatively to a machine/machine-history, who can 
 be conscious.
 Imo, only a person can be conscious. 

Isn't this the definition of a person? Or do you define personhood by
something else?

 Even the notion of OM, as it is 
 used in most of the recent posts, seems to me to be a construction of 
 the mind of some person. It is personhood which makes it possible to 
 attribute some sense to our many living 1-person OMs.
 
 Bruno
 
 
 http://iridia.ulb.ac.be/~marchal/
 

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02


