Re: language, cloning and thought experiments

2009-03-11 Thread Stathis Papaioannou

2009/3/11 Wei Dai :
>
> Jack Mallah wrote:
>> They might not, but I'm sure most would; maybe not exactly that U, but a
>> lot closer to it.
>
> Can you explain why you believe that?
>
>> No.  In U = Sum_i M_i Q_i, you sum over all the i's, not just the ones
>> that are similar to you.  Of course your Q_i (which is _your_ utility per
>> unit measure for the observer i) might be highly peaked around those that
>> are similar to you, but there's no need for a precise cutoff in
>> similarity.  And it's even very likely that it will have even higher peaks
>> around people that are not very much like you at all (these are the people
>> that you would sacrifice yourself for).
>>
>> By contrast, in your proposal for U, you do need a precise cutoff, for
>> which there is no justification.
>
> Ok, I see what you're saying, and it is a good point. But most people
> already have a personal identity that is sufficiently well-defined in the
> current environment where mind copying is not possible, so in practice
> deciding which i's to sum over isn't a serious problem (yet).

The same problem would apply to calculating probabilities. If one copy
of me will see heads and a million copies of me who have a one
millionth degree of similarity to me will see tails, what is my
expectation of heads? I suggest introducing a factor R, a number
between 0 and 1 representing the degree of similarity to the original:

Pr(H) = M1R1 / (M1R1 + M2R2) = (1*1) / (1*1 + 10^6*10^-6) = 1/2

The analogous equation for utility, where Q is the absolute utility
experienced by an individual copy, is then:

U = (M1R1Q1 + M2R2Q2) / (M1R1 + M2R2)
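These two formulas can be sketched in a few lines of Python (a toy illustration only; the numbers are the heads/tails example above, and the Q values in the usage line are invented):

```python
# Sketch of the R-weighted probability and utility proposed above.
# M = measure (number of copies), R = similarity to the original in [0, 1],
# Q = absolute utility experienced by an individual copy.

def pr_heads(m1, r1, m2, r2):
    """Pr(H) = M1*R1 / (M1*R1 + M2*R2)."""
    return (m1 * r1) / (m1 * r1 + m2 * r2)

def utility(m1, r1, q1, m2, r2, q2):
    """U = (M1*R1*Q1 + M2*R2*Q2) / (M1*R1 + M2*R2)."""
    return (m1 * r1 * q1 + m2 * r2 * q2) / (m1 * r1 + m2 * r2)

# One fully similar copy sees heads; a million copies with one-millionth
# similarity see tails:
p = pr_heads(1, 1.0, 10**6, 10**-6)   # (1*1) / (1*1 + 10^6 * 10^-6) = 1/2
```

Note that without the R factor the million dissimilar copies would swamp the calculation; with it, they contribute total weight 1, the same as the single faithful copy.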


-- 
Stathis Papaioannou

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-l...@googlegroups.com
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: language, cloning and thought experiments

2009-03-10 Thread George Levy
Jack,

You say "Q_i (which is _your_ utility per unit measure for the observer i)."
 This is an oxymoron. How can observer i know or care what YOUR Q 
(Quality) is? How can this observer feel what it feels being you? The 
only observer that matters in evaluating your Q is you as a 
self-observer. The sum is no sum at all:

U = M_o Q_o  where o = you as observer.

George

Wei Dai wrote:
> Jack Mallah wrote:
>   
>> They might not, but I'm sure most would; maybe not exactly that U, but a 
>> lot closer to it.
>> 
>
> Can you explain why you believe that?
>
>   
>> No.  In U = Sum_i M_i Q_i, you sum over all the i's, not just the ones 
>> that are similar to you.  Of course your Q_i (which is _your_ utility per 
>> unit measure for the observer i) might be highly peaked around those that 
>> are similar to you, but there's no need for a precise cutoff in 
>> similarity.  And it's even very likely that it will have even higher peaks 
>> around people that are not very much like you at all (these are the people 
>> that you would sacrifice yourself for).
>>
>> By contrast, in your proposal for U, you do need a precise cutoff, for 
>> which there is no justification.
>> 
>
> Ok, I see what you're saying, and it is a good point. But most people 
> already have a personal identity that is sufficiently well-defined in the 
> current environment where mind copying is not possible, so in practice 
> deciding which i's to sum over isn't a serious problem (yet).
>  





Re: language, cloning and thought experiments

2009-03-10 Thread Wei Dai

Jack Mallah wrote:
> They might not, but I'm sure most would; maybe not exactly that U, but a 
> lot closer to it.

Can you explain why you believe that?

> No.  In U = Sum_i M_i Q_i, you sum over all the i's, not just the ones 
> that are similar to you.  Of course your Q_i (which is _your_ utility per 
> unit measure for the observer i) might be highly peaked around those that 
> are similar to you, but there's no need for a precise cutoff in 
> similarity.  And it's even very likely that it will have even higher peaks 
> around people that are not very much like you at all (these are the people 
> that you would sacrifice yourself for).
>
> By contrast, in your proposal for U, you do need a precise cutoff, for 
> which there is no justification.

Ok, I see what you're saying, and it is a good point. But most people 
already have a personal identity that is sufficiently well-defined in the 
current environment where mind copying is not possible, so in practice 
deciding which i's to sum over isn't a serious problem (yet).
 





Re: language, cloning and thought experiments

2009-03-09 Thread Stathis Papaioannou

2009/3/9 Brent Meeker  wrote:
>
> Stathis Papaioannou wrote:
>> 2009/3/8 Brent Meeker :
>>
>>> And if it went to zero you certainly wouldn't know and wouldn't care.
>>
>> If I died I wouldn't be around to know or care, but I would care in
>> anticipation of dying, since it would radically alter my future
>> experiences by eliminating them. On the other hand, 1->1 or many->1
>> copying would leave my future experiences the same as if the
>> teleportation hadn't occurred.
>
> Only for one of you.  Many-1 of you would have different future experiences
> (according to this theory).  Why don't you care about the loss of those 
> experiences?

I meant the case of culling teleportation, where many identical copies
disappear and one copy appears. If 1->1 teleportation is OK for each
individual copy then many->1 should also be OK. In other words, each
of the many copies feels he survives as the one copy, so each of the
many copies is satisfied with the outcome.

>>You might say that this is an illusion
>> since the original me will actually be dead but the same illusion
>> occurs in ordinary life, and it is the circumstances under which the
>> illusion is preserved that interests me when I think about survival.
>
> I'm not sure what you mean by this.  What illusion of ordinary life do you 
> refer
> to?  That "you" are the same as the Stathis of last year?

Of last year or last night, as Quentin said.


-- 
Stathis Papaioannou




Re: language, cloning and thought experiments

2009-03-09 Thread Quentin Anciaux
2009/3/9 Brent Meeker 

>
> Stathis Papaioannou wrote:
> > 2009/3/8 Brent Meeker :
> >
> >> And if it went to zero you certainly wouldn't know and wouldn't care.
> >
> > If I died I wouldn't be around to know or care, but I would care in
> > anticipation of dying, since it would radically alter my future
> > experiences by eliminating them. On the other hand, 1->1 or many->1
> > copying would leave my future experiences the same as if the
> > teleportation hadn't occurred.
>
> Only for one of you.  Many-1 of you would have different future experiences
> (according to this theory).  Why don't you care about the loss of those
> experiences?
>
> >You might say that this is an illusion
> > since the original me will actually be dead but the same illusion
> > occurs in ordinary life, and it is the circumstances under which the
> > illusion is preserved that interests me when I think about survival.
>
> I'm not sure what you mean by this.  What illusion of ordinary life do you
> refer
> to?  That "you" are the same as the Stathis of last year?
>
> Brent



I think he means the same thing as what you mean when you say you are you
when you wake up in the morning and remember going to sleep the previous
day... that you remember and assert that you are you...




-- 
All those moments will be lost in time, like tears in rain.




Re: language, cloning and thought experiments

2009-03-08 Thread Brent Meeker

Stathis Papaioannou wrote:
> 2009/3/8 Brent Meeker :
> 
>> And if it went to zero you certainly wouldn't know and wouldn't care.
> 
> If I died I wouldn't be around to know or care, but I would care in
> anticipation of dying, since it would radically alter my future
> experiences by eliminating them. On the other hand, 1->1 or many->1
> copying would leave my future experiences the same as if the
> teleportation hadn't occurred. 

Only for one of you.  Many-1 of you would have different future experiences 
(according to this theory).  Why don't you care about the loss of those 
experiences?

>You might say that this is an illusion
> since the original me will actually be dead but the same illusion
> occurs in ordinary life, and it is the circumstances under which the
> illusion is preserved that interests me when I think about survival.

I'm not sure what you mean by this.  What illusion of ordinary life do you 
refer 
to?  That "you" are the same as the Stathis of last year?

Brent




Re: language, cloning and thought experiments

2009-03-07 Thread Stathis Papaioannou

2009/3/8 Brent Meeker :

> And if it went to zero you certainly wouldn't know and wouldn't care.

If I died I wouldn't be around to know or care, but I would care in
anticipation of dying, since it would radically alter my future
experiences by eliminating them. On the other hand, 1->1 or many->1
copying would leave my future experiences the same as if the
teleportation hadn't occurred. You might say that this is an illusion
since the original me will actually be dead but the same illusion
occurs in ordinary life, and it is the circumstances under which the
illusion is preserved that interests me when I think about survival.


-- 
Stathis Papaioannou




Re: language, cloning and thought experiments

2009-03-07 Thread Brent Meeker

Stathis Papaioannou wrote:
> 2009/3/8 Jack Mallah  wrote:
> 
>> It's not the addition then loss that's bad (since you end up with the same 
>> measure you started with); it's the loss.
>>
>> In the culling teleportation, both people are lost, which is doubly bad.  
>> Elsewhere, one new person appears, which is good, but not as good as there 
>> being two people.  So it's not a wash; it's a loss.
> 
> It's a loss that makes no difference to either of the two people that
> vanish, since both will feel that they have survived. This is not to
> say that the loss does not matter in any sense; for example, as a
> result there will be only half as many hands to chop wood. But the
> loss makes no difference to personal survival, which is what we are
> discussing here.
> 
>>> I don't agree with the way you calculate utility at all.
>> It's easy to say you don't agree but you haven't given an alternative.  
>> Precisely how would you calculate it?  U = ...
> 
> U = (Sum_i M_i Q_i) / (Sum_i M_i), as Wei Dai wrote. I don't
> understand your objection that this is less well defined than U =
> Sum_i M_i Q_i, since the variables are exactly the same in each case.
> However, stating a formula is simply another way of stating an
> opinion. The crux of the matter is that if neither I nor my copies
> will feel any different as a result of being culled, then being culled
> does not matter. It could be that my measure is being regularly halved
> every Tuesday and Thursday, and I have no way of knowing and no reason
> to care as long as it doesn't go to zero.
> 
> 

And if it went to zero you certainly wouldn't know and wouldn't care.

Brent




Re: language, cloning and thought experiments

2009-03-07 Thread Stathis Papaioannou

2009/3/8 Jack Mallah  wrote:

> It's not the addition then loss that's bad (since you end up with the same 
> measure you started with); it's the loss.
>
> In the culling teleportation, both people are lost, which is doubly bad.  
> Elsewhere, one new person appears, which is good, but not as good as there 
> being two people.  So it's not a wash; it's a loss.

It's a loss that makes no difference to either of the two people that
vanish, since both will feel that they have survived. This is not to
say that the loss does not matter in any sense; for example, as a
result there will be only half as many hands to chop wood. But the
loss makes no difference to personal survival, which is what we are
discussing here.

>> I don't agree with the way you calculate utility at all.
>
> It's easy to say you don't agree but you haven't given an alternative.  
> Precisely how would you calculate it?  U = ...

U = (Sum_i M_i Q_i) / (Sum_i M_i), as Wei Dai wrote. I don't
understand your objection that this is less well defined than U =
Sum_i M_i Q_i, since the variables are exactly the same in each case.
However, stating a formula is simply another way of stating an
opinion. The crux of the matter is that if neither I nor my copies
will feel any different as a result of being culled, then being culled
does not matter. It could be that my measure is being regularly halved
every Tuesday and Thursday, and I have no way of knowing and no reason
to care as long as it doesn't go to zero.


-- 
Stathis Papaioannou




Re: language, cloning and thought experiments

2009-03-07 Thread Günther Greindl

Stathis, Brent,

> There are two copies of me in perfect lockstep, A1 and A2. I'm one of
> these copies and not the other (though I don't know which). Suppose
> I'm A1 and I decide to teleport 100km away. That means A1 disappears
> and a new copy, B, appears 100km away. I'm happy, since I feel I've
> traveled 100km with little effort. If I am instead A2, I go through
> exactly the same process: A2 disappears, B appears 100km away, and I'm
> happy. If I'm A1 the presence or absence of A2 does not make any
> difference to me, and if I'm A2 the presence or absence of A1 doesn't
> make any difference to me. It doesn't make sense to say that the
> presence of the other A copy will diminish my chances of ending up as
> B, or diminish my quantity or quality of consciousness once I end up
> as B.

I think Brent's "vanishing" is meant to refer to 3rd person perspective.

From 1POV, Stathis is correct in pressing the button - he will never 
experience vanishing.

But if the button _really_ does some work - that is, reduces measure (it 
is only a thought experiment of course) - then under the 3-Plural-POV 
there must indeed be "vanishing" or termination or whatever; else there 
is no sense in which one could say measure has been reduced, if it is 
neither noticeable in the 3-Plural POV nor in the 1POV. (If one reduces 
measure strictly from a God's-eye viewpoint without it being 3-Plural 
noticeable, it seems to me that you would also have to reduce the measure 
of all people polyplicated with you - then symmetry would be restored; 
but that goes beyond the originally proposed thought experiment.)


Cheers,
Günther






Re: language, cloning and thought experiments

2009-03-07 Thread Jack Mallah


--- On Fri, 3/6/09, Wei Dai  wrote:
> > No.  First, I don't agree that the real question is what the utility 
> > function is or should be.  The real question is whether the measure, M, is 
> > conserved or whether it decreases.  It's just that a lot of people don't 
> > understand what that means.
> 
> I agree that a lot of people don't understand what that means, and I 
> certainly appreciate your effort to educate them. But it seems to me that 
> once someone does understand that issue, it's not assured that they'll fall 
> into the U=M*Q camp automatically.

They might not, but I'm sure most would; maybe not exactly that U, but a lot 
closer to it.

> U=Q would be generalized to (Sum_i M_i Q_i) / (Sum_i M_i).
> This seems just  as well defined as Sum_i M_i Q_i. You objected that 
> "personal identity is not well-defined" but don't you need to define personal 
> identity to compute Sum_i M_i Q_i as well, in order to determine which i to 
> sum over?

No.  In U = Sum_i M_i Q_i, you sum over all the i's, not just the ones that are 
similar to you.  Of course your Q_i (which is _your_ utility per unit measure 
for the observer i) might be highly peaked around those that are similar to 
you, but there's no need for a precise cutoff in similarity.  And it's even 
very likely that it will have even higher peaks around people that are not very 
much like you at all (these are the people that you would sacrifice yourself 
for).

By contrast, in your proposal for U, you do need a precise cutoff, for which 
there is no justification.

-- On Fri, 3/6/09, Stathis Papaioannou  wrote:
> > It's not the addition of the other copy that's the problem; it's the loss 
> > of it.  Losing people is bad.
> 
> How would the addition then loss of the extra copy be bad for the original, 
> or for that matter for the disappearing extra copy, given that neither copy 
> has any greater claim to being resurrected in the morning as B?

It's not the addition then loss that's bad (since you end up with the same 
measure you started with); it's the loss.

In the culling teleportation, both people are lost, which is doubly bad.  
Elsewhere, one new person appears, which is good, but not as good as there 
being two people.  So it's not a wash; it's a loss.

> I don't agree with the way you calculate utility at all.

It's easy to say you don't agree but you haven't given an alternative.  
Precisely how would you calculate it?  U = ...




  





Re: language, cloning and thought experiments

2009-03-06 Thread Stathis Papaioannou

2009/3/7 Brent Meeker :

>> I don't agree with the way you calculate utility at all. If I got $5
>> every time I pressed a button which decreased my absolute measure in
>> the multiverse a millionfold I would happily press the button all day.
>
> Which "I"?  Aren't you concerned that you would press the button - and vanish?

No, that's the whole point. If you accept that teleportation isn't
suicide, then you should accept that culling teleportation (to use
Jack Mallah's term) isn't suicide.

There are two copies of me in perfect lockstep, A1 and A2. I'm one of
these copies and not the other (though I don't know which). Suppose
I'm A1 and I decide to teleport 100km away. That means A1 disappears
and a new copy, B, appears 100km away. I'm happy, since I feel I've
traveled 100km with little effort. If I am instead A2, I go through
exactly the same process: A2 disappears, B appears 100km away, and I'm
happy. If I'm A1 the presence or absence of A2 does not make any
difference to me, and if I'm A2 the presence or absence of A1 doesn't
make any difference to me. It doesn't make sense to say that the
presence of the other A copy will diminish my chances of ending up as
B, or diminish my quantity or quality of consciousness once I end up
as B.


-- 
Stathis Papaioannou




Re: language, cloning and thought experiments

2009-03-06 Thread Wei Dai

> No.  First, I don't agree that the real question is what the utility 
> function is or should be.  The real question is whether the measure, M, is 
> conserved or whether it decreases.  It's just that a lot of people don't 
> understand what that means.

I agree that a lot of people don't understand what that means, and I 
certainly appreciate your effort to educate them. But it seems to me that 
once someone does understand that issue, it's not assured that they'll fall 
into the U=M*Q camp automatically.

> The next point is that while U=M*Q is perfectly well defined, U=Q is not, 
> and I don't know what you mean by it.
>
> OK, you might ask "huh?" when I say that.  What I mean is that M*Q is just 
> a caricature of a utility function but should obviously be generalized to 
> the case of multiple types of observations by using Sum_i M_i Q_i.

U=Q would be generalized to (Sum_i M_i Q_i) / (Sum_i M_i). This seems just 
as well defined as Sum_i M_i Q_i. You objected that "personal identity is 
not well-defined" but don't you need to define personal identity to compute 
Sum_i M_i Q_i as well, in order to determine which i to sum over?

BTW, I note that there seems to be a parallel between this debate, and the 
one between average and total utilitarianism.
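That parallel can be made concrete with a toy numerical sketch (all measures and utilities below are invented for illustration) showing how the total form Sum_i M_i Q_i and the normalized form (Sum_i M_i Q_i) / (Sum_i M_i) respond differently when every measure is halved, as in the culling scenarios discussed earlier in the thread:

```python
# Toy contrast: "total" utility U = sum_i M_i * Q_i versus the
# normalized "average" utility U = sum_i M_i * Q_i / sum_i M_i.
# M_i = measure of observer i, Q_i = utility per unit measure.
# All numbers are invented for illustration.

def u_total(ms, qs):
    return sum(m * q for m, q in zip(ms, qs))

def u_avg(ms, qs):
    return u_total(ms, qs) / sum(ms)

ms = [2.0, 4.0]                # measures of two observer types
qs = [1.0, 3.0]                # utility per unit measure for each
halved = [m / 2 for m in ms]   # everyone's measure is culled by half

# The total form registers the culling as a loss; the normalized
# (average) form is completely unchanged by it.
assert u_total(halved, qs) == u_total(ms, qs) / 2
assert u_avg(halved, qs) == u_avg(ms, qs)
```

This is exactly the disagreement in the thread: under the total form, uniform culling matters; under the normalized form, it is invisible.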
 





Re: language, cloning and thought experiments

2009-03-06 Thread Günther Greindl

> Which "I"?  Aren't you concerned that you would press the button - and vanish?
> Brent


The psychological continuer - the one who remembers having pressed the 
button but with +5 dollars on his account.

@Stathis: would you really do this (press the button, also in the 
absoute measure scenario)? After all, in lots of universes 
family/friends will have found you vanished/whatever happens to you in 
the magic button measure decreasing case, causing them to suffer.

I do think that also in RSSA, one should care about all successors.

Think Egan's Law(*), also in Ethics: it all adds up to normality.


Cheers,
Günther

(*) Egan, Greg. Quarantine.




Re: language, cloning and thought experiments

2009-03-06 Thread Brent Meeker

Stathis Papaioannou wrote:
> 2009/3/6 Jack Mallah  wrote:
> 
>>> If you're not worried about the fair trade, then to be consistent you 
>>> shouldn't be worried about the unfair trade either. In the fair trade, one 
>>> version of you A disappears overnight, and a new version of you B is 
>>> created elsewhere in the morning. The unfair trade is the same, except that 
>>> there is an extra version of you A' which disappears overnight. Now why 
>>> should the *addition* of another version make you nervous when you wouldn't 
>>> have been nervous otherwise?
>> It's not the addition of the other copy that's the problem; it's the loss of 
>> it.  Losing people is bad.
> 
> How would the addition then loss of the extra copy be bad for the
> original, or for that matter for the disappearing extra copy, given
> that neither copy has any greater claim to being resurrected in the
> morning as B?
> 
>>> That Riker's measure increased is not the important thing here: it is that 
>>> the two Rikers differentiated. Killing one of them after they had 
>>> differentiated would be wrong, but killing one of them before they had 
>>> differentiated would be OK.
>> That would be equivalent to U = Sum_i Q_i in which no changes in the 
>> wavefunction matter at all, since M_i > 0 for all i no matter what.  I don't 
>> think you thought that one through.
> 
> I don't agree with the way you calculate utility at all. If I got $5
> every time I pressed a button which decreased my absolute measure in
> the multiverse a millionfold I would happily press the button all day.

Which "I"?  Aren't you concerned that you would press the button - and vanish?

Brent

> It would be easy money and I'd feel exactly the same afterwards, just
> $5 richer. On the other hand, if pressing the button decreased the
> measure of those versions of me having good experiences by 1% relative
> to the versions of me having bad experiences, then I wouldn't press
> it, and certainly not repeatedly.
> 
> 





Re: language, cloning and thought experiments

2009-03-06 Thread rmiller
At 07:31 AM 3/6/2009, Stathis Papaioannou wrote:

>2009/3/6 Jack Mallah  wrote:
>
> >> If you're not worried about the fair trade, 
> then to be consistent you shouldn't be worried 
> about the unfair trade either. In the fair 
> trade, one version of you A disappears 
> overnight, and a new version of you B is 
> created elsewhere in the morning. The unfair 
> trade is the same, except that there is an 
> extra version of you A' which disappears 
> overnight. Now why should the *addition* of 
> another version make you nervous when you wouldn't have been nervous 
> otherwise?
> >
> > It's not the addition of the other copy 
> that's the problem; it's the loss of it.  Losing people is bad.
>
>How would the addition then loss of the extra copy be bad for the
>original, or for that matter for the disappearing extra copy, given
>that neither copy has any greater claim to being resurrected in the
>morning as B?
>
> >> That Riker's measure increased is not the 
> important thing here: it is that the two Rikers 
> differentiated. Killing one of them after they 
> had differentiated would be wrong, but killing 
> one of them before they had differentiated would be OK.
> >
> > That would be equivalent to U = Sum_i Q_i in 
> which no changes in the wavefunction matter at 
> all, since M_i > 0 for all i no matter what.
> I don't think you thought that one through.
>
>I don't agree with the way you calculate utility at all. If I got $5
>every time I pressed a button which decreased my absolute measure in
>the multiverse a millionfold I would happily press the button all day.
>It would be easy money and I'd feel exactly the same afterwards, just
>$5 richer. On the other hand, if pressing the button decreased the
>measure of those versions of me having good experiences by 1% relative
>to the versions of me having bad experiences, then I wouldn't press
>it, and certainly not repeatedly.
>
>
>--
>Stathis Papaioannou


I've been following this discussion and have a comment re absolute 
measure in the multiverse. The assumption is the same one David Deutsch 
has expressed: other than the interference observed in the Young's 
experiment, there can be no contact between the multiverses.

However, suppose our consciousness was essentially a topological 
object---a fibre bundle through a manifold of similar universes? The 
universes where things are remarkably different would be ignored by the 
observer in favor of the probabilistic "picture of reality" associated 
with the median experience bundle. Focusing on the "volume section" of 
such a distribution might be the function of an entity such as Hilgard's 
hidden observer.

In this model, the platform for consciousness is simply a manifold 
formed by equivalent behavioral elements across the multiverse (no pun 
intended.) Eliminating them one by one would result in a commensurate 
decrease in overall consciousness.

Richard Miller






Re: language, cloning and thought experiments

2009-03-06 Thread Stathis Papaioannou

2009/3/6 Jack Mallah  wrote:

>> If you're not worried about the fair trade, then to be consistent you 
>> shouldn't be worried about the unfair trade either. In the fair trade, one 
>> version of you A disappears overnight, and a new version of you B is created 
>> elsewhere in the morning. The unfair trade is the same, except that there is 
>> an extra version of you A' which disappears overnight. Now why should the 
>> *addition* of another version make you nervous when you wouldn't have been 
>> nervous otherwise?
>
> It's not the addition of the other copy that's the problem; it's the loss of 
> it.  Losing people is bad.

How would the addition then loss of the extra copy be bad for the
original, or for that matter for the disappearing extra copy, given
that neither copy has any greater claim to being resurrected in the
morning as B?

>> That Riker's measure increased is not the important thing here: it is that 
>> the two Rikers differentiated. Killing one of them after they had 
>> differentiated would be wrong, but killing one of them before they had 
>> differentiated would be OK.
>
> That would be equivalent to U = Sum_i Q_i in which no changes in the 
> wavefunction matter at all, since M_i > 0 for all i no matter what.  I don't 
> think you thought that one through.

I don't agree with the way you calculate utility at all. If I got $5
every time I pressed a button which decreased my absolute measure in
the multiverse a millionfold I would happily press the button all day.
It would be easy money and I'd feel exactly the same afterwards, just
$5 richer. On the other hand, if pressing the button decreased the
measure of those versions of me having good experiences by 1% relative
to the versions of me having bad experiences, then I wouldn't press
it, and certainly not repeatedly.
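To make the disagreement concrete, here is a minimal sketch (Python, with hypothetical measure and quality numbers) of how the two buttons score under Jack's U = Sum_i M_i Q_i:

```python
# Score the two buttons under U = sum_i M_i * Q_i.
# Hypothetical numbers: good experiences have Q = +1, bad have Q = -1.

def U(m_good, m_bad):
    """Measure-weighted utility over two observation types."""
    return m_good * 1.0 + m_bad * (-1.0)

m_good, m_bad = 0.9, 0.1
u0 = U(m_good, m_bad)             # baseline utility

# Button 1: absolute measure drops a millionfold, ratios preserved.
u1 = U(m_good / 1e6, m_bad / 1e6)

# Button 2: good-experience measure drops 1% relative to bad.
u2 = U(m_good * 0.99, m_bad)

# Under U = M*Q, button 1 is a catastrophic loss (u1 is a millionth
# of u0) even though it subjectively "feels" like free money, while
# button 2 costs comparatively little.  The intuition defended in the
# text above ranks the buttons the other way around.
```

This is only an illustration of the arithmetic at issue, not an argument for either ranking.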


-- 
Stathis Papaioannou




Re: language, cloning and thought experiments

2009-03-05 Thread Jack Mallah


--- On Tue, 2/24/09, Wei Dai  wrote:
> Jack, welcome back.

Hi Wei.

Now that the interesting Consciousness Online web conference is over, it's time 
to get back to this.
http://consciousnessonline.wordpress.com/

BTW, I have to say that the qualia issue remains mysterious to me.  It's hard 
to see how e.g. color qualia can arise, whether by math or not.  So the dualism 
idea is not as easy to dismiss as we tend to think.  OTOH I still think dualism 
is not plausible - it would be quite a coincidence for 
nonmaterial/nonmathematical properties to exist that happen to be exactly like 
the properties that material/mathematical creatures tend to believe they have.  
So what are qualia?

> The ASSA/RSSA and QTI debates can be rephrased as whether U should equal M*Q, 
> or just Q, but that is an "ought" question.

No.  First, I don't agree that the real question is what the utility function 
is or should be.  The real question is whether the measure, M, is conserved or 
whether it decreases.  It's just that a lot of people don't understand what 
that means.

The next point is that while U=M*Q is perfectly well defined, U=Q is not, and I 
don't know what you mean by it.

OK, you might ask "huh?" when I say that.  What I mean is that M*Q is just a 
caricature of a utility function but should obviously be generalized to the 
case of multiple types of observations by using Sum_i M_i Q_i.

There is no corresponding generalization for Q.  You could use Sum_i Q_i, but 
that sum is just a constant that does not depend on the physical situation 
(which determines the measure distribution over observation types, M_i; in the 
MWI the M_i will all be nonzero).  In that case no decision you could make 
would matter at all, so that can't be what you mean.
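The constancy of Sum_i Q_i can be checked with a toy calculation (Python; the numbers are hypothetical, chosen only to show the dependence on M_i):

```python
# Compare U = sum_i M_i * Q_i against U = sum_i Q_i for two different
# physical situations, i.e. two measure distributions over the same
# observation types.  Hypothetical quality values:
Q = [1.0, 5.0, -2.0]

def u_measure(M, Q):
    """Measure-weighted utility: sum_i M_i * Q_i."""
    return sum(m * q for m, q in zip(M, Q))

def u_flat(M, Q):
    """Unweighted utility: sum_i Q_i.  The measures M never enter."""
    return sum(Q)

M_before = [0.5, 0.4, 0.1]
M_after = [0.1, 0.1, 0.8]   # measure shifted toward the Q = -2 outcome

# u_measure distinguishes the two situations; u_flat returns the same
# constant for both, so under U = sum_i Q_i no decision that changes
# the measure distribution could matter.
```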

Probably what you have in mind is some kind of Q_average, where the average is 
over observations by the same person, but personal identity is not well-defined.

--- On Wed, 2/25/09, Stathis Papaioannou  wrote:
> If you're not worried about the fair trade, then to be consistent you 
> shouldn't be worried about the unfair trade either. In the fair trade, one 
> version of you A disappears overnight, and a new version of you B is created 
> elsewhere in the morning. The unfair trade is the same, except that there is 
> an extra version of you A' which disappears overnight. Now why should the 
> *addition* of another version make you nervous when you wouldn't have been 
> nervous otherwise?

It's not the addition of the other copy that's the problem; it's the loss of 
it.  Losing people is bad.

> That Riker's measure increased is not the important thing here: it is that 
> the two Rikers differentiated. Killing one of them after they had 
> differentiated would be wrong, but killing one of them before they had 
> differentiated would be OK.

That would be equivalent to U = Sum_i Q_i in which no changes in the 
wavefunction matter at all, since M_i > 0 for all i no matter what.  I don't 
think you thought that one through.




  





Re: language, cloning and thought experiments

2009-02-25 Thread Stathis Papaioannou

2009/2/25 Jack Mallah :

> 1) The fair trade
>
> This is the teleportation or Star Trek transporter thought experiment.  A 
> person is disintegrated, while a physically identical copy is created 
> elsewhere.
>
> Even on Star Trek, not everyone was comfortable with doing this.  The first 
> question is: Is the original person killed, or is he merely moved from one 
> place to another?  The second question is, should he be worried?
>
> The answer to the first question depends on the definition of personal 
> identity.  If it is a causal chain, then if the transporter is reliable, the 
> causal chain will continue.  However, if the copy was only created due to 
> extreme luck and its memory (though coincidentally identical to that of the 
> original) is not determined by that of the original, then the chain was ended 
> and a new one started.
>
> The second question is more important.
>
> Since we are considering the situation before the experiment, we have to use 
> Caring Measure here.  The temptation is to skip such complications because 
> there is no splitting and no change in measure, but skipping it here can lead 
> to confusion in more complicated situations.
>
> The utility function I'll use is oversimplified for most people in terms of 
> being so utilitarian (as opposed to conservatively history-respecting, which 
> might oppose 'teleportation') but will serve.
>
> So if our utility function is U = M Q, where M is the guy's measure (which is 
> constant here) and Q is his quality of life factor (which we can assume to be 
> constant), we see that it does not depend on whether or not the teleportation 
> is done.  (In practice, Q should be better afterwards, or there is no reason 
> to do it.)  Therefore it is OK to do it.  It is a fair trade.
>
> 2) The unfair trade
>
> Now we come to the situation where there are 2 ‘copies’ of a person in the 
> evening, but one will be removed overnight, leaving just one from then on.  
> I’ll call this a culling.
>
> I pointed out that in this situation, the person does not know which copy he 
> is, so subjectively he has a 50% chance of dying overnight.  That is true, 
> using causal chains to define identity, but the objection was raised that 
> ‘since one copy survives, the person survives’ based on the ‘teleportation’ 
> idea that the existence elsewhere of a person with the same memory and 
> functioning is equivalent to the original person surviving.
>
> So to be clear, we can combine a culling with teleportation as follows: both 
> copies are destroyed overnight, but elsewhere a new copy is created that is 
> identical to what the copies would have been like had they survived.
>
> Is it still true that the person has a subjective 50% chance to die 
> overnight?  If causal chains are the definition, then depending on the 
> unreliability of the teleporter and how it was done, the chance of dying 
> might be more like 100%.  But as we have seen, definitions of personal 
> identity are not important.  What matters is whether the person should be 
> unhappy about this state of affairs; in other words, whether his utility 
> function is decreased by conducting the culling.
>
> Using U = M Q, it obviously is decreased, since M is halved and Q is 
> unchanged.  So as far as I can see, the only point of contention that might 
> remain is whether this is a reasonable utility function.  That is what the 
> next thought experiment will address.
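The arithmetic behind the two trades under U = M*Q can be spelled out in a few lines (a sketch with hypothetical measure units; Q is held constant, as in the quoted argument):

```python
# U = M * Q, with the quality-of-life factor Q held constant.
Q = 1.0

# Fair trade (teleportation): a copy of measure 1 is destroyed and an
# identical copy of measure 1 is created elsewhere.
U_before_fair = 1.0 * Q
U_after_fair = 1.0 * Q   # M unchanged, so U is unchanged: a fair trade

# Unfair trade (culling): two copies in the evening, one in the morning.
U_before_cull = 2.0 * Q
U_after_cull = 1.0 * Q   # M is halved while Q is unchanged, so U drops

assert U_after_fair == U_before_fair
assert U_after_cull < U_before_cull
```

Whether this is a reasonable utility function is exactly the point the reply below contests.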

If you're not worried about the fair trade, then to be consistent you
shouldn't be worried about the unfair trade either. In the fair trade,
one version of you A disappears overnight, and a new version of you B
is created elsewhere in the morning. The unfair trade is the same,
except that there is an extra version of you A' which disappears
overnight. Now why should the *addition* of another version make you
nervous when you wouldn't have been nervous otherwise? Sure, you don't
know whether you are A or A', but the situation is symmetrical: if you
are A the presence of A' should make no difference to you, and if you
are A' the presence of A should make no difference to you. And if
something makes no difference to you, it shouldn't impact on your
utility function.

> 3) The Riker brothers
>
> Will T. Riker tried to teleport from his spaceship down to a planet, but due 
> to a freak storm, there was a malfunction.  Luckily he was reconstructed back 
> on the ship, fully intact, and the ship left the area.
>
> Unknown to those on the ship, a copy of him also materialized on the planet. 
>  He survived, and years later, the two were reunited when Will’s new ship 
> passed by.  Now known as Tom, the copy that was on the planet did not join 
> Star Fleet but went on to have many adventures of his own, often supporting 
> rebel causes that Will would not.  Will and Tom over their lifetimes played 
> important but often conflicting roles in galactic events.  They married 
> different women and had children of their own.
>
> The 

Re: language, cloning and thought experiments

2009-02-24 Thread Günther Greindl

Jack, Wei Dai,

> machines are invented, there will be a much greater selection pressure 
> towards U=M*Q. But given that U=Q is closer to the reality today, I'm not 
> sure what good it would do to "taking a stand against QS/QI".

To "translate":
U=M*Q is 3rd person POV (hypothetical; viewed from outside 
platonia/spacetime worm/insert fav. metaphysics)

U=Q is 1st POV (a cognitive agent reasoning if he/she/it will have next 
experience)

Again, it seems that QI is conceded, and an emotional argument made to 
care for all successors. I see no contradiction, and no refutation or 
whatever of QI.

 >The real key point at which the QS fallacy appears seems to be that 
 >some people find it inconceivable that they will not have a future.

Having only a finite number of successor moments is a standard materialist 
assumption. It is computationalism which seems to suggest otherwise. 
Inconceivability does not enter the picture.

 >Thus, they assume that they will survive and only need to take into 
 >account effective probabilities that are conditional on survival.

That is correct, as not surviving is not an experience. As long as a 
successor with "your" memories of previous moments exists, you survive.

 > 5) The groggy mornings
 >If he were immortal, then his expected age would diverge, and the 
 >‘chance’ that his age would be normal is 0%.

Ok, this is ASSA reasoning. But it does not follow with RSSA. So, it is 
just the old argument that with ASSA you don't have immortality but with 
RSSA you do?

A "refutation" of QI would also require refuting it under RSSA, 
otherwise you simply have claim _if_ ASSA _then_ no QI, which does not 
seem disputed.

As written in a previous post, I think an RSSA-reasoner should also care 
for all his successors, and thus not engage in a QS-experiment or stuff 
like that.

Cheers,
Günther




Re: language, cloning and thought experiments

2009-02-24 Thread Wei Dai

Jack, welcome back. I no longer read every post here, but I read this post 
and found your positions pretty close to my own. This one, especially, I 
totally agree with:

> The important thing to realize is that _definitions don't matter_! 
> Predictions, decisions, appropriate emotions to a situation - these are 
> completely independent of definitions of personal identity.

This one is more problematic:

> So if our utility function is U = M Q, where M is the guy's measure (which 
> is constant here) and Q is his quality of life factor (which we can assume 
> to be constant), [...]

The ASSA/RSSA and QTI debates can be rephrased as whether U should equal 
M*Q, or just Q, but that is an "ought" question. If we accept the standard 
view in decision theory that the utility function is completely subjective, 
then that means the ASSA/RSSA debate can't be resolved by objective 
arguments.

We can get around this a bit by asking what most people's utility functions 
actually are, instead of what they ought to be. Are they closer to M*Q, or 
Q? I'm afraid that for most people, it's closer to Q than M*Q. One might 
have expected that evolution would have programmed us to have U=M*Q, but 
that doesn't seem to have been the case. I have a couple of speculations as 
to why:

1. M cannot be perceived directly. It can be inferred, but that takes a lot 
of work.

2. In our EEA, M couldn't increase, only decrease. (Because there were no 
mind-copying machines.) So evolution could essentially simulate the effect 
of U=M*Q with U=Q plus fear of pain and fear of dying, and that's what it 
did because it's a lot easier than getting the brain to compute M. (For a 
similar reason, we value sex instead of number of offspring.)

Initially I was also an advocate of ASSA until I realized that it's 
ultimately a subjective question of values. I think once mind-copying 
machines are invented, there will be a much greater selection pressure 
towards U=M*Q. But given that U=Q is closer to the reality today, I'm not 
sure what good it would do to "taking a stand against QS/QI".
 





Re: language, cloning and thought experiments

2009-02-24 Thread daddycaylor

I noticed someone taking my name in vain. ;)  (thought experiment where
I, Tom, am a clone of Will Riker)  The magic of thought experiments,
it's amazing.  I felt my measure decrease, but only after I read the
thought experiment.

I trust this will not derail anyone's personal identity here, but I
have some subjectively random thoughts.  The importance of, the
necessary dependence of any model of reality on, our subjective view
of things has struck me lately.  For instance, I brought up the
"should I worry about a clone dying?" question a while back.  This
seems totally subjective to me.  And that's not a put-down, it's an
observation.  What if we started thinking about making a clone as sort
of like having a baby?  Then it would suddenly change from being
something uncertain to something exhilarating, plus there probably
wouldn't be any diapers involved.  I'm not suggesting we do that, just
performing a thought experiment.  When we die, the "personal identity"
pain is less if we have passed on something of ourselves in some way.
But then of course we care if that passed-on stuff is discontinued,
for instance if one of our children dies, or a symphony we've written
is forgotten.  The Golden Rule still applies.

The latest Scientific American has an article about non-locality, and
it seems to me that this is related to this topic, through the causal-
chain aspect of it.  One thought-picture that was used to try to
convey non-locality is that it is like a fist punching in Chicago and
a face being hurt in Los Angeles.  So it occurred to me that it is
only in the presence of a consciously-aware assignment of cause that a
causal chain is present.  We are not omniscient, there is always
something in a universe which cannot be predicted by any model within
that universe.  There are always those faces feeling pain for which we
have no knowledge of the cause, so how can we claim that everything
has a cause?  So effectively there are things happening in any
universe which have no cause.

But what does that mean (subjectively of course) to us?  It shouldn't
stop us from trying to find causes and do predictions, since this
works for everything that we need to work, macroscopic things, local
things.  Just rambling here, it seems to me that the whole quantum
entanglement/non-locality thing is fairly intuitive.  It is based on
the fact that you need at least three things to have meaning.  (For
instance, the distance between two points by themselves is
meaningless.)  And one of those things seems to be consciousness.

Tom (Riker's brother)

P.S. On your groggy morning, does the fact that you can have an
infinitely long tail but have a finite area underneath it have any
bearing?




language, cloning and thought experiments

2009-02-24 Thread Jack Mallah

--- On Wed, 2/11/09, Stathis Papaioannou  wrote:
> Well, this seems to be the real point of disagreement between you and the 
> pro-QI people. If I am one of the extra versions and die overnight, but the 
> original survives, then I have survived. This is why there can be a many to 
> one relationship between earlier and later copies. If you don't agree with 
> this then you should make explicit your theory of personal identity.

It is close to the point, but there is room for a misunderstanding so I have to 
be careful.  Here I am consolidating replies to some of the branched post 
threads and will present some thought experiments.

On personal identity:

As I explained, there are several possible definitions of personal identity, 
and the most useful ones are 1) All branched/fused people are the same person, 
2) Causal chains determine identity, and 3) Observer-moments.

This can become confusing because it is not always clear which definition 
someone is using, especially if quickly typing out a reply to a tangentially 
related post.  This can lead to a kind of Hydra-whacking effect in which one 
point is dealt with only for another confusion to be created simultaneously, 
because (for example) I did not spell something out, as it was not the main 
point at issue in the post I was responding to.

That was the case recently when some people misconstrued my use of the causal 
chain (in terms of "you might die, only the original will survive") as some 
kind of crucial point.  Causal differentiation applied to the question at hand, 
so I used that one.  If anyone has read my QI paper, you would have known that 
I accept that "teleportation is OK" and that measure is what matters, not so 
much the original vs. copy issue.  I will explain more below on these important 
points.

If I had to pick one definition and stick with it, I would go with the least 
misleading one, which is an observer-moment.

The important thing to realize is that _definitions don't matter_!  
Predictions, decisions, appropriate emotions to a situation - these are 
completely independent of definitions of personal identity.  Personal identity 
is a useful concept in practice but not a fundamental thing, and therefore can 
have no fundamental relevance, unlike its misuse in QS thinking where it could 
supposedly affect a measure distribution.

On probability:

Bruno Marchal wrote:
> You say: "no randomness involved" but you seem to accept probabilities. Do I 
> just miss something here?

Yes, Bruno, you did, though my quickness contributed.  In my QI paper I defined 
"effective probability" and carefully spelled out the roles it can play.  But 
again, in posting on tangentially related topics, it is much easier to just say 
"probability" and hope that people remember what I am really talking about.

Classically there are two kinds of probability: true randomness, and subjective 
uncertainty due to ignorance.  I do not believe that the former exists.  When 
I talk about probability it either involves some ignorance on the part of the 
subject (as in the Reflection argument), or the use of "effective probability" 
in theory confirmation.

I may get sloppy sometimes (and say probability) when talking about a situation 
after an experiment that is yet to be performed, but in thinking about such 
cases it is absolutely necessary to remember that in the MWI there is neither 
randomness nor subjective ignorance, and that one must use Caring Measure.

On the "first person" slogan:

Any observation is made by the person observing it.  In that sense, they are 
all "first person".

Truths do not depend on point of view.  We do not know the measure 
distribution, but we can guess about it, and can study a model for it.  
Assuming the model is accurate, it is the distribution of these "first person" 
observations.

Calling it a "third person" view is a false charge; an accurate model is not a 
view, it is simply the truth.  Invoking "first person measure distributions" as 
an alternative is an empty slogan.

The real key point at which the QS fallacy appears seems to be that some people 
find it inconceivable that they will not have a future.  Thus, they assume that 
they will survive and only need to take into account effective probabilities 
that are conditional on survival.  This fallacy is undefined (in terms of 
personal identity which is required for the condition) and is false by 
definition of the measure distribution.

This can be seen using either causal chains (if a person is defined as a causal 
chain, then when the chain ends, so will he) or more generally just in terms of 
decreasing measure of observer-moments with age.  In the latter case increasing 
age is no different than, for example, increasing brightness of your visual 
field.  There is a sequence of observer-moments in which what you see is more 
and more bright, and after some point the measure distribution will decline as 
a function of increasing brightness.  You c