The ASSA leads to a unique utilitarianism

2007-10-01 Thread Youness Ayaita

In this message, I want to defend neither the ASSA nor utilitarianism.
But I will argue that the former has remarkable consequences for the
latter.

To give a short overview of the concepts: utilitarianism is a doctrine
that measures the morality of an action only by its outcome. Actions
are said to be more moral than others if they cause a greater sum of
happiness or pleasure (for all people involved). Though this theory
seems attractive, it has to cope with many problems. Perhaps the most
fundamental is to define how 'happiness' and 'pleasure' are measured:
in order to decide which action is the most moral, we need a
'felicific calculus'. However, there seems to be no hope of finding a
unique felicific calculus that everyone would agree upon. To this day,
a lot of arbitrariness remains:

- How do we measure happiness?
- How do we compare the happiness of different people?
- How do we account for pain and suffering? What weight is assigned
to them?
- Even maximizing 'the sum of happiness' in some felicific calculus
does not necessarily determine a unique action. It may be possible to
increase the happiness of some individuals and decrease the happiness
of others without changing the 'sum of happiness'. Which outcome is
preferable?

Most of us have a mathematical or scientific background. We know that
such a situation can lead to an infinity of possible felicific
calculi, each defined by arbitrary measures and parameters. In the
sciences, one would usually discard a theory containing so much
arbitrariness (philosophy, however, is not that rigorous).

The application of the ASSA can help surmount these conceptual
difficulties. Assuming the ASSA, we are able to define a uniquely
determined utilitarianism. Nonetheless, the practical problem of
deciding which action to prefer remains largely unchanged.

1st step: Reducing the number of utilitarianisms to the number of
human beings.

The ASSA states that my next experience is randomly chosen out of all
observer moments. For the decision about my action, only those
observer moments are of interest that are significantly influenced by
my decision (e.g. observer moments in the past are not). Since my next
observer moment can be any of those observer moments, I am driven to a
utilitarian action. Utilitarianism directly arises whenever an
observer wants to act rationally while assuming the ASSA. I could say
that utilitarianism is 'egoism + ASSA'.
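The decision rule behind 'egoism + ASSA' can be sketched numerically (my illustration; all measures and utilities below are made-up numbers): an agent who expects its next experience to be sampled from the affected observer moments, weighted by their measure, maximizes the measure-weighted sum of utilities, which is exactly the utilitarian sum.

```python
# Sketch: under the ASSA, 'my' expected utility over the affected
# observer moments coincides with the utilitarian sum. The measures
# and utilities are hypothetical illustrative numbers.

def expected_utility(outcomes):
    """outcomes: list of (measure, utility) pairs, one per observer
    moment significantly influenced by the action."""
    total_measure = sum(m for m, _ in outcomes)
    return sum(m * u for m, u in outcomes) / total_measure

# Two candidate actions, each affecting three observer moments
# of equal measure:
actions = {
    "selfish": [(1.0, 10.0), (1.0, -5.0), (1.0, -5.0)],   # great for one OM
    "utilitarian": [(1.0, 3.0), (1.0, 3.0), (1.0, 3.0)],  # moderate for all
}

# The rational egoist who assumes the ASSA picks the action with the
# highest measure-weighted utility:
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> utilitarian
```

The point of the sketch is only that the argmax is the same for every deciding observer, since each faces the same ensemble of candidate next observer moments.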

2nd step: The unique utilitarianism.

Starting from the definition that utilitarianism is egoism in
combination with the ASSA, I argue that all observers will agree upon
the same action. At first you might think that the preferred action
depends on the individual preferences of the deciding individual. For
example, if I were suffering from hunger, I could perform an action to
minimize hunger in the world. But this is a wrong conclusion. When I
experience another observer moment, I am no longer affected by my
former needs and preferences.

Directly speaking: Since all observers must expect to get their next
observer moments out of the same ensemble of observer moments, there
is no reason to insist on different preferences.

But there is still one problem left. Different observers have
different states of knowledge about the consequences of a potential
action. In theory, we can exclude this problem by defining
utilitarianism as the rational decision of a hypothetical observer who
knows all the consequences of all potential actions (of course, while
still assuming the ASSA).

It's a nice feature of the ASSA that it naturally leads to a theory of
morality. The RSSA does not seem to provide such a result. Still, I'd
like to obtain similar concepts from the RSSA (according to Stathis, I
belong to the RSSA camp).

Youness Ayaita


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Everything List group.
To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en
-~--~~~~--~~--~--~---



Re: against UD+ASSA, part 1

2007-10-01 Thread Stathis Papaioannou

On 01/10/2007, Jesse Mazer [EMAIL PROTECTED] wrote:

 Is the DA incompatible with QI? According to MWI, your measure in the
 multiverse is constantly dropping with age as versions of you meet
 their demise. According to DA, your present OM is 95% likely to be in
 the first 95% of all OM's available to you. Well, that's why you're a
 few decades old, rather than thousands of years old at the
 ever-thinning tail end of the curve. But this is still consistent with
 the expectation of an infinite subjective lifespan as per QI.

 Well, this view would imply that although I am likely to reach reasonable
 conclusions about measure if I assume my current OM is typical, I am
 inevitably going to find myself in lower and lower measure OMs in the
 future, where the assumption that the current one is typical will lead to
 more and more erroneous conclusions.

That's right, but the same is true in any case for the atypical
observers who assume that they are typical. Suppose I've forgotten how
old I am, but I am reliably informed that I will live to the age of
120 years and one minute. Then I would be foolish to guess that I am
currently over 120 years old; but at the same time, I know with
certainty that I will *eventually* reach that age.

 I guess if you believe there is no real
 temporal relation between OMs, that any sense of an observer who is
 successively experiencing a series of different OMs is an illusion and that
 the only real connection between OMs is that memories one has may resemble
 the current experiences of another, then there isn't really a problem with
 this perspective (after all, I have no problem with the idea that the
 ordinary Doomsday Argument applied to civilizations implies that eventually
 the last remaining humans will have a position unusually close to the end,
 and they'll all reach erroneous conclusions if they attempt to apply the
 Doomsday Argument to their own birth order...the reason I have no problem
 with this is that I don't expect to inevitably 'become' them, they are
 separate individuals who happen to have an unusual place in the order of all
 human births).

That's exactly how I view OM's. It is necessary that they be at least
this, since even if they are strung together temporally in some other
way (such as being generated in the same head) they won't form a
continuous stream of consciousness unless they have the appropriate
memory relationship. It is also sufficient, since I would have the
sense of continuity of consciousness even if my OM's were generated at
different points in space and time.

 But I've always favored the idea that a theory of
 consciousness would determine some more objective notion of temporal flow
 than just qualitative similarities in memories, that if my current OM is X
 then there would be some definite ratio between the probability that my next
 OM would be Y vs. Z.

If you assume that the probability is determined by the ratio of the
measure of Y to Z, given that Y and Z are equally good candidate
successor OM's, this takes care of it and is moreover completely
independent of any theory of consciousness. All that is needed is that
the appropriate OM's be generated; how, when, where or by whom is
irrelevant.

 This leads me to the analogy of pools of water with
 water flowing between them that I discussed in this post:

 http://groups.google.com/group/everything-list/msg/07cd5c7676f6f6a1

 Consider the following analogy--we have a bunch of tanks of water, and each
 tank is constantly pumping a certain amount of its own water to a bunch of
 other tanks, and having water pumped into it from other tanks. The ratio
 between the rates that a given tank is pumping water into two other tanks
 corresponds to the ratio between the probabilities that a given
 observer-moment will be
 succeeded by one of two other possible OMs--if you imagine individual water
 molecules as observers, then the ratio between rates water is going to the
 two tanks will be the same as the ratio between the probabilities that a
 given molecule in the current tank will subsequently find itself in one of
 those two tanks. Meanwhile, the total amount of water in a tank would
 correspond to the absolute probability of a given OM--at any given time, if
 you randomly select a single water molecule from the collection of all
 molecules in all tanks, the amount of water in a tank is proportional to
 the
 probability your randomly-selected molecule will be in that tank.
 
 Now, for most ways of arranging this system, the total amount of water in
 different tanks will be changing over time. In terms of the analogy, this
 would be like imposing some sort of universal time-coordinate on the whole
 multiverse and saying the absolute probability of finding yourself
 experiencing a given OM changes with time, which seems pretty implausible
 to me. But if the system is balanced in such a way that, for each tank, the
 total rate that water is being pumped out is equal to the total rate that
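The quoted message is cut off here, but the balance condition it is describing (inflow equal to outflow for every tank) is the stationary-distribution condition of a Markov chain. A minimal sketch with made-up pumping rates (my framing, not from the original posts):

```python
# Tanks play the role of observer moments: pumping rates between tanks
# give the transition probabilities, and the total water in each tank
# is the absolute measure of that OM. All numbers are hypothetical.

rates = {  # rates[a][b] = rate at which tank a pumps water into tank b
    "X": {"Y": 2.0, "Z": 1.0},
    "Y": {"X": 1.0, "Z": 1.0},
    "Z": {"X": 2.0, "Y": 1.0},
}

def successor_probability(src, dst):
    """P(a molecule now in src next finds itself in dst): the pumping
    rate to dst divided by src's total outflow."""
    return rates[src][dst] / sum(rates[src].values())

def is_balanced(volumes, tol=1e-6):
    """True if every tank's inflow equals its outflow, i.e. the volumes
    form a stationary distribution and absolute measures don't drift."""
    for tank in rates:
        inflow = sum(volumes[src] * successor_probability(src, tank)
                     for src in rates if tank in rates[src])
        if abs(inflow - volumes[tank]) > tol:
            return False
    return True

# Equal volumes are NOT balanced for these particular rates...
volumes = {"X": 1 / 3, "Y": 1 / 3, "Z": 1 / 3}
assert not is_balanced(volumes)

# ...but repeatedly letting the water flow converges to volumes
# that are:
for _ in range(500):
    volumes = {tank: sum(volumes[src] * successor_probability(src, tank)
                         for src in rates if tank in rates[src])
               for tank in rates}
assert is_balanced(volumes)
```

The ratio of a molecule's probabilities of next finding itself in Y vs. Z equals the ratio of the pumping rates out of X, matching the analogy's identification of relative successor probabilities with relative flows.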
 

Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Jason Resch
On 4/29/07, Jason [EMAIL PROTECTED] wrote:


 Two things in my mind make personal identity fuzzy:

 1. The MWI of quantum mechanics, which if true means each person
 experiences a perhaps infinite number of histories across the multi-
 verse.  Should personal identity extend to just one branch or to all
 branches?  If all branches where do you draw the line between who is
 and is not that person?  Remember across the multi-verse you can move
 across branches that differ only by the location of one photon,
 therefore there is a continuum linking a person in one branch to any
 other person.

 2. Duplication/transportation/simulation thought experiments, which
 show that minds can't be tied to a single physical body, simulation
 thought experiments suggest there doesn't even have to be a physical
 body for there to be a person.  If a person can be reduced to
 information is it the same person if you modify some bits (as time
 does), how many bits must be modified before you no longer consider it
 to be the same person?  What happens if you make copies of those bits
 (as the MWI implies happens), or destroy one copy and reconstitute it
 elsewhere?

 Personal identity is useful when talking about everyday situations, but
 I think it muddies things, especially if one tries to bind a
 continuous conscious experience with a person.  For example, how can
 you explain what happens if one were to make 5 exact duplicates of
 some individual?  Do you say their consciousness fractures, do you say
 it multiplies, do you say it selects one of them?  Just because
 observers have memories of experiencing the same observer's past
 perspectives in no way implies there is a single consciousness that
 follows a person as they evolve through time (even though it very much
 seems that way subjectively).

 Jason

 On Apr 26, 3:11 pm, John Mikes [EMAIL PROTECTED] wrote:
  Interleaving ONE tiny question:
 
  On 4/20/07, Jason [EMAIL PROTECTED] wrote:
  (Jason:)
  ...Personhood becomes fuzzy and a truly objective treatment of conscious
  experience might do well to abandon the idea of personal identity
  altogether. ...
 
  Says WHO?
 
  John


 



I've thought of two other ideas which further complicate personal identity:

3. Mind uploading / Simulation Argument / Game worlds in the context of
infinite universes.

If all universes are real, there are an infinite number of causes for your
current observer moment, including the explanation that your OM is
instantiated in a computer simulation or game world.  The instantiation
could be part of a game played by some alien who uploaded his mind;
perhaps the game is called simhuman, and when the being awakens from the
game, all the memories of your human life will be integrated into the
alien being's memories.  Therefore it could be said that there are an
infinite number of observers (each with highly varied experiences and
memories) to which this OM belongs.  A nice consequence of this is that it
can provide an escape from the eternal agedness implied by many worlds.

4. All particles in the observable universe are interacting.  The neurons in
our brains which instantiate thoughts are not closed loops: they are fed
with data from the senses, thoughts can be communicated between brains (as
they are now, when you read this post), and my neural activity can affect
your neural activity; there is only a longer and slower path connecting the
neurons between everyone's brains.  Think of a grid computer consisting of
supercomputers connected with 14.4 kbps modems: the bandwidth is not
sufficient for transferring large amounts of data or the content of their
hard drives in any reasonable time, but short, compressed information can
still be shared.  If they are interacting as part of the same large state
machine, then minds are not islands, and it lends credence to there being a
universal mind.

Jason




Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Vladimir Nesov

A single mind can also be regarded as a collection of parts interacting
with each other. If each part can be identified with its information
content, then each physical implementation ties together instantiations
of those parts. If a single mind can be realized by multiple
implementations, and each of these implementations implements all parts
of the mind, then the mind can also be composed of different parts,
each implemented in a different universe. So a brain can be half
p-zombie and half conscious.

On 10/1/07, Jason Resch [EMAIL PROTECTED] wrote:
 4. All particles in the observable universe are interacting.  The neurons in
 our brain which instantiate thoughts are not closed loops, they are fed in
 with data from the senses, thoughts can be communicated between brains (as
 they are now when you read this post), my neural activity can affect your
 neural activity, there is only a longer and slower path connecting neurons
 between everyone's brain.  Think of a grid computer consisting of super
 computers connected with 14.4 Kbps modems, the bandwidth is not sufficient
 for transferring large amounts of data or the content of their hard drives
 in any reasonable time, but short and compressed information can still be
 shared.  If they are interacting as part of the same large state machine
 then minds are not islands, and it lends credence to their being a universal
 mind.

 Jason



-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]




Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Stathis Papaioannou

On 02/10/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Also single mind can be regarded as collection of parts interacting
 with each other. If each part can be regarded as its information
 content, each physical implementation ties together instantiations of
 parts. If single mind can be implemented by multiple implementations,
 each of these implementations also implements all parts of mind, so
 mind can be composed of different parts, where each of the parts is
 implemented in different universe. So, brain can be half- p-zombie and
 half-conscious.

I don't see in what sense it could be a single mind if part of it is
zombified. If your visual cortex were unconscious, you would be blind,
and you would know you were blind. (Except for unusual situations like
Anton's Syndrome, where people don't realise that they're blind).




-- 
Stathis Papaioannou




Re: against UD+ASSA, part 1

2007-10-01 Thread Jesse Mazer

Stathis Papaioannou wrote:


On 01/10/2007, Jesse Mazer [EMAIL PROTECTED] wrote:

  I guess if you believe there is no real
  temporal relation between OMs, that any sense of an observer who is
  successively experiencing a series of different OMs is an illusion and that
  the only real connection between OMs is that memories one has may resemble
  the current experiences of another, then there isn't really a problem with
  this perspective (after all, I have no problem with the idea that the
  ordinary Doomsday Argument applied to civilizations implies that eventually
  the last remaining humans will have a position unusually close to the end,
  and they'll all reach erroneous conclusions if they attempt to apply the
  Doomsday Argument to their own birth order...the reason I have no problem
  with this is that I don't expect to inevitably 'become' them, they are
  separate individuals who happen to have an unusual place in the order of all
  human births).

That's exactly how I view OM's. It is necessary that they be at least
this, since even if they are strung together temporally in some other
way (such as being generated in the same head) they won't form a
continuous stream of consciousness unless they have the appropriate
memory relationship. It is also sufficient, since I would have the
sense of continuity of consciousness even if my OM's were generated at
different points in space and time.

I'm not talking about whether they are generated at different points in
space and time from a 3rd-person perspective. I'm talking about whether
there is a theory of consciousness that determines some sort of
objective truths about the temporal flow between OMs from a 1st-person
perspective (for example, an objective truth about the relative
probabilities that an experience of OM X will be followed by OM Y vs.
OM Z), or whether there is no such well-defined and objectively correct
theory, and the only thing we can say is that the memories of some OMs
have purely qualitative similarities to the experiences of others. Are
you advocating the latter?


  But I've always favored the idea that a theory of
  consciousness would determine some more objective notion of temporal flow
  than just qualitative similarities in memories, that if my current OM is X
  then there would be some definite ratio between the probability that my next
  OM would be Y vs. Z.

If you assume that the probability is determined by the ratio of the
measure of Y to Z, given that Y and Z are equally good candidate
successor OM's, this takes care of it and is moreover completely
independent of any theory of consciousness.

But the theory of consciousness is needed to decide whether Y and Z are 
indeed equally good candidate successor OMs. For example, what if X is an 
observer-moment of the actual historical Napoleon, Y is another OM of the 
historical Napoleon, while Z is an OM of a delusional patient who thinks 
he's Napoleon, and who by luck happens to have a set of fantasy memories 
which happen to be quite similar to memories that the actual Napoleon had. 
Is there some real fact of the matter about whether Z can qualify as a valid 
successor, or is it just a matter of opinion?

I also see no reason to think that the question of whether
observer-moment Y is sufficiently similar to observer-moment X to
qualify as a successor should be a purely binary question as opposed to
a fuzzy one. After all, if you say the answer is yes, and if Y can be
described in some mathematical language as a particular computation or
pattern of cause-and-effect or somesuch, then you can consider making a
series of small modifications to the computation/causal pattern, giving
a series of similar OMs Y', Y'', Y''', etc. Eventually you'd end up
with a totally different OM that had virtually no resemblance to either
X or Y. So is there some point in the sequence where you have an
observer-moment that qualifies as a valid successor to X, and then you
change one bit of the computation or one neural-firing event, and
suddenly you have an observer-moment that is completely invalid as a
successor to X? This seems implausible to me; it makes more sense that
a theory of consciousness would determine something like a degree of
similarity between an OM X and a candidate successor OM Y, and that
this degree of similarity would factor into the probability that an
experience of X would be followed by an experience of Y.

In this case, if I am currently experiencing X, the relative probabilities 
that my next OM is Y or Z might be determined by both the relative degree 
of similarity of Y and Z to X *and* the absolute measure of Y and Z (or it 
might be even more complicated; perhaps it would depend on some measure of 
the internal coherence of all the different infinite sequences of OMs which 
contain X and which have Y or Z as a successor).
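A toy version of the rule sketched in the last paragraph (my illustration; the post commits to no particular formula): take the probability that OM X is followed by candidate Y to be proportional to a fuzzy degree of similarity times Y's absolute measure. The memory sets and numbers below are invented.

```python
# Successor probabilities as similarity(X, Y) * measure(Y), normalized.
# 'Similarity' here is a crude made-up overlap score on sets of
# memories, standing in for whatever a real theory of consciousness
# would supply.

def similarity(x_memories, y_memories):
    """Fraction of X's memories that the candidate shares -- a fuzzy
    degree of resemblance rather than a binary valid/invalid verdict."""
    return len(x_memories & y_memories) / len(x_memories)

def successor_probabilities(x_memories, candidates, measures):
    """candidates: dict name -> memory set; measures: dict name ->
    absolute measure. Returns normalized successor probabilities."""
    weights = {name: similarity(x_memories, mems) * measures[name]
               for name, mems in candidates.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# The Napoleon example: Y is the historical successor, Z the delusional
# patient whose fantasy memories partly overlap X's.
X = {"battle", "coronation", "exile"}
candidates = {
    "Y": {"battle", "coronation", "exile", "new_day"},
    "Z": {"battle", "coronation", "delusion"},
}
measures = {"Y": 1.0, "Z": 1.0}

probs = successor_probabilities(X, candidates, measures)
# Y shares 3/3 of X's memories, Z shares 2/3, so with equal measure
# P(Y) = 0.6 and P(Z) = 0.4: Z is a partially valid successor, not an
# invalid one.
```

Raising Y's or Z's measure shifts the probabilities accordingly, which is the "*and*" in the paragraph above: both the degree of similarity and the absolute measure enter the rule.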

If you have time you might want to take a look at the discussion in the 
thread FW: Quantum 

Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Vladimir Nesov

It is not the single mind that is half-zombified, but the single brain.
Half of the brain implements half of the mind, and the other half of
the brain is a zombie. The other half of the mind (corresponding to the
zombie part of the brain) exists as information content and can be
implemented in a different universe. This view can be applied to the
gradual uploading argument.

On 10/2/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 02/10/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
  Also single mind can be regarded as collection of parts interacting
  with each other. If each part can be regarded as its information
  content, each physical implementation ties together instantiations of
  parts. If single mind can be implemented by multiple implementations,
  each of these implementations also implements all parts of mind, so
  mind can be composed of different parts, where each of the parts is
  implemented in different universe. So, brain can be half- p-zombie and
  half-conscious.

 I don't see in what sense it could be a single mind if part of it is
 zombified. If your visual cortex were unconscious, you would be blind,
 and you would know you were blind. (Except for unusual situations like
 Anton's Syndrome, where people don't realise that they're blind).




 --
 Stathis Papaioannou

 



-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]




Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Jesse Mazer

Vladimir Nesov wrote:


Not single mind is half-zombified, but single brain. Half of the brain
implements half of the mind, and another half of the brain is zombie.
Another half of the mind (corresponding to zombie part of the brain)
exists as information content and can be implemented in different
universe. This view can be applied to gradual uploading argument.

But why do you think there could be any functionally identical
implementations of a part of a brain that would be zombies, i.e. not
really conscious?

Jesse






Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Vladimir Nesov

Are you asking why I consider the notion of p-zombieness meaningful?

On 10/2/07, Jesse Mazer [EMAIL PROTECTED] wrote:

 Vladimir Nesov wrote:
 
 
 Not single mind is half-zombified, but single brain. Half of the brain
 implements half of the mind, and another half of the brain is zombie.
 Another half of the mind (corresponding to zombie part of the brain)
 exists as information content and can be implemented in different
 universe. This view can be applied to gradual uploading argument.

 But why do you think there could be any functionally identical
 implementations of a part of a brain that would be zombies, i.e. not
 really conscious?

 Jesse



 



-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]




Re: RSSA / ASSA / Single Mind Theory

2007-10-01 Thread Stathis Papaioannou

On 02/10/2007, Vladimir Nesov [EMAIL PROTECTED] wrote:

 Not single mind is half-zombified, but single brain. Half of the brain
 implements half of the mind, and another half of the brain is zombie.
 Another half of the mind (corresponding to zombie part of the brain)
 exists as information content and can be implemented in different
 universe. This view can be applied to gradual uploading argument.

So what would it actually be like for you if, in the next minute, your
visual cortex were zombified, i.e. still functioned processing visual
signals (all visual signals, not selectively lacking V1 function as in
blindsight) but lacked phenomenal consciousness?


-- 
Stathis Papaioannou
