Re: the redness of the red

2010-02-01 Thread Bruno Marchal


On 31 Jan 2010, at 03:10, soulcatcher☠ wrote:


I see a red rose. You see a red rose. Is your experience of redness
the same as mine?
1. Yes, they are identical.
2. They are different as long as the neural organization of our brains is
slightly different, but you are potentially capable of experiencing my
redness with some help from a neurosurgeon who can shape your brain the
way mine is shaped.
3. They are different as long as some 'code' of our brains is slightly
different, but you (and every machine) are potentially capable of
experiencing my redness if you somehow achieve the same 'code'.
5. They are different and absolutely private - you (and anybody else,
be it a human or machine) don't and can't experience my redness.
6. The question doesn't make any sense because ... (please elaborate)
7. ...
What is your opinion?


It is between 3 and 5, I would say. Intuitively, assuming that the
mechanist substitution level is high, we may expect our qualia to
differ between us, as much as the shapes of our bodies do. But then logic
can explain that in such a place (others' experience) intuition might
not be the best adviser.






My (naive) answer is (3). Our experiences are identical (would the
correct term be 'ontologically identical'?) as long as they have the
same symbolic representation and the symbols have the same grounding
in the physical world. The part about grounding is just an uneducated
guess; I don't understand the subject and have only an intuitive
feeling that semantics (what a computation is about) is important and
somehow determined by the physical world out there.


You are right. The stability of our first-person consciousness has to
rely on the infinite computations which statistically stabilize the
physical world. But the semantics will typically be a creation of the
person's brain.





Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.
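
In pseudocode, the setup I have in mind is something like this (purely
schematic, and every name here is invented for illustration - nothing
hangs on the details):

# Purely schematic sketch of the thought experiment; all names are invented.

class SimulatedBrain:
    """Stand-in for step 1: an emulation of my brain in your symbols."""
    def __init__(self, connectome):
        self.state = connectome            # some encoding of the brain's wiring

    def perceive(self, stimulus):
        # ...update internal state from the stimulus (details don't matter)...
        return "reports seeing: " + stimulus["label"]

def simulated_rose():
    """Stand-in for step 2: a model of the input a red rose would produce."""
    return {"label": "red rose", "dominant_wavelength_nm": 700}

brain = SimulatedBrain(connectome={})      # step 1: simulate my brain
rose = simulated_rose()                    # step 2: simulate a red rose
print(brain.perceive(rose))                # step 3: feed the rose data in
# The question is whether anything here *sees* redness, or only reports it.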


Then digital mechanism is false, or you have chosen an incorrect level  
of substitution, and your brain may have to include a part of the  
environment.





But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that robot will experience the same redness as me.


See Jason Resch's comment.




I would be glad if somebody suggested something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic is a formal language for 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.


Actually, logic is more about the relation between syntax and
semantics. Both syntax and semantics, and the relation between them,
are studied mathematically by logicians. I would suggest you study a
good introduction to mathematical logic, like the book by Elliott
Mendelson. See:


http://www.amazon.com/Introduction-Mathematical-Fourth-Elliott-Mendelson/dp/0412808307

But logic is not a formal language. It is the informal mathematical
study OF formal languages and theories, together with their
semantics/meaning. (Proof theory, model theory, computability theory,
axiomatic set theory, etc.)
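
To make the syntax/semantics distinction concrete, here is a toy Python
sketch (my own illustration, not anything from Mendelson's book). The
formulas are pure syntax - nested tuples - and the valuation supplies
the semantics:

# Toy propositional logic: syntax as data, semantics as evaluation.
# Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g).

def evaluate(formula, valuation):
    """Semantics: map a syntactic formula to a truth value under a valuation."""
    op = formula[0]
    if op == "var":
        return valuation[formula[1]]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    raise ValueError("unknown connective: %r" % op)

# The same string of symbols (syntax) gets a meaning only relative to a model:
f = ("or", ("var", "p"), ("not", ("var", "p")))   # p or not-p
print(evaluate(f, {"p": True}))    # True
print(evaluate(f, {"p": False}))   # True - a tautology: true in every model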


Bruno
http://iridia.ulb.ac.be/~marchal/






Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 What would you say about this setup:

 Computer Simulation -> Physical Universe -> Your Brain

 That is to say, what if our physical universe were simulated in some
 alien's computer instead of being some primitive physical world?


This setup doesn't sound very convincing to me:
- I believe that simulated objects (agents) can't be conscious
- I believe that I am conscious
=> I'm not simulated, and the whole universe is not simulated.

 And another interesting thought experiment to think about:
 What if a baby from birth were never allowed to see the real world, but
 instead were given VR goggles providing a realistic interactive
 environment, entirely generated from a computer simulation?  Would that
 infant be unconscious of the things it saw?

This argument sounds better, but still:
1. Goggles are not enough - the baby learns via active interaction with the
outside world, i.e. motor function matters, and you would have to provide the
baby with full-body armor that completely simulates the environment and makes
the interaction consistent (so haptic, proprioceptive and visual experiences
don't contradict each other). But that's hard and maybe impossible - you
can't (or can you?) completely prevent the contaminating influence of the
world - for example, you have to feed the baby.
2. Most important, the baby has a nervous system that evolved over a
very long time and already somehow encodes external symbols. You are just
substituting virtual input for real input, but that virtual input is
already properly encoded and speaks the symbolic language that is grounded
in the real world and comprehensible to the baby's brain.
3. The baby itself is real and made from matter and, maybe, a real baby in VR
!= a virtual baby in VR. In other words, there is a special class of
real Turing machine implementations that possess a meaning grounded in
the environment.

OK, I agree that it's very tempting to accept computationalism, but I'm
still not ready, maybe gotta try harder )




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 8:05 AM, soulcatcher☠ soulcatche...@gmail.com wrote:

 What would you say about this setup:

 Computer Simulation -> Physical Universe -> Your Brain

 That is to say, what if our physical universe were simulated in some
 alien's computer instead of being some primitive physical world?


 This setup doesn't sound very convincing to me:
 - I believe that simulated objects (agents) can't be conscious
 - I believe that I am conscious
 => I'm not simulated, and the whole universe is not simulated.

  And another interesting thought experiment to think about:
  What if a baby from birth were never allowed to see the real world, but
  instead were given VR goggles providing a realistic interactive
  environment, entirely generated from a computer simulation?  Would that
  infant be unconscious of the things it saw?

 This argument sounds better, but still:
 1. Goggles are not enough - the baby learns via active interaction with the
 outside world, i.e. motor function matters, and you would have to provide the
 baby with full-body armor that completely simulates the environment and makes
 the interaction consistent (so haptic, proprioceptive and visual experiences
 don't contradict each other). But that's hard and maybe impossible - you
 can't (or can you?) completely prevent the contaminating influence of the
 world - for example, you have to feed the baby.
 2. Most important, the baby has a nervous system that evolved over a
 very long time and already somehow encodes external symbols. You are just
 substituting virtual input for real input, but that virtual input is
 already properly encoded and speaks the symbolic language that is grounded
 in the real world and comprehensible to the baby's brain.
 3. The baby itself is real and made from matter and, maybe, a real baby in VR
 != a virtual baby in VR. In other words, there is a special class of
 real Turing machine implementations that possess a meaning grounded in
 the environment.


Maybe we have different definitions of what is meant by simulation.  I say this
because of your last comment about meaning needing to be grounded in an
environment.  Within realistic computer simulations there is an environment
which encodes many of the same relations we are used to.  Concreteness of
objects, Newtonian mechanics ( http://www.youtube.com/watch?v=Ae6ovaDBiDE ),
light effects ( http://www.youtube.com/watch?v=lvI1l0nAd1c ), etc. are all
embedded within the code that informs the simulation how to evolve, just as
the laws of physics would in a physical world.  Do you see the meaning of
physical laws being somehow different from the programmed laws that simulate
an environment?
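
To make that concrete, here is a toy sketch (my own illustration, with
made-up numbers): Newtonian gravity appears as nothing but a rule in the
update loop that tells the simulated state how to evolve.

# A "law of physics" as the rule that evolves a simulated state:
# one small body under Newtonian gravity, integrated with Euler steps.

G = 6.674e-11            # gravitational constant (SI units)
M = 5.97e24              # central mass, roughly an Earth mass (kg)
dt = 1.0                 # time step (s)

x, y = 7.0e6, 0.0        # position (m): about 7000 km from the center
vx, vy = 0.0, 7546.0     # velocity (m/s): roughly circular orbital speed

for step in range(3):
    r = (x*x + y*y) ** 0.5
    a = -G * M / (r * r)                  # Newton's law, encoded as code
    ax, ay = a * x / r, a * y / r
    vx, vy = vx + ax * dt, vy + ay * dt   # the rule informing the evolution
    x, y = x + vx * dt, y + vy * dt
    print(step, round(x), round(y))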

Jason




Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 Do you see the meaning of physical laws being somehow different from the
 programmed laws that simulate an environment?

Yes, I feel that a simulated mind is not identical to the real one. A
simulation is only an extension of the mind - just a tool, a mental
crutch, a pluggable module that gives you additional abilities. For
example, if the computational power of my brain were sufficient, I could
simulate other minds entirely in my mind (in imagination, whatever) -
but these imaginary minds won't be conscious, will they?
In other words:
1. I accept that computation is a description (an imperative one) of
reality, like math (declarative) or human language.
2. I don't believe (for now) that it has any meaning (and consciousness)
per se.




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 9:27 AM, soulcatcher☠ soulcatche...@gmail.com wrote:

 Do you see the meaning of physical laws being somehow different from the
 programmed laws that simulate an environment?

 Yes, I feel that a simulated mind is not identical to the real one.
 A simulation is only an extension of the mind - just a tool, a mental
 crutch, a pluggable module that gives you additional abilities. For
 example, if the computational power of my brain were sufficient, I could
 simulate other minds entirely in my mind (in imagination, whatever) -
 but these imaginary minds won't be conscious, will they?


I think that depends on the level of resolution to which you are simulating
them.  The people you see in your dreams aren't conscious, but if a super
intelligence could simulate another's mind to the resolution of their
neurons, I think those simulated persons would be conscious.
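
For a sense of what "to the resolution of their neurons" could involve,
here is one standard, very coarse abstraction - a leaky integrate-and-fire
unit (illustrative only; a real emulation might need far finer detail):

# Leaky integrate-and-fire neuron: a coarse neuron-level abstraction.
dt = 0.1            # time step (ms)
tau = 10.0          # membrane time constant (ms)
v_rest = -65.0      # resting potential (mV)
v_thresh = -50.0    # spike threshold (mV)
v_reset = -70.0     # post-spike reset potential (mV)

v = v_rest
for t in range(1000):
    drive = 20.0 if 200 <= t < 800 else 0.0     # injected input (mV equivalent)
    v += dt * (-(v - v_rest) + drive) / tau     # leaky integration of input
    if v >= v_thresh:                           # threshold crossing = spike
        print("spike at t = %.1f ms" % (t * dt))
        v = v_reset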



 In other words:
 1. I accept that computation is a description (an imperative one) of
 reality, like math (declarative) or human language.


There is a difference between a computation as a description (say, a printout
or a CD containing a program's source code) and a computation as an action
or process.  The CD wouldn't be conscious, but if you loaded it into a
computer and executed it, I think it would be.
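
A crude illustration of that difference (nothing more than a sketch): the
same text can lie inert as a description, or be executed as a process.

# The same program, first as a passive description, then as a running process.
program_text = """
state = 0
for i in range(5):
    state += i                   # these transitions occur only when executed
print('final state:', state)
"""

# As a description it is just a string - like source code sitting on a CD:
print(len(program_text), "characters; nothing has been computed yet")

# As a process, the described state transitions actually happen:
exec(program_text)               # prints: final state: 10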


 2. I don't believe (for now)  that it has any meaning (and consciousness)
 per se.



So you think the software mind in a software environment would never
question the redness of red, when the robot brain would?

Jason




Re: Why I am I?

2010-02-01 Thread Bruno Marchal


On 28 Jan 2010, at 20:27, RMahoney wrote:


On Jan 8, 12:38 pm, Bruno Marchal marc...@ulb.ac.be wrote:

Welcome RMahoney,

Nice thought experiments. But they need amnesia (like in going from
you to Cruise). I tend to think, like you, that it may be the case that
we are the same person (like those who result from a self-duplication:
both refer to being the same person as the original, yet acknowledge
their respective differentiation).


Yes, I think I understand what you mean by amnesia: you couldn't
carry any remembrance of your old self when changing to Tom Cruise,
but you would in the intermediary steps, and you would gradually lose
the concept of your old self as it is gradually replaced by Tom's
self-concept.


OK.
I think there is an agnosologic path from any person to any
person, for example from you to a bacterium, or Peano Arithmetic,
perhaps even the empty person. Agnosia is a term used for diseases
involving denial, like people who become blind yet claim not to
perceive any difference.




Thing is, it is very similar to the process happening as we age. I
began a journal when I was in my 20's, capturing my thoughts every time
I visited this subject in my mind trips. So when I read a page from that
journal today, I sometimes go wow, I was thinking that, then? I've
obviously acquired a bit of amnesia. Yet I feel like I'm the same person
because I've always had this body (although an aging body). What would
it be like if everyone had default amnesia such that any thought older
than 20 years is erased? So you wouldn't remember your earlier years,
but you were that person once. I could claim to have originated from
Tom Cruise's childhood and it wouldn't make any difference.


Sure. From a third person point of view, identity is relative.
But from a first person point of view it is a sort of absolute, related
to the way you have built your (current) self through your experiences
and heritage, relative to a normal set of computations. We are what
we value, I would say, but this makes it a personal question.
Note that the UDA reasoning is made in a way which prevents the need
for clarifying those considerations, albeit very interesting ones.




Just like I don't believe it makes any difference to say why I am I?
and not you?, as we are we, simultaneously, and we are they, all those
who lived past lives, etc.


... and future lives, alternate lives, and states.
OK, especially if you see that such a view prevents relativism. When
the 'other' makes a mistake, in the past, or the present (or the
future!), the question is how could *I* be wrong, how could *I* have
been wrong, how could *I* help to be less wrong. Such an attitude
encourages dialogue and the appreciation of the other(s), despite
(or thanks to) their relatively unknown nature. Eventually this can help
to develop some faith in the unknown, together with lucidity about
the hellish paths, which can then be seen as mostly the product of
certainty idolatry and security idolatry. It is a natural price of
consciousness: by knowing they are universal, Löbian machines know that
they can crash. And, being never satisfied, they will ask their most
probable local universal neighbors for more memory space and time, up,
for some, to their universal recognizance, and so be quite happy to
dispose of what 'God' (arithmetical truth) can offer them (and has
already offered them).
Knowing you are the other is a reason to embellish the relation with
the many possible and probable universal neighbor(s). The
computationalist good cannot make the bad disappear, but it may be
able to confine it more and more to phantasms and fantasies, or
second order, virtual, dreamed realities.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: the redness of the red

2010-02-01 Thread soulcatcher☠

 I think those simulated persons would be conscious.

The possibility of a superintelligence that creates worlds in its dreams
kinda freaks me out :)

So you think the software mind in a software environment would never
 question the redness of red, when the robot brain would?

No, I think that a good enough simulation of me must question the redness of
the red simply by definition - because I'm questioning, and it simulates my
behavior.
Nevertheless, I think that this simulation won't be conscious and has only
descriptive power, like a reflection in a mirror (a bad example, but it
conveys the idea). But I can't tell what exactly the difference is, what
that obscure physicalist principle is that I meant when speaking about
symbol grounding in the real world and that makes me (and not my
simulation) conscious.
OK, suppose we record a day in the life of my simulation and then replay
it - will it still be conscious?




Re: A Question...

2010-02-01 Thread José Ignacio Orlicki
I am subscribed to this list but I am not very well-versed; is there a PDF
or something I can read as an introduction? I've already read the
introduction by Jürgen Schmidhuber to computable universes:

http://www.idsia.ch/~juergen/everything/html.html

but I need more introductory reading...

Thanks!
José Ignacio.

On Thu, Jan 28, 2010 at 3:58 AM, freqflyer07281972
thismindisbud...@gmail.com wrote:
 Hey There,

 I love reading the posts on this group, and I find a lot of the ideas
 mindblowing (and more than occasionally over my head), but I was
 wondering if anyone could clarify these questions:

 1) Is QI implied by UDA and comp?
 2) Is QI implied by ASSA/RSSA?

 More generally, what is the existential/phenomenological import of all
 these crazy (meant in a totally respectful way) ideas?

 Curious,

 Dan







Re: the redness of the red

2010-02-01 Thread Brent Meeker

soulcatcher☠ wrote:


Do you see the meaning of physical laws being somehow different
from the programmed laws that simulate an environment?

Yes, I feel that a simulated mind is not identical to the real one.
A simulation is only an extension of the mind - just a tool, a mental
crutch, a pluggable module that gives you additional abilities. For
example, if the computational power of my brain were sufficient,
I could simulate other minds entirely in my mind (in imagination,
whatever) - but these imaginary minds won't be conscious, will they?

In other words:
1. I accept that computation is a description (an imperative one) of
reality, like math (declarative) or human language.
2. I don't believe (for now) that it has any meaning (and
consciousness) per se.


I would say that it gets its meaning (interpretation) from you. The 
meaning you assign it comes from your internal model of the world you 
interact with. This is partly hardwired by evolution and partly learned 
from your experience.


Brent






Re: the redness of the red

2010-02-01 Thread Jason Resch
On Mon, Feb 1, 2010 at 12:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 I think those simulated persons would be conscious.

 The possibility of a superintelligence that creates worlds in its dreams
 kinda freaks me out :)


Carl Sagan in Cosmos said that in the Hindu religion, there are an infinite
number of Gods, each dreaming their own universe:
http://www.youtube.com/watch?v=4E-_DdX8Ke0



 So you think the software mind in a software environment would never
 question the redness of red, when the robot brain would?

 No, I think that a good enough simulation of me must question the redness of
 the red simply by definition - because I'm questioning, and it simulates my
 behavior.
 Nevertheless, I think that this simulation won't be conscious and has only
 descriptive power, like a reflection in a mirror (a bad example, but it
 conveys the idea). But I can't tell what exactly the difference is, what
 that obscure physicalist principle is that I meant when speaking about
 symbol grounding in the real world and that makes me (and not my
 simulation) conscious.
 OK, suppose we record a day in the life of my simulation and then replay
 it - will it still be conscious?


I don't think your recording will be conscious.  It lacks the causal
relations that give meaning to its symbols.  I believe the symbols are
grounded and related to each other through their interactions in the
processing by the CPU/Turing machine/physical laws.
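
A toy way to see the contrast (my own illustration): in the first loop each
state is causally produced from its predecessor; the second merely reads the
same numbers back, and no value depends on any other.

# Computation: each state is derived from the previous one (causal relations).
def step(s):
    return (3 * s + 1) % 17      # an arbitrary deterministic transition rule

trace, s = [], 5
for _ in range(10):
    s = step(s)                  # this value exists because of the last one
    trace.append(s)
print("computed:", trace)

# Replay: identical output, but every value is just looked up in the log;
# you could delete the transition rule and the replay would be unaffected.
print("replayed:", list(trace))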

Do you think the redness of red is a physical property of red light or an
internal property of you (the organization of neurons in your brain)?

Jason




Re: the redness of the red

2010-02-01 Thread Jason Resch
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:

 Let me explain with an example. Suppose that you:
 1. simulate my brain in a computer program, so we can say that this
 program represents my brain in your symbols;
 2. simulate a red rose;
 3. feed the rose data into my simulated brain.
 I think (more believe than think) that this simulated brain won't see
 my redness - in fact, it won't see anything at all because it isn't
 conscious.
 But if you:
 1. make a robot that simulates my brain in my symbols, i.e. behaves
 (relative to the physical world) in the same way as I do;
 2. show a rose to the robot;
 I think that robot will experience the same redness as me.
 I would be glad if somebody suggested something to read about 'symbol
 grounding', semantics, etc. I have a lot of confusion here; I've
 always thought that logic is a formal language for 'syntactic'
 manipulation of 'strings' that acquire meaning only in our minds.


When I play a video game I am conscious.  Presumably I would still be
conscious even using a fully immersive system like the vertebrain system
described on this page ( http://marshallbrain.com/discard8.htm ).  If that
is true, and you agree with me so far, do you think a brain in a vat (
http://en.wikipedia.org/wiki/Brain_in_a_vat ) would be conscious?  Would it
be conscious whether its optic nerve were connected to a webcam or connected
to the TV/OUT port of a video game?  What about a human brain that spent its
whole life as a brain in a vat from the time it was born (assuming it were
given a robot body for input, or assuming it was given a realistic
computer-game reality)?  I am curious at what point you think the
consciousness would cease.

If you agree that the brain in the vat would be conscious in all cases (even
when given input from a video game) and you agree that a robot body with a
software brain would be conscious, why would it stop working when you put a
software brain in the same position as the brain in a vat?

Jason




Re: the redness of the red

2010-02-01 Thread Brent Meeker

Jason Resch wrote:
On Sat, Jan 30, 2010 at 8:10 PM, soulcatcher☠ soulcatche...@gmail.com wrote:


Let me explain with an example. Suppose that you:
1. simulate my brain in a computer program, so we can say that this
program represents my brain in your symbols;
2. simulate a red rose;
3. feed the rose data into my simulated brain.
I think (more believe than think) that this simulated brain won't see
my redness - in fact, it won't see anything at all because it isn't
conscious.
But if you:
1. make a robot that simulates my brain in my symbols, i.e. behaves
(relative to the physical world) in the same way as I do;
2. show a rose to the robot;
I think that robot will experience the same redness as me.
I would be glad if somebody suggested something to read about 'symbol
grounding', semantics, etc. I have a lot of confusion here; I've
always thought that logic is a formal language for 'syntactic'
manipulation of 'strings' that acquire meaning only in our minds.


When I play a video game I am conscious.  Presumably I would still be
conscious even using a fully immersive system like the vertebrain
system described on this page
( http://marshallbrain.com/discard8.htm ).  If that is true, and you
agree with me so far, do you think a brain in a vat
( http://en.wikipedia.org/wiki/Brain_in_a_vat ) would be conscious?
Would it be conscious whether its optic nerve were connected to a
webcam or connected to the TV/OUT port of a video game?  What about a
human brain that spent its whole life as a brain in a vat from the
time it was born (assuming it were given a robot body for input, or
assuming it was given a realistic computer-game reality)?  I am
curious at what point you think the consciousness would cease.



I think that if the brain in a vat had sufficient efferent/afferent
nerve connections so that it was able to both perceive and act in
the world (either real or virtual) then it would be conscious.  If it
were very restricted, e.g. it only got to play the same virtual video
game over and over, its consciousness would be similarly limited (I
think there are degrees of consciousness). And if it were too limited it
would crash.


Brent


If you agree that the brain in the vat would be conscious in all cases 
(even when given input from a video game) and you agree that a robot 
body with a software brain would be conscious, why would it stop 
working when you put a software brain in the same position as the 
brain in a vat?


Jason






Re: measure again '10

2010-02-01 Thread Jack Mallah
--- On Wed, 1/27/10, Brent Meeker meeke...@dslextreme.com wrote:
 Jack is talking about copies in the common sense of initially physically 
 identical beings who however occupy different places in the same spacetime 
 and hence have different viewpoints and experiences.

No, that's incorrect.  I don't know where you got that idea but I'd best put 
that misconception to rest first.

When I talk about copies I mean the same thing as the others on this list - 
beings who not only start out as the same type but also receive the same type 
of inputs and follow the same type of sequence of events.  Note: They follow 
the same sequence because they use the same algorithm but they must operate 
independently and in parallel - there are no causal links to enforce it.  If 
there are causal links forcing them to be in lockstep I might say they are 
shadows, not copies.

Such copies each have their own, separate consciousness - it just happens to be 
of the same type as that of the others.  It is not redundancy in the sense of 
needless redundancy.  Killing one would end that consciousness, yes.  In 
philosophy jargon, they are of the same type but are different tokens of it.

--- On Thu, 1/28/10, Jason Resch jasonre...@gmail.com wrote:
 Total utilitarianism advocates measuring the utility of a population based on 
 the total utility of its members.
 Average utilitarianism, on the other hand, advocates measuring the utility of 
 a population based on the average utility of that population.

I basically endorse total utilitarianism.  (I'm actually a bit more 
conservative but that isn't relevant here.)  I would say that average 
utilitarianism is completely insane and evil.  Ending the existence of a 
suffering person can be positive, but only if the quality of life of that 
person is negative.  Such a person would probably want to die.  OTOH not 
everyone who wants to die has negative utility, even if they think they do.
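
To spell out the difference with a trivial worked example (the utility
numbers are invented): average utilitarianism can score the removal of a
life worth living as an improvement, which is exactly what I object to.

# Three people, all with positive welfare (made-up utility numbers).
population = [10, 8, 2]
after_removal = [10, 8]          # "end" the person whose utility is 2

print(sum(population), sum(population) / len(population))            # 20, 6.67
print(sum(after_removal), sum(after_removal) / len(after_removal))   # 18, 9.0
# Average utilitarianism rates the removal as an improvement (9.0 > 6.67);
# total utilitarianism rates it as a loss (18 < 20).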

--- On Wed, 1/27/10, Stathis Papaioannou stath...@gmail.com wrote:
 if there were a million copies of me in lockstep and all but one were 
 destroyed, then each of the million copies would feel that they had 
 continuity of consciousness with the remaining one, so they are OK with what 
 is about to happen.

Suppose someone killed all copies but lied to them first, saying that they 
would survive.  They would not feel worried.  Would that be OK?  It seems like 
the same idea to me.

 Your measure-preserving criterion for determining when it's OK to kill a 
 person is just something you have made up because you think it sounds 
 reasonable, and has nothing to do with the wishes and feelings of the person 
 getting killed.

First, I should reiterate something I have already said: it is not generally OK
to kill someone without their permission, even if you replace them.  The reason
it's not OK is just that it's like enslaving someone - you are forcing things
on them.  This has nothing particularly to do with killing; the same would
apply, for example, to cutting off someone's arm and replacing it with a new
one.  Even if the new one works fine, the guy has a right to be mad if his
permission was not asked.  That is an ethical issue.  I would make an
exception for a criminal or bad guy whom I would want to imprison or kill
without his permission.

That said, as my example of lying to the person shows, Stathis, your criterion 
of caring about whether the person to be killed 'feels worried' is irrelevant 
to the topic at hand.

Measure preservation means that you are leaving behind the same number of 
people you started with.  There is nothing arbitrary about that.  If, even 
having obtained Bob's permission, you kill Bob, I'd say you deserve to be 
punished if I think Bob had value.  But if you also replace him with Charlie, 
then if I judge that Bob and Charlie are of equal value, I'd say you deserve to 
be punished and rewarded by the same amount.  The same goes if you kill Bob and 
Dave and replace them with Bob' and Dave', or if you kill 2 Bobs and replace 
them with 2 other Bobs.  That is measure preservation.  If you kill 2 Bobs and 
replace them with only one then you deserve a net punishment.
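
As bookkeeping (a toy illustration, all values invented), measure
preservation is just this comparison:

# Toy measure bookkeeping: compare total value before and after.
def net_change(before, after):
    """0 means measure preserved; negative means a net loss (punishable)."""
    return sum(after.values()) - sum(before.values())

print(net_change({"Bob": 1, "Dave": 1}, {"Bob'": 1, "Dave'": 1}))   # 0
print(net_change({"Bob": 2}, {"Bob": 1}))                           # -1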

  Suppose there is a guy who is kind of a crazy oriental monk.  He meditates 
  and subjectively believes that he is now the reincarnation of ALL other 
  people.  Is it OK to now kill all other people and just leave alive this 
  one monk?
 
 No, because the people who are killed won't feel that they have continuity of 
 consciousness with the monk, unless the monk really did run emulations of all 
 of them in his mind. 

They don't know what's in his mind either way, so what they believe before 
being killed is utterly irrelevant here.  We can suppose for argument's sake 
that they are all good peasants, they never miss giving their rice offerings, 
and so they believe anything the monk tells them.  And he believes what he says.

Perhaps what you were trying to get at is that _after_they 

Re: measure again '10

2010-02-01 Thread Brent Meeker

Jack Mallah wrote:

--- On Wed, 1/27/10, Brent Meeker meeke...@dslextreme.com wrote:
  

Jack is talking about copies in the common sense of initially physically 
identical beings who however occupy different places in the same spacetime and 
hence have different viewpoints and experiences.



No, that's incorrect.  I don't know where you got that idea but I'd best put 
that misconception to rest first.

When I talk about copies I mean the same thing as the others on this list - 
beings who not only start out as the same type but also receive the same type 
of inputs and follow the same type of sequence of events.  Note: They follow 
the same sequence because they use the same algorithm but they must operate 
independently and in parallel - there are no causal links to enforce it.  If 
there are causal links forcing them to be in lockstep I might say they are 
shadows, not copies.
  


I don't see that as possible except possibly by realizing the two copies
in two virtual realities so that the whole environment is simulated.  And
the simulated worlds would have to be completely deterministic - no
quantum randomness.




Such copies each have their own, separate consciousness - it just happens to be of the 
same type as that of the others.  It is not redundancy in the sense of 
needless redundancy.  Killing one would end that consciousness, yes.  In philosophy 
jargon, they are of the same type but are different tokens of it.
  


Philosophy jargon doesn't require that two tokens of the same type be the same
in every respect; e.g. 'A' and 'A' are two tokens of the same type, but they
are not identical (one is to the left of the other, for example).


Brent
