Re: Dualism and the DA

2005-06-16 Thread Russell Standish
On Wed, Jun 15, 2005 at 10:30:11PM -0700, Jonathan Colvin wrote:
 
 Nope, I'm thinking of dualism as "the mind (or consciousness) is separate
 from the body", i.e. the mind is not identical to the body.
 

These two statements are not equivalent. You cannot say that the fist
is separate from the hand. Yet the fist is not identical to the
hand. Another example. You cannot say that a smile is separate from
someone's mouth. Yet a smile is not identical to the mouth.

 
  But unless I am an immaterial soul or other sort of 
 cartesian entity, 
  this is not possible.
 
 I disagree completely. You will need to argue your case hard 
 and fast on this one.
 
 See below.
 

Yah - I'm still waiting...

 
  If I am simply my body, then the
  statement "I could have been someone else" is as ludicrous as pointing
  to a tree and saying "Why is that tree, that tree? Why couldn't it
  have been a different tree? Why couldn't it have been a lion?"
  
  Jonathan Colvin
 
 The tree, if conscious, could ask the question of why it isn't 
 a lion. The only thing absurd about that question is that we 
 know trees aren't conscious.
 
 That seems an absurd question to me. How could a tree be a lion? Unless the
 tree's consciousness is not identical with its body (trunk, I guess), this
 is a meaningless question. To ask that question *assumes* a dualism. It's a
 subtle dualism, to be sure.
 

Of course a mind is not _identical_ to a body. What an absurd thing to
say. If your definition of dualism is that mind and body are not
identical, then this is a poor definition indeed. It is tautologically
true. My definition would be something along the lines of "minds and
bodies have independent existence" - i.e. positing the existence of
disembodied minds is dualism. Such an assumption is not required to
apply the Doomsday argument. I may make such assumptions in other
areas though - such as wondering why the Anthropic Principle is
valid. Not dualism implies the Anthropic Principle.

 As a little boy once asked, "Why are lions, lions? Why aren't lions ants?"
 
 Jonathan Colvin
 

I have asked this question of myself: "Why am I not an ant?". The
answer (by the Doomsday Argument) is that ants are not conscious. The
question, and the answer, are quite profound.




RE: Dualism and the DA

2005-06-16 Thread Jonathan Colvin
Russell Standish wrote:
 Nope, I'm thinking of dualism as the mind (or consciousness) is 
 separate from the body. Ie. The mind is not identical to the body.
 

These two statements are not equivalent. You cannot say that 
the fist is separate from the hand. Yet the fist is not 
identical to the hand.

Well, actually I'd say the fist *is* identical to the hand. At least, my
fist seems to be identical to my hand.


 Another example. You cannot say that a 
smile is separate from someone's mouth. Yet a smile is not 
identical to the mouth.

Depends whether you are a Platonist (dualist) about smiles. I'd say a
smiling mouth *is* identical to a mouth.


  But unless I am an immaterial soul or other sort of
 cartesian entity,
  this is not possible.
 
 I disagree completely. You will need to argue your case 
hard and fast 
 on this one.
 
 See below.
 

Yah - I'm still waiting...

Well, to explicate, the DA suffers from the issue of defining an appropriate
reference set. Now, we are clearly not both random observers on the class of
all observers (what are the chances of two random observers from the class of
all observers meeting at this time on the same mailing list? Googolplexianly
small). Neither are we both random observers from the class of humans
(same argument... what are the chances that both our birth ranks are
approximately the same?). For instance, an appropriate reference set for me
(or anyone reading this exchange) might be people with access to email
debating the DA. But this reference set nullifies the DA, since my birth
rank is no longer random; it is constrained by the requirement, for example,
that email exists (a pre-literate caveman could not debate the DA).

The only way to rescue the DA is to assume that I *could have had* a
different birth rank; in other words, that I could have been someone other
than me ("me" as in my body). If the body I'm occupying is contingent (i.e.
I could have been in any human body, and am in this one by pure chance),
then the DA is rescued. This seems to require a dualistic account of
identity. All theories that reify the observer are essentially dualistic,
IMHO.



 
  If I am simply my body, then the
  statement I could have been someone else is as ludicrous
 as pointing
  to a tree and saying Why is that tree, that tree? Why 
couldn't it 
  have been a different tree? Why couldn't it have been a lion?
  
  Jonathan Colvin
 
 The tree, if conscious, could ask the question of why it isn't a 
 lion. The only thing absurd about that question is that we 
know trees 
 aren't conscious.
 
 That seems an absurd question to me. How could a tree be a lion? 
 Unless the tree's consciousness is not identical with its 
body (trunk, 
 I guess), this is a meaningless question. To ask that question 
 *assumes* a dualism. It's a subtle dualism, to be sure.
 

Of course a mind is not _identical_ to a body. What an absurd 
thing to say. If your definition of dualism is that mind and 
body are not identical, then this is a poor definition indeed. 
It is tautologically true.

Why do you say "of course"? I believe that I (my mind) am exactly identical
to my body (its brain, to be specific).


 My definition would be something 
along the lines of minds and bodies have independent existence 
- ie positing the existence of disembodied minds is dualism. 
Such an assumption is not required to apply the Doomsday 
argument. I may make such assumptions in other areas though - 
such as wondering why the Anthropic Principle is valid. Not 
dualism implies the Anthropic Principle.

Then how can a tree be a lion without assuming that minds and bodies can
have independent existence? Assuming dualism, it's easy; simply switch the
lion's mind with the tree's.

 As a little boy once asked, Why are lions, lions? Why 
aren't lions ants?

I have asked this question of myself Why I am not an ant?. 
The answer (by the Doomsday Argument) is that ants are not 
conscious. The question, and answer is quite profound.

That doesn't seem profound; it seems obvious. Even more obvious is the
answer "If you were an ant, you wouldn't be Russell Standish". So it is a
meaningless question.

Switch the question. Why aren't you me (Jonathan Colvin)? I'm conscious
(feels like I am, anyway).

Jonathan Colvin




Re: Dualism and the DA

2005-06-16 Thread Russell Standish
On Thu, Jun 16, 2005 at 01:02:11AM -0700, Jonathan Colvin wrote:
 Russell Standish wrote:
  Nope, I'm thinking of dualism as the mind (or consciousness) is 
  separate from the body. Ie. The mind is not identical to the body.
  
 
 These two statements are not equivalent. You cannot say that 
 the fist is separate from the hand. Yet the fist is not 
 identical to the hand.
 
 Well, actually I'd say the fist *is* identical to the hand. At least, my
 fist seems to be identical to my hand.
 

Even when the hand is open


 
  Another example. You cannot say that a 
 smile is separate from someone's mouth. Yet a smile is not 
 identical to the mouth.
 
 Depends whether you are a Platonist (dualist) about smiles. I'd say a
 smiling mouth *is* identical to a mouth.
 

Even when the mouth is turned down???

 Well, to explicate, the DA suffers from the issue of defining an appropriate
 reference set. Now, we are clearly not both random observers on the class of
 all observers(what are the chances of two random observers from the class of
 all observers meeting at this time on the same mailing list? Googleplexianly
 small). Neither are we both random observers from the class of humans
 (same argument..what are the chances that both our birth ranks are
 approximately the same?). For instance, an appropriate reference set for me
 (or anyone reading this exchange) might be people with access to email
 debating the DA. But this reference set nullifies the DA, since my birth
 rank is no longer random; it is constrained by the requirement, for example,
 that email exists (a pre-literate caveman could not debate the DA).


This would be true if we are arguing about something that depended on
us communicating via email. The DA makes no such argument, so
therefore the existence of email, and of our communication is irrelevant.

 
 The only way to rescue the DA is to assume that I *could have had* a
 different birth rank; in other words, that I could have been someone other
 than me (me as in my body). If the body I'm occupying is contingent (ie.
 I could have been in any human body, and am in this one by pure chance),
 then the DA is rescued. 

Yes.

 This seems to require a dualistic account of
 identity. 

Why? Explain this particular jump of logic please? I'm not being
stubborn here, I seriously do not understand how you draw this conclusion.

 
 Of course a mind is not _identical_ to a body. What an absurd 
 thing to say. If your definition of dualism is that mind and 
 body are not identical, then this is a poor definition indeed. 
 It is tautologically true.
 
 Why do you say of course? I believe that I (my mind) am exactly identical
 to my body (its brain, to be specific).
 

Really? Even when you're not conscious? What about after you've died?
What about after brain surgery? After being copied by Bruno Marchal's
teletransporter? 

 
  My definition would be something 
 along the lines of minds and bodies have independent existence 
 - ie positing the existence of disembodied minds is dualism. 
 Such an assumption is not required to apply the Doomsday 
 argument. I may make such assumptions in other areas though - 
 such as wondering why the Anthropic Principle is valid. Not 
 dualism implies the Anthropic Principle.
 
 Then how can a tree be a lion without assuming that minds and bodies can
 have independent existance? Assuming dualism, its easy; simply switch the
 lion's mind with the tree's.

The question "Why am I not a lion?" is syntactically similar to "Why am
I not an ant?", or "Why am I not Jonathan Colvin?". The treeness (or
otherwise) of the questioner is rather irrelevant. In any case, the
answers to both the latter questions do not assume minds can be
swapped.

 
  As a little boy once asked, Why are lions, lions? Why 
 aren't lions ants?
 
 I have asked this question of myself Why I am not an ant?. 
 The answer (by the Doomsday Argument) is that ants are not 
 conscious. The question, and answer is quite profound.
 
 That doesn't seem profound; it seems obvious. Even more obvious is the
 answer If you were an ant, you wouldn't be Russell Standish. So it is a
 meaningless question.
 

I _didn't_ ask the question "Assuming I am Russell Standish, why am I
not an ant?" I asked the question "Why wasn't I an ant?". It's a
different question completely.


 Switch the question. Why aren't you me (Jonathan Colvin)? I'm conscious
 (feels like I am, anyway).
 
 Jonathan Colvin
 

This one is also easy to answer. I'm just as likely to have been
born you as born me. But I have to have been born someone. I just so
happened to have been born me. This is called symmetry breaking.

In the ant case it is different. It is around a million times more
likely that I would have been born an ant rather than a human
being. Consequently the answer is different.


Re: Conscious descriptions

2005-06-16 Thread Bruno Marchal


On 15 June 2005, at 01:39, Russell Standish wrote:


On Tue, Jun 14, 2005 at 04:39:57PM +0200, Bruno Marchal wrote:



OK, but it can be misleading (especially in advanced stuff!). Neither a
program, nor a machine, nor a body, nor a brain can think. A person can
think, and manifest eself (I follow Patrick for the pronouns) through a
program, or a machine, or a brain,


Actually, I think I was the one who introduced these 3rd person neutral
pronouns (e, er and em). I picked up the habit from Michael Spivak
(a well known mathematician).

Doesn't this beg the question a bit as to what a person really is?
In loose everyday conversation, a person is a member of the species
homo sapiens. However, surely we don't want to rule out the
possibility of other conscious things before we even start. And also
as you mention below, there are odd corner cases - the sleeping human
being etc.




I just identify the first person with the knower. Think about someone 
being cut in Brussels and pasted in two cities, A and B, 
and nowhere else. Each copy has an experience, one in A, the other in 
B. Each of them knows where they have been reconstituted, and so each of 
them gets one bit of information. But this bit is uncommunicable from a 
third person point of view. An outsider would get 0 bits from a phone 
call by each copy (by default I assume the cut/paste device is 100% 
reliable).
I identify the third person with the body, or with any third person 
description of the body; it could be a program (with comp). Despite 
Jonathan (I know you agree with me), I consider it fundamental to 
distinguish the 1-person knower from the 3-person body/brain/program. 
So when I say that only a person can think, I really mean a 
1-person.
What is cute with comp is that the Theaetetus definition of knowledge 
(and most of its variants) leads to a well defined distinction between 
1 and 3 person. What is nice also is that the knower is not, in any 
way, 3-describable (we get, for free, a 
Brouwerian-Heraclitean-Bergsonian-Poincarean ... theory of 
conscious-time-duration... at the place where we would least expect 
it).


For the modalist, I recall this consists in defining knowing p by 
"proving p, and p is true": Cp = Bp & p. The non-equivalence follows from 
incompleteness.
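
A compact restatement of that definition (my transcription into LaTeX 
notation; B is the provability predicate of a fixed consistent machine):

    % Theaetetus-style knower built on top of provability (my transcription,
    % not Bruno's own notation):
    \[
      C\,p \;:\equiv\; B\,p \,\wedge\, p
    \]
    % The non-equivalence with plain Bp follows from incompleteness: by
    % Goedel's second theorem the machine cannot prove  B\bot \to \bot
    % (its own consistency), although  (B\bot \wedge \bot) \to \bot  is
    % trivially provable, so the two modalities obey different logics.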


snip: I snip when I agree, or when I believe the disagreement would 
push us outside the main topic







Church-Turing thesis and arithmetical platonism (my "all
description strings" condition fulfills a similar role to arithmetical
platonism) are enough.



I am not so sure. You are not always clear if the strings describe the
equivalent of a program (be it a universal program or not), or
describe a computation (be it finite or infinite).


Both actually. One can feed a description into the input tape of a
UTM, hence it becomes a program. They may also be generated by a
program running on a machine.




I was not making that distinction. I was distinguishing between a 
program (being a product of another program or not) and the 
computation, that is the running of the program. The computation can be 
described by the description of the trace of the program (like when we 
debug a program). For example the basic program "10 goto 10" has an 
infinite trace, like 10 10 10 10 10 10 10 ...


That distinction is primordial for understanding the work of the 
Universal Dovetailer, which dovetails on all programs. The UD generates 
all programs and dovetails on all their executions. The possibility, or 
consistency, of this is a consequence of Church's thesis.
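
For readers who have not met dovetailing before, here is a toy sketch of the 
interleaving idea (mine, not Bruno's construction), with the "programs" stood 
in for by Python generators that produce endless traces:

    # Toy dovetailer: interleave execution steps of an unbounded family of
    # non-halting "programs" so every program gets arbitrarily many steps.
    from itertools import count

    def program(n):
        """Hypothetical stand-in for the n-th program: an endless trace."""
        k = 0
        while True:
            yield "prog%d step%d" % (n, k)
            k += 1

    def dovetail(max_events=15):
        """Run step j of program i for all pairs (i, j), along diagonals."""
        traces = {}
        events = []
        for total in count(0):            # total indexes the diagonal i + j
            for i in range(total + 1):
                if i not in traces:
                    traces[i] = program(i)
                events.append(next(traces[i]))
                if len(events) >= max_events:
                    return events

    print(dovetail())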









Consciousness
is eventually related to a bunch of (sheaves of) infinite computations.
They can be coded by infinite strings, but they are not programs.



Is this  because they are ultimately not computable (due to the
inherent indeterminism)?




I don't think so. It is just because for any computational state there 
is an infinity of computations going through that state, and this is 
a logical cause of the 1-person indeterminacy (given that the 
1-person is not aware of the huge delays (number of steps of the 
execution of the UD)). This is a one-line summary of the UDA (see my URL 
for links to a longer explanation).






There are various strengthenings of the CT thesis which are far from
obvious, and even false in some cases. One of my criticisms of your
work is that I'm not sure you aren't using one of the strong CT
theses, but we can come back to that.




I am using the original theses by Church, Post, Markov, Turing, ... 
They are equivalent and can be summarized anachronistically by "all 
universal digital machines compute the same functions from N to N".










This obviates
having to fix the UTM. Perhaps this is the route into the anthropic
principle.



? Church's thesis just says things do not depend on which UTM you
choose initially


All programs need to be interpreted with respect to a particular
machine. The machine can be changed by appeal to universal
computation, but then the program 

another puzzzle

2005-06-16 Thread Stathis Papaioannou


You find yourself in a locked room with no windows, and no memory of how you 
got there. The room is sparsely furnished: a chair, a desk, pen and paper, 
and in one corner a light. The light is currently red, but in the time you 
have been in the room you have observed that it alternates between red and 
green every 10 minutes. Other than the coloured light, nothing in the room 
seems to change. Opening one of the desk drawers, you find a piece of paper 
with incredibly neat handwriting. It turns out to be a letter from God, 
revealing that you have been placed in the room as part of a philosophical 
experiment. Every 10 minutes, the system alternates between two states. One 
state consists of you alone in your room. The other state consists of 10^100 
exact copies of you, their minds perfectly synchronised with your mind, each 
copy isolated from all the others in a room just like yours. Whenever the 
light changes colour, it means that God is either instantaneously creating 
(10^100 - 1) copies, or instantaneously destroying all but one randomly 
chosen copy.


Your task is to guess which colour of the light corresponds with which state 
and write it down. Then God will send you home.


Having absorbed this information, you reason as follows. Suppose that right 
now you are one of the copies sampled randomly from all the copies that you 
could possibly be. If you guess that you are one of the 10^100 group, you 
will be right with probability (10^100)/(10^100+1) (which your calculator 
tells you equals one). If you guess that you are the sole copy, you will be 
right with probability 1/(10^100+1) (which your calculator tells you equals 
zero). Therefore, you would be foolish indeed if you don't guess that you are in 
the 10^100 group. And since the light right now is red, red must correspond 
with the 10^100 copy state and green with the single copy state.
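
As an aside on the arithmetic above, here is an exact version of the two 
probabilities (a minimal Python sketch, not part of the puzzle itself):

    # Exact form of the two probabilities considered in the guess.
    from fractions import Fraction

    N = 10**100                      # copies in the "many" state
    p_many = Fraction(N, N + 1)      # you are one of the 10^100 copies
    p_single = Fraction(1, N + 1)    # you are the sole copy

    print(float(p_many))             # 1.0 in double precision
    print(float(p_single))           # 1e-100, negligibly small
    print(p_many + p_single == 1)    # True: the exact values sum to 1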


But just as you are about to write down your conclusion, the light changes 
to green...


What's wrong with the reasoning here?

--Stathis Papaioannou





Re: another puzzzle

2005-06-16 Thread rmiller

At 09:12 AM 6/16/2005, Stathis Papaioannou wrote:

You find yourself in a locked room with no windows, and no memory of how 
you got there. The room is sparsely furnished: a chair, a desk, pen and 
paper, and in one corner a light.


RM: You've just described me at work in my office.

The light is currently red, but in the time you have been in the room you 
have observed that it alternates between red and green every 10 minutes. 
Other than the coloured light, nothing in the room seems to change.


RM. . .at my annual New Years' party.

Opening one of the desk drawers, you find a piece of paper with incredibly 
neat handwriting. It turns out to be a letter from God, revealing that you 
have been placed in the room as part of a philosophical experiment. Every 
10 minutes, the system alternates between two states. One state consists 
of you alone in your room. The other state consists of 10^100 exact copies 
of you, their minds perfectly synchronised with your mind, each copy 
isolated from all the others in a room just like yours. Whenever the light 
changes colour, it means that God is either instantaneously creating 
(10^100 - 1) copies, or instantaneously destroying all but one randomly 
chosen copy.


Your task is to guess which colour of the light corresponds with which 
state and write it down. Then God will send you home.


Having absorbed this information, you reason as follows. Suppose that 
right now you are one of the copies sampled randomly from all the copies 
that you could possibly be. If you guess that you are one of the 10^100 
group, you will be right with probability (10^100)/(10^100+1) (which your 
calculator tells you equals one). If you guess that you are the sole copy, 
you will be right with probability 1/(10^100+1) (which your calculator 
tells you equals zero). Therefore, you would be foolish indeed if you 
don't guess that you in the 10^100 group. And since the light right now is 
red, red must correspond with the 10^100 copy state and green with the 
single copy state.


But just as you are about to write down your conclusion, the light changes 
to green...


What's wrong with the reasoning here?


RM: Nothing wrong with the premise or the reasoning IMHO.  Happens to me 
every day---while sitting at a traffic light alone in my car(s) all 10^100 
of me come up with a great idea---I try to write it down and the light 
changes to green.






--Stathis Papaioannou







Re: another puzzzle

2005-06-16 Thread rmiller

At 09:12 AM 6/16/2005, Stathis Papaioannou wrote:

You find yourself in a locked room with no windows, and no memory of how 
you got there. \


(snip)

 The other state consists of 10^100 exact copies of you, their minds 
perfectly synchronised with your mind, each copy isolated from all the 
others in a room just like yours. Whenever the light changes colour, it 
means that God is either instantaneously creating (10^100 - 1) copies, or 
instantaneously destroying all but one randomly chosen copy.


RM's two cents worth: If all the 10^100 copies have exactly the same 
sensory input, exactly the same past, exactly the same environment and have 
exactly the same behavior systems, then there would be no overall increase 
in complexity (no additional links between nodes), but there would overall 
be a multiplication of intensity (10^100).  Would this result in a more 
clarified perception during the time period when one is represented 
(magnified?) by 10^100?  It's an open switch (i.e. who knows???)  However, 
the increase in intensity would *not* result in greater perception; that 
would involve linking additional nodes---i.e. getting more neurons or 
elements of the behavior system involved---and the number of links over the 
10^100 copies would remain static.


If Stathis includes the possibility of chaos into the system at the node 
level (corresponding to random fluctuations among interactions at the node 
level) then these differences among the 10^100 copies would amount to 
10^100 specific layers of the individual all linked by the equivalence of 
the similarly-configured behavior systems.  If one could see this from the 
perspective of (say) Hilbert space, it may look like a deck of perfectly 
similar individuals with minor variations or fuzziness.  These links as 
well as the fuzziness over many worlds may be what corresponds to 
consciousness.   





Re: Dualism

2005-06-16 Thread Stephen Paul King

Dear Jonathan,

- Original Message - 
From: Jonathan Colvin [EMAIL PROTECTED]
To: 'Stephen Paul King' [EMAIL PROTECTED]; 
everything-list@eskimo.com

Sent: Thursday, June 16, 2005 1:14 AM
Subject: RE: Dualism and the DA



Stephen Paul King wrote:

   Pardon the intrusion, but in your opinion does every form
of dualism require that one side of the duality has properties
and behaviors that are not constrained by the other side of
the duality, as exemplified by the idea of randomly emplaced souls?
   The idea that all dualities, of say mind and body, allow
that minds and bodies can have properties and behaviours that
are not mutually constrained is, at best, an incoherent straw dog.


I don't really understand the question the way you've phrased it (I'm not
sure what you mean by "mutually constrained"); I *think* you are asking
whether I believe that it is necessary that any duality must have mutually
exclusive properties (if not, please elaborate).


[SPK]

   The same kind of mutual constraint that exists between a given physical 
object, say an IBM z990 or a 1972 Jaguar XKE or the human Stephen Paul King, 
and the possible complete descriptions of such. It is upon this distinction 
between a physical object and its representations, or equivalently, between a 
complete description and its possible implementations, that the duality that 
I argue for is based. This is very different from the Cartesian duality of 
substances (res extensa and res cogitans) that are separate and 
independent and yet mysteriously linked.




I think this is implied by the very concept of dualism; if the properties of
the dual entities (say mind and body, or particle and wave) are NOT mutually
exclusive, then there is no dualism to talk about. If the mind and the body
are identical, there is no dualism.


[SPK]

   Mutual exclusivity does not make a dualism, and it should be obvious 
that identity is not the negation of mutual exclusivity!


Stephen 



Re: possible solution to modal realism's problem of induction

2005-06-16 Thread Bruno Marchal


On 14 June 2005, at 18:26, Brian Holtz wrote:



Hi everyone (in this world and all relevantly similar ones :-),




Welcome to the list, Brian. Thanks for the link to Alexander R. Pruss's 
web page, which seems quite interesting (and which I will comment on a 
little bit too, here or in a later post).




I like the solution to the Induction / Dragon / Exploding Cow problem 
that I see in work by Malcolm, Standish, Tegmark, and Schmidhuber.





It is equivalent to the white rabbit problem we indeed talk about, 
all along this list, and which is *almost* solved in my PhD thesis 
(to be short). May I attract your attention to it by referring you to 
my web page? http://iridia.ulb.ac.be/~marchal/


It is a good occasion to sum up the main differences and the main 
similarities between Standish, Schmidhuber, Lewis, Tegmark, Levy,  
Ruhl, Mitra, Mazer, Finney, ... and my own. All approaches are indeed 
forms of modal realism, and this is indeed what the everything-list is 
all about.


Now I want to be short, and I apologize in advance for some 
oversimplification; please, any of you, don't hesitate to correct 
me.


To simplify the comparison I think it could be useful to compare them 
from their ontology and their epistemology, and the way they tackle the 
Dragon problem.


Finney, for example (a pillar of the list), borrows Bostrom's notion of 
the observer-moment (OM). He argues they are fundamental, and his modal 
realism consists in accepting or postulating *all observer-moments*.
Then he borrows a computationalist hypothesis from Schmidhuber, and 
associates to each OM a finite binary string.
Then he tackles the dragon problem by attaching to those binary 
strings/OMs their Kolmogorov complexity, from which he infers an absolute 
measure. Short strings will have higher measure, and this should make 
the dragon disappear through Bostrom's Self-Sampling Assumption (SSA), 
taken in some absolute version of it: the ASSA.
My criticism: OMs are described by Bostrom as first person subjective 
constructs, and it is not clear how they can or should be related to the 
strings in such a way that we can make personal predictions. The dragon 
disappears, but then in one second I will be a bacterium!


Schmidhuber postulates a big programmer which runs all programs. He 
postulates some universe, and he postulates that the possible universes are 
computational objects. Then he tries to find some prior explaining the 
importance of short programs at the origin of one universe capable of 
sustaining self-aware structures like us.
My criticism: there is simply no notion of first person available. 
Worse, Schmidhuber is obliged to postulate some totally unknown 
physical reality. So the epistemology is empty and the ontology is 
unknowable, though, accordingly, guessable (this is quite close to 
traditional physicalism).


Tegmark, in his first paper, suggests the existence of ALL Mathematical 
Structures. This is ontologically much more interesting than Schmidhuber's 
frame, imo. Unfortunately it is too big, and Tegmark seems not to know of 
the failure of all mathematicians to capture all of mathematics 
mathematically. I have discussed elsewhere in the list, at length, some 
cardinality problems related to Tegmarkian approaches. In Tegmark there 
is an embryo of a distinction between first and third person points of 
view, but it is either vague, or locally clear only under the 
assumption of QM, and then it is exactly the (very interesting) 
difference between subjective and objective knowledge already 
introduced in Everett's basic papers. The mind body problem is still 
under the rug.


Both Tegmark and Schmidhuber assume an unclear relation between 
observer and universe, which in general presupposes Aristotle's theory of 
substance.


In that regard, epistemologically, Malcolm has the same physicalist 
attitude. He describes quite clearly three sorts of *physical* 
theories, having in their intended models (in the logician's sense) anything 
from one universe up to all logically possible universes, and he 
defends, quite convincingly (imo), the last sort of theory. But he 
discusses too quickly the relation between universe and information, so 
I cannot really say more.
Main criticism: the approach relies too much on some Aristotelian notion 
of universe, and the 1-3 distinction is not really tackled.


Standish is not yet clear enough about his assumptions, but seems to 
get a pretty derivation of the Schroedinger equation, which is an 
improvement. He does assume time, with the topology of the reals, which 
is my main criticism. The 1-3 distinction is present and used in an 
anthropic way, but I have not yet understood it precisely.


George Levy is completely aware of the 1-3 distinction, and puts the 
1-person at the origin of a purely first person plenitude. Well, so 
much so that it is not clear to me whether the plenitude is really suited to 
being described in a 3-person theory, and this explains some of his 
silence on the list.


Note that some people, like Wei Dai, the list's 

Re: another puzzzle

2005-06-16 Thread daddycaylor

Stathis wrote:


 You find yourself in a locked room with no windows, and no memory of how you got there
 What's wrong with the reasoning here?

This is also in response to your explanation to me of copying etc. in your 
last post to "Many pasts?..."

I think there is too much we don't know about quantum behavior vs. 
macro-matter (e.g. human bodies) behavior to say that copying, and 
subsequent diverging histories, is not like dividing by zero. I think that 
even if it were possible to copy a body (i.e. exactly) and have more than 
one copy at the same time, for the purposes of your thought-experiment why 
wouldn't it be the equivalent of quantum entanglement, where you really have 
the equivalent of just the original? This is where I think the reasoning in 
your puzzle is flawed. Having 10^100+1 identical bodies is equivalent to 
having one body, so it makes it a 50/50 chance. Until the information is 
actually revealed, it would be just like the copying didn't happen, 
therefore there is no way to tell which state (copied or not copied) is 
currently in effect. Even though this may not be an appealing option, I 
believe that copying, if possible, wouldn't change anything having to do 
with identity (it doesn't "add to the measure"). Like Einstein said, 
insanity is doing the same thing over and over again and expecting a 
different result.

In addition, even if copying a body with two subsequent diverging histories 
were possible, why wouldn't this become just like two different people? Who 
cares if there are disputes? That's nothing new. What does that have to do 
with consciousness? I don't believe that identity is dependent on 
consciousness.

Tom Caylor



Re: another puzzzle

2005-06-16 Thread Hal Finney
Stathis Papaioannou writes:
 You find yourself in a locked room with no windows, and no memory of how you 
 got there. The room is sparsely furnished: a chair, a desk, pen and paper, 
 and in one corner a light. The light is currently red, but in the time you 
 have been in the room you have observed that it alternates between red and 
 green every 10 minutes. Other than the coloured light, nothing in the room 
 seems to change. Opening one of the desk drawers, you find a piece of paper 
 with incredibly neat handwriting. It turns out to be a letter from God, 
 revealing that you have been placed in the room as part of a philosophical 
 experiment. Every 10 minutes, the system alternates between two states. One 
 state consists of you alone in your room. The other state consists of 10^100 
 exact copies of you, their minds perfectly synchronised with your mind, each 
 copy isolated from all the others in a room just like yours. Whenever the 
 light changes colour, it means that God is either instantaneously creating 
 (10^100 - 1) copies, or instantaneously destroying all but one randomly 
 chosen copy.

 Your task is to guess which colour of the light corresponds with which state 
 and write it down. Then God will send you home.

Let me make a few comments about this experiment.  I would find it quite
alarming to be experiencing these conditions.  When the light changes
and I go from the high to the low measure state, I would expect to die.
When it goes from the low to the high measure state, I would expect that
my next moment is in a brand new consciousness (that shares memories
with the old).  Although the near-certainty of death is balanced by the
near-certainty of birth, it is to such an extreme degree that it seems
utterly bizarre.  Conscious observers should not be created and destroyed
so cavalierly, not if they know about it.

Suppose you stepped out of a duplicating booth, and a guy walked up with
a gun, aimed it at you, pulled the trigger and killed you.  Would you
say, oh, well, I'm only losing two seconds of memories, my counterpart
will go on anyway?  I don't think so, I think you would be extremely
alarmed and upset at the prospect of your death.  The existence of
your counterpart would be small comfort.  I am speaking specifically
of your views, Stathis, because I think you have already expressed your
disinterest in your copies.

God is basically putting you in this situation, but to an enormously,
unimaginably vaster degree.  He is literally playing God with your
consciousness.  I would say it's a very bad thing to do.

And what happens at the end?  Suppose I guess right, all 10^100 of me?
How do we all go home?  Does God create 10^100 copies of entire
universes for all my copies to go home to as a reward?  I doubt it!
Somehow I think the old guy is going to kill me off again, all but one
infinitesimal fraction of me, and let this tiny little piece go home.

Well, so what?  What good is that?  Why do I care, given that I am
going to die, what happens to the one in 10^100 part of me?  That's an
inconceivably small fraction.

In fact, I might actually prefer to have that tiny fraction stay in the
room so I can be reborn.  Having 10^100 copies 50% of the time gives me
a lot higher measure than just being one person.  I know I just finished
complaining about the ethical problems of putting a conscious entity in
this situation, but maybe there are reasons to think it's good.

So I don't necessarily see that I am motivated to follow God's
instructions and try to guess.  I might just want to sit there.
And in any case, the reward from guessing right seems pretty slim
and unmotivating.  Congratulations, you get to die.  Whoopie.

Hal Finney



RE: Dualism and the DA

2005-06-16 Thread Jonathan Colvin

 Russell Standish wrote:
  Nope, I'm thinking of dualism as the mind (or consciousness) is 
  separate from the body. Ie. The mind is not identical to 
the body.
  
 
 These two statements are not equivalent. You cannot say 
that the fist 
 is separate from the hand. Yet the fist is not identical to 
the hand.
 
 Well, actually I'd say the fist *is* identical to the hand. 
At least, 
 my fist seems to be identical to my hand.
 
Even when the hand is open

Define "fist". You don't seem to be talking about a thing, but some sort
of Platonic form. That's an expressly dualist position.

  Another example. You cannot say that a
 smile is separate from someone's mouth. Yet a smile is not 
identical 
 to the mouth.
 
 Depends whether you are a Platonist (dualist) about smiles. 
I'd say a 
 smiling mouth *is* identical to a mouth.
 

Even when the mouth is turned down???

As above. Is it your position that you are the same sort of thing as a
smile? That's a dualist position. I'd say I'm the same sort of thing as a
mouth.


 Well, to explicate, the DA suffers from the issue of defining an 
 appropriate reference set. Now, we are clearly not both random 
 observers on the class of all observers(what are the chances of two 
 random observers from the class of all observers meeting at 
this time 
 on the same mailing list? Googleplexianly small). Neither 
are we both random observers from the class of humans
 (same argument..what are the chances that both our birth ranks are 
 approximately the same?). For instance, an appropriate reference set 
 for me (or anyone reading this exchange) might be people 
with access 
 to email debating the DA. But this reference set nullifies the DA, 
 since my birth rank is no longer random; it is constrained by the 
 requirement, for example, that email exists (a pre-literate 
caveman could not debate the DA).


This would be true if we are arguing about something that 
depended on us communicating via email. The DA makes no such 
argument, so therefore the existence of email, and of our 
communication is irrelevant.

It depends on us communicating per se. Thus, we could not be a pre-literate
caveman. In fact, the reference class of all people before the 19th century
is likely excluded, since the intellectual foundations for formulating the
DA were not yet present. Presumably in a thousand years the DA will no
longer be controversial, so it is likely that our reference class should
exclude such people as well. All these considerations (and I can think of
many others as well) nullify the naive DA (that assumes our appropriate
reference class is simply all humans.)

But your response above is ambiguous. I'm not sure if you are agreeing that
our appropriate reference class is *not* all humans, but disagreeing as to
whether email is important, or disagreeing with the entire statement above
(in which case presumably you think our appropriate reference class for
the purposes of the DA is all humans). Can you be more specific about what
you disagree with?

 
 The only way to rescue the DA is to assume that I *could have had* a 
 different birth rank; in other words, that I could have been someone 
 other than me (me as in my body). If the body I'm 
occupying is contingent (ie.
 I could have been in any human body, and am in this one by pure 
 chance), then the DA is rescued.

Yes.

Ok, at least we agree on that. Let's go from there.


 This seems to require a dualistic account of identity.

Why? Explain this particular jump of logic please? I'm not 
being stubborn here, I seriously do not understand how you 
draw this conclusion.

Read the above again (to which I assume you agree, since you replied yes.)
Note particularly the phrase "If the body I'm occupying is contingent". How
can I occupy a body without a dualistic account of identity? How could I
have been in a different body, unless I am somehow separate from my body
(ie. Dualism)?


 Of course a mind is not _identical_ to a body. What an absurd thing 
 to say. If your definition of dualism is that mind and body are not 
 identical, then this is a poor definition indeed.
 It is tautologically true.
 
 Why do you say of course? I believe that I (my mind) am exactly 
 identical to my body (its brain, to be specific).
 

Really? Even when you're not conscious? What about after you've died?
What about after brain surgery?

For the purposes of this discussion, yes to all.

 After being copied by Bruno 
Marchal's teletransporter?

Let's not get into that one right now. That's a whole other debate.


 
  My definition would be something
 along the lines of minds and bodies have independent existence
 - ie positing the existence of disembodied minds is dualism. 
 Such an assumption is not required to apply the Doomsday 
argument. I 
 may make such assumptions in other areas though - such as wondering 
 why the Anthropic Principle is valid. Not dualism implies the 
 Anthropic Principle.
 
 Then how can a tree be a lion without assuming that 

Re: Dualism and the DA

2005-06-16 Thread Quentin Anciaux
On Thursday 16 June 2005 10:02, Jonathan Colvin wrote:
 Switch the question. Why aren't you me (Jonathan Colvin)? I'm conscious
 (feels like I am, anyway).

Hi Jonathan,

I think you do not see the real question, which can be formulated (using your 
analogy) as:

Why is (me as) Russell Standish Russell Standish rather than Jonathan Colvin? I 
(as RS) could have been you (JC)... but it's a fact that I'm not, and the 
question is why I'm not, why am I me rather than you? What force decides 
for me to be me? :)

Quentin



Re: another puzzzle

2005-06-16 Thread Quentin Anciaux
On Thursday 16 June 2005 16:12, Stathis Papaioannou wrote:
 One state consists of you alone in your room. The other state
 consists of 10^100 exact copies of you, their minds perfectly synchronised
 with your mind, each copy isolated from all the others in a room just like
 yours. Whenever the light changes colour, it means that God is either
 instantaneously creating (10^100 - 1) copies, or instantaneously destroying
 all but one randomly chosen copy.

 Your task is to guess which colour of the light corresponds with which
 state and write it down. Then God will send you home.

 SNIP

 But just as you are about to write down your conclusion, the light changes
 to green...

 What's wrong with the reasoning here?

Hi Stathis,

If I were in this position, I would not even try to guess, because you (or 
god :) are explaining to me that it is possible to copy me (not only me, but 
really all of my behavior/feelings/mental state/indoor/outdoor state; a 
copy as good as the original, such that a copy cannot say which is which and 
even a 3rd person observer could not distinguish them). If that is the case, 
this means that:

1- I'm clonable
2- "I" is not real
3- A single "I" does not mean anything

So I ask you, if it's the case (a real complete copy...), why should I guess 
anything? Who is the "I" that must guess?

Quentin



RE: Dualism and the DA

2005-06-16 Thread Jonathan Colvin
Quentin wrote:

 Switch the question. Why aren't you me (Jonathan Colvin)? I'm 
 conscious (feels like I am, anyway).

I think you do not see the real question, which can be 
formulated (using your
analogy) by :

Why (me as) Russell Standish is Russell Standish rather 
Jonathan Colvin ? I (as RS) could have been you (JC)... but 
it's a fact that I'm not, but the question is why I'm not, why 
am I me rather than you ? What force decide for me to be me ? :)

My argument is that this is a meaningless question. In what way could you
(as RS) have been me (as JC)? Suppose you were. How would the universe be
any different than it is right now? This question is analogous to asking
"Why is 2 not 3?", "Why is this tree not that telescope?", "Why is my aunt
not a wagon?".

The only way I can make sense of a question like this is to adopt a
dualistic position. In this case, the question makes good sense: me (my
soul, consciousness, whatever), might not have been in my body; it might
have been in someone else's. 

It is easy to forget, I think, that the SSA is a *reasoning principle*, not
an ontological statement. In the absence of evidence to the contrary, we
should reason *as if* we are a random sample from the set of all observers
in our reference class. This is NOT the same as an ontological statement to
the effect that we *are* random observers, which seems hard to justify
unless we assume a species of dualism.

Jonathan Colvin 



Re: Dualism

2005-06-16 Thread Stephen Paul King

Dear Jonathan,

- Original Message - 
From: Jonathan Colvin [EMAIL PROTECTED]
To: 'Stephen Paul King' [EMAIL PROTECTED]; 
everything-list@eskimo.com

Sent: Thursday, June 16, 2005 9:15 PM
Subject: RE: Dualism
snip

[SPK]

   The same kind of mutual constraint that exist between a
given physical object, say a IBM z990 or a 1972 Jaguar XKE or
the human Stephen Paul King, and the possible complete
descriptions of such. It is upon this distiction betwen
physical object and its representations, or equivalently,
between a complete description and its possible
implementations, that the duality that I argue for is based.
This is very different from the Cartesian duality of
substances (res extensa and res cognitas) that are seperate
and independent and yet mysteriously linked.


I'm not sure what a "complete description" is. Are we talking about a
dualism between, say, a perfect blueprint of a skyscraper and a skyscraper?
I'm not sure I'd call that equation a dualism at all. I'd call it a category
error. A description of a falling skyscraper can not hurt you (unless you
are also a description ... I agree with Bruno here), whereas a falling
skyscraper can. But please elaborate.

Jonathan Colvin


[SPK]

   Let me turn the question around a little. Are Information and the 
material substrate one and the same? If not, this is a dualism.


Stephen



Re: another puzzzle

2005-06-16 Thread Stathis Papaioannou

Tom Caylor wrote:


Stathis wrote:
 You find yourself in a locked room with no windows, and no memory of how 
you got there

 What's wrong with the reasoning here?


This is also in response to your explanation to me of copying etc. in your 
last post to Many pasts?...
I think there is too much we don't know about quantum behavior vs. 
macro-matter (e.g. human bodies) behavior to say that copying, and 
subsequent diverging histories, is not like dividing by zero.  I think that 
even if it were possible to copy a body (i.e. exactly) and have more than 
one copy at the same time, for the purposes of your thought-experiment why 
wouldn't it be the equivalent of quantum entanglement where you really have 
the equivalent of just the original?  This is where I think the reasoning 
in your puzzle is flawed.  Having 10^100+1 identical bodies is equivalent 
to having one body, so it makes it a 50/50 chance.  Until the information 
is actually revealed, it would be just like the copying didn't happen, 
therefore there is no way to tell which state (copied or not copied) is 
currently in effect.  Even though this may not be an appealing option, I 
believe that copying, if possible, wouldn't change anything having to do 
with identity (it doesn't add to the measure).  Like Einstein said, 
insanity is doing the same thing over and over again and expecting a 
different result.


In addition, even if copying a body with two subsequent diverging histories 
were possible, why wouldn't this become just like two different people?  
Who cares if there are disputes?  That's nothing new.  What does that have 
to do with consiousness?  I don't believe that identity is dependent on 
consciousness.


The idea of exact copying not being consistent with QM is raised quite 
often on this list. The problem with this is that you don't need literally 
exact copying to get the same mental state. If you did, our minds would 
diverge wildly after only nanoseconds, given the constant changes that occur 
even at the level of macromolecules, let alone the quantum state of every 
subatomic particle. It is like saying you could never copy a CD, because you 
could never get the quantum states exactly the same as in the original. 
Brains are far more complex than CD's, but like CD's they must be tolerant 
of a fair amount of noise at *way* above the quantum level, or you would at 
the very least turn into a different person every time you scratched your 
head. If this does not convince you, then you can imagine that the thought 
experiments involving exact copying are being implemented on a (classical) 
computer, and the people are actually AI programs. Once the difficulty of 
creating an AI was overcome, it would be a trivial matter to copy the 
program to another machine (or as a separate process on the same machine) 
and give it the same inputs.
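
To illustrate that last point, a minimal sketch (mine, with a hash function 
standing in for the AI's state update): two copies of the same deterministic 
program, fed identical inputs, never diverge.

    # Two copies of one deterministic "mind" given identical inputs stay in
    # lockstep; no quantum-exact copying is needed for identical states.
    import hashlib

    def step(state, observation):
        # Hypothetical stand-in for one update of the AI's internal state.
        return hashlib.sha256((state + observation).encode()).hexdigest()

    state_a = state_b = "initial state"
    for observation in ["light is red", "light is green", "light is red"]:
        state_a = step(state_a, observation)
        state_b = step(state_b, observation)
        assert state_a == state_b    # the copies remain perfectly synchronised

    print("copies identical:", state_a == state_b)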


As for your other questions: yes, of course once the copies diverge they are 
completely different people. For the purposes of this exercise, however, I 
am assuming they *don't* diverge. In that case, I  agree you have given the 
correct answer to my puzzle: from a first person perspective, identical 
mental states are the same mental state, and at any point there is a 50-50 
chance that you are either one of the 10^100 group or on your own. But not 
everyone on this list would agree, which is why I made up this puzzle.


--Stathis Papaioannou





Re: another puzzzle

2005-06-16 Thread Jesse Mazer

Stathis Papaioannou wrote:

I  agree you have given the correct answer to my puzzle: from a first 
person perspective, identical mental states are the same mental state, and 
at any point there is a 50-50 chance that you are either one of the 10^100 
group or on your own. But not everyone on this list would agree, which is 
why I made up this puzzle.


Would you say that because you think running multiple identical copies of a 
given mind in parallel doesn't necessarily increase the absolute measure of 
those observer-moments (that would be my opinion), or because you don't 
believe the concept of absolute measure on observer-moments is meaningful at 
all, or for some other reason?


Jesse




Re: another puzzzle

2005-06-16 Thread Eric Cavalcanti
On 6/17/05, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 You find yourself in a locked room with no windows, and no memory of how you
 got there.
 (...) a light (...) alternates between red and green every 10 minutes.
(...)
 Every 10 minutes, the system alternates between two states. One
 state consists of you alone in your room. The other state consists of 10^100
 exact copies of you, their minds perfectly synchronised with your mind, each
 copy isolated from all the others in a room just like yours.

 Your task is to guess which colour of the light corresponds with which state
 and write it down. Then God will send you home.
(...)
 But just as you are about to write down your conclusion, the light changes
 to green...

 What's wrong with the reasoning here?

To make the story more visualisable, imagine that God throws a coin
(since he doesn't play dice) to decide whether he will initialise the system
in state A (one person) or B (many). We can imagine that at this point
the universe is split in two, and in universe 1 there are many people
in the room, while in universe 2 there is only one.

After ten minutes, God switches the state of *both* universes. In
universe 1 there is now one person in the room, while in universe 2
there are many, most of whom have a false memory of being there
for more than 10 minutes.

This happens for a while before the people in the rooms start to learn
about the experiment and God's game. But you can convince yourself
that it doesn't matter much what was the initial state and how many times
the light has switched; if you believe God's story, the most likely possibility is that
you have just been created after the last switch, and you have a false
memory of being there for a while.

Eric.



Copies Count

2005-06-16 Thread Hal Finney
Jesse Mazer writes:
 Would you say that because you think running multiple identical copies of a 
 given mind in parallel doesn't necessarily increase the absolute measure of 
 those observer-moments (that would be my opinion)...

Here is an argument I wrote a couple of years ago on another list
that first made me think that copies count, that is, that having more
identical copies should be considered to increase measure.  Previously I
was skeptical about it, I thought what mattered was whether something
or someone got instantiated at all, not how many there were.  And of
course since then I not only believe that copies count, I have followed
my logic to the absurd sounding conclusion that size and slowness increase
measure as well.

Consider an experiment where we are simulating someone and can give
them either a good or bad experience.  These are not replays, they are
new experiences which we can accurately anticipate will be pleasant
or unpleasant.

Suppose we are going to flip a biased quantum coin, one which has a 90%
chance of coming up heads.  We will generate the good or bad experience
depending on the outcome of the coin flip.  I claim that it is obvious
that it is better to give the good experience when we get the 90% outcome
and the bad experience when we get the 10% outcome.  That's the assumption
I will start with.

Now consider Tegmark's level 1 of parallelism, the fact that in a
sufficiently large volume of space I can find a large number of copies
of me, in fact copies of the entire earth and our entire visible universe
(the Hubble bubble?).  When I do my quantum coin flip, 90% of the copies
will see it come up heads and cause the good experience for the subject,
and 10% will see tails and cause the bad experience.

I will also assume that my knowledge of this fact about the physical
universe will not change my mind about the ethical value of my decision
to give the good experience for the 90% outcome.

Now the problem is this.  There are really only two different programs
being run for our experimental subject, the guy in the simulation.  One is
a good experience and one is bad.  All my decision does is to change how
many copies of each of these two programs are run.  In making my decision
about which experiences to assign to the two coin flip outcomes, I have
chosen that the copies of the good experience will outnumber copies of
the bad experience by 9 to 1.

But if I don't believe that the number of copies being run makes a
difference, then I haven't accomplished what I desired.  The fact that
I am running more copies of the good program than the bad wouldn't make
any difference.  Therefore there is no actual ethical value in what I
have done, I might have just as validly reversed the outcome of my coin
flips and it wouldn't have made any difference.

In this way I reach a contradiction between the belief that the number
of copies doesn't matter, the belief that the existence of distant
parallel copies of myself doesn't make much difference in what I should
do, and the idea that there is value in making people happy.  Of these,
the most questionable seems to be the assumption that copies don't matter,
so this line of reasoning turns me away from that belief.

I can come up with similar contradictions from simpler cases like
our own observations of subjective probability.  The fact that I do
experience a subjective 90% chance of seeing the quantum coin come
up heads corresponds very well with the fact that 90% of the copies
of me will see heads - but only if I assume that the multiplicity of
the copies matters.  After the coin flip, in a certain voume of space
there are 90 copies of me that see heads and 10 copies that see tails.
But within the two groups all copies are identical (neglecting other
quantum events which would further split me).  If the multiplicity
doesn't count, then there are really just two outcomes and I might
expect to subjectively experience equal probability for them.
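
A toy tally of this counting argument (my sketch, not Hal's code), scaled 
down to 100 parallel copies:

    # If each instance contributes to measure, subjective P(heads) is 0.9;
    # if only *distinct* programs counted, there would be just two outcomes
    # and the naive answer would be 0.5.
    copies = ["heads"] * 90 + ["tails"] * 10

    p_counting_instances = copies.count("heads") / len(copies)   # 0.9
    p_counting_programs = 1 / len(set(copies))                    # 0.5

    print(p_counting_instances, p_counting_programs)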

This is a variant on an old argument against the MWI, but in that case I
always felt the answer was measure, that some of the outcomes occurred
in a quantum branch which had this intangible quality which made it count
more.  In this case I can't invoke any such magic; all the copies of me
are running in the same universe and with equal quantum amplitude.  I have
to resort to counting the instances separately and assuming that each one
makes its own independent contribution to my subjective experiences, in
order to gain correspondence with subjective probability.  Therefore it
is most consistent to say that separate runs of identical programs do
count, they do add to the measure of the subjective experience.

Hal Finney