Re: MGA 1 bis (exercise)

2008-11-23 Thread Bruno Marchal


On 20 Nov 2008, at 19:38, Brent Meeker wrote:


>  Talk about consciousness will seem as quaint
> as talk about the elan vital does now.


Then you are led to eliminativism of consciousness. This makes MEC+MAT  
trivially coherent. The price is big: consciousness no longer  
exists, like the "elan vital". MEC becomes vacuously true: I say yes to  
the doctor, without even meaning it. But it seems to me that  
consciousness is not like the "elan vital". I do have the, admittedly  
non-sharable, experience of consciousness all the time, so it seems to  
me that such a move consists in negating the data. If the idea of  
keeping the notion of primitive matter, which I recall is really a  
hypothesis, is so demanding that I have to abandon the idea that I am  
conscious, I will abandon the hypothetical notion of primitive matter  
instead.
But you make my point.

Bruno

http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
> 
> On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
>> Doesn't the question go away if it is nomologically impossible?
> 
> I'm sort of the opposite of you on this issue. You don't like to use  
> the term "logically possible", while I don't like to use the term  
> "nomologically impossible". I don't see the relevance of nomological  
> possibility to any philosophical question I'm interested in. For  
> anything that's nomologically impossible, I can just imagine a  
> cellular automaton or some other computational or mathematical  
> "physics" in which that thing is nomologically possible. And then I  
> can just imagine physically instantiating that universe on one of our  
> real computers. And then all of my philosophical questions still apply.
> 
> I can certainly imagine objections to that viewpoint. But life is  
> short. My point was that, since you already agreed that it's  
> nomologically possible for a random robot to outwardly behave like a  
> conscious person for some indefinite period of time, we can sidestep  
> the (probably interesting) discussion we might have about nomological  
> vs. logical possibility in this case.
> 
>> Does a random number generator have computational functionality just  
>> in case it
>> (accidentally) computes something?  I would say it does not.  But  
>> referring the
>> concept of zombie to a capacity, rather than observed behavior,  
>> makes a
>> difference in Bruno's question.
> 
> I think that Dennett explicitly refers to computational capacities  
> when talking about consciousness (and zombies), and I follow him. But  
> Dennett's point is that computational capacity is always, in  
> principle, observed behavior - or, at least, behavior that can be  
> observed. In the case of Lucky Alice, if you had the right tools, you  
> could examine the neurons and see - based on how they were behaving! -  
> that they were not causally connected to each other. (The fact that a  
> neuron is being triggered by a cosmic ray rather than by a neighboring  
> neuron is an observable part of its behavior.) That observed behavior  
> would allow you to conclude that this brain does not have the  
> computational capacity to compute the answers to a math test, or to  
> compute the trajectory of a ball.
> 
>> I would regard it as an empirical question about how the robot's  
>> brain worked.
>> If the brain processed perceptual and memory data to produce the  
>> behavior, as in
>> Jason's causal relations, I would say it is conscious in some sense  
>> (I think
>> there are different kinds of consciousness, as evidenced by Bruno's  
>> list of
>> first-person experiences).  If it were a random number generator, i.e.
>> accidental behavior, I'd say not.
> 
> I agree. But why do you say you're puzzled about how to answer Bruno's  
> question about Lucky Alice? I think you just answered it - for you,  
> Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice  
> is different than a robot with a random-number-generator in its head?  
> I don't.)

I think Alice is different.  She has the capacity to be conscious.  This 
capacity is temporarily interrupted by some mysterious failure of gates (or 
neurons) in her brain - but wait, these failures are serendipitously canceled 
out by a burst of cosmic rays, so all the gates get the same inputs/outputs as 
if nothing had happened.  So, functionally, it's as if the gates didn't fail at 
all.  This functionality goes beyond external behavior; it includes forming 
memories, paying attention, etc.  Of course we may say it is not causally 
related to Alice's environment, but this depends on a certain theory of 
causality, a physical theory.  If the cosmic rays exactly replace all the gate 
functions so as to maintain the same causal chains, then from an informational 
perspective we might say the rays were caused by her relations to her 
environment.
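
A minimal sketch of that functional-replacement point, in Python (the 
two-gate circuit and the "ray schedule" are invented here purely for 
illustration): a gate whose output is supplied externally, with the same 
input/output, is indistinguishable downstream from a working one.

    def nand(a, b):
        return 1 - (a & b)

    ray_schedule = {}  # outputs the working gates *would* have produced

    def run(inputs, broken=False):
        # Two-gate circuit: g1 = NAND(x, y), g2 = NAND(g1, y).
        x, y = inputs
        if broken:
            # Every gate fails, but a "lucky ray" injects the recorded value.
            g1 = ray_schedule[("g1", inputs)]
            g2 = ray_schedule[("g2", inputs)]
        else:
            g1 = nand(x, y)
            ray_schedule[("g1", inputs)] = g1
            g2 = nand(g1, y)
            ray_schedule[("g2", inputs)] = g2
        return g2

    for inp in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        assert run(inp) == run(inp, broken=True)
    # The failure is invisible in the circuit's input/output behavior.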

Brent




Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 20, 2008, at 3:33 PM, Brent Meeker wrote:
> Doesn't the question go away if it is nomologically impossible?

I'm sort of the opposite of you on this issue. You don't like to use  
the term "logically possible", while I don't like to use the term  
"nomologically impossible". I don't see the relevance of nomological  
possibility to any philosophical question I'm interested in. For  
anything that's nomologically impossible, I can just imagine a  
cellular automaton or some other computational or mathematical  
"physics" in which that thing is nomologically possible. And then I  
can just imagine physically instantiating that universe on one of our  
real computers. And then all of my philosophical questions still apply.

I can certainly imagine objections to that viewpoint. But life is  
short. My point was that, since you already agreed that it's  
nomologically possible for a random robot to outwardly behave like a  
conscious person for some indefinite period of time, we can sidestep  
the (probably interesting) discussion we might have about nomological  
vs. logical possibility in this case.

> Does a random number generator have computational functionality just  
> in case it
> (accidentally) computes something?  I would say it does not.  But  
> referring the
> concept of zombie to a capacity, rather than observed behavior,  
> makes a
> difference in Bruno's question.

I think that Dennett explicitly refers to computational capacities  
when talking about consciousness (and zombies), and I follow him. But  
Dennett's point is that computational capacity is always, in  
principle, observed behavior - or, at least, behavior that can be  
observed. In the case of Lucky Alice, if you had the right tools, you  
could examine the neurons and see - based on how they were behaving! -  
that they were not causally connected to each other. (The fact that a  
neuron is being triggered by a cosmic ray rather than by a neighboring  
neuron is an observable part of its behavior.) That observed behavior  
would allow you to conclude that this brain does not have the  
computational capacity to compute the answers to a math test, or to  
compute the trajectory of a ball.

> I would regard it as an empirical question about how the robot's  
> brain worked.
> If the brain processed perceptual and memory data to produce the  
> behavior, as in
> Jason's causal relations, I would say it is conscious in some sense  
> (I think
> there are different kinds of consciousness, as evidenced by Bruno's  
> list of
> first-person experiences).  If it were a random number generator, i.e.
> accidental behavior, I'd say not.

I agree. But why do you say you're puzzled about how to answer Bruno's  
question about Lucky Alice? I think you just answered it - for you,  
Lucky Alice wouldn't be conscious. (Or do you think that Lucky Alice  
is different than a robot with a random-number-generator in its head?  
I don't.)

-- Kory





Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
> 
> On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
>> I think you really mean nomologically possible.
> 
> I mean logically possible, but I'm happy to change it to  
> "nomologically possible" for the purposes of this conversation.

Doesn't the question go away if it is nomologically impossible?

> 
>> I think Dennett changes the question by referring to
>> neurophysiological "actions".  Does he suppose wetware can't be  
>> replaced by
>> hardware?
> 
> No, he definitely argues that wetware can be replaced by hardware, as  
> long as the hardware retains the computational functionality of the  
> wetware.

But that's the catch. Computational functionality is a capacity, not a fact. 
Does a random number generator have computational functionality just in case it 
(accidentally) computes something?  I would say it does not.  But referring the 
concept of zombie to a capacity, rather than observed behavior, makes a 
difference in Bruno's question.
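
A toy contrast of capacity versus accident, in Python (the adder and the 
0-20 range are arbitrary choices of mine): a random source can emit a right 
answer, but only a mechanism with the right capacity is right across all 
inputs.

    import random

    def adder(a, b):
        return a + b                  # capacity: correct for every input

    def lucky(a, b):
        return random.randint(0, 20)  # accident: right only by chance

    trials = [(a, b) for a in range(5) for b in range(5)]
    print(all(adder(a, b) == a + b for a, b in trials))  # True: capacity
    print(sum(lucky(a, b) == a + b for a, b in trials))  # a few lucky hits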

> 
>> In general when I'm asked if I believe in philosophical zombies, I  
>> say no,
>> because I'm thinking that the zombie must outwardly behave like a  
>> conscious
>> person in all circumstances over an indefinite period of time, yet  
>> have no inner
>> experience.  I rule out an accidental zombie accomplishing this as  
>> too improbable
>> - not impossible.
> 
> I agree. But if you accept that it's nomologically possible for a  
> robot with a random-number-generator in its head to outwardly behave  
> like a conscious person in all circumstances over an indefinite period  
> of time, then your theory of consciousness, one way or another, has to  
> answer the question of whether or not this unlikely robot is  
> conscious. Now, maybe your answer is "The question is misguided in  
> that case, and here's why..." But that's a significant burden.

I would regard it as an empirical question about how the robot's brain worked. 
If the brain processed perceptual and memory data to produce the behavior, as in 
Jason's causal relations, I would say it is conscious in some sense (I think 
there are different kinds of consciousness, as evidenced by Bruno's list of 
first-person experiences).  If it were a random number generator, i.e. 
accidental behavior, I'd say not.  Observing the robot for some period of time, 
in some circumstances, can provide strong evidence against the "accidental" 
hypothesis, but it cannot rule it out completely.

Brent




Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 20, 2008, at 10:38 AM, Brent Meeker wrote:
> I think you really mean nomologically possible.

I mean logically possible, but I'm happy to change it to  
"nomologically possible" for the purposes of this conversation.

> I think Dennett changes the question by referring to
> neurophysiological "actions".  Does he suppose wetware can't be  
> replaced by
> hardware?

No, he definitely argues that wetware can be replaced by hardware, as  
long as the hardware retains the computational functionality of the  
wetware.

> In general when I'm asked if I believe in philosophical zombies, I  
> say no,
> because I'm thinking that the zombie must outwardly behave like a  
> conscious
> person in all circumstances over an indefinite period of time, yet  
> have no inner
> experience.  I rule out an accidental zombie accomplishing this as  
> too improbable
> - not impossible.

I agree. But if you accept that it's nomologically possible for a  
robot with a random-number-generator in its head to outwardly behave  
like a conscious person in all circumstances over an indefinite period  
of time, then your theory of consciousness, one way or another, has to  
answer the question of whether or not this unlikely robot is  
conscious. Now, maybe your answer is "The question is misguided in  
that case, and here's why..." But that's a significant burden.

-- Kory





Re: MGA 1 bis (exercise)

2008-11-20 Thread Brent Meeker

Kory Heath wrote:
> 
> On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
>> So I'm puzzled as to how to answer Bruno's question.  In general I  
>> don't believe in
>> zombies, but that's in the same way I don't believe my glass of  
>> water will
>> freeze at 20degC.  It's an opinion about what is likely, not what is  
>> possible.
> 
> I take this to mean that you're uncomfortable with thought experiments  
> which revolve around logically possible but exceedingly unlikely  
> events. 

I think you really mean nomologically possible.  I'm not uncomfortable with 
them, I just maintain a little skepticism.  For one thing, what is nomologically 
possible or impossible is often reassessed.  Less than a century ago the 
experimental results of Elitzur, Vaidman, Zeilinger, et al., on delayed choice, 
interaction-free measurement, and other QM phenomena would all have been 
dismissed in advance as "logically" impossible.

> I think that's understandable, but ultimately, I'm on the  
> philosopher's side. It really is logically possible - although  
> exceedingly unlikely - for a random-number-generator to cause a robot  
> to walk around, talk to people, etc. It really is logically possible  
> for a computer program to use a random-number-generator to generate a  
> lattice of changing bits that "follows" Conway's Life rule. Mechanism  
> and materialism need to answer questions about these scenarios,  
> regardless of how unlikely they are.

I don't disagree with that.  My puzzlement about how to answer Bruno's question 
comes from the ambiguity as to what we mean by a philosophical zombie.  Do we 
mean its outward actions are the same as a conscious person's?  For how long? 
Under what circumstances?  I can easily make a robot that acts just like a 
sleeping person.  I think Dennett changes the question by referring to 
neurophysiological "actions".  Does he suppose wetware can't be replaced by 
hardware?

In general when I'm asked if I believe in philosophical zombies, I say no, 
because I'm thinking that the zombie must outwardly behave like a conscious 
person in all circumstances over an indefinite period of time, yet have no inner 
experience.  I rule out an accidental zombie accomplishing this as too improbable 
- not impossible.  In other words, if I were constructing a robot that had to act 
as a conscious person would over a long period of time in a wide variety of 
circumstances, I would have to build into the robot some kind of inner attention 
module that selected what was important to remember, compressed it into a short 
representation, and linked it to other memories.  And this would be an inner 
narrative.  Similarly for the other "inner" processes.  I don't know if that's 
really what it takes to build a conscious robot, but I'm pretty sure it's 
something like that.  And I think once we understand how to do this, we'll stop 
worrying about "the hard problem of consciousness".  Instead we'll talk about 
how efficient the inner narration module is, or the memory confabulation module, 
or the visual imagination module.  Talk about consciousness will seem as quaint 
as talk about the elan vital does now.
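
A minimal sketch, in Python, of the kind of inner-attention module described 
above; the class name, salience threshold, and prefix-based linking rule are 
hypothetical stand-ins, not a serious proposal about cognition.

    from collections import deque

    class AttentionModule:
        def __init__(self, salience_threshold=0.5):
            self.threshold = salience_threshold
            self.memories = deque(maxlen=1000)  # bounded episodic store

        def observe(self, event, salience):
            if salience < self.threshold:
                return                # unimportant: never enters the narrative
            summary = event[:32]      # compress to a short representation
            linked = [m["summary"] for m in self.memories
                      if m["summary"][:8] == summary[:8]]  # naive linking
            self.memories.append({"summary": summary, "links": linked})

        def narrative(self):
            return [m["summary"] for m in self.memories]

    robot = AttentionModule()
    robot.observe("red ball thrown toward me in the park", salience=0.9)
    robot.observe("faint background hum", salience=0.1)  # filtered out
    print(robot.narrative())  # the robot's "inner narrative" so far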

Brent


> 
> -- Kory





Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal


On 20 Nov 2008, at 00:19, Telmo Menezes wrote:

>
>> Could you alter the so-lucky cosmic explosion beam a little bit so
>> that Alice still succeeds at her math exam, but is, reasonably enough, a
>> zombie during the exam?  With zombie taken in the traditional sense
>> of Kory and Dennett.
>> Of course you have to keep *both* MECH *and* MAT.
>
> I think I can...
>
> Instead of correcting the brain, the cosmic beams trigger output
> neurons in a sequence that makes Alice write the right answers. That
> is to say, the information content of the beams is no longer a
> representation of an area of Alice's brain, but a representation of
> the answers to the exam. An outside observer cannot distinguish one
> case from the other. In the first she is Alice, in the second she is a
> zombie.


Right.

I guess you see that such a zombie is an accidental zombie. We will  
have to come back to this "accidental" part later.

Bruno


http://iridia.ulb.ac.be/~marchal/







Re: MGA 1 bis (exercise)

2008-11-20 Thread Bruno Marchal


On 19 Nov 2008, at 22:43, Brent Meeker wrote:

>
> Bruno Marchal wrote:
>>
>> On 19 Nov 2008, at 16:06, Telmo Menezes wrote:
>>
>>
>>> Bruno,
>>>
 If no one objects, I will present MGA 2 (soon).
>>> I also agree completely and am curious to see where this is going.
>>> Please continue!
>>
>>
>> Thanks Telmo, thanks also to Gordon.
>>
>> I will try to send MGA 2 asap, but that will take me some time.  
>> Meanwhile I
>> suggest a little exercise which, by the way, finishes the proof of
>> "MECH + MAT implies false" for those who think that there are no
>> (conceivable) zombies (they think that "zombies exist" *is* false).
>>
>> Exercise (MEC+MAT implies zombies exist or are conceivable):
>>
>> Could you alter the so-lucky cosmic explosion beam a little bit so
>> that Alice still succeeds at her math exam, but is, reasonably enough, a
>> zombie during the exam?  With zombie taken in the traditional sense
>> of Kory and Dennett.
>> Of course you have to keep *both* MECH *and* MAT.
>>
>> Bruno
>
> As I understand it a philosophical zombie is someone who looks and  
> acts just
> like a conscious person but isn't conscious, i.e. has no "inner  
> narrative".


No inner narrative, no inner image, no inner memory, no inner  
sensation, no qualia, no subject, no first-person notions at all. OK.



>
> Time and circumstance play a part in this.  As Bruno pointed out a  
> cardboard
> cutout of a person's photograph could be a zombie for a moment.  I  
> assume the
> point of the exam is that an exam is long enough in duration and  
> complex enough
> that it rules out the accidental, cutout zombie.

Well, given that it is a thought experiment, the resources are free,  
and I can make the cosmic lucky explosion as lucky as you need for  
making Alice apparently alive, and, with COMP+MAT, indeed alive. All  
her neurons break down all the time, and, because she is so lucky, an  
event which occurred 10 billion years before sends her, at exactly the  
right moment and place (and thus certainly NOT at random), the lucky  
rays which momentarily fix the problem by triggering the other neurons  
to which the broken neuron was supposed to send its information (for  
example).
Keeping COMP and MAT, making her unconscious here would be equivalent  
to giving Alice's neurons a sort of physical prescience.


> But then Alice has her normal
> behavior restored by a cosmic ray shower that is just as improbable  
> as the
> accidental zombie, i.e. she is, for the duration of the shower, an  
> accidental
> zombie.


Well, with Telmo's solution of the "MGA 1bis exercise", where only the  
motor output neurons are fixed and no internal neuron is fixed  
(almost all neurons), then with MEC + MAT Alice has no working brain at  
all, is only a lucky puppet, and she has to be a zombie. But in the  
original problem, all neurons are fixed, and then I would say Alice is  
not a zombie (if not, you give a magical physical prescience to the  
neurons).

But now, you are right that in both cases the luck can only be  
accidental. If, in the same thought experiment, we keep the exact same  
lucky cosmic explosion, but now give a phone call to the teacher  
or to Alice, so that she moves 1mm away from the position she had in the  
previous version, she will miss the lucky rays; most probably some  
will go through in the wrong places, and most probably she will fail the  
exam, and perhaps even die. So you are right: in Telmo's solution of the  
"MGA 1bis exercise" she is an accidental zombie. But in the original  
MGA 1, she should remain conscious (with MECH and MAT), even if  
accidentally so.


>
>
> So I'm puzzled as to how to answer Bruno's question.

I hope it is clear for everyone now.



>  In general I don't believe in
> zombies, but that's in the same way I don't believe my glass of  
> water will
> freeze at 20degC.  It's an opinion about what is likely, not what is  
> possible.

OK. Accidental zombies are possible, but very unlikely (but wait  
for MGA 2 for a lessening of this statement).
Accidental consciousness (like in MGA 1, with MECH+MAT) is possible  
too, and is just as unlikely (same remark).

Of course, however unlikely, nobody can test whether someone else  
is "really conscious" or is an accidental zombie, because for any  
series of tests you can imagine, you can conceive a sufficiently lucky  
cosmic explosion.


>
> It seems similar to the question, could I have gotten in my car and  
> driven to
> the store, bought something, and driven back and yet not be  
> conscious of it.
> It's highly unlikely, yet people apparently have done such things.

(I think something different occurs here, concerning intensity of  
attention with respect to different conscious streams, but it is  
off-topic, I think).


Bruno


http://iridia.ulb.ac.be/~marchal/





Re: MGA 1 bis (exercise)

2008-11-20 Thread Kory Heath


On Nov 19, 2008, at 1:43 PM, Brent Meeker wrote:
> So I'm puzzled as to how to answer Bruno's question.  In general I  
> don't believe in
> zombies, but that's in the same way I don't believe my glass of  
> water will
> freeze at 20degC.  It's an opinion about what is likely, not what is  
> possible.

I take this to mean that you're uncomfortable with thought experiments  
which revolve around logically possible but exceedingly unlikely  
events. I think that's understandable, but ultimately, I'm on the  
philosopher's side. It really is logically possible - although  
exceedingly unlikely - for a random-number-generator to cause a robot  
to walk around, talk to people, etc. It really is logically possible  
for a computer program to use a random-number-generator to generate a  
lattice of changing bits that "follows" Conway's Life rule. Mechanism  
and materialism need to answer questions about these scenarios,  
regardless of how unlikely they are.
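
A minimal sketch of the lattice example, in Python (the 4x4 grid and the 
trial count are arbitrary): a uniformly random next grid coincides with the 
true Life update with probability 1/2**(N*N), so on a tiny grid the 
"accidental Life" coincidence is merely rare rather than astronomical.

    import random

    N = 4  # tiny toroidal grid

    def life_step(grid):
        def neighbors(r, c):
            return sum(grid[(r + dr) % N][(c + dc) % N]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
        # Conway's rule: born with 3 neighbors, survive with 2 or 3.
        return [[1 if neighbors(r, c) == 3
                      or (grid[r][c] and neighbors(r, c) == 2) else 0
                 for c in range(N)] for r in range(N)]

    def random_grid():
        return [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

    grid = random_grid()
    target = life_step(grid)  # what the Life rule says comes next
    hits = sum(random_grid() == target for _ in range(200_000))
    print(hits)  # expect about 200000 / 2**16, i.e. roughly 3 lucky steps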

-- Kory





Re: MGA 1 bis (exercise)

2008-11-19 Thread Telmo Menezes

> Could you alter the so-lucky cosmic explosion beam a little bit so
> that Alice still succeeds at her math exam, but is, reasonably enough, a
> zombie during the exam?  With zombie taken in the traditional sense of
> Kory and Dennett.
> Of course you have to keep *both* MECH *and* MAT.

I think I can...

Instead of correcting the brain, the cosmic beams trigger output
neurons in a sequence that makes Alice write the right answers. That
is to say, the information content of the beams is no longer a
representation of an area of Alice's brain, but a representation of
the answers to the exam. An outside observer cannot distinguish one
case from the other. In the first she is Alice, in the second she is a
zombie.
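
A toy rendering of this construction, in Python (the "exam" and the stand-in 
for cognition are invented): one agent computes its answers, the other merely 
replays a beamed-in sequence, and an external observer sees identical outputs.

    def conscious_alice(questions):
        return [q * 2 for q in questions]  # stands in for genuine cognition

    def zombie_alice(questions, beamed):
        return list(beamed)  # output neurons driven directly by the beam

    exam = [3, 7, 12]
    beam = conscious_alice(exam)  # the beam happens to carry the right answers
    assert zombie_alice(exam, beam) == conscious_alice(exam)
    # Identical answer sheets; internally, only one of the two computed.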

Telmo.




Re: MGA 1 bis (exercise)

2008-11-19 Thread Brent Meeker

Bruno Marchal wrote:
> 
> On 19 Nov 2008, at 16:06, Telmo Menezes wrote:
> 
> 
>> Bruno,
>>
>>> If no one objects, I will present MGA 2 (soon).
>> I also agree completely and am curious to see where this is going.
>> Please continue!
> 
> 
> Thanks Telmo, thanks also to Gordon.
> 
> I will try to send MGA 2 asap, but that will take me some time. Meanwhile I  
> suggest a little exercise which, by the way, finishes the proof of  
> "MECH + MAT implies false" for those who think that there are no  
> (conceivable) zombies (they think that "zombies exist" *is* false).
> 
> Exercise (MEC+MAT implies zombies exist or are conceivable):
> 
> Could you alter the so-lucky cosmic explosion beam a little bit so  
> that Alice still succeeds at her math exam, but is, reasonably enough, a  
> zombie during the exam?  With zombie taken in the traditional sense of  
> Kory and Dennett.
> Of course you have to keep *both* MECH *and* MAT.
> 
> Bruno

As I understand it a philosophical zombie is someone who looks and acts just 
like a conscious person but isn't conscious, i.e. has no "inner narrative". 
Time and circumstance play a part in this.  As Bruno pointed out a cardboard 
cutout of a person's photograph could be a zombie for a moment.  I assume the 
point of the exam is that an exam is long enough in duration and complex enough 
that it rules out the accidental, cutout zombie.  But then Alice has her normal 
behavior restored by a cosmic ray shower that is just as improbable as the 
accidental zombie, i.e. she is, for the duration of the shower, an accidental 
zombie.

So I'm puzzled as to how to answer Bruno's question.  In general I don't believe in 
zombies, but that's in the same way I don't believe my glass of water will 
freeze at 20degC.  It's an opinion about what is likely, not what is possible. 
It seems similar to the question: could I have gotten in my car and driven to 
the store, bought something, and driven back, and yet not be conscious of it? 
It's highly unlikely, yet people apparently have done such things.

Brent
