Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 7:54 PM, Kory Heath <[EMAIL PROTECTED]> wrote:

>
>
> On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
> > What you described sounds very similar to a split brain patient I
> > saw on a documentary.
>
> It might seem similar on the surface, but it's actually very
> different. The observers of the split-brain patient and the patient
> himself know that something is amiss. There is a real difference in
> his consciousness and his behavior. If cosmic rays randomly severed
> your corpus callosum right now, you would definitely notice a
> difference. (It's an empirical question whether or not you'd know it
> almost immediately, or if it would take a while for you to figure it
> out. I'm sure the neurologists and cognitive scientists already know
> the answer to that one.)
>
> At no point during the replacement of Alice's fully-functioning
> neurons with cosmic-ray stimulated neurons (or during the replacement
> of cosmic-ray neurons with no neurons at all) will Alice notice any
> difference in her consciousness. In principle, she cannot notice it,
> since every one of her fully-functional neurons always continues to
> do exactly what it would have done. This is a serious problem for the
> mechanistic view of consciousness.
>

What about a case where only some of Alice's neurons have ceased normal
function and become dependent on the lucky rays?  Let's say the neurons in
her visual center stopped working but her speech center was unaffected.  In
that case, could she talk about what she saw without having any conscious
experience of sight?  I'm beginning to see how truly frustrating the MGA
argument is: if all her neurons break and are luckily fixed, I believe she is
a zombie; if only one of her neurons fails but we correct it, I don't think
this would affect her consciousness in any perceptible way; but cases where
some part of her brain needs to be corrected are quite strange, and almost
maddeningly so.

I think you are right that the split-brain cases are very different, but
I think the similarity is that part of Alice's consciousness would
disappear, though the lucky effects ensure she acts as if no change had
occurred.  If all of a sudden all her neurons started working properly
again, I don't think she would have any recollection of having lost any part
of her consciousness: the lucky effects should have fixed her memories as
well, and the parts of her brain which remained functional would also not
have detected any inconsistencies.  Yet the parts of her brain that depended
on lucky cosmic rays generated no subjective experience for whatever set of
information they were processing.  (So I would think.)

Jason




Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 9:01 AM, Jason Resch wrote:
> What you described sounds very similar to a split brain patient I  
> saw on a documentary.

It might seem similar on the surface, but it's actually very  
different. The observers of the split-brain patient and the patient  
himself know that something is amiss. There is a real difference in  
his consciousness and his behavior. If cosmic rays randomly severed  
your corpus callosum right now, you would definitely notice a  
difference. (It's an empirical question whether or not you'd know it  
almost immediately, or if it would take a while for you to figure it  
out. I'm sure the neurologists and cognitive scientists already know  
the answer to that one.)

At no point during the replacement of Alice's fully-functioning  
neurons with cosmic-ray stimulated neurons (or during the replacement  
of cosmic-ray neurons with no neurons at all) will Alice notice any  
difference in her consciousness. In principle, she cannot notice it,  
since every one of her fully-functional neurons always continues to
do exactly what it would have done. This is a serious problem for the  
mechanistic view of consciousness.

-- Kory





Re: MGA 1

2008-11-21 Thread Kory Heath

On Nov 21, 2008, at 8:52 AM, Jason Resch wrote:
> This is very similar to an existing thought experiment in identity  
> theory:
>
> http://en.wikipedia.org/wiki/Swamp_man

Cool. Thanks for that link!

-- Kory





Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 8:15 AM, Bruno Marchal wrote:
> On 21 Nov 2008, at 10:45, Kory Heath wrote:
>> However, the materialist-mechanist still has some grounds to say that
>> there's something interestingly different about Lucky Kory compared
>> with Original Kory. It is a physical fact of the matter that Lucky Kory is
>> not causally connected to Pre-Teleportation Kory.
>
>
> Keeping the comp hyp (cf the "qua computatio") this would introduce  
> magic.

I'm not sure it has to. Can you elaborate on what magic you think it  
ends up introducing?

In the context of mechanism-materialism, I am forced to believe that  
Lucky Kory's consciousness, qualia, etc., are exactly what they would  
have been if the teleportation had worked properly. But I don't see  
how that forces me to accept any magic. It doesn't (for instance)  
force me to say that Kory's "real" consciousness magically jumped over  
to Lucky Kory despite the lack of the causal connection. As a  
mechanist, I don't think there's any sense in talking about  
consciousness in that way.

Dennett has a slogan: "When you describe what happens, you've  
described everything." In this weird case, we have to fall back on  
describing what happened. A pattern of molecules was destroyed, and  
somewhere else that exact pattern was (very improbably) created by a  
random process of cosmic rays. Since we mechanists believe that  
consciousness and qualia are just aspects of patterns, the  
consciousness and qualia of the lucky pattern must (by definition) be  
the same as the original would have been. I don't think that causes  
any (immediate) problem for the mechanist. Is Lucky Kory the same  
person as Original Kory? I don't think the mechanist is committed to  
any particular answer to this question. We've already described what  
happened. Now it's just a matter of how we want to use our words. If  
we want to use them in a certain way, there is a sense in which we can  
say that Lucky Kory is not the same person as Original Kory, as long  
as we understand that *all* we mean is that Lucky Kory isn't causally  
connected to Original Kory (along with whatever else that implies).

However, I do start getting uncomfortable when I realize that this
lucky teleportation can happen over and over again, and if it happens
fast enough, it reduces to sheer randomness that merely happens to
be generating an ordered pattern that looks like Kory. I have a hard
time understanding how a mechanist can consider a bunch of random
numbers to be conscious. If that's the kind of magic you're referring
to, then I agree.

-- Kory





Re: Little exercise

2008-11-21 Thread John Mikes

Kory:
>"...It's not that I don't believe in life"<

In WHAT??? Some people believe in god, some in numbers, none can
reasonably identify the target of their belief. How about you?
*
>"... I just that I think that molecules, bits,
patterns, whatever, are the things that play the role ..."<

The listed 'whatevers' are sweatful explanations of (mis)understood
partial phenomena, received at a primitive level of the epistemic
cognitive inventory-building, in the line of quantized considerations
and their forced matching into equations. All in the 'evolving' HUMAN
(OOPs, Bruno: Lobian) mindset-part accessible AT THAT TIME to us. (Add
'atoms' to it and the elusive 'energy' - whatever that may be.)
*
>"...I don't like "cognitive immaterialism" (or anything with
"immaterialism"), because it implies that I don't believe in matter..."<

I dislike the term on another basis: "IMmaterialism" implies material,
to deny it (as atheism implies a theos), and 'cognitive' in our usual meaning
is within the 'human' general way of thinking, not in the domains of a
lobian machine beyond our present capabilities.
I try expressions like 'unlimited complexity', 'totality',
'existence(?)', not pointing to our (mis)conception of a physical world
in the sense of our conventional sciences.
I see no possible compromise: one either serves the reductionist lingo,
or falls into it.
What makes me vague - so be it - is that I DO NOT publish or seek
awards/(academic) tenures.
Maybe the grandkids of our grandkids will have a more adequate
language to express things we may think of today.

John M

PS - Bruno wrote to me:

>..."Be careful and be open to your own philosophy. The idea that "digital"
and "numbers" (the concept, not our human description of it) are
restrictions could be due to our human prejudice. May be a machine
could one day believes this is a form of unfounded prejudicial exclusion..."<

Exactly the point I made in the above concluding sentence. We cannot (today) 
overstep the language of our human epistemic level without sounding irrational. 
I feel this in posts about Alice, teleport, etc. and even Bruno's mentioned
 "concept - not description" of digital and numbers. 
The 'capabilities "one day" of an illusory machine' vs. our expressible
capabilities NOW do not constitute evidence for what "one day" the unknown 
capabilities of that machine could find reasonable. I have to include my 
reasonable(?) restrictions of today. Maybe in my futuristic trend I am still 
too conservative.
John M




On 11/21/08, Kory Heath <[EMAIL PROTECTED]> wrote:
>
>
> On Nov 20, 2008, at 11:43 AM, Bruno Marchal wrote:
>> On 20 Nov 2008, at 10:13, Kory Heath wrote:
>>> What is your definition of "mathematicalism" here?
>>
>>
>> Strong definition:  the big "everything" is a mathematical object.
>> (But perhaps this is asking too much. The whole of math is already not
>> a mathematical object). So:
>>
>> Weak definition: every thing is mathematical, except everything!
>
> Ok. Do you know of anyone else who uses the term in that way? I don't
> even find it in Tegmark's papers. As I said, it only gets a handful of
> hits on Google, and they're basically all us.
>
> I don't like "cognitive immaterialism" (or anything with
> "immaterialism"), because it implies that I don't believe in matter. I
> guess you could say that I don't, but it's closer to the truth to say
> that I think that mathematical facts simply *are* what materialists
> (gropingly, confusedly) call physical matter. It would be like me, as
> an opponent of vitalism, calling myself an "a-lifer". It's not that I
> don't believe in life. It's just that I think that molecules, bits,
> patterns, whatever, are the things that play the role that the
> vitalists have (gropingly, confusedly) called the "life-force".
>
> I like "Mathematical Physicalism", if it's possible for me to keep
> that term distinct from your "mathematicalism".
>
> -- Kory
>
>
> >
>




Re: MGA 1

2008-11-21 Thread Brent Meeker

Jason Resch wrote:
> 
> 
> On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou <[EMAIL PROTECTED] 
> > wrote:
> 
> 
> A variant of Chalmers' "Fading Qualia" argument
> (http://consc.net/papers/qualia.html) can be used to show Alice must
> be conscious.
> 
> Alice is sitting her exam, and a part of her brain stops working,
> let's say the part of her occipital cortex which enables visual
> perception of the exam paper. In that case, she would be unable to
> complete the exam due to blindness. But if the neurons in her
> occipital cortex are stimulated by random events such as cosmic rays
> so that they pass on signals to the rest of the brain as they would
> have normally, Alice won't know she's blind: she will believe she sees
> the exam paper, will be able to read it correctly, and will answer the
> questions just as she would have without any neurological or
> electronic problem.
> 
> If Alice were replaced by a zombie, no-one else would notice, by
> definition; also, Alice herself wouldn't notice, since a zombie is
> incapable of noticing anything (it just behaves as if it does). But I
> don't see how it is possible that Alice could be *partly* zombified,
> behaving as if she has normal vision, believing she has normal vision,
> and having all the cognitive processes that go along with normal
> vision, while actually lacking any visual experiences at all. That
> isn't consistent with the definition of a philosophical zombie.
> 
> 
> Stathis,
> 
> What you described sounds very similar to a split brain patient I saw on 
> a documentary.  He was able to respond to images presented to one eye, 
> and ended up drawing them with a hand controlled by the other 
> hemisphere, yet he had no idea why he drew that image when asked.  The 
> problem may not be that he isn't experiencing the visualization, but 
> that the part of his brain that is responsible for speech is 
> disconnected from the part of his brain that can see.
> 
> See: http://www.youtube.com/watch?v=ZMLzP1VCANo
> 
> Jason

I think experiments like this support the idea that consciousness is not a 
single thing.  We tend to identify conscious thought with the thought that is 
reported in speech.  But that's just because it is the thought that is readily 
accessible to experimenters.

Brent




Re: MGA 1

2008-11-21 Thread Brent Meeker

Kory Heath wrote:
> 
> On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
>> A variant of Chalmers' "Fading Qualia" argument
>> (http://consc.net/papers/qualia.html) can be used to show Alice must
>> be conscious.
> 
> The same argument can be used to show that Empty-Headed Alice must  
> also be conscious. (Empty-Headed Alice is the version where only  
> Alice's motor neurons are stimulated by cosmic rays, while all of the  
> other neurons in Alice's head do nothing. Alice's body continues to  
> act indistinguishably from the way it would have acted, but there's  
> nothing going on in the rest of Alice's brain, random or otherwise.  
> Telmo and Bruno have both indicated that they don't think this Alice  
> is conscious. Or at least, that a mechanist-materialist shouldn't  
> believe that this Alice is conscious.)
> 
> Let's assume that Lucky Alice is conscious. Every neuron in her head  
> (they're all artificial) has become causally disconnected from all the  
> others, but they (very improbably) continue to do exactly what they  
> would have done when they were connected, due to cosmic rays. Let's  
> say that we remove one of the neurons from Alice's head. This has no  
> effect on her outward behavior, or on the behavior of any of her other  
> neurons (since they're already causally disconnected). Of course, we  
> can remove two neurons, and then three, etc. We can remove her entire  
> visual cortex. This can't have any noticeable effect on her  
> consciousness, because the neurons that do remain go right on acting  
> the way they would have acted if the cortex was there. Eventually, we  
> can remove every neuron that isn't a motor neuron, so that we have an  
> empty-headed Alice whose body takes the exam, ducks when I throw the  
> ball at her head, etc.
> 
> If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
> then there are partial zombies halfway between them. Like you, I can't  
> make any sense of these partial zombies. But I also can't make any  
> sense of the idea that Empty-Headed Alice is conscious. Therefore, I  
> don't think this argument shows that Empty-Headed Alice (and by  
> extension, Lucky Alice) must be conscious. I think it shows that  
> there's a deeper problem - probably with one of our assumptions.
> 
> Even though I actually think that mechanist-materialists should view  
> both Lucky Alice and Empty-Headed Alice as not conscious, I still  
> think they have to deal with this problem. They have to deal with the  
> spectrum of intermediate states between Fully-Functional Alice and  
> Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

If they were just observing Alice's outward behavior they would say, "It
appears that Alice is a conscious being, but of course there's a 1e-100 chance
that she's just an automaton operated by cosmic rays."  If they were actually
observing her inner workings, they'd say, "Alice is just an automaton who, in
an extremely improbable coincidence, has appeared as if conscious, but we can
easily prove she isn't by watching her future behavior or even by blocking the
rays."

Brent




MGA 2

2008-11-21 Thread Bruno Marchal

MGA 2


The second step of the MGA consists in making a change to MGA 1 so
that we don't have to introduce that unreasonable amount of cosmic
luck, or of apparent randomness. It shows that the "lucky" aspect of the
incoming information is not relevant. Jason anticipated this sequel.


Let us consider again Alice who, as you know, has an artificial
brain made of logic gates.
Now Alice is sleeping and having a dream---like Carroll's original
Alice.

Today we know that a REM dream is a conscious experience, or better, an
experience of consciousness, thanks to the work of Hearne, LaBerge,
Dement, etc.
Malcolm's theory of dreams, in which dreams are not conscious, has been
properly refuted by Hearne's and LaBerge's experiments. (All references can
be found in the bibliography of my "long thesis". Ask me if you have
trouble finding them.)

I am using a dream experience instead of a waking experience
to reduce the technical problems and keep the relevant
points brief. I let you do the change as an exercise if you want. If you
have understood UDA up to the sixth step, such changes are easy to do.
To convince Brent Meeker, you will have to put the environment,
or actually its digital functional part, in the "generalized brain",
making the general setting much longer to describe. (If the part of
the environment needed for consciousness to proceed is not Turing
emulable, then you already negate MEC, of course.)

The dream will facilitate the experiment. It is known that in a REM
dream we are paralyzed (no outputs), we are cut off from the
environment (no inputs---well, not completely, or you would not
hear the alarm clock, but let us not care about this, or do the
exercise above), ... and we are hallucinating: the dream is a natural
sort of video game. It shows that the brain is at least a "natural"
virtual reality generator. OK?

Alice already has an artificial digital brain. It consists of a
three-dimensional boolean graph, with the nodes being NOR gates and the
edges being wires. For the MEC+MAT believer, the dream is produced by the
physical activity of the "circular digital information processing"
done by that boolean graph.

With MEC, obviously all that matters is that the boolean graph
processes the right computation; we don't have to take into
account the precise positions of the gates in space. They are not
relevant for the computation (if things like that were relevant, we
would already have said "no" to the doctor). So we can topologically
deform Alice's boolean graph brain and project it onto a plane so that no
gates overlap. Some wires will cross, but (exercise) the wire-crossing
function can itself be implemented with NOR gates. (A
solution of that problem, posed by Dewdney, has been given in
Scientific American, and is displayed in "Conscience et
Mécanisme" with the reference.)

So Alice's brain can be made into a plane boolean graph.
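
To make the wire-crossing claim concrete, here is a minimal Python sketch
(my own construction, not necessarily the solution published in Scientific
American; the 5-NOR XOR used here is one standard choice among several). It
verifies exhaustively that two signals can exchange places using NOR gates
alone:

    def nor(a, b):
        return int(not (a or b))

    def xor(a, b):
        # XOR built from five NOR gates (one standard construction)
        n1 = nor(a, b)
        n2 = nor(a, n1)
        n3 = nor(b, n1)
        xnor = nor(n2, n3)
        return nor(xnor, xnor)       # a NOR with tied inputs acts as NOT

    def crossover(a, b):
        # Planar crossing gadget: three XORs, drawable with no wires crossing.
        m = xor(a, b)
        return xor(m, a), xor(m, b)  # = (b, a): the two signals swap sides

    for a in (0, 1):
        for b in (0, 1):
            assert xor(a, b) == (a ^ b)
            assert crossover(a, b) == (b, a)
    print("NOR-only crossover correct on all four input pairs")

This particular gadget spends 15 NOR gates per crossing (no minimality
claimed); the point is only that planarity costs nothing computationally.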

Also, a MEC+MAT believer should not insist on the electrical nature
of the communication by wires, nor on the electrical nature of the
processing of the information by the gates, so we can use optical
information instead. Laser beams play the role of the wires, and some
destructive interference can be used for the NOR. The details are not
relevant, given that I am not presenting a realistic experiment. (Below,
or later, if people harass me with too many engineering questions, I
will propose a completely different representation of the same situation
(with respect to the relevance of the reasoning), using the
even less realistic Ned Block Chinese People Computer: it can be used
to make clear that no magic is used in what follows, at the price that
its overall implementation is very unrealistic, given that the neurons
are Chinese people willingly playing that role.)

So, now, we put Alice's brain, which has become a two-dimensional
optical boolean graph, in between two plates of transparent solid
material (glass), and we add a sort of "clever" liquid crystal together
with the graph, in between the glass plates. The liquid crystal is
supposed to have the following peculiar property (which is certainly
hard to implement concretely but which is possible in principle): each
time a beam of light triggers a line between two nodes, it triggers a laser
beam in the "good" direction between the two optical gates, with the
correct frequency-color (to keep the NOR functioning correctly).

This works well, and we can let that brain work from time t1 to t2,
during which Alice dreams, specifically (to fix the matter), that she is in
front of a mushroom, talking with a caterpillar who sits on the
mushroom (all right?). We have beforehand saved the instantaneous
state corresponding to the beginning of that dream, so as to be able to
repeat that precise graph activity.

Each time we allow the graph to do the computation corresponding to
the dream (which exists by MEC),

Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 5:45 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote:

>
> A variant of Chalmers' "Fading Qualia" argument
> (http://consc.net/papers/qualia.html) can be used to show Alice must
> be conscious.
>
> Alice is sitting her exam, and a part of her brain stops working,
> let's say the part of her occipital cortex which enables visual
> perception of the exam paper. In that case, she would be unable to
> complete the exam due to blindness. But if the neurons in her
> occipital cortex are stimulated by random events such as cosmic rays
> so that they pass on signals to the rest of the brain as they would
> have normally, Alice won't know she's blind: she will believe she sees
> the exam paper, will be able to read it correctly, and will answer the
> questions just as she would have without any neurological or
> electronic problem.
>
> If Alice were replaced by a zombie, no-one else would notice, by
> definition; also, Alice herself wouldn't notice, since a zombie is
> incapable of noticing anything (it just behaves as if it does). But I
> don't see how it is possible that Alice could be *partly* zombified,
> behaving as if she has normal vision, believing she has normal vision,
> and having all the cognitive processes that go along with normal
> vision, while actually lacking any visual experiences at all. That
> isn't consistent with the definition of a philosophical zombie.
>
>
Stathis,

What you described sounds very similar to a split brain patient I saw on a
documentary.  He was able to respond to images presented to one eye, and
ended up drawing them with a hand controlled by the other hemisphere, yet he
had no idea why he drew that image when asked.  The problem may not be that
he isn't experiencing the visualization, but that the part of his brain that
is responsible for speech is disconnected from the part of his brain that
can see.

See: http://www.youtube.com/watch?v=ZMLzP1VCANo

Jason




Re: MGA 1

2008-11-21 Thread Jason Resch
On Fri, Nov 21, 2008 at 3:45 AM, Kory Heath <[EMAIL PROTECTED]> wrote:

>
>
>
> However, the materialist-mechanist still has some grounds to say that
> there's something interestingly different about Lucky Kory compared
> with Original Kory. It is a physical fact of the matter that Lucky Kory is
> not causally connected to Pre-Teleportation Kory. When someone asks
> Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
> says, "Because of something I learned when I was ten years old", Lucky
> Kory's statement is quite literally false. Lucky Kory ties his shoes
> that way because of some cosmic rays. I actually don't know what the
> standard mechanist-materialist way of viewing this situation is. But
> it does seem to suggest that maybe breaks in the causal chain
> shouldn't affect consciousness after all.
>

This is very similar to an existing thought experiment in identity theory:

http://en.wikipedia.org/wiki/Swamp_man

Jason




Re: MGA 1

2008-11-21 Thread Bruno Marchal

On 21 Nov 2008, at 10:45, Kory Heath wrote:


>
> ...
> A much closer analogy to Lucky Alice would be if the doctor
> accidentally destroys me without making the copy, turns on the
> receiving teleporter in desperation, and then the exact copy that
> would have appeared anyway steps out, because (luckily!) cosmic rays
> hit the receiver's mechanisms in just the right way. I actually find
> this thought experiment more persuasive than Lucky Alice (although I'm
> sure some will argue that they're identical). At the very least, the
> mechanist-materialist has to say that the resulting Lucky Kory is
> conscious. I think it's also clear that Lucky Kory's consciousness
> must be exactly what it would have been if the teleportation had
> worked correctly. This does in fact lead me to feel that maybe
> causality shouldn't have any bearing on consciousness after all.


Very good. Thanks.


>
>
> However, the materialist-mechanist still has some grounds to say that
> there's something interestingly different about Lucky Kory compared
> with Original Kory. It is a physical fact of the matter that Lucky Kory is
> not causally connected to Pre-Teleportation Kory.


Keeping the comp hyp (cf the "qua computatio") this would introduce  
magic.



> When someone asks
> Lucky Kory, "Why do you tie your shoes that way?", and Lucky Kory
> says, "Because of something I learned when I was ten years old", Lucky
> Kory's statement is quite literally false. Lucky Kory ties his shoes
> that way because of some cosmic rays. I actually don't know what the
> standard mechanist-materialist way of viewing this situation is. But
> it does seem to suggest that maybe breaks in the causal chain
> shouldn't affect consciousness after all.

Yes.

> .
> Of course I'm entirely on board with the spirit of your thought
> experiment. You think MECH and MAT implies that Lucky Alice is
> conscious, but I don't think it does. I'm not sure how important that
> difference is. It seems substantial. But I can also predict where
> you're going with your thought experiment, and it's the exact same
> place I go. So by all means, continue on to MGA 2, and we'll see what
> happens.


Thanks. A last comment on your reply to Stathis' recent comment.

Stathis' argument, based on Chalmers' fading qualia, is mainly correct, I
think. And it could be that your answer to Stathis is correct too.
And this would finish our work: we would have a proof that Telmo's Alice
is unconscious and that Telmo's Alice is conscious, finishing the
reductio ad absurdum.
Keep in mind that we are doing a reductio ad absurdum. Those who are
convinced by both Stathis and Russell, Telmo, ... can already take
holidays!

Have to write MGA 2 for the others.

Bruno
http://iridia.ulb.ac.be/~marchal/







Re: MGA 1

2008-11-21 Thread Michael Rosefield
This is one of those questions where I'm not sure if I'm being relevant or
missing the point entirely, but here goes:

There are multiple universes which implement/contain/whatever Alice's
consciousness. During the period of the experiment, that universe may no
longer be amongst them, but it shadows them closely enough that it
certainly rejoins them upon the experiment's termination.

So, was Alice conscious during the experiment? Well, from Alice's
perspective she certainly has the memory of consciousness, and due to the
presence of the implementing universes there was certainly a conscious Alice
out there somewhere. Since consciousness has no intrinsic spatio-temporal
quality, there's no reason for that consciousness not to count.


2008/11/21 Kory Heath <[EMAIL PROTECTED]>

>
>
> On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
> > A variant of Chalmers' "Fading Qualia" argument
> > (http://consc.net/papers/qualia.html) can be used to show Alice must
> > be conscious.
>
> The same argument can be used to show that Empty-Headed Alice must
> also be conscious. (Empty-Headed Alice is the version where only
> Alice's motor neurons are stimulated by cosmic rays, while all of the
> other neurons in Alice's head do nothing. Alice's body continues to
> act indistinguishably from the way it would have acted, but there's
> nothing going on in the rest of Alice's brain, random or otherwise.
> Telmo and Bruno have both indicated that they don't think this Alice
> is conscious. Or at least, that a mechanist-materialist shouldn't
> believe that this Alice is conscious.)
>
> Let's assume that Lucky Alice is conscious. Every neuron in her head
> (they're all artificial) has become causally disconnected from all the
> others, but they (very improbably) continue to do exactly what they
> would have done when they were connected, due to cosmic rays. Let's
> say that we remove one of the neurons from Alice's head. This has no
> effect on her outward behavior, or on the behavior of any of her other
> neurons (since they're already causally disconnected). Of course, we
> can remove two neurons, and then three, etc. We can remove her entire
> visual cortex. This can't have any noticeable effect on her
> consciousness, because the neurons that do remain go right on acting
> the way they would have acted if the cortex was there. Eventually, we
> can remove every neuron that isn't a motor neuron, so that we have an
> empty-headed Alice whose body takes the exam, ducks when I throw the
> ball at her head, etc.
>
> If Lucky Alice is conscious and Empty-Headed Alice is not conscious,
> then there are partial zombies halfway between them. Like you, I can't
> make any sense of these partial zombies. But I also can't make any
> sense of the idea that Empty-Headed Alice is conscious. Therefore, I
> don't think this argument shows that Empty-Headed Alice (and by
> extension, Lucky Alice) must be conscious. I think it shows that
> there's a deeper problem - probably with one of our assumptions.
>
> Even though I actually think that mechanist-materialists should view
> both Lucky Alice and Empty-Headed Alice as not conscious, I still
> think they have to deal with this problem. They have to deal with the
> spectrum of intermediate states between Fully-Functional Alice and
> Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)
>
> -- Kory
>
>
> >
>




Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 21, 2008, at 3:45 AM, Stathis Papaioannou wrote:
> A variant of Chalmers' "Fading Qualia" argument
> (http://consc.net/papers/qualia.html) can be used to show Alice must
> be conscious.

The same argument can be used to show that Empty-Headed Alice must  
also be conscious. (Empty-Headed Alice is the version where only  
Alice's motor neurons are stimulated by cosmic rays, while all of the  
other neurons in Alice's head do nothing. Alice's body continues to  
act indistinguishably from the way it would have acted, but there's  
nothing going on in the rest of Alice's brain, random or otherwise.  
Telmo and Bruno have both indicated that they don't think this Alice  
is conscious. Or at least, that a mechanist-materialist shouldn't  
believe that this Alice is conscious.)

Let's assume that Lucky Alice is conscious. Every neuron in her head  
(they're all artificial) has become causally disconnected from all the  
others, but they (very improbably) continue to do exactly what they  
would have done when they were connected, due to cosmic rays. Let's  
say that we remove one of the neurons from Alice's head. This has no  
effect on her outward behavior, or on the behavior of any of her other  
neurons (since they're already causally disconnected). Of course, we  
can remove two neurons, and then three, etc. We can remove her entire  
visual cortex. This can't have any noticeable effect on her  
consciousness, because the neurons that do remain go right on acting  
the way they would have acted if the cortex was there. Eventually, we  
can remove every neuron that isn't a motor neuron, so that we have an  
empty-headed Alice whose body takes the exam, ducks when I throw the  
ball at her head, etc.
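
As a minimal sketch of this removal process (a toy model of my own, with
made-up neuron names and gate functions, not anything from the literature),
one can simulate it directly: record a normal run, then replay the recorded
firings ("lucky" cosmic rays) while deleting non-motor neurons, and check
that the motor trace never changes:

    import itertools

    # wiring: neuron -> (input neurons, gate function); 'in' is the stimulus
    WIRING = {
        "v1":    (["in"],          lambda x: not x),        # 'visual' layer
        "v2":    (["in", "v1"],    lambda x, y: x and y),
        "assoc": (["v1", "v2"],    lambda x, y: x or y),
        "motor": (["assoc", "v2"], lambda x, y: x != y),    # observed output
    }

    def run(stimuli, removed=frozenset(), replay=None):
        # If `replay` (a recorded trace) is given, every surviving neuron
        # just emits its recorded value, ignoring its inputs entirely.
        trace = {name: [] for name in WIRING}
        state = {name: False for name in WIRING}
        for t, s in enumerate(stimuli):
            state["in"] = s
            for name, (inputs, gate) in WIRING.items():
                if name in removed:
                    continue
                if replay is not None:
                    state[name] = replay[name][t]            # lucky rays
                else:
                    state[name] = gate(*(state[i] for i in inputs))
                trace[name].append(state[name])
        return trace

    stimuli = [True, False, True, True, False]
    normal = run(stimuli)                          # record the real run
    for k in range(4):                             # remove 0, 1, 2, or all 3
        for gone in itertools.combinations(["v1", "v2", "assoc"], k):
            lucky = run(stimuli, removed=frozenset(gone), replay=normal)
            assert lucky["motor"] == normal["motor"]
    print("motor behavior identical under every removal")

The equality holds by construction, of course---that is exactly what the
thought experiment stipulates. The k = 3 case is Empty-Headed Alice: only
the motor trace remains, and nothing upstream is computing anything.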

If Lucky Alice is conscious and Empty-Headed Alice is not conscious,  
then there are partial zombies halfway between them. Like you, I can't  
make any sense of these partial zombies. But I also can't make any  
sense of the idea that Empty-Headed Alice is conscious. Therefore, I  
don't think this argument shows that Empty-Headed Alice (and by  
extension, Lucky Alice) must be conscious. I think it shows that  
there's a deeper problem - probably with one of our assumptions.

Even though I actually think that mechanist-materialists should view  
both Lucky Alice and Empty-Headed Alice as not conscious, I still  
think they have to deal with this problem. They have to deal with the  
spectrum of intermediate states between Fully-Functional Alice and  
Lucky Alice. (Or between Fully-Functional Alice and Empty-Headed Alice.)

-- Kory





Re: MGA 1

2008-11-21 Thread Stathis Papaioannou

A variant of Chalmers' "Fading Qualia" argument
(http://consc.net/papers/qualia.html) can be used to show Alice must
be conscious.

Alice is sitting her exam, and a part of her brain stops working,
let's say the part of her occipital cortex which enables visual
perception of the exam paper. In that case, she would be unable to
complete the exam due to blindness. But if the neurons in her
occipital cortex are stimulated by random events such as cosmic rays
so that they pass on signals to the rest of the brain as they would
have normally, Alice won't know she's blind: she will believe she sees
the exam paper, will be able to read it correctly, and will answer the
questions just as she would have without any neurological or
electronic problem.

If Alice were replaced by a zombie, no-one else would notice, by
definition; also, Alice herself wouldn't notice, since a zombie is
incapable of noticing anything (it just behaves as if it does). But I
don't see how it is possible that Alice could be *partly* zombified,
behaving as if she has normal vision, believing she has normal vision,
and having all the cognitive processes that go along with normal
vision, while actually lacking any visual experiences at all. That
isn't consistent with the definition of a philosophical zombie.


-- 
Stathis Papaioannou




Re: MGA 1

2008-11-21 Thread Bruno Marchal
Hi Gordon,

On 20 Nov 2008, at 21:40, Gordon Tsai wrote:

> Bruno:
>    I think you and John touched the fundamental issues of human
> rationality. It's a dilemma encountered by phenomenology. Now I have a
> question: In theory we can't distinguish ourselves from a Lobian
> Machine. But can lobian machines truly have sufficiently rich
> experiences like humans?

This is our assumption. Assuming comp, we are machines, so certainly
some machines can have our rich experiences. Indeed, us.



> For example, is it possible for a lobian machine to "still its
> mind" or "cease the computational logic" as some eastern philosophy
> suggests? Maybe any such out-of-the-loop experience is still part of
> the computation/logic, just as our out-of-body experiences are
> actually a trick of brain chemicals?


Eventually we will be led to the idea that it is the "brain chemicals" 
which are the result of a trick of "universal consciousness", but here 
I am anticipating. Let us go carefully step by step.

I think I will have some time this afternoon to make MGA 2,

See you there ...

Bruno


http://iridia.ulb.ac.be/~marchal/




Re: MGA 1

2008-11-21 Thread Bruno Marchal


Jason,

Nice, you are anticipating MGA 2. So if you don't mind I will
"answer" your post in MGA 2, or in comments you will perhaps make 
afterward.

... asap.

Bruno


On 20 Nov 2008, at 21:27, Jason Resch wrote:

>
>
> On Thu, Nov 20, 2008 at 12:03 PM, Bruno Marchal <[EMAIL PROTECTED]> 
> wrote:
>>
>>
>>
>>>  The state machine that would represent her in the case of injection 
>>> of random noise is a different state machine from the one that would
>>> represent her normally functioning brain.
>>
>>
>> Absolutely so.
>>
>>
>
> Bruno,
>
> What about the state machine that included the injection of "lucky" 
> noise from an outside source vs. one in which all information was 
> derived internally from the operation of the state machine itself? 
>  Would those two differently defined machines not compute something
> different?  Even though the computations are identical, the
> information that is being computed comes from different sources and so 
> carries with it a different "connotation".  Though the bits injected 
> are identical, they inherently imply a different meaning because the 
> state machine in the case of injection has a different structure than 
> that of her normally operating brain.  I believe the brain can be 
> abstracted as a computer/information processing system, but it is not 
> simply the computations and the inputs into the logic gates at each 
> step that are important, but also the source of the input bits, 
> otherwise the computation isn't the same.
>
> Jason
>
>  >
>
http://iridia.ulb.ac.be/~marchal/
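
Jason's point above can be made concrete with a small sketch (my own
illustration; the transition rule is arbitrary): two machines whose output
traces are bit-for-bit identical, while their definitions differ in exactly
the way he describes---one computes its bits internally, the other merely
copies bits injected from outside:

    def closed_step(state, inp=None):
        # transition depends only on the machine's own state
        return (5 * state + 3) % 16

    def open_step(state, inp):
        # transition is "become whatever state was injected"
        return inp

    def run(step, state, inputs):
        trace = []
        for inp in inputs:
            state = step(state, inp)
            trace.append(state & 1)    # observable output bit
        return trace

    STEPS = 8
    internal = run(closed_step, 7, [None] * STEPS)   # input channel ignored

    # reconstruct the closed machine's state sequence as the "lucky" injections
    states, s = [], 7
    for _ in range(STEPS):
        s = closed_step(s)
        states.append(s)

    injected = run(open_step, 7, states)   # injected bits luckily match
    assert internal == injected            # identical traces...
    print("traces identical; the machines' definitions differ")

The traces agree, yet one machine's transition function has an input
alphabet and the other's does not---the structural difference Jason says
should carry a different "connotation".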





Re: Little exercise

2008-11-21 Thread Kory Heath


On Nov 20, 2008, at 11:43 AM, Bruno Marchal wrote:
> On 20 Nov 2008, at 10:13, Kory Heath wrote:
>> What is your definition of "mathematicalism" here?
>
>
> Strong definition:  the big "everything" is a mathematical object.
> (But perhaps this is asking too much. The whole of math is already not
> a mathematical object). So:
>
> Weak definition: every thing is mathematical, except everything!

Ok. Do you know of anyone else who uses the term in that way? I don't  
even find it in Tegmark's papers. As I said, it only gets a handful of  
hits on Google, and they're basically all us.

I don't like "cognitive immaterialism" (or anything with  
"immaterialism"), because it implies that I don't believe in matter. I  
guess you could say that I don't, but it's closer to the truth to say  
that I think that mathematical facts simply *are* what materialists  
(gropingly, confusedly) call physical matter. It would be like me, as  
an opponent of vitalism, calling myself an "a-lifer". It's not that I  
don't believe in life. It's just that I think that molecules, bits,
patterns, whatever, are the things that play the role that the  
vitalists have (gropingly, confusedly) called the "life-force".

I like "Mathematical Physicalism", if it's possible for me to keep  
that term distinct from your "mathematicalism".

-- Kory





Re: MGA 1

2008-11-21 Thread Kory Heath


On Nov 20, 2008, at 10:52 AM, Bruno Marchal wrote:
> I am afraid you are already too much suspect of the contradictory
> nature of MEC+MAT.
> Take the reasoning has a game. Try to keep both MEC and MAT, the game
> consists in showing the more clearly as possible what will go wrong.

I understand what you're saying, and I accept the rules of the game. I  
*am* trying to keep both MEC and MAT. But it seems as though we differ  
on how we understand MEC and MAT, because in my understanding,  
mechanist-materialists should say that Bruno's Lucky Alice is not  
conscious (for the same reason that Telmo's Lucky Alice is not  
conscious).

> You mean the ALICE of Telmo's solution of MGA 1bis, I guess. The
> original Alice, well I mean the one in MGA 1, is functionally
> identical at the right level of description (actually she has already
> digital brain). The physical instantiation of a computation is
> completely realized. No neurons can "know" that the info (correct and
> at the right places) does not come from the relevant neurons, but from
> a lucky beam.

I agree that the neurons don't "know" or "care" where their inputs are  
coming from. They just get their inputs, perform their computations,  
and send their outputs. But when it comes to the functional, physical  
behavior of Alice's whole brain, the mechanist-materialist is  
certainly allowed (indeed, forced) to talk about where each neuron's  
input is coming from. That's a part of the computational picture.

I see the point that you're making. Each neuron receives some input,  
performs some computation, and then produces some output. We're  
imagining that every neuron has been disconnected from its inputs, but  
that cosmic rays have luckily produced the exact same input that the  
previously connected neurons would have produced. You're arguing that  
since every neuron is performing the exact same computations that it  
would have performed anyway, the two situations are computationally  
identical.

But I don't think that's correct. I think that plain old, garden  
variety mechanism-materialism has an easy way of saying that Lucky  
Alice's brain, viewed as a whole system, is not performing the same  
computations that fully-functioning Alice's brain is. None of the  
neurons in Lucky Alice's brain are even causally connected to each  
other. That's a pretty big computational difference!

I am arguing, in essence, that for the mechanist-materialist,  
"causality" is an important aspect of computation and consciousness.  
Maybe your goal is to show that there's something deeply wrong with  
that idea, or with the idea of "causality" itself. But we're supposed  
to be starting from a foundation of MEC and MAT.

Are you saying that the mechanist-materialist *does* say that Lucky  
Alice is conscious, or only that the mechanist-materialist *should*  
say it? Because if you're saying the latter, then I'm "playing the  
game" better than you are! I'm pretty sure that Dennett (and the other  
mechanist-materialists I've read) would say that Lucky Alice is not  
conscious, and for them, they have a perfectly straightforward way of  
explaining what they *mean* when they say that she's not conscious.  
They mean (among other things) that the actions of her neurons are not  
being affected at all by the paper lying in front of her on the table,  
or the ball flying at her head. For Dennett, it's practically a non- 
sequitur to say that she's conscious of a ball that's not affecting  
her brain.

> But the physical difference does not play a role.

It depends on what you mean by "play a role". You're right that the  
physical difference (very luckily) didn't change what the neurons did.  
It just so happens that the neurons did exactly what they were going  
to do anyway. But the *cause* of why the neurons did what they did is  
totally different. The action of each individual neuron was caused by  
cosmic rays rather than by neighboring neurons. You seem to be asking,  
"Why should this difference play any role in whether or not Alice was  
conscious?" But for the mechanist-materialist, the difference is  
primary. Those kinds of causal connections are a fundamental part of  
what they *mean* when they say that something is conscious.

> If you invoke it,
> how could you accept saying yes to a doctor, who introduce bigger
> difference?

Do you mean the "teleportation doctor", who makes a copy of me,  
destroys me, and then reconstructs me somewhere else using the copied  
information? That case is not problematic in the way that Lucky Alice  
is, because there is an unbroken causal chain between the "new" me and  
the "old" me. What's problematic about Lucky Alice is the fact that  
her ducking out of the way of the ball (the movements of her eyes, the  
look of surprise, etc.) has nothing to do with the ball, and yet  
somehow she's still supposed to be conscious of the ball.

A much closer analogy to Lucky Alice would be if the doctor  
accidentally destroys me without making the copy, turns on the
receiving teleporter in desperation, and then the exact copy that
would have appeared anyway steps out, because (luckily!) cosmic rays
hit the receiver's mechanisms in just the right way. I actually find
this thought experiment more persuasive than Lucky Alice (although I'm
sure some will argue that they're identical). At the very least, the
mechanist-materialist has to say that the resulting Lucky Kory is
conscious. I think it's also clear that Lucky Kory's consciousness
must be exactly what it would have been if the teleportation had
worked correctly. This does in fact lead me to feel that maybe
causality shouldn't have any bearing on consciousness after all.