Bruno Marchal wrote:
> On 29 Nov 2008, at 15:56, Abram Demski wrote:
>> Bruno,
>>> The argument was more of the type: "removal of unnecessary and
>>> unconscious or unintelligent parts." Those parts have just no
>>> perspective. If they have some perspective playing a role in Alice's
>>> consciousness, it would mean we have not well chosen the substitution
>>> level. You are reintroducing some consciousness in the elementary
>>> parts here, I think.
>> The problem would not be with removing individual elementary parts and
>> replacing them with functionally equivalent pieces; this obviously
>> preserves the whole. The problem is rather with removing whole subgraphs
>> and replacing them with equivalent pieces. As Alice-in-the-cave is
>> supposed to show, this can remove consciousness, at least in the limit
>> when the entire movie is replaced...
> The limit is not relevant. I agree that if you remove Alice, you  
> remove any possibility for Alice to manifest herself in your most  
> probable histories. The problem is that in the range of activity of the  
> projected movie, removing a part of the graph changes nothing. It
> changes only the probability of recovering Alice from her history in,  
> again, your most probable history. 

Isn't this reliance on probable histories assuming some physical theory that is 
not in evidence?

> There is no physical causal link  
> between the experience attributed to the physical computation and the  
> "causal history of projecting a movie". 

But there is a causal history for the creation of the movie - it's a recording 
of Alice's brain functions which were causally related to her physical world.

> The incremental removal of  
> the graph highlighted the lack of causality in the movie. 

It seems to me there is still a causal chain - it is indirect, via the creation 
of the movie.


> Perhaps not in  
> the clearest way, apparently. Perhaps I should have done the case  
> of a non-dream. I will come back to this.
>>> Then you think that someone who is conscious with some brain which,  
>>> for some reason, never uses some neurons, could lose consciousness
>>> when those never-used neurons are removed?
>>> If that were true, how could we still be confident in an artificial
>>> digital brain? You may be right, but the MEC hypothesis would be put
>>> in doubt.
>> I am thinking of it as being the same as someone having knowledge
>> which they never actually use. Suppose that the situation is so
>> extreme that if we removed the neurons involved in that knowledge, we
>> will not alter the person's behavior; yet, we will have removed the
>> knowledge. Similarly, if the behavior of Alice in practice comes from
>> a recording, yet a dormant conscious portion is continually ready to
>> intervene if needed, then removing that dormant portion removes her
>> consciousness.
> You should definitely do the removal of the graph in the non-dream  
> situation. Let us do it.
> Let us take a situation without complex inputs. Let us imagine Alice  
> is giving a conference in a big room, so, as input, she is just blinded  
> by some projector, plus some noise, and she gives a talk on Astronomy (to  
> fix things). Now from 8:30 to 8:45 pm, she has just no brain; she  
> gets the "motor" info from a projected recording of a previous *perfect  
> dream* of that conference, a dream done the night before, or sent from  
> Platonia (possible in principle). Then, by magic, to simplify, at 8:45  
> she gets back the original brain, which by optical means inherits the  
> state at the end of the conference in that perfect dream. I ask you,  
> would you say Alice was a zombie during the conference?
> Bruno

You received this message because you are subscribed to the Google Groups 
"Everything List" group.