Colin Hales writes:

> > I think it is logically possible to have functional equivalence but
> > structural
> > difference with consequently difference in conscious state even though
> > external behaviour is the same.
> >
> > Stathis Papaioannou
> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
> Replace every neuron with a silicon "functional equivalent" and (b) hold
> the external behaviour identical.

I would guess that such a 1-for-1 replacement brain would in fact have the same 
PC as the biological original, although this is not a logical certainty. But 
what I was thinking of was the equivalent of copying the "look and feel" of a 
piece of software without having access to the source code. Computers may one 
day be able to copy the "look and feel" of a human not by directly modelling 
neurons but by different mechanisms. Even if such computers were conscious, 
there seems no reason to assume that their experiences would be similar to 
those of a similarly behaving human. 
> If the 'structural difference' (accounting for consciousness) has a
> critical role in function then the assumption of identical external
> behaviour is logically flawed. This is the 'philosophical zombie'. Holding
> the behaviour to be the same is a meaningless impossibility in this
> circumstance.

We can assume that the structural difference makes a difference to 
consciousness but not to external behaviour. For example, it may cause 
spectrum reversal.
> In the case of Chalmers silicon replacement it assumes that everything
> that was being done by the neuron is duplicated. What the silicon model
> assumes is a) that we know everything there is to know and b) that silicon
> replacement/modelling/representation is capable of delivering everything,
> even if we did 'know  everything' and put it in the model. Bad, bad,
> arrogant assumptions.

Well, it might just not work, and you end up with an idiot who slobbers and 
stares into space. Or you might end up with someone who can do calculations 
really well but has no emotions. But it's a thought experiment: suppose you 
use whatever advanced technology it takes to create a being with *exactly* the 
same behaviours as a biological human. Can you be sure that this being would 
be conscious? Can you be sure that this being would be conscious in the same 
way you and I are conscious?
> This is the endless loop that comes about when you make two contradictory
> assumptions without being able to know that you are, explore the
> consequences and decide you are right/wrong, when the whole scenario is
> actually meaningless because the premises are flawed. You can be very
> right/wrong in terms of the discussion (philosophy) but say absolutely
> nothing useful about anything in the real world (science).

I agree that the idea of a zombie identical twin (i.e. same brain, same 
behaviour, but no PC) is philosophically dubious, but I think it is 
theoretically possible to have a robot twin which is, if not unconscious, at 
least differently conscious.

Stathis Papaioannou
 You received this message because you are subscribed to the Google Groups 
"Everything List" group.