Stathis Papaioannou wrote:
> Colin Hales writes:
>>> I think it is logically possible to have functional equivalence but
>>> structural difference, with a consequent difference in conscious state,
>>> even though external behaviour is the same.
>>> Stathis Papaioannou
>> Remember Dave Chalmers with his 'silicon replacement' zombie papers? (a)
>> Replace every neuron with a silicon "functional equivalent" and (b) hold
>> the external behaviour identical.
> I would guess that such a 1-for-1 replacement brain would in fact have the
> same PC as the biological original, although this is not a logical
> certainty. But what I was thinking of was the equivalent of copying the
> "look and feel" of a piece of software without having access to the source
> code. Computers may one day be able to copy the "look and feel" of a human
> not by directly modelling neurons but by completely different mechanisms.
> Even if such computers were conscious, there seems no good reason to assume
> that their experiences would be similar to those of a similarly behaving
> human.
>> If the 'structural difference' (accounting for consciousness) has a
>> critical role in function, then the assumption of identical external
>> behaviour is logically flawed. This is the 'philosophical zombie'. Holding
>> the behaviour to be the same is a meaningless impossibility in this
>> circumstance.
> We can assume that the structural difference makes a difference to
> consciousness but not to external behaviour. For example, it may cause
> spectrum reversal.
>> In the case of Chalmers' silicon replacement, it assumes that everything
>> that was being done by the neuron is duplicated. What the silicon model
>> assumes is (a) that we know everything there is to know, and (b) that
>> silicon replacement/modelling/representation is capable of delivering
>> everything, even if we did 'know everything' and put it in the model.
>> Bad, bad, arrogant assumptions.
> Well, it might just not work, and you end up with an idiot who slobbers and
> stares into space. Or you might end up with someone who can do calculations
> really well but displays no emotions. But it's a thought experiment: suppose
> you use whatever advanced technology it takes to create a being with
> *exactly* the same behaviours as a biological human. Can you be sure that
> this being would be conscious? Can you be sure that this being would be
> conscious in the same way you and I are conscious?

Consciousness would be supported by the behavioral evidence. If the being
were also functionally similar at a low level, I don't see what evidence
there would be against it, so the best conclusion would be that the being
was conscious.

If we knew a lot about the function of the human brain and we created this
behaviorally identical being but with a different functional structure, then
we would have some evidence against the being having human-type
consciousness - but I don't think we'd be able to assert that it was not
conscious in some way.

Brent Meeker

You received this message because you are subscribed to the Google Groups
"Everything List" group.