I should have been more precise with the terms "copy" and "emulate". 
What I was asking is whether a robot which experiences something while 
it is shovelling coal (this of course assumes that a robot can have 
experiences) would experience the same thing if it were fed input to all its 
sensors exactly as if it were doing its job normally, such that it was not 
aware the inputs were in fact a sham. It seems to me that if the answer is 
"no", the robot would need to have some mysterious extra-computational 
knowledge of the world, which I find very difficult to conceptualise if we 
are talking about a digital computer. It is easier to conceptualise that such 
non-computational factors may be at play in a biological brain, which would 
then be an argument against computationalism.

Stathis Papaioannou

> Stathis:
> let me skip the quoted texts and ask a particular question.
> ----- Original Message -----
> From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
> Sent: Wednesday, October 04, 2006 11:41 PM
> Subject: RE: Maudlin's Demon (Argument)
> You wrote:
> Do you believe it is possible to copy a particular consciousness by
> emulating it, along
> with sham inputs (i.e. in virtual reality), on a general purpose computer?
> Or do you believe
> a coal-shovelling robot could only have the coal-shovelling experience by
> actually shovelling
> coal?
> Stathis Papaioannou
> ---------------------------------
> My question is about 'copy' and 'emulate'.
> Are we considering 'copying' the model and its content (in which case the
> last sentence about the coal-shovelling robot applies), or do we include in
> "experience" the unlimited interconnections beyond the particular model we
> talk about?
> If we go "all the way" and include all input from the unlimited totality
> that may 'format' or 'complete' the model-experience, then we re-create the
> 'real thing' and it is not a copy. If we restrict our copying to the aspect
> in question (model) then we copy only that aspect and should not draw
> conclusions on the total.
> Can we 'emulate' totality? I don't think so. Can we copy the total,
> unlimited wholeness? I don't think so.
> What I feel is at play here is a restriction to "think" within a model
> while drawing conclusions from it about what lies beyond it.
> Which looks to me like a category-mistake.
> John Mikes

You received this message because you are subscribed to the Google Groups 
"Everything List" group.