Stathis Papaioannou wrote:
> Brent meeker writes:
>>> I don't doubt that there is some substitution level that preserves 3rd
>>> person behaviour and 1st person experience, even if this turns out to mean copying
>>> a person to the same engineering tolerances as nature has specified for
>>> day to day life. The question is, is there some substitution level which
>>> preserves 3rd person behaviour but not 1st person experience? For example, suppose
>>> you carried around with you a device which monitored all your behaviour in
>>> detail, created predictive models, compared its predictions with your
>>> behaviour, and continuously refined its models. Over time, this device
>>> might be
>>> able to mimic your behaviour closely enough such that it could take over
>>> control of
>>> your body from your brain and no-one would be able to tell that the
>>> substitution had occurred. I don't think it would be unreasonable to wonder
>>> whether this device experiences the same thing when it looks at the sky and
>>> declares it to be blue as you do before the substitution.
>> That's a precis of Greg Egan's short story "The Jewel". I wouldn't call it
>> unreasonable to wonder whether the copy experiences the same qualia, but I'd
>> call it unreasonable to conclude that it did not on the stated evidence. In
>> fact I find it hard to think of what evidence would count against it having
>> some kind of qualia.
> It would be a neat theory if any machine that processed environmental
> information in a manner analogous to an animal had some level of conscious
> experience (and consistent
> with Colin's "no zombie scientists" hypothesis, although I don't think it is
> a conclusion he would
> agree with). It would explain consciousness as a corollary of this sort of
> information processing.
> However, I don't know how such a thing could ever be proved or disproved.
> Stathis Papaioannou
Things are seldom proved or disproved in science. Right now I'd say the
evidence favors the no-zombie theory. The only evidence beyond observation of
behavior that I can imagine is to map processes in the brain and determine how
memories are stored and how manipulation of symbolic and graphic
representations is done. It might then be possible to understand how a
computer/robot could achieve the same behavior with a different functional
structure; analogous, say, to imperative vs. functional programs. But then we'd
only be able to infer that the robot might be conscious in a different way. I
don't see how we could infer that it was not conscious.
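The imperative vs. functional analogy can be made concrete with a toy sketch (my own hypothetical example, not from the post): two programs that are indistinguishable by their input/output behaviour, yet have quite different internal structure, which is the sense in which a robot might match our behaviour via a different functional organization.

```python
# Two programs with identical observable behaviour but different
# internal structure (illustrative sketch only).

def factorial_imperative(n):
    # Imperative style: builds the answer by mutating an
    # accumulator step by step.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_functional(n):
    # Functional style: the same function expressed as a
    # recursive definition, with no mutable state.
    return 1 if n <= 1 else n * factorial_functional(n - 1)

# From the "3rd person" view the two are indistinguishable:
assert all(factorial_imperative(n) == factorial_functional(n)
           for n in range(10))
```

No amount of black-box testing distinguishes the two implementations; only inspecting the internals reveals the structural difference, which is roughly the position we would be in with the robot.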
On a related point, it is often said here that consciousness is ineffable: what
it is like to be someone cannot be communicated. But there's another side to
this: it is exactly the content of consciousness that we can communicate. We
can tell someone how we prove a theorem: we're conscious of those steps. But
we can't tell someone how our brain came up with the proof (the Poincaré
effect) or why it is persuasive.
You received this message because you are subscribed to the Google Groups
"Everything List" group.