If it feels bafflement and confusion, then surely it is conscious :)
An AI that takes in information from books might experience qualia similar to
those we experience. The AI will be programmed to do certain tasks, so it
must have a notion of whether what it is doing is OK, not OK, or completely wrong.
If things go wrong and it has to revert what it has just done, it may feel
some sort of pain, just as we do when we pick up something very hot.
So I think there will be a mismatch between the qualia the AI experiences
and what it reads about the qualia we experience. The AI won't read the
information the way we read it. I think it will experience it directly as
qualia, just as we experience information coming into our brain via our
senses.
The meaning we associate with the text would not be accessible to the AI,
because ultimately that is linked to the qualia we experience.
Perhaps what the AI experiences when it is processing information is similar
to what an animal experiences while moving through a landscape. Maybe when it
reads something, that manifests itself as some object it sees. Processing the
information could then be like picking up that object and putting it next to
a similar-looking object.
But if that object represents a text about consciousness then there is no
way for the AI to know that.
----- Original Message -----
From: "Hal Finney" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, June 03, 2007 09:52 PM
Subject: Re: How would a computer know if it were conscious?
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
> And the corollary to this is that perhaps humans also cannot legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
You received this message because you are subscribed to the Google Groups
"Everything List" group.