On Jun 3, 12:52 pm, [EMAIL PROTECTED] ("Hal Finney") wrote:
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
> And the corollary to this is that perhaps humans also cannot legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
I think that IF a computer were conscious (I don't believe it is
possible), then the way we could know it is conscious would not be by
interviewing it with questions and looking for the "right" answers.
We could know it is conscious if the computer, on its own, started
asking US (or other computers) questions about what it was
experiencing. Perhaps it would say things like, "Sometimes I get
this strange and wonderful feeling that I am 'special' in some way. I
feel that what I am doing really is significant to the course of
history, that I am in some story." Or perhaps, "Sometimes I wish that
I could find out whether what I am doing is somehow significant, that
I am not just a duplicatable thing, and that what I am doing is not
You received this message because you are subscribed to the Google Groups
"Everything List" group.