Johnathan Corgan wrote:
> Still, there is a certain appeal to shifting the question from "Why are
> we conscious?" to "Consciousness doesn't exist, so why do we so firmly
> believe that it does?"

It is possible to imagine a machine that doubts (or perhaps I should say
"doubts", i.e. we should not assume that it has doubts in the same way we
do) whether it is conscious.  Imagine a simple theorem-proving machine,
one of Bruno's logic machines, complicated enough to have a representation
of itself.  We want to ask it if it is conscious.  So we have to define
consciousness in logical terms.  That seems quite daunting.  If we allow
room for indeterminacy in our definitions, the machine might also have
indeterminacy in its estimation of whether it is conscious.
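The thought experiment above can be sketched in a few lines. This is a toy illustration only (not any real system of Bruno's, and the predicate names are invented for the example): a machine holds a small set of provable facts about itself, and any query its axioms don't settle gets a third, indeterminate answer.

```python
from enum import Enum

class Verdict(Enum):
    YES = "yes"
    NO = "no"
    UNKNOWN = "unknown"

class LogicMachine:
    """A toy logic machine with a minimal representation of itself."""

    def __init__(self):
        # The machine's self-representation: the facts it can prove
        # about itself. (Hypothetical predicates, for illustration.)
        self.facts = {"proves_theorems": True, "represents_self": True}

    def query(self, predicate):
        # Answer only what the axioms settle; anything not covered
        # by a definition is indeterminate, not false.
        if predicate in self.facts:
            return Verdict.YES if self.facts[predicate] else Verdict.NO
        return Verdict.UNKNOWN

m = LogicMachine()
m.query("represents_self")  # settled by its self-model
m.query("conscious")        # no logical definition supplied, so indeterminate
```

The point of the sketch: until "conscious" is given a definition in the machine's own logical terms, the honest answer it can give is neither yes nor no.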

Or, imagine we meet aliens.  How do we know if they are conscious?  Or,
turning it around, how would they know if they possess what humans call
"consciousness"?  How would we describe consciousness to them, who have
very different brains and ways of information processing, such that
they can know for sure whether they are conscious in the same way that
humans are?

The question of whether someone is conscious is far more problematic
than is often supposed, given that we cannot even define consciousness!
I tend to think that the assumption that everyone is conscious is simply
a convenience, adopted to avoid facing up to the overwhelming difficulties
that a true analysis of the question brings.  The mere fact that we cannot
define consciousness ought to be a pretty big red flag against making
facile assumptions about who has it and who doesn't!

(Or, if you say that we can in fact define consciousness, tell me how
to determine which AI programs have it and which don't.)

Hal Finney
