Hal Finney wrote:
> Johnathan Corgan wrote:
>>Still, there is a certain appeal to shifting the question from "Why are
>>we conscious?" to "Consciousness doesn't exist, so why do we so firmly
>>believe that it does?"
> It is possible to imagine a machine that doubts (or perhaps I should say
> "doubts", i.e. we should not assume that it has doubts in the same way we
> do) whether it is conscious. Imagine a simple theorem-proving machine,
> one of Bruno's logic machines, complicated enough to have a representation
> of itself. We want to ask it if it is conscious. So we have to define
> consciousness in logical terms. That seems quite daunting. If we allow
> room for indeterminacy in our definitions, the machine might also have
> indeterminacy in its estimation of whether it is conscious.
> Or, imagine we meet aliens. How do we know if they are conscious? Or,
> turning it around, how would they know if they possess what humans call
> "consciousness"? How would we describe consciousness to them, who have
> very different brains and ways of information processing, such that
> they can know for sure whether they are conscious in the same way that
> humans are?
> The question of whether someone is conscious is far more problematic
> than is often supposed, given that we cannot even define consciousness!
I think we can define it for AI systems. We know, for example, that the Mars
rover is conscious of its position relative to its landing point, its
available power, the slope of the terrain, the temperature, the wind, the
ambient light level, which way it's pointing,... It's even aware of whether its
programs have satisfied various checksums. Now you may object that I'm using
"conscious" and "aware" in a different sense than you meant - since you meant
them to apply to humans. But I would say that I'm using them in the sense that the
rover acts based on these perceptions that it has, including internal
perceptions of its own state and its goals. And this sense applies to humans as well.
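This sense of "conscious" - acting on perceptions of one's own internal state as well as the external world - can be sketched in a few lines. This is a hypothetical toy, not the actual rover software; the state variables, thresholds, and action names are invented for illustration:

```python
import hashlib


class Rover:
    """Toy agent that acts on perceptions of its own state.

    Purely illustrative - not the real Mars rover software.
    """

    def __init__(self, program_bytes, expected_checksum):
        self.program = program_bytes
        self.expected_checksum = expected_checksum
        self.battery = 0.8          # internal perception: fraction of full charge
        self.tilt_degrees = 5.0     # external perception: slope of the terrain

    def program_intact(self):
        # "Aware" of whether its own program satisfies its checksum.
        return hashlib.sha256(self.program).hexdigest() == self.expected_checksum

    def choose_action(self):
        # Decisions driven by both internal and external perceptions.
        if not self.program_intact():
            return "request reload of corrupted program"
        if self.battery < 0.2:
            return "recharge"
        if self.tilt_degrees > 30.0:
            return "back away from slope"
        return "continue traverse"


program = b"drive_loop_v1"
rover = Rover(program, hashlib.sha256(program).hexdigest())
print(rover.choose_action())  # -> continue traverse
```

The point of the sketch is only that the agent's behavior is a function of representations of its own state (battery, program integrity), which is the minimal sense in which the rover "perceives itself."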
> I tend to think that it is simply a convenient assumption, that everyone
> is conscious, to avoid facing up to the overwhelming difficulties that
> a true analysis of the question brings. The mere fact that we cannot
> define consciousness ought to be a pretty big red flag that we should
> not be making facile assumptions about who has it and who doesn't!
> (Or, if you say that we can in fact define consciousness, tell me how
> to know which AI programs have it, and which don't?)
If we define consciousness as I have defined it for application to a Mars rover,
then it's clear that humans and all animals are conscious, but to different
extents. But by my definition something is only conscious if it is conscious of
being an entity within a larger context and is able to act to satisfy some
internal goals. An AI system like the Mars rover may be much more self-aware
than a human being in the sense that it can compare copies of its programs and
do error correction. I think a lot of the difficulty in trying to define
consciousness is that we try to think of a definition that includes both the
Mars rover's consciousness and our own internal narrative stream of thought in
one concise definition - while they are two different kinds of consciousness.
Our internal stream of thoughts is primarily verbal and, in terms of a Mars rover,
is a kind of log-book recording things I classify as worth attending to and
putting into at least short-term memory. Experiments point to it being largely
a rationalization and 'after-the-fact' relative to decision making. It's
probably not even the part of the program in the Mars rover that we would point
to as 'what provides the consciousness'.