[EMAIL PROTECTED] wrote:
>YOU (as conventionally interpreted) believe you are conscious, we
>agree on that. The existence of your consciousness under the
>interpretation is objective. And YOU almost always (with exceptions
>like unconsciousness and perhaps some meditative states) maintain ...
>But you don't seem to agree with me that the interpretation itself is
>a subjective matter.
There's some ambiguity in your use of the word "interpretation" here. In
"Robot" you argue that a physical system can be seen as implementing any
possible computation under the right mapping, so that an "interpretation"
might be just another word for a particular choice of mappings. But
"interpretation" can also be used in the more vague sense of attributing
motives and thoughts to a physical system, giving you an explanation for its
behavior in terms of folk-psychology.
And there might also be a hybrid of these two views, which says that in the
Platonic "space" of all possible computations there is always *some*
computation corresponding to a particular set of thoughts and motives, so if
a given system implements all possible computations, then no
folk-psychological explanation is ultimately better than any other. For
example, if I am interacting with a character in a video game, I know that
some possible Turing machine simulates a being with a complex brain who
happens to respond in the same way as the character in the game, so maybe
what you're saying is that I'm free to choose the mapping where I'm actually
interacting with such a being (even if I don't bother to figure out the
details of the mapping itself).
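The "any mapping you like" argument can be made concrete with a toy sketch (entirely my own illustration; the state names and the counter program are made up). The point is that an "interpretation" in the first sense is nothing more than a post-hoc dictionary from physical states to computational states:

```python
# Hypothetical sketch of the Putnam-style argument that any physical
# system "implements" any computation under a suitable mapping.

# A trivial "physical system": it merely passes through distinct states.
physical_history = [f"rock_state_{t}" for t in range(5)]

# An arbitrary target computation: five steps of a counter program.
computation_steps = ["init", "add_1", "add_2", "add_3", "halt"]

# The "interpretation" is just a mapping from physical states to
# computational states, chosen after the fact.
interpretation = dict(zip(physical_history, computation_steps))

# Under this mapping, the rock's history "implements" the computation.
decoded = [interpretation[s] for s in physical_history]
assert decoded == computation_steps
```

Since the mapping can be constructed for *any* sequence of distinct states and *any* target computation, it does no explanatory work by itself, which is exactly what makes the choice of interpretation look subjective.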
Depending on what you mean by "interpretation," it might be possible to have
a view basically similar to yours that still sees first-person experience as
a matter of objective reality (unlike in position #1 of my original post).
There could be a theory of consciousness that says that there is a
one-to-one correspondence between computations and subjective experiences,
even though a given physical system may actually implement all possible
computations so it would still be a matter of "interpretation" as to which
one I'm interacting with.
This is actually somewhat similar to my view. I am doubtful that a finite
physical system actually gives rise to all possible subjective experiences
(see below for why I don't think a theory of consciousness should be based
on 'computations'), but I do think it may give rise to a lot of them--for
example, my right brain and left brain may have their own experiences
distinct from the experience of my brain as a whole, and more generally any
subset of causally connected events in my brain may have its own unique
experience. In this view "I" am just a particular subset, a particular
pattern of activity in the brain, rather than the brain itself.
My hope is that a theory of consciousness would explain why some subsets are
more likely to be experienced than others (like a more detailed version of
the anthropic principle, or the 'copernican anthropic principle' discussed
on www.anthropic-principle.com), so maybe the most probable subsets of my
brain's activity would all have basically similar kinds of experiences. But
every pattern would experience *itself* as real, although many might have a
very simple sort of consciousness (see Chalmers' discussion of the
consciousness associated with a thermostat, for example).
>The reward for legitimizing interpretations in which the humongous
>lookup table is as conscious as any other implementation of a given AI
>program is consistency and evaporation of much of the elusiveness of
>consciousness. I've become very comfortable with that, at first blush
I don't see why we should get any inconsistencies from the assumption that a
lookup table does not have the same sort of experience as other types of
programs. Even if you assume that identical subjective experiences arise
whenever a particular computation is implemented, it's still true that a
Turing machine simulating a lookup table is quite different from a Turing
machine simulating a neural network with the same outward behavior.
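The distinction can be shown in miniature (my own toy example, not anything from the discussion): two programs with identical outward behavior on every input, but completely different internal event structure.

```python
# Two implementations of the same input/output function over a small
# domain: a "humongous lookup table" versus an actual computation.

DOMAIN = range(16)
LOOKUP = {n: bin(n).count("1") for n in DOMAIN}  # precomputed answers

def popcount_table(n):
    # One internal event: a memory fetch. No intermediate structure.
    return LOOKUP[n]

def popcount_compute(n):
    # Many internal events: a genuine bit-by-bit computation.
    count = 0
    while n:
        count += n & 1
        n >>= 1
    return count

# Outwardly indistinguishable on the whole domain...
assert all(popcount_table(n) == popcount_compute(n) for n in DOMAIN)
# ...yet a trace of internal events would look completely different,
# which is the sense in which the two "computations" are not the same.
```

So even granting that identical computations yield identical experiences, nothing forces us to say the table and the procedure are the *same* computation.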
My view is that a theory of consciousness should involve a one-to-one
mapping between subjective experiences and *some* set of mathematical
structures, but I don't think the set of all "computations" is a good
candidate. I think we need something more along the lines of "isomorphic
causal structures," but there doesn't yet seem to be a good way to define
this intuition...Chalmers has a paper that discusses his views on the
subject.
> > in the same way that a recording of a person
> > probably doesn't have its own subjective experience. Likewise, if I
> > attribute feelings to a stuffed animal, I'm probably wrong.
>Why do you think you're wrong? I argued as forcefully as I could
>about a year ago on this list and also in the last chapter of my
>"Robot" book that allowing interpretations like these (and
>arbitrarily more extreme cases like Putnam's rock) does not lead to
>intellectual or moral disaster. On the contrary, it makes sense of
>issues that the intuitive positions endlessly muddle.
But I'm not sure how we can get the appearance of living in an orderly
universe with well-defined "laws" in this view. There are a great many
possible computations that would correspond to a brain similar to mine
suddenly seeing the universe around it go haywire--should I be worried? It
seems to me that you really need to assign some kind of "objective
probability" to each member of the set of all possible "interpretations" or
computations.
My own pet "theory of everything" is that a unique assignment of objective
probabilities might emerge naturally from a theory of consciousness. As I
said before, my idea is that a theory of consciousness should basically be a
formalization/extension of anthropic reasoning, so that in a universe
containing 999 minds of insect-level complexity and 1 human-level mind, the
probability of being the human might be much greater than 1/1000. If this
is the case, it may be that if you make the Platonic assumption that *all*
possible mental patterns exist (whether 'mental patterns' are defined as
computations or in some other way), then there might be a single
self-consistent way to assign probabilities to each one, just as a large set
of simultaneous equations may have only one solution.
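The 999-insects-and-one-human arithmetic can be sketched directly (the complexity numbers and the weighting function are purely my own assumptions for illustration):

```python
# Toy version of the anthropic-weighting idea: the probability of
# "being" a given mind is weighted by some function of its complexity.

minds = [("insect", 1)] * 999 + [("human", 10**6)]  # (kind, complexity)

def selection_probability(weight):
    """P(being each kind of mind) when minds are weighted by weight(complexity)."""
    weights = [weight(c) for _, c in minds]
    total = sum(weights)
    return {kind: sum(w for (k, _), w in zip(minds, weights) if k == kind) / total
            for kind in {"insect", "human"}}

# Uniform weighting gives the naive 1-in-1000 answer.
uniform = selection_probability(lambda c: 1)

# Complexity weighting: the human dominates despite being outnumbered.
weighted = selection_probability(lambda c: c)
```

Under the uniform weighting the human's probability is exactly 1/1000; under the (assumed) complexity weighting it is nearly 1, which is the qualitative behavior the "formalized anthropic principle" would need to justify from first principles.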
> > But #1 says it's meaningless to talk about the "truth" of these
> > questions, just like it's meaningless to ask whether Shakespeare is
> > "really" a better author than Danielle Steel.
>It's not meaningless, but you have to agree on an interpretation to
>get a meaning.
It's not meaningless to have discussions about aesthetics, but I think it's
meaningless to ask which book is "really" better...I don't think that
reality cares one way or another. It doesn't make much sense to imagine two
universes which are identical in terms of both physical events and mental
experiences (if they exist), but which differ in a single way: in one,
Shakespeare is "really" better, and in the other Danielle Steel is.
On the other hand, if you believe that consciousness is real, it does seem
to make sense to imagine two possible universes which are identical
physically, but in one physical events give rise to qualia and in the other
they don't (a 'zombie universe'). This would show that facts about
consciousness are not fully reducible to third-person facts, even though
there may be a theory of consciousness that defines a one-to-one
relationship between them (also, if something like the pet theory I
mentioned earlier turned out to be correct, it might be possible to 'reduce'
all third-person facts to first-person facts).
>My wife and I often discuss the feelings and
>motivations of characters in fiction. These are not meaningless
>discussions: there are interpretations (which I can claim to be Platonic
>worlds as solid as any) in which the characters are as real as you or
>I. It is possible to gain emotional therapy by unloading one's
>innermost concerns on a teddy bear interpreted as a thoughtful listener.
But consider a "story" like this:
"A man walks into a bar. He says, 'gimme a beer.'"
Certainly there are Platonic worlds where this really happens, and it's
probably happened in 'our' world too. But the story itself doesn't pick out
any particular world, so it doesn't make much sense to say that there is a
"true answer" to questions like, "Was the man married? Was he in a bad mood
at the time? Had he just been fired from his job?"
The best we can do is look at all possible worlds where the events in the
story take place, and then decide which versions are more plausible relative
to a universe like the one we experience. For example, if we ask "Did the
man have purple skin and compound eyes?" it makes sense to answer "probably
not."
> > But #1 doesn't just say that conducting a long personal relationship
> > with a system is a good test of consciousness (which I believe)...it
> > says that there's nothing to test, because attributing consciousness
> > to a system is a purely aesthetic decision. Even an omniscient God
> > could not tell you the "truth" of the matter, if #1 is correct.
>I have come to strongly believe there is no truth to the matter that
>is independent of interpretation. But within an interpretation ...
I believe that my own first-person experience is what it is, and no one
else's opinion can change that. But the physical system that is my brain
probably gives rise to many different kinds of subjective experiences...if
your ideas are right it may even give rise to all possible experiences.
What this shows is that "I" am not identical to my brain, a conclusion that
anyone with a computational view of mind would agree with.