Re: Fwd: Implementation/Relativity

1999-07-29 Thread Hans Moravec

Hal [EMAIL PROTECTED]:
 Earlier I think Hans said that one possible observer was the
 conscious entity himself.  I am an observer of my own consciousness.
 My consciousness (or lack thereof) is subjective, and varies
 depending on the observer, but one of the observers is me.  

 Does this mean that there is a special consciousness, which is that
 consciousness observed by the observer himself?

Under different interpretations, there are many such special internal
observers, all different and mostly unaware of each other, who
each see themselves implemented in the same body and brain. (Same as
for Putnam rocks or my sun creatures.)
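
To make the Putnam-rock point concrete, here is a minimal Python
sketch (every name in it is hypothetical, chosen for illustration):
any system that passes through enough distinct physical states can be
read, after the fact, as implementing any chosen finite computation,
simply by pairing off the two state sequences.

    # A "rock": an arbitrary sequence of distinct physical micro-states.
    rock_history = ["s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7"]

    # A chosen computation: the state trace of a 3-bit counter.
    counter_trace = [format(n, "03b") for n in range(8)]

    # The "interpretation": a one-to-one pairing of the two traces.
    interpretation = dict(zip(rock_history, counter_trace))

    # Under this mapping, the rock's evolution just is the counter running.
    for state in rock_history:
        print(state, "->", interpretation[state])

A different pairing yields a different computation, and so a different
internal observer, in the very same rock; nothing in the physics
privileges one mapping over another.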

What makes the usual you extra special out of all those is that it
is implemented in a way that allows the rest of us to communicate
with it easily.

 Does this self-interpretation have a privileged position, and if so
 could we choose to say that it is the true consciousness of Hans
 himself?

Because it is the communication that selects out the true
consciousness from the myriad alternatives, a Turing test
is the best way to identify it.

But different outside observers, who interpret your stuff in
different ways, won't necessarily register human-talk as meaningful.
They might instead achieve a meaningful conversation with one of
the other self-aware observers in a different interpretation of
your structure.
For them (as for itself) that other internal observer would be the
true you.
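
A toy Python sketch of that relativity (the decoder names and the tiny
vocabulary are hypothetical): two observers decode the same raw state
record differently, and a crude conversational test, standing in for a
Turing test, singles out the decoding that talks back meaningfully.

    # The same raw physical record, read under two interpretations.
    raw_states = [ord(c) for c in "i think therefore i am"]

    def decode_plain(states):
        # Observer A's interpretation: states are character codes.
        return "".join(chr(s) for s in states)

    def decode_shifted(states):
        # Observer B's interpretation: the same states, shifted by 5.
        return "".join(chr((s + 5) % 128) for s in states)

    # A crude stand-in for the Turing test: does it speak our language?
    VOCABULARY = {"i", "think", "therefore", "am"}

    def converses(text):
        words = text.split()
        return bool(words) and all(w in VOCABULARY for w in words)

    for decode in (decode_plain, decode_shifted):
        print(decode.__name__, "meaningful?", converses(decode(raw_states)))

An observer whose native decoding is decode_shifted, with a vocabulary
to match, would judge the plain reading meaningless and converse with
a different true you.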

We see a hint of this when animals respond to our subconscious
emotions rather than our conscious beliefs and intentions.

There could even be observers that interpret your structure in
enough different ways to find several different consciousnesses
in you to talk to.  They would find the notion of true
consciousness rather pointless.

In other words, which consciousness is the true one is observer-relative.




Re: Fwd: Implementation/Relativity

1999-07-28 Thread Russell Standish

 
 Russell Standish [EMAIL PROTECTED]:
  I don't think we ever discussed the concept of attributing
  consciousness to inanimate objects before Hans came along.
 
 But I think you DID agree to attribute consciousness to
 purely abstract entities, notably mathematically defined
 universes containing SASes.

Correct. This is the definition of SAS. That we don't yet have a
reasonable definition of either SAS or consciousness is not a cause for
concern. It just means there is more work to be done.

 
 I merely pointed out that it is possible, even natural
 and common, to map such abstractions containing
 self-aware systems onto many things we commonly encounter.
 

Anthropomorphism may be common, but that doesn't make it correct,
or even useful.

 This violates some reflexive assumptions you carry, many
 instilled by a western education.
 Those assumptions badly need to be violated.
 
 They may have been good during our recent naive materialist
 phase of development, but that phase is ending.
 This list's discussion topic is one symptom of that end, as
 are looming questions about conscious machines.
 
 Other traditions have no problem seeing minds in
 inanimate objects, when such interpretation facilitates
 interaction.  That openness has much to do with the
 comfortable Japanese acceptance of robots.
 
 Western stinginess in attributing minds, on the other
 hand, is becoming a Luddite-rousing impediment to progress.
 

How so?


Dr. Russell Standish    Director
High Performance Computing Support Unit,
University of NSW   Phone 9385 6967
Sydney 2052 Fax   9385 6965
Australia   [EMAIL PROTECTED]
Room 2075, Red Centre   http://parallel.hpc.unsw.edu.au/rks





Re: Fwd: Implementation/Relativity

1999-07-28 Thread hal

Hans Moravec, [EMAIL PROTECTED], writes:
 Christopher Maloney [EMAIL PROTECTED]:
  If our tools were sophisticated enough, we could figure out what
  that creature was experiencing at that moment, independent of his or
  her report.

 NO!  We may determine the full physical structure of an organism well
 enough to simulate it faithfully as a purely physical object.

 However, any experiences we impute to it will remain a subjective
 matter with different answers for different observers.  Some observers
 will be content to say there are no experiences in any case, including
 when they simulate you or me.

In trying to understand these ideas, I have a question.

Earlier I think Hans said that one possible observer was the
conscious entity himself.  I am an observer of my own consciousness.
My consciousness (or lack thereof) is subjective, and varies depending
on the observer, but one of the observers is me.

Does this mean that there is a special consciousness, which is that
consciousness observed by the observer himself?

In other words, I may impute a certain consciousness to Hans, and someone
else may interpret his actions as caused by a different consciousness,
but Hans himself interprets his consciousness in a certain way as well.
Does this self-interpretation have a privileged position, and if so could
we choose to say that it is the true consciousness of Hans himself?

Hal




Re: Fwd: Implementation/Relativity

1999-07-27 Thread Russell Standish

 
 Russell Standish [EMAIL PROTECTED]:
  consciousness we experience directly ... generated by some kind of
  self-referential process ... is intrinsically different from
  the Turing-type tests we perform to attribute consciousness to
  external objects.
  ...
  nor do I think it a particularly useful way of
  thinking.
 
 But it is enormously useful for deciding whether to deal with
 particular robots as conscious!
 

I don't see any problem in attributing consciousness to a robot that
convinces me that it is conscious, in just the same way as I attribute
consciousness to a dog. Animal consciousness, such as a dog's, seems
to me to differ in degree rather than in kind. On the other
hand, a supposedly conscious rock would truly differ in kind, as the
attribution of consciousness gives us no predictive power over its
properties.

I also agree with the idea that consciousness is a relative property,
one that is in the eye of the beholder. In the eye of this beholder,
free will is an essential property of consciousness, and it's hard
for me to see how a Turing machine could have free will. Of course, it
is not necessary to construct robots from Turing machines, but most
likely they will be able to simulate a Turing machine, as the human
brain can do. I really suspect that the human brain is capable of more
than a Turing machine can do.

The simplest operation I can think of that Turing machines can't do is
generate true random numbers (real computers can do this, albeit
usually in very kludgy ways). I'm not entirely sure that the human
brain can generate truly random numbers either, but probably it
can. This is why I speculate that the random number generator may be
necessary and sufficient for free will.
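
A small Python illustration of that distinction (os.urandom and
random.Random are real standard-library calls; everything else is
illustrative): a seeded pseudo-random generator is a Turing-computable
function and replays exactly, while the operating system's entropy
pool draws on physical noise from outside the deterministic program.

    import os
    import random

    # A Turing-machine-style generator: deterministic given its seed.
    rng = random.Random(42)
    first_run = [rng.randint(0, 9) for _ in range(5)]

    rng = random.Random(42)          # re-seed, and the sequence replays
    second_run = [rng.randint(0, 9) for _ in range(5)]
    assert first_run == second_run   # no surprise is possible here

    # The kludgy escape hatch of real computers: entropy harvested by
    # the OS from physical sources such as device and interrupt timings.
    physical_bytes = os.urandom(5)   # not reproducible from any seed
    print(first_run, list(physical_bytes))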


 Yours isn't.  Your quest already has a few centuries of western
 philosophy of mind under its belt, and is no closer to finding the
 objective qualities that constitute consciousness.  Like the effort
 to define the properties of phlogiston or the luminiferous ether,
 it doesn't work because its subject matter is an abstraction that
 changes with viewpoint.
 
 

And you are proposing that considering rocks as conscious will help find
these qualities, too?


Dr. Russell Standish    Director
High Performance Computing Support Unit,
University of NSW   Phone 9385 6967
Sydney 2052 Fax   9385 6965
Australia   [EMAIL PROTECTED]
Room 2075, Red Centre   http://parallel.hpc.unsw.edu.au/rks