Re: Fwd: Implementation/Relativity

1999-07-31 Thread Jacques M Mallah

On Fri, 30 Jul 1999, someone wrote:
> This is from Tegmark's paper (although I think he was paraphrasing
> Tipler from Physics of Immortality):
> 
>   In fact, since we can choose to picture our Universe
>   not as a 3D world where things happen, but as a 4D world that merely
>   is, there is no need for the computer to compute anything at all --
>   it could simply store all the 4D data, and the "simulated" world 
>   would still have PE.

I haven't read that much of Tegmark's paper.  Obviously he's not a
computationalist, but so far he sounds like a structuralist.

>   Clearly the way in which the data is stored
>   should not matter, so the amount of PE we attribute to the stored
>   Universe should be invariant under data compression.
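[Editorial note: the quoted claim of invariance "under data compression" rests on lossless compression preserving every bit of the stored data. A minimal Python sketch of that round trip; `zlib` and the toy byte string are chosen purely for illustration:]

```python
import zlib

# A toy stand-in for the stored "4D universe" data.
universe = b"state of every cell at every time step" * 1000

# Lossless compression changes the representation, not the content:
compressed = zlib.compress(universe)
restored = zlib.decompress(compressed)

# The restored data is bit-for-bit identical to the original,
# even though the compressed form is much smaller.
assert restored == universe
assert len(compressed) < len(universe)
```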

But at this point he no longer sounds like a regular structuralist.
What he says above seems silly.

>   Now the ultimate question forces itself 
>   upon us:  for this Universe to have PE, is the CD-ROM really needed
>   at all?  If this magic CD-ROM could be contained within the simulated
>   Universe itself, then it would "recursively" support its own PE.  
>   This would not involve any catch-22 "hen-and-egg" problem regarding 
>   whether the CD-ROM or the Universe existed first, since the Universe 
>   is a 4D structure which just is ("creation" is of course only a
>   meaningful notion within a spacetime).

Here he doesn't answer his own question.

>   In summary, a mathematical
>   structure with SASs would have PE if it could be described purely
>   formally (to a computer, say) -- and this is of course little else
>   than having mathematical existence.

A case for that can be made and has been on this list, but not in
the quotes from his paper above.

 - - - - - - -
  Jacques Mallah ([EMAIL PROTECTED])
   Graduate Student / Many Worlder / Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
My URL: http://pages.nyu.edu/~jqm1584/




Re: Fwd: Implementation/Relativity

1999-07-29 Thread Russell Standish

> 
> Russell Standish <[EMAIL PROTECTED]>:
> > The converse question one could ask is whether a mathematical system
> > with a SAS could be embedded in our world.
> 
> A simulation containing an aware AI is a simple example of this.
> You'll be seeing them before too long in video games everywhere.
> 
> 

But then you would just have another SAS embedded within our
mathematical structure. We already have these, namely Homo sapiens,
so such an entity is not too surprising. What I was asking was whether
it was possible to embed a mathematical structure - e.g. Conway's Game
of Life - that contained a SAS. It is far from obvious that this is
possible, and if it were, it would imply that we were not in a minimal
information world.
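[Editorial note: Conway's Game of Life, mentioned above, is a good example of a structure that can be specified completely formally; the entire rule set fits in a few lines. A minimal Python sketch (the "blinker" pattern is just an illustrative test case):]

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life.

    `live` is a set of (x, y) coordinates of live cells on an
    unbounded grid.  Standard rules: a live cell survives with 2 or 3
    live neighbours; a dead cell becomes live with exactly 3.
    """
    # Count, for every cell, how many live neighbours it has.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```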

Cheers


Dr. Russell Standish                    Director
High Performance Computing Support Unit,
University of NSW                       Phone 9385 6967
Sydney 2052                             Fax   9385 6965
Australia                               [EMAIL PROTECTED]
Room 2075, Red Centre                   http://parallel.hpc.unsw.edu.au/rks





Re: Fwd: Implementation/Relativity

1999-07-29 Thread Hans Moravec

[EMAIL PROTECTED]:
> Earlier I think Hans said that one possible observer was the
> conscious entity himself.  I am an observer of my own consciousness.
> My consciousness (or lack thereof) is subjective, and varies
> depending on the observer, but one of the observers is me.  
>
> Does this mean that there is a special consciousness, which is that
> consciousness observed by the observer himself?

Under different interpretations, there are many such special internal
observers, all different and mostly unaware of each other, who
each see themselves implemented in the same body and brain. (Same as
for Putnam rocks or my sun creatures.)

What makes the usual "you" extra special out of all those is that it
is implemented in a way that allows the rest of us to communicate
with it easily.

> Does this self-interpretation have a privileged position, and if so
> could we choose to say that it is the "true" consciousness of Hans
> himself?

Because it is the communication that selects out the "true"
consciousness from the myriad alternatives, a Turing test
is the best way to identify it.

But different outside observers, who interpret your stuff in
different ways, won't necessarily register human-talk as meaningful.
They might instead achieve a meaningful conversation with one of
the other self-aware observers in a different interpretation of
your structure.
For them (as for itself) that other internal observer would be the
"true" you.

We see a hint of this when animals respond to our subconscious
emotions rather than our conscious beliefs and intentions.

There could even be observers that interpret your structure in
enough different ways to find several different consciousnesses
in you to talk to.  They would find the notion of "true"
consciousness rather pointless.

i.e. which is the "true" consciousness is observer-relative.




Re: Fwd: Implementation/Relativity

1999-07-28 Thread Hans Moravec

Russell Standish <[EMAIL PROTECTED]>:
> Anthropomorphism may be common, but this doesn't mean it is correct,
> nor useful.

It is very useful in making sense of fiction (anthropomorphising words
on paper, pictures on film, etc.), therapeutic in communicating with a
diary or a teddy bear.  It is ecologically beneficial when applied to
trees and animals, as was done by native Americans.  It is
historically beneficial when applied to structures and artifacts.  It
benefits industrial progress when it is applied to machines, as often
in Japan, making them a cherished part of the family rather than a
threatening soulless force.

And I've argued hard that it is an interpretation, as correct as any
other.


>> Western stinginess in attributing minds, on the other
>> hand, is becoming a Luddite-rousing impediment to progress.
>> 
> How so?

About half the press advanced robots get here plays up the
Frankenstein analogy.  I encounter it a lot because of my books.  It's
even worse in Germany.  Asimov noted the reaction, and in his robot
books laws are passed keeping robots out of many occupations, as well
as the famous three laws to keep robots in their place.  It is a
subliminal bias that allows only human beings to have real souls, and
fears anything else that acts like a human but is different as a
soulless inhuman menace.

Japan's Buddhist and Shinto traditions routinely assign souls to all
kinds of objects, animal, vegetable, mineral, geographic, architectural and
mechanical, and granting them to robots was natural.  There is no
Frankenstein complex in Japan, and despite its smaller economy, Japan
uses over half the robots in the world.




Re: Fwd: Implementation/Relativity

1999-07-28 Thread hal

Hans Moravec, <[EMAIL PROTECTED]>, writes:
> Christopher Maloney <[EMAIL PROTECTED]>:
> > If our tools were sophisticated enough, we could figure out what
> > that creature was experiencing at that moment, independent of his or
> > her report.
>
> NO!  We may determine the full physical structure of an organism well
> enough to simulate it faithfully as a purely physical object.
>
> However, any experiences we impute to it will remain a subjective
> matter with different answers for different observers.  Some observers
> will be content to say there are no experiences in any case, including
> when they simulate you or me.

In trying to understand these ideas, I have a question.

Earlier I think Hans said that one possible observer was the
conscious entity himself.  I am an observer of my own consciousness.
My consciousness (or lack thereof) is subjective, and varies depending
on the observer, but one of the observers is me.

Does this mean that there is a special consciousness, which is that
consciousness observed by the observer himself?

In other words, I may impute a certain consciousness to Hans, and someone
else may interpret his actions as caused by a different consciousness,
but Hans himself interprets his consciousness in a certain way as well.
Does this self-interpretation have a privileged position, and if so could
we choose to say that it is the "true" consciousness of Hans himself?

Hal




Re: Fwd: Implementation/Relativity

1999-07-28 Thread Russell Standish

> 
> Russell Standish <[EMAIL PROTECTED]>:
> > I don't think we ever discussed the concept of attributing
> > consciousness to inanimate objects before Hans came along.
> 
> But I think you DID agree to attribute consciousness to
> purely abstract entities, notably mathematically defined
> universes containing SASes.

Correct. This is the definition of SAS. That we don't yet have a
reasonable definition of either SAS or consciousness is not a cause for
concern. It just means there is more work to be done.

> 
> I merely pointed out that it is possible, even natural
> and common, to map such abstractions containing
> self-aware systems onto many things we commonly encounter.
> 

Anthropomorphism may be common, but this doesn't mean it is correct
or useful.

> This violates some reflexive assumptions you carry, many
> instilled by a western education.
> Those assumptions badly need to be violated.
> 
> They may have been good during our recent naive materialist
> phase of development, but that phase is ending.
> This list's discussion topic is one symptom of that end, as
> are looming questions about conscious machines.
> 
> Other traditions have no problem seeing minds in
> inanimate objects, when such interpretation facilitates
> interaction.  That acceptance has much to do with the
> Japanese comfortable acceptance of robots.
> 
> Western stinginess in attributing minds, on the other
> hand, is becoming a Luddite-rousing impediment to progress.
> 

How so?







Re: Fwd: Implementation/Relativity

1999-07-27 Thread Russell Standish

> 
> Russell Standish <[EMAIL PROTECTED]>:
> > consciousness we experience directly ... generated by some kind of
> > self-referential process ... is intrinsically different to
> > the Turing-type tests we perform to attribute consciousness to
> > external objects.
> > ...
> > nor do I think it a particularly useful way of
> > thinking.
> 
> But it is enormously useful for deciding whether to deal with
> particular robots as conscious!
> 

I don't see any problem in attributing consciousness to a robot that
convinces me that it is conscious, in just the same way as I attribute
consciousness to a dog. Animal consciousness, such as a dog's, appears
to me to differ only in degree rather than in kind. On the other
hand, a supposedly conscious rock would truly differ in kind, as the
attribution of consciousness gives us no predictive power over its
properties.

I also agree with the idea that consciousness is a relative property,
one that is in the eye of the beholder. In the eye of this beholder,
"free will" is an essential property of consciousness, and it's hard
for me to see how a Turing machine could have free will. Of course, it
is not necessary to construct robots from Turing machines, but most
likely they will be able to simulate a Turing machine, as the human
brain can do. I really suspect that the human brain is capable of more
than a Turing machine can do.

The simplest operation I can think of that Turing machines can't do is
generate true random numbers (real computers can do this, albeit
usually in very kludgy ways). I'm not entirely sure that the human
brain can generate truly random numbers either, but probably it
can. This is why I speculate that the random number generator may be
necessary and sufficient for "free will".
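[Editorial note: the distinction being drawn can be made concrete. A Turing machine's "random" numbers are a deterministic function of its state, so reseeding reproduces the sequence exactly; an OS entropy source such as `os.urandom`, which draws on hardware and environmental noise (one of the "kludgy ways" above), is not reproducible from within the program. A Python sketch:]

```python
import os
import random

# Pseudorandomness is deterministic: the same seed yields the same
# sequence, exactly as for any Turing machine.
rng1 = random.Random(42)
rng2 = random.Random(42)
assert [rng1.random() for _ in range(5)] == [rng2.random() for _ in range(5)]

# An OS entropy source, by contrast, cannot be replayed by the
# program: two independent 16-byte draws almost surely differ.
a, b = os.urandom(16), os.urandom(16)
print(a.hex() != b.hex())  # overwhelmingly likely to print True
```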


> Yours isn't.  Your quest already has a few centuries of western
> philosophy of mind under its belt, and is no closer to finding the
> objective qualities that constitute consciousness.  Like the effort
> to define the properties of phlogiston or the luminiferous ether,
> it doesn't work because its subject matter is an abstraction that
> changes with viewpoint.
> 
> 

And are you proposing that considering rocks as conscious will help
find these qualities too?

