Trent Waddington wrote:
Richard,

  After reading your paper and contemplating the implications, I
believe you have done a good job of describing the intuitive notion of
"consciousness" that many lay-people use the word to refer to.  I
don't think your explanation is fleshed out enough for those
lay-people, but it's certainly sufficient for most of the people on this
list.  I would recommend that anyone who hasn't read the paper, and
has an interest in this whole consciousness business, give it a read.

I especially liked the bit where you describe how the model of self
can't be defined in terms of anything else, as it is inherently
recursive.  I wonder whether the dynamic updating of the model of self
may well be exactly the subjective experience of "consciousness" that
people describe.  If so, the notion of a p-zombie is not impossible,
as you suggest in your conclusions; it is simply an AGI without a
self-model.

This is something that does intrigue me (the different kinds of self-model that could be in there), but I come to slightly different conclusions.

I think someone (Putnam, IIRC) pointed out that you could still have consciousness without the equivalent of any references to self and others, because such a creature would still be experiencing qualia.

But, that aside, do you not think that a creature with absolutely no self-model at all would have some troubles? It would not be able to represent itself in the context of the world, so it would be purely reactive. But wait: come to think of it, could it actually control any limbs if it did not have some kind of model of itself?

Now, suppose you grant me that all AGIs would have at least some model of self (if only to control a single robot arm). If the rest of the cognitive mechanism allows the AGI to think in a powerful and recursive way about the contents of its own thought processes (which I have suggested is one of the main preconditions for being conscious, or even being AG-Intelligent), would it not be difficult to stop it from developing a more general model of itself than the simple self-model needed to control the robot arm? We might find that any kind of self-model is a slippery slope toward a bigger self-model.

Finally, consider the case of humans with severe autism. One suggestion is that they have a very poorly developed, or suppressed, self-model. I would be *extremely* reluctant to conclude that these humans are p-zombies just because of that. I know that is a gut feeling, but even so.




Finally, the introduction says:

  "Given the strength of feeling on these matters - for example,
the widespread belief that AGIs would be dangerous because, as
conscious beings, they would inevitably rebel against their lack of
freedom - it is incumbent upon the AGI community to resolve
these questions as soon as possible."

I was really looking forward to seeing you address this widespread
belief, but unfortunately you declined.  Seems a bit of a tease.

Trent

Oh, I apologize. :-(

I started out with the intention of squeezing into the paper a description of the consciousness proposal PLUS my parallel proposal about AGI motivation and emotion.

It became obvious toward the end that I would not be able to say anything about the latter (I barely had enough room for a terse description of the former). Instead, I explained that this was part of a larger research program covering issues of motivation, emotion and friendliness. I guess that wording did not really make up for the initial tease, so I'll try to rephrase it in the edited version.

And I will also try to get the motivation and friendliness paper written asap, to complement this one.




Richard Loosemore









-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
