Hi,

Well, your point is a good one, and a different one.

The specific qualities of an AGI's self will doubtless be very
different from those of a human being.  This will depend not only on
its emotional makeup but also on the nature of its embodiment, for
example.  Much of the nature of the human self is tied to the
localized nature of our physical embodiment.  An AGI with a distributed
embodiment, with sensors and actuators all around the world or beyond,
would have a very different kind of self-model than any human's....  And
a human hooked into the Net with VR technology and able to sense and
act remotely via sensors and actuators all over the world might also
develop a different flavor of self not so closely tied to localized
physical embodiment.

But all that is a different sort of point....  My point was that an
AGI that was very rapidly undergoing a series of profound changes
might never develop a stable self-model at all, because as soon as the
model came about, it would be rendered irrelevant.

Imagine going through the amount of change in the human life course
(infant --> child --> teen --> young adult --> middle-aged adult -->
old person) within, say, a couple of days.  Your self-model wouldn't
really have time to catch up.  You'd have no time to be a stable
"you."  Even if there were (as intended, e.g., in Friendly AI designs) a
stable core of supergoals throughout all the changes, that core alone
wouldn't give you a stable self.
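To make that concrete with a throwaway toy (a little Python sketch; it has
nothing to do with any real AGI architecture, and the drift and
learning-rate numbers are completely arbitrary): a "self-model" that tracks
a changing system by simple smoothing keeps up fine when the change is
slow, and never catches up when the change is fast.

import random

def tracking_error(drift_per_step, learning_rate=0.1, steps=200):
    true_self = 0.0    # some property of the underlying system
    self_model = 0.0   # the system's running estimate of that property
    errors = []
    for _ in range(steps):
        # the system keeps changing underneath the model...
        true_self += drift_per_step * random.uniform(0.5, 1.5)
        # ...while the model only inches toward what it currently observes
        self_model += learning_rate * (true_self - self_model)
        errors.append(abs(true_self - self_model))
    return sum(errors[-50:]) / 50   # average gap over the last 50 steps

print("slow change :", tracking_error(drift_per_step=0.001))   # small, stable gap
print("rapid change:", tracking_error(drift_per_step=1.0))     # gap stays huge

When the change per step dwarfs the update rate, there is never a point at
which the model is a decent description of the thing it is modeling.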

-- Ben G


On 10/11/06, Chris Norwood <[EMAIL PROTECTED]> wrote:
How much of our "selves" is driven by biological
processes that an AI would not have to begin with
(fear, for example)?  I would think that the AI's self
would be fundamentally different from the start because
of this.  It may never have to modify itself to achieve
the new type of self that you are describing.

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> In something I was writing today, for a
> semi-academic publication, I
> found myself inserting a paragraph about how
> unlikely it is that
> superhuman AIs after the Singularity will possess
> "selves" in
> anything like the sense that we humans do.
>
> It's a bit long and out of context, but the passage
> in which this
> paragraph occurred may be of some interest to some
> folks here....  The
> last paragraph cited here is the one that mentions
> future AIs...
>
> -- Ben
>
> ******
>
>
> "
> The "self" in the present context refers to the
> "phenomenal self"
> (Metzinger, XX) or "self-model" (Epstein, XX).  That
> is, the self is
> the model that a system builds internally,
> reflecting the patterns
> observed in the (external and internal) world that
> directly pertain to
> the system itself.  As is well known in everyday
> human life,
> self-models need not be completely accurate to be
> useful; and in the
> presence of certain psychological factors, a more
> accurate self-model
> may not necessarily be advantageous.  But a
> self-model that is sufficiently
> inaccurate will lead to a badly functioning
> system that is
> unable to effectively act toward the achievement of
> its own goals.
>
> "
> The value of a self-model for any intelligent system
> carrying out
> embodied agentive cognition is obvious.  And beyond
> this, another
> primary use of the self is as a foundation for
> metaphors and analogies
> in various domains.  Patterns recognized as pertaining
> to the self are
> analogically extended to other entities.  In some
> cases this leads to
> conceptual pathologies, such as the
> anthropomorphization of trees,
> rocks and other such objects that one sees in some
> precivilized
> cultures.  But in other cases this kind of analogy
> leads to robust
> sorts of reasoning – for instance, in reading Lakoff
> and Nunez's (XX)
> intriguing explorations of the cognitive foundations
> of mathematics,
> it is pretty easy to see that most of the metaphors
> on which they
> hypothesize mathematics to be based are grounded in
> the mind's
> conceptualization of itself as a spatiotemporally
> embedded entity,
> which in turn is predicated on the mind's having a
> conceptualization
> of itself (a self) in the first place.
>
> "
> A self-model can in many cases form a
> self-fulfilling prophecy (to
> make an obvious double entendre!).  Actions are
> generated based on
> one's model of what sorts of actions one can and/or
> should take; and
> the results of these actions are then incorporated
> into one's
> self-model.  If a self-model proves a generally bad
> guide to action
> selection, this may never be discovered, unless said
> self-model
> includes the knowledge that semi-random
> experimentation is often
> useful.
>
> "
> In what sense, then, may it be said that self is an
> attractor of
> iterated forward-backward inference?  Backward
> inference infers the
> self from observations of system behavior.  The
> system asks: What kind
> of system might I be, in order to give rise to these
> behaviors that I
> observe myself carrying out?   Based on asking
> itself this question,
> it constructs a model of itself, i.e. it constructs
> a self.  Then,
> this self guides the system's behavior: it builds
> new logical
> relationships between its self-model and various
> other entities, in
> order to guide its future actions oriented toward
> achieving its goals.
> Based on the behaviors newly induced via this
> constructive,
> forward-inference activity, the system may then
> engage in backward
> inference again and ask: What must I be now, in
> order to have carried
> out these new actions?  And so on.
>
> "
> My hypothesis is that after repeated iterations of
> this sort in
> infancy, a kind of self-reinforcing attractor finally
> emerges during early
> childhood, and we have a self-model that is
> resilient and
> doesn't change dramatically when new instances of
> action- or
> explanation-generation occur.   This is not strictly
> a mathematical
> attractor, though, because over a long period of
> time the self may
> well shift significantly.  But, for a mature self,
> many hundreds of
> thousands or millions of forward-backward inference
> cycles may occur
> before the self-model is dramatically modified.  For
> relatively long
> periods of time, small changes within the context of
> the existing self
> may suffice to allow the system to control itself
> intelligently.
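As a cartoon of this forward/backward cycle (an illustrative Python toy
only, not any real system's code; the "temperament" constant, the blending
weights and the settling threshold are all invented for the example):

import random

TEMPERAMENT = 0.7   # a fixed underlying disposition of the toy system

def forward_inference(self_model, n=20, noise=0.05):
    # "Given my model of myself, what do I do?"  Behavior here is shaped
    # partly by what the system is and partly by what it believes it is
    # (the self-fulfilling-prophecy effect mentioned above).
    return [0.5 * TEMPERAMENT + 0.5 * self_model + random.gauss(0, noise)
            for _ in range(n)]

def backward_inference(old_model, behaviors, inertia=0.5):
    # "What kind of system must I be, to have produced these behaviors?"
    observed = sum(behaviors) / len(behaviors)
    return inertia * old_model + (1 - inertia) * observed

self_model = random.uniform(-1.0, 1.0)   # an essentially arbitrary first guess
for cycle in range(200):
    behaviors = forward_inference(self_model)
    new_model = backward_inference(self_model, behaviors)
    if abs(new_model - self_model) < 0.01:   # the model has (roughly) stopped moving
        print("self-model settled after", cycle, "cycles, near", round(new_model, 2))
        break
    self_model = new_model

With these made-up numbers the loop contracts toward a value it then keeps
reproducing, which is the attractor-ish behavior described above; if
TEMPERAMENT were itself being rewritten every few cycles (the rapidly
self-modifying AGI case), the settling condition would rarely or never be
met.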
>
> "
> Finally, it is interesting to speculate regarding
> how self may differ
> in future AI systems as opposed to in humans.  The
> relative stability
> we see in human selves may not exist in AI systems
> that can
> self-improve and change more fundamentally and
> rapidly than humans
> can.  There may be a situation in which, as soon as
> a system has
> understood itself decently, it radically modifies
> itself and hence
> violates its existing self-model.  Thus:
> intelligence without a
> long-term stable self.  In this case the
> "attractor-ish" nature of the
> self holds only over much shorter time scales than
> for human minds or
> human-like minds.  But the alternating process of
> forward and backward
> inference for self-construction is still critical,
> even though no
> reasonably stable self-constituting attractor ever
> emerges.  The
> psychology of such intelligent systems will almost
> surely be beyond
> human beings' capacity for comprehension and
> empathy.
> "
>
