I mostly agree with you, but I suspect that a young AGI growing up in a
community of humans could still develop a fully-functional self-model and
ascend the Maslow hierarchy... I don't see why an AGI society would be
required...

-- Ben G

On Fri, May 3, 2013 at 3:19 PM, Chris Nolan <[email protected]> wrote:

> >Our plan with OpenCog is to embody it in video game agents and mobile
> robotic agents, so that it would be an "agent" from the start....
>
> Maybe I should separate out what I mean as the difference between a simple
> agent and a complex, self-referential agent. A complex, self-referential
> agent would be one that operates within a complex environment, understands
> itself as an actual entity within that environment, and seeks fulfillment
> of the upper levels of the Maslow hierarchy of needs (belonging,
> self-esteem, self-actualization). What I mean by a "simple agent" would be
> one that is able to function in an environment in an autonomous way but
> does not necessarily view itself as existing as an individual creature. It
> might have a high-level reasoning capacity and even operate within a
> complex environment, but does it go beyond the lower-level hierarchy of
> needs towards the upper "human-level" needs, and possibly beyond?
>
> I think that is how I would distinguish between sub-human-level AGI and
> full human-level AGI and beyond. My intuition in that regard is that it
> will require evolution from lower levels to higher levels within a complex
> multi-agent system; i.e. co-evolving an AGI society along with higher
> levels of AGI. The reason I have that intuition is that I think the complex
> nature of "self-identity" for humans is intrinsically tied to our
> interactions with society, even if at a subconscious level, and can't be
> separated from our growth in society from childhood to adulthood. The
> dynamics of that interaction wouldn't be that much different for AGI. The
> complexity involved is such that it couldn't be programmed but would
> instead have to be evolved and grown. Maybe that would be virtual, maybe
> that would be the interaction of many robots in human culture, I don't
> know. But I think it would require that interaction to go from
> sub-human-level to full and post-human-level AGI.
>
> Maybe I'm wrong though. As I said, just my intuition...
>
>
>   ------------------------------
> From: Ben Goertzel <[email protected]>
> To: AGI <[email protected]>
> Sent: Friday, May 3, 2013 12:23 AM
>
> Subject: Re: [agi] Toward enlightened AGIs
>
>
> An AGI society would be awesome... however, I doubt it's critical...
>
> Our plan with OpenCog is to embody it in video game agents and mobile
> robotic agents, so that it would be an "agent" from the start....  But
> various OpenCog components are now being used as tools...
>
> -- Ben G
>
> On Fri, May 3, 2013 at 8:14 AM, Chris Nolan <[email protected]> wrote:
>
> Ben,
>
> >I think that humans have an evolved tendency to be overly egocentric, and
> (esp. in modern Western culture) to model themselves as isolated, separate
> beings much more than is really the case....  So compassion meditation is
> in part a way of overcoming this particular human propensity....  OTOH, AGI
> systems would not necessarily have that sort of propensity in the first
> place...
>
> I don't necessarily disagree. A slight difference I would state is that
> humans have evolved the natural ability to feel compassion; however, it's
> often linked to group identity; i.e. we naturally feel compassion for
> family and those we consider members of our "tribe" but not necessarily for
> those outside. Compassion meditation is then a way to change that neural
> wiring to create that natural compassion for those whom we might normally
> consider "outside our tribe," or even for the "enemies of our tribe." AGI
> systems might not have the bias toward tribal identity and creating the
> "other," but I wonder if the natural bias might be towards creating no
> group identity at all, so that the unstated bias would then be for all to
> be "other"?
>
> >We definitely would, however, want our AGIs to have an initial bias to
> model others and see the world from other beings' views....  This militates
> toward a kind of non-attachment to one's own
> self/self-model/individual-perspective...
>
> Yeah, I think that's a very good point. There's a very big difference
> between non-attachment to self because of a viewpoint of being connected
> with a larger whole and non-attachment to, or really detachment from, self
> because of a lack of connection to any group or identity. How do you see
> this being implemented within an AGI system? Would it be necessary to
> have it explicitly designed within OpenCog's framework or would it be a
> part of the implicationLinks that are learned through experience?
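>
> To make that concrete, here is roughly what I'm picturing, as a toy Python
> sketch rather than actual OpenCog Atomese (all the names and numbers below
> are invented for illustration): the "explicitly designed" route would
> amount to seeding the system with a strong, well-evidenced link from
> modeling others to expected reward, while the "learned" route would start
> that same link weak and let experience strengthen it.
>
> class ImplicationLink:
>     """Toy stand-in for a weighted implication between two predicates."""
>
>     def __init__(self, antecedent, consequent, strength=0.5, count=1.0):
>         self.antecedent = antecedent
>         self.consequent = consequent
>         self.strength = strength   # estimated P(consequent | antecedent)
>         self.count = count         # amount of evidence behind that estimate
>
>     def observe(self, consequent_held):
>         """Update the strength from one new observation of the antecedent."""
>         hit = 1.0 if consequent_held else 0.0
>         self.strength = (self.strength * self.count + hit) / (self.count + 1.0)
>         self.count += 1.0
>
> # "Explicitly designed": seed a strong, high-evidence bias toward other-modeling.
> designed = ImplicationLink("model_other_agents", "expected_reward",
>                            strength=0.9, count=100.0)
>
> # "Learned through experience": start agnostic and let interaction shape it.
> learned = ImplicationLink("model_other_agents", "expected_reward")
> for outcome in [True, True, False, True, True]:   # stand-in experience stream
>     learned.observe(outcome)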
>
> This also brings up another question, which is: what role will the creation
> of multi-agent systems play in the evolution of AGI? It seems likely to me
> that either OpenCog or some other AGI system will eventually create a
> general AI reasoning capacity, an AGI tool, that is human-level. But will
> that be the same as the creation of an AGI agent, or will it be necessary
> to then evolve the AGI from a tool-level to an agent-level within some
> larger multi-agent system (likely a combination of humans and other AGIs)?
> My intuition would be yes, because I think that is how you would evolve the
> AGI's understanding of itself within the larger group context, creating the
> experience and implied understanding of behavior within the larger group
> (which then necessitates a model of self). The next question, then, is what
> scale of multi-agent system would be needed. Would only a few AGI agents
> interacting with humans be enough, or would it be more beneficial to have a
> very large number of AGI agents (an AGI society)? Or, stated another way,
> is it necessary to co-evolve an AGI society along with the AGI agent?
>
> It is a part of how human intelligence has evolved...
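>
> If it helps to picture the mechanism I have in mind, here is a deliberately
> crude Python sketch (the agents, update rules, and numbers are all
> invented): each agent keeps a running model of every other agent, and a
> "self-model" only becomes necessary once an agent has to anticipate how the
> rest of the group will respond to it, which is exactly the pressure a
> larger society of agents (human or AGI) would supply.
>
> import random
>
> class Agent:
>     def __init__(self, name):
>         self.name = name
>         self.models_of_others = {}   # name -> estimated rate of cooperation
>         self.self_model = 0.5        # estimate of how the group sees this agent
>
>     def act(self):
>         # Behave according to how we believe the group expects us to behave.
>         return random.random() < self.self_model
>
>     def observe(self, other_name, cooperated):
>         # Update our running model of another agent from one interaction.
>         prior = self.models_of_others.get(other_name, 0.5)
>         self.models_of_others[other_name] = 0.9 * prior + 0.1 * float(cooperated)
>
>     def reflect(self, others):
>         # Crude self-model: read off how the others currently model us
>         # (standing in for inferring it from their behavior toward us).
>         views = [o.models_of_others.get(self.name, 0.5) for o in others]
>         self.self_model = sum(views) / len(views)
>
> agents = [Agent("agent%d" % i) for i in range(5)]
> for _ in range(100):                       # rounds of social interaction
>     for a in agents:
>         for b in agents:
>             if a is not b:
>                 a.observe(b.name, b.act())
>     for a in agents:
>         a.reflect([b for b in agents if b is not a])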
>
>   ------------------------------
> From: Ben Goertzel <[email protected]>
> To: AGI <[email protected]>
> Sent: Thursday, May 2, 2013 3:38 AM
> Subject: Re: [agi] Toward enlightened AGIs
>
>
> Interesting response, thanks...
>
> I think that humans have an evolved tendency to be overly egocentric, and
> (esp. in modern Western culture) to model themselves as isolated, separate
> beings much more than is really the case....  So compassion meditation is
> in part a way of overcoming this particular human propensity....  OTOH, AGI
> systems would not necessarily have that sort of propensity in the first
> place...
>
> We definitely would, however, want our AGIs to have an initial bias to
> model others and see the world from other beings' views....  This militates
> toward a kind of non-attachment to one's own
> self/self-model/individual-perspective...
>
> ben
>
>
>
> On Thu, May 2, 2013 at 8:39 AM, Chris Nolan <[email protected]> wrote:
>
> Ben,
>
> That's an interesting concept. From my reading of Buddhism, it also seems
> like non-attachment in meditation is often linked with metta practice, or
> compassion meditation to state it in a simplified way. Have you looked at
> any of the neuroscience papers on that practice? In the simple example you
> supply of "Bob" detaching from his girlfriend, the practice would be not
> just "letting go of the suffering of the break-up" but also adding a
> compassion practice for the ex-girlfriend; in this way the Buddhist
> practice would be developing non-attachment in conjunction with compassion
> for the individual and their choice. That way an individual avoids just
> falling into the trap of avoiding suffering and so getting caught by it
> even more. Side note: as someone who has been through a number of
> break-ups, I've found it works better than just trying to detach, haha...
>
> I bring it up because I wonder if the concept could be informative for the
> goal of creating a Friendly AI. In this way OpenCog's system of balancing
> attachment and experience could also be linked with broader compassion,
> possibly in the implication links: disassociating happiness from
> "put_arm_around_girlfriend" while adding an implication that the
> girlfriend's happiness includes separation from Bob. That possibly hints
> at a way of formulating ethics for A.I.
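>
> To sketch what I mean in code (throwaway Python, not real OpenCog syntax;
> the predicate names just follow the example you supply and the numbers are
> arbitrary): non-attachment would weaken the one link tying Bob's happiness
> to that specific action, while compassion would add and strengthen a link
> from the girlfriend's own choice to her own happiness, so that her modeled
> well-being still figures in Bob's goal system.
>
> # Toy "implication links": (antecedent, consequent) -> strength in [0, 1].
> links = {
>     ("put_arm_around_girlfriend", "bob_is_happy"): 0.9,
>     ("girlfriend_pursues_own_path", "girlfriend_is_happy"): 0.2,
> }
>
> def detach(links, antecedent, consequent, rate=0.5):
>     # Non-attachment: weaken the link tying happiness to this one action.
>     key = (antecedent, consequent)
>     links[key] = links[key] * (1.0 - rate)
>
> def add_compassion(links, antecedent, consequent, boost=0.6):
>     # Compassion: strengthen the link from her choice to her own happiness.
>     key = (antecedent, consequent)
>     links[key] = min(1.0, links.get(key, 0.0) + boost)
>
> detach(links, "put_arm_around_girlfriend", "bob_is_happy")
> add_compassion(links, "girlfriend_pursues_own_path", "girlfriend_is_happy")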
>
> -Chris
>
>
>   ------------------------------
> From: Ben Goertzel <[email protected]>
> To: AGI <[email protected]>
> Sent: Wednesday, May 1, 2013 1:24 AM
> Subject: [agi] Toward enlightened AGIs
>
> For your general amusement, here is a blog post I wrote on
>
> "The dynamics of attachment and non-attachment in human and AGI minds":
>
>
> http://multiverseaccordingtoben.blogspot.hk/2013/05/the-dynamics-of-attachment-and-non.html
>
> :)
> ben
>
>
> --
> Ben Goertzel, PhD
> http://goertzel.org
>
> "My humanity is a constant self-overcoming" -- Friedrich Nietzsche
>
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



