--- On Wed, 11/5/08, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:

> On Wed, Nov 5, 2008 at 7:35 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> >
> >> Personally, I'm not making an AGI that has emotions, and I doubt if
> >> emotions are generally desirable in AGIs, except when the goal is to
> >> make human companions (and I wonder why people need them anyway, given
> >> that there're so many -- *too* many -- human beings around already).
> >
> > People may want to simulate loved ones who have died, if the simulation
> > is accurate enough to be indistinguishable. People may also want to
> > simulate themselves in the same way, in the belief it will make them
> > immortal.
> 
> 
> Yeah, I should qualify my statement:  different people will want
> different things out of AGI technology.  Some want brain emulation of
> themselves or loved ones, some want android companions, etc.  All
> these things take up free energy (a scarce resource on earth), so it
> is just a new form of the overpopulation problem.  I am not against
> any particular form of AGI application;  I just want to point out that
> AGI-with-emotions is not a necessary goal of AGI.

I agree. My own AGI design does not require emotion, assuming the goal is to 
automate the economy. My proposed solution is a decentralized message passing 
network that implements distributed compression of the world's knowledge by 
trading in an economy where information has negative value. Peers mutually 
benefit by trading messages that are hard for the sender to compress but easy 
for the receiver to compress. This has the effect that peers tend to specialize and 
that messages get routed to the right experts. If our language model is a 
simple unigram word model, then we have a distributed implementation of 
Salton's tf-idf information retrieval model.
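
For concreteness, here is a minimal sketch of how unigram tf-idf could be used 
to route a message to the peer whose stored text best matches it. The peer 
corpora, function name, and scoring details below are made up for illustration; 
they are not a specification of the protocol.

import math
from collections import Counter

def tf_idf_scores(message_words, peer_corpora):
    # peer_corpora: dict mapping a peer id to the list of words it has stored.
    # Returns a score per peer; higher means the peer is a better "expert"
    # for this message (its corpus shares the message's rare words).
    n_peers = len(peer_corpora)
    df = Counter()                       # document frequency across peers
    for words in peer_corpora.values():
        df.update(set(words))
    scores = {}
    for peer, words in peer_corpora.items():
        tf = Counter(words)
        score = 0.0
        for w in message_words:
            if tf[w]:
                # term frequency at this peer times inverse document frequency:
                # words common here but rare elsewhere dominate the score
                score += (tf[w] / len(words)) * math.log(n_peers / df[w])
        scores[peer] = score
    return scores

# Route the message to the best-matching specialist (toy data).
peers = {"weather": "rain storm wind rain front".split(),
         "finance": "stock bond price stock yield".split()}
scores = tf_idf_scores("storm wind warning".split(), peers)
best_peer = max(scores, key=scores.get)   # -> "weather"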

A language model uses three types of learning: eidetic (short term) memory, 
association of concepts (e.g. words) in eidetic memory, and learning new 
concepts by clustering in context space. Vision is learned the same way. In 
both cases, reinforcement learning (a prerequisite of emotion) is not required.
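
For the third kind of learning, here is a minimal sketch of what clustering in 
context space could look like, assuming bag-of-words co-occurrence vectors and 
plain k-means (both are illustrative choices on my part, not a description of 
the actual design). Words that occur in similar contexts get similar vectors 
and fall into the same cluster, which can then be treated as a learned concept.

import numpy as np
from collections import defaultdict

def context_vectors(tokens, window=2):
    # One co-occurrence vector per word: component j counts how often
    # vocabulary word j appears within `window` positions of the word.
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    vecs = defaultdict(lambda: np.zeros(len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vecs[w][index[tokens[j]]] += 1
    return dict(vecs)

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means; words whose context vectors fall in the same cluster
    # are treated as instances of the same concept.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = ((points[:, None] - centers) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = points[labels == c].mean(0)
    return labels

tokens = "the cat sat and the dog sat and the cat ran and the dog ran".split()
vecs = context_vectors(tokens)
words = list(vecs)
labels = kmeans(np.array([vecs[w] for w in words]), k=3)
concepts = {w: int(c) for w, c in zip(words, labels)}
# "cat" and "dog" get the same label here, since their contexts match.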

If the goal of AGI is uploading or simulating humans, then of course it is 
necessary to simulate human emotions. Also if we allow agents to modify 
themselves and reproduce, then evolution will favor emotions such as fear of 
death, greed, tribal altruism, and the desire to reproduce.

-- Matt Mahoney, [EMAIL PROTECTED]


