--- On Wed, 11/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> >My definition of pain is negative reinforcement in a system that learns.
> 
> IMO, pain is more like data with the potential to cause disorder in
> hard-wired algorithms. I'm not saying this fully covers it, but it's
> IMO already outside the Autobliss scope.

You might be thinking of continuous or uncontrollable pain, as in the classic 
yoked-control experiment: one rat is shocked but can stop the shock by turning 
a paddle wheel, while a second rat receives exactly the same shocks but its 
paddle wheel has no effect. Only the second rat develops stomach ulcers.

When autobliss is run with two negative arguments, so that it is punished no 
matter what it does, the neural network's weights drift to random values and it 
never learns a function. It also dies, but only because I programmed it that 
way.
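The autobliss source isn't shown here, but the failure mode can be sketched 
with a toy reinforcement learner. This is a minimal sketch under assumptions of 
mine, not the real program: a single threshold unit learning AND, where the two 
reward arguments stand in for autobliss's command-line arguments.

```python
import random

def train(reward_right, reward_wrong, steps=5000, lr=0.01):
    """Toy reinforcement learner for the AND function.

    Hypothetical stand-in for autobliss: a single threshold unit guesses
    an output, then receives reward_right if the guess matched AND(x1, x2)
    and reward_wrong otherwise. Positive reward strengthens the action
    just taken; negative reward weakens it. Returns the fraction of the
    four input patterns classified correctly after training.
    """
    w = [random.uniform(-0.1, 0.1) for _ in range(3)]  # w1, w2, bias
    for _ in range(steps):
        x1, x2 = random.randint(0, 1), random.randint(0, 1)
        out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
        r = reward_right if out == (x1 & x2) else reward_wrong
        # Reinforce (r > 0) or punish (r < 0) the action actually taken.
        sign = 1 if out == 1 else -1
        w[0] += lr * r * sign * x1
        w[1] += lr * r * sign * x2
        w[2] += lr * r * sign
    hits = sum((1 if w[0] * a + w[1] * b + w[2] > 0 else 0) == (a & b)
               for a in (0, 1) for b in (0, 1))
    return hits / 4
```

With train(1, -1) the unit reliably converges on AND. With train(-1, -1) every 
action is punished regardless of correctness, so each update just undoes 
whatever the unit last did: the weights random-walk around zero and no function 
is ever learned.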

-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com