Generally, we reward good behavior and punish bad behavior. 

Doing the same with an AI would seem to be the wisest way to direct its 
learning and development: maximize how well it knows what is good and 
what is bad. 

Otherwise the AI system, never being informed by the scoring/evaluation 
system, does not know which of its modules are bad or what needs 
improvement. 

There is also the removal of low-performing or non-performing modules; 
even program size has limits. 
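
As a purely hypothetical sketch of what I mean (the module names, scores, 
and cutoff are all invented), pruning could be as simple as:

    # Toy sketch: keep only modules whose evaluation score clears a
    # cutoff, so the program stays within its size budget.
    scores = {"parser": 0.9, "planner": 0.7, "doodler": 0.1}  # invented

    CUTOFF = 0.5
    survivors = {name: s for name, s in scores.items() if s >= CUTOFF}
    print(survivors)  # {'parser': 0.9, 'planner': 0.7}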

Does anyone have a feel for the basic size of a starting Seed AI? 

What are the minimum modules needed for a starting Seed AI?

Like playing chess: we don't know how good players are in absolute 
terms, only that one player is better than another. 

Dan Goe


----------------------------------------------------
From : William Pearson <[EMAIL PROTECTED]>
To : agi@v2.listbox.com
Subject : Re: [agi] Reward versus Punishment? .... Motivational system
Date : Mon, 12 Jun 2006 19:46:08 +0100
> On 12/06/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
> > Will,
> >   Right now I would think that a negative reward would be usable
> > for this aspect.
> 
> I agree it is usable. But I am not sure it is necessary; you can just
> normalise the reward value.
> 
> Let's say that for most states you normally give 0 for a satiated
> entity, 100 for the best state, and -100 for the worst. You can just
> transform that to 0 for the worst state, 100 for the everyday satiated
> state, and 200 for the best state, without affecting the choices that
> most reinforcement systems would make.
> 
> So pain would be a below-baseline reward.
> 
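
To make Will's point concrete: adding a constant to every reward leaves a 
reward-maximising learner's choice unchanged. A toy Python sketch, using 
the values he gives:

    # Sketch: shifting all rewards by +100, so the worst state scores 0,
    # does not change which state the agent prefers.
    rewards = {"worst": -100, "satiated": 0, "best": 100}
    shifted = {state: r + 100 for state, r in rewards.items()}

    assert max(rewards, key=rewards.get) == max(shifted, key=shifted.get)
    print(shifted)  # {'worst': 0, 'satiated': 100, 'best': 200}

A constant shift can still matter when episode lengths vary (longer 
episodes accumulate more of the baseline), which may be why Will says 
"most" reinforcement systems.
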
> > I am using the positive/negative reward system right now for
> > motivational/planning aspects of the AGI.
> > So if it is sitting at a desk considering a plan of action that might
> > hurt itself or another, that plan would have a negative rating, while
> > another, safer plan may have a higher rating.
> 
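
A toy illustration of the kind of rating James describes (the plans and 
scores are invented):

    # Hypothetical sketch: rate candidate plans, give harmful ones a
    # negative rating, then act on the highest-rated plan.
    plans = {"risky shortcut": -10, "safer route": 5}
    print(max(plans, key=plans.get))  # safer route
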
> Heh.  Well, I expect an AI system that worked like a human would have
> a very tenuous link between the motivation and planning systems.
> 
> The tenuous link is ably shown by my own actions. I have stated that I
> think the plausible genetically specified positive motivations are to
> do with food, sex and positive social interaction. Yet I tend to plan
> how to create interesting computer systems, which isn't the best route
> to any of the above....
> 
> More later...
> 
>   Will Pearson

-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
