Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok, a lot has been thrown around here about top-level goals, but no real definition has been given, and I am confused, as the term seems to be covering a lot of ground for some people. What 'level' is meant, and what are these top-level goals for humans/AGIs? It seems that Staying Alive is a big one, but that

Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel
Regarding the definition of goals and supergoals, I have made attempts at: http://www.agiri.org/wiki/index.php/Goal and http://www.agiri.org/wiki/index.php/Supergoal The scope of human supergoals has been moderately well articulated by Maslow IMO:
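A minimal sketch of how a goal/supergoal hierarchy of this kind might be represented, assuming a simple priority-weighted tree; the Python names and priority values below are illustrative guesses, not drawn from the wiki pages or the Novamente codebase:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Goal:
    name: str
    priority: float                   # relative weight among siblings
    parent: Optional["Goal"] = None   # None means this is a supergoal
    subgoals: list["Goal"] = field(default_factory=list)

    def add_subgoal(self, g: "Goal") -> "Goal":
        g.parent = self
        self.subgoals.append(g)
        return g

    @property
    def is_supergoal(self) -> bool:
        # A supergoal sits at the top of the hierarchy: it serves no
        # higher goal itself.
        return self.parent is None

# Maslow-flavored human supergoals, per the hierarchy cited above
# (priorities are made-up values for illustration only):
physiological = Goal("physiological", priority=1.0)
safety = Goal("safety", priority=0.8)
physiological.add_subgoal(Goal("satisfy hunger", priority=0.9))
physiological.add_subgoal(Goal("avoid pain", priority=1.0))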

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel
The statement, "You cannot turn off hunger or pain," is sensible. In fact, it's one of the few statements in the English language that is LITERALLY so. Philosophically, it's more certain than "I think, therefore I am." If you maintain your assertion, I'll put you in my killfile, because we cannot

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz
On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote: The statement, "You cannot turn off hunger or pain," is sensible. In fact, it's one of the few statements in the English language that is LITERALLY so. Philosophically, it's more certain than "I think, therefore I am." If you maintain your

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
Original Message - From: Philip Goetz [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Monday, December 04, 2006 2:01 PM Subject: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?] On 12/4/06, Ben Goertzel [EMAIL PROTECTED] wrote: The statement, "You cannot turn

Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok, that is a start, but you don't distinguish there between externally required goals and internally created goals. And what smallest set of external goals do you expect to give? Would you, or would you not, force as Top Level the Physiological goals (per the wiki page you cited) from signals,

Re: Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel
For a baby AGI, I would force the physiological goals, yeah. In practice, baby Novamente's only explicit goal is getting rewards from its teacher. Its other goals, such as learning new information, are left implicit in the action of the system's internal cognitive processes. It's
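A reader's sketch of the arrangement described here: one explicit top-level goal (teacher reward), with other drives left implicit in how actions are scored rather than represented as goal objects. All names, the novelty weighting, and the placeholder internals are hypothetical, not Novamente code:

class BabyAGI:
    def __init__(self):
        self.reward = 0.0          # the only explicitly represented goal signal

    def receive_reward(self, amount: float) -> None:
        self.reward += amount      # teacher feedback drives the explicit goal

    def step(self, percept):
        # Implicit goal: novelty-seeking is baked into action scoring,
        # never declared anywhere as an explicit goal object.
        candidates = self.generate_actions(percept)
        return max(candidates,
                   key=lambda a: self.expected_reward(a) + 0.1 * self.novelty(a))

    # Stand-ins for the system's internal cognitive processes; their
    # real implementations are beside the point of this sketch.
    def generate_actions(self, percept):
        return ["explore", "wait"]                    # placeholder actions

    def expected_reward(self, action):
        return 0.5 if action == "explore" else 0.1    # placeholder estimate

    def novelty(self, action):
        return 1.0 if action == "explore" else 0.0    # placeholder estimate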

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Ben Goertzel
IMO, humans **can** reprogram their top-level goals, but only with difficulty. And this is correct: a mind needs a certain level of maturity to really reflect on its own top-level goals, so it would be architecturally foolish to build a mind that allowed revision of supergoals at
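A minimal sketch of that architectural choice, assuming a scalar maturity measure that gates supergoal revision; the threshold and all names are invented for illustration, not taken from the thread:

class MotivationalSystem:
    MATURITY_THRESHOLD = 0.8   # illustrative value, not from the thread

    def __init__(self, supergoals):
        self.supergoals = list(supergoals)
        self.maturity = 0.0    # assumed to grow with experience and reflection

    def revise_supergoal(self, old, new) -> bool:
        # Revision is possible in principle, but refused until the
        # system is mature enough to reflect on its own top-level goals.
        if self.maturity < self.MATURITY_THRESHOLD:
            return False
        self.supergoals[self.supergoals.index(old)] = new
        return True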

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread James Ratcliff
Also, could both or any of you describe in a bit more detail the idea of your goal-stacks and how they should/would function? James David Hart [EMAIL PROTECTED] wrote: On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote: Richard, This is certainly true, and is why in Novamente we use a goal stack

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Ben Goertzel
Richard, This is certainly true, and is why in Novamente we use a goal stack only as one aspect of cognitive control... ben On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote: On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote: The goal-stack AI might very well turn out simply not to be
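A sketch of what "a goal stack as only one aspect of cognitive control" could look like, with the stack arbitrated against other control processes; this is a reader's guess at the idea, not Novamente code:

class GoalStack:
    def __init__(self):
        self._stack = []

    def push(self, goal):
        self._stack.append(goal)          # descend into a subgoal

    def pop(self):
        return self._stack.pop() if self._stack else None

    def current(self):
        return self._stack[-1] if self._stack else None

class Controller:
    # Arbitrates between the explicit goal stack and other, implicit
    # control processes (reactive, associative, etc.).
    def __init__(self, goal_stack, other_processes):
        self.goal_stack = goal_stack
        self.other_processes = other_processes

    def choose_action(self, percept):
        # Each process proposes an (urgency, action) pair; the goal
        # stack contributes just one proposal among several, so it
        # never monopolizes control of the system.
        proposals = [p.propose(percept) for p in self.other_processes]
        goal = self.goal_stack.current()
        if goal is not None:
            proposals.append(goal.propose(percept))
        return max(proposals)[1] if proposals else None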

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread David Hart
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote: Richard, This is certainly true, and is why in Novamente we use a goal stack only as one aspect of cognitive control... Ben, Could you elaborate for the list on some of the nuances between [explicit] cognitive control and [implicit]