Ok,
A lot has been thrown around here about top-level goals, but no real
definition has been given, and I am confused, as the term seems to cover
a lot of ground for some people.
What 'level' is meant, and what are these top-level goals for humans/AGIs?
It seems that Staying Alive is a big one, but that
Regarding the definition of goals and supergoals, I have made attempts at:
http://www.agiri.org/wiki/index.php/Goal
http://www.agiri.org/wiki/index.php/Supergoal
The scope of human supergoals has been moderately well articulated by
Maslow, IMO: physiological, safety, love/belonging, esteem, and
self-actualization.
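To make that scope concrete, here is a rough sketch of my own (not taken
from the wiki pages; the helper function and the 0.5 threshold are just
invented for illustration) treating the Maslow levels as a prioritized
supergoal set, in Python:

# Rough illustration only: Maslow's levels treated as a prioritized
# supergoal set.  Lower index = more basic need = higher priority when
# unsatisfied.  The helper and threshold below are invented.

MASLOW_SUPERGOALS = [
    "physiological",       # food, water, sleep, homeostasis
    "safety",              # security of body, resources, health
    "love/belonging",      # friendship, family, intimacy
    "esteem",              # achievement, respect, confidence
    "self-actualization",  # creativity, growth, problem solving
]

def most_urgent_supergoal(satisfaction, threshold=0.5):
    """Return the most basic level whose satisfaction is below threshold."""
    for level in MASLOW_SUPERGOALS:
        if satisfaction.get(level, 0.0) < threshold:
            return level
    return MASLOW_SUPERGOALS[-1]

print(most_urgent_supergoal({"physiological": 0.9, "safety": 0.2}))  # -> "safety"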
The statement "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so. Philosophically, it's more certain than
"I think, therefore I am."
If you maintain your assertion, I'll put you in my killfile, because
we cannot
Ok,
That is a start, but you don't draw a distinction there between externally
required goals and internally created goals.
And what is the smallest set of external goals you expect to give?
Would you or would you not force as top-level the Physiological goals (per
the wiki page you cited) from signals,
For a baby AGI, I would force the physiological goals, yeah.
In practice, baby Novamente's only explicit goal is getting rewards
from its teacher. Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes.
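Very roughly, that explicit/implicit split looks something like the
following sketch (a Python illustration invented for this email, *not*
Novamente's actual code; all the class, method and action names are made
up):

# Invented sketch of the explicit-vs-implicit goal split described above;
# this is not Novamente code, and all names are made up for illustration.

class BabySystem:
    def __init__(self):
        # The single *explicit* goal: get reward signals from the teacher.
        self.explicit_goals = ["maximize_teacher_reward"]
        self.memory = []

    def cognitive_step(self, percept):
        # *Implicit* goals such as "learn new information" are not
        # represented as goal objects at all; they fall out of how the
        # internal processes are wired (e.g. attention biased toward
        # novelty, memory consolidation).
        if percept not in self.memory:      # novelty-biased attention
            self.memory.append(percept)     # "learning" happens implicitly
        return self.choose_action()

    def choose_action(self):
        # Action selection serves only the explicit teacher-reward goal.
        return "do_whatever_earned_reward_last_time"

baby = BabySystem()
print(baby.cognitive_step("red ball"))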
IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is as it should be: a mind needs a certain level
of maturity to really reflect on its own top-level goals, so it would
be architecturally foolish to build a mind that allowed revision of
its supergoals at too early a stage of development.
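Architecturally, what I have in mind is something like this sketch
(again just an invented Python illustration, not anyone's real design;
the maturity measure and the 0.8 threshold are arbitrary placeholders):

# Invented sketch of maturity-gated supergoal revision; the maturity
# measure and threshold are arbitrary placeholders.

MATURITY_THRESHOLD = 0.8   # "enough reflective capability"

class Mind:
    def __init__(self, supergoals, reflective_skill=0.0):
        self.supergoals = list(supergoals)
        self.reflective_skill = reflective_skill  # assumed to grow with experience

    def maturity(self):
        return self.reflective_skill

def try_revise_supergoal(mind, old_goal, new_goal):
    """Permit top-level goal revision only in a sufficiently mature mind."""
    if mind.maturity() < MATURITY_THRESHOLD:
        return False                  # too early: supergoals stay fixed
    if old_goal not in mind.supergoals:
        return False
    mind.supergoals.remove(old_goal)
    mind.supergoals.append(new_goal)
    return True

infant = Mind(["maximize_teacher_reward"], reflective_skill=0.1)
print(try_revise_supergoal(infant, "maximize_teacher_reward", "seek_novelty"))  # False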
Also, could both or either of you describe a little more the idea of your
goal-stacks and how they should/would function?
James
David Hart [EMAIL PROTECTED] wrote:
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
ben
On 11/29/06, Philip Goetz [EMAIL PROTECTED] wrote:
On 11/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The goal-stack AI might very well turn out simply not to be
On 11/30/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Richard,
This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...
Ben,
Could you elaborate for the list on some of the nuances between [explicit]
cognitive control and [implicit] cognitive processes?
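My current, quite possibly wrong, picture is something like the sketch
below (invented by me as a reading of "only one aspect", definitely not
Novamente code; the other influences and the fixed weights are made up),
where the goal stack is just one of several inputs to action selection:

# Invented sketch (not Novamente code): the goal stack as just one of
# several influences on each control cycle.

def control_cycle(goal_stack, percept):
    candidates = []

    # 1. Explicit, goal-stack-driven control: pursue the top goal.
    if goal_stack:
        candidates.append(("pursue_goal", goal_stack[-1], 0.6))

    # 2. Other aspects of cognitive control, not driven by the goal stack:
    candidates.append(("attend_to_novelty", percept, 0.3))  # curiosity-like bias
    candidates.append(("reflex_response", percept, 0.1))    # hard-wired reaction

    # Some arbitration picks among them; here, crudely, by a fixed weight.
    action, target, _ = max(candidates, key=lambda c: c[2])
    return action, target

print(control_cycle(["maximize_teacher_reward"], "loud noise"))
# -> ('pursue_goal', 'maximize_teacher_reward')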