Ok,
One more problem I have with goals and autonomous AGI: in humans, it appears
that we really have two major motivational factors, physiological needs and
personal 'likes'.
If you are working on an AGI that will truly be autonomous, what are its base
motivations? Most AGIs will have ...
Initially, the Novamente system's motivations will be:
-- please its human teachers
-- make sure its goal system maintains certain desirable meta-goal properties
-- learn and create new information
Designing the right initial goal system for the representationally
explicit portion of the ...
On 12/8/06, James Ratcliff [EMAIL PROTECTED] wrote:
What are the meta-goal properties defined there?
For example:
-- have as few distinct supergoals as possible
-- keep the supergoals as simple as possible
-- avoid logical contradiction between supergoals
-- minimize pragmatic, probabilistic ...
(A rough sketch of what checking these properties might look like follows below.)
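To make the meta-goal properties concrete, here is a minimal sketch in Python of how such checks might look. This is purely illustrative, not Novamente code; the Goal structure, the complexity measure, and the contradiction test are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Goal:
    # Hypothetical supergoal representation (not Novamente's actual one).
    name: str
    complexity: int      # e.g. description length of the goal predicate
    asserts: frozenset   # propositions the goal pushes toward
    denies: frozenset    # propositions the goal pushes away from

def contradicts(a: Goal, b: Goal) -> bool:
    # Crude logical-contradiction test: one goal asserts what the other denies.
    return bool(a.asserts & b.denies) or bool(b.asserts & a.denies)

def meta_goal_problems(supergoals, max_count=3, max_complexity=10):
    """Return violations of the desirable meta-goal properties listed above."""
    problems = []
    if len(supergoals) > max_count:
        problems.append("too many distinct supergoals")
    problems += [f"supergoal '{g.name}' is too complex"
                 for g in supergoals if g.complexity > max_complexity]
    problems += [f"'{a.name}' logically contradicts '{b.name}'"
                 for a, b in combinations(supergoals, 2) if contradicts(a, b)]
    return problems
```

The point is just that each meta-goal property becomes a checkable predicate over the supergoal set, something the system could evaluate whenever it revises its own goals.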
I intend to start at a somewhat higher age level: a teen, or a
reduced-knowledge adult, ...
That is not possible in an approach that, like Novamente, is primarily
experiential-learning-based...
-- Ben
Right now, the only representationally explicit goal is 'please the
teacher'. Learning/creating information is, as of now, left as an
implicit goal. But once the system has reached Piaget's formal stage,
it will be useful to make learning/creating information a reflectively
(and possibly ...
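To picture the explicit/implicit distinction, here is a hedged sketch (my illustration, not how Novamente actually represents goals): an explicit goal is an inspectable structure the system deliberately works toward, while an implicit goal is just a regularity of the system's dynamics. The world interface and the novelty bias below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ExplicitGoal:
    # Representationally explicit: the system can inspect and reason
    # about this structure directly.
    name: str
    satisfaction: float = 0.0

explicit_goals = [ExplicitGoal("please the teacher")]

def act(world):
    # The implicit goal of learning/creating information is not stored
    # anywhere; it is a side effect of action selection (here, a
    # hypothetical bias toward novel experiences).
    candidates = world.possible_actions()
    return max(candidates,
               key=lambda a: a.expected_teacher_approval + 0.1 * a.novelty)
```

Making learning explicit, as suggested for the formal stage, would amount to promoting the novelty term into its own inspectable ExplicitGoal.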
I think that separating language learning from commonsense learning as
you're doing is a possibly viable option, but a tricky one, as in
humans the two kinds of learning are tightly bound together...
ben g
On 12/8/06, James Ratcliff [EMAIL PROTECTED] wrote:
Well, badly worded then. I can't ...
Humans give subtler rewards to each other (not just one-dimensional
rewards) because we share a complex emotional/social system.
Potentially, AGIs could learn to accept complex, nuanced rewards from
humans via interacting with them in a sim world for a while, in a
variety of situations...
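One way to picture 'nuanced rewards' versus one-dimensional ones is a multi-channel reward structure. This sketch is mine, and the channel names are made up for illustration:

```python
from dataclasses import dataclass

# One-dimensional reward: all feedback collapsed into a single number.
scalar_reward = 0.7

@dataclass
class SocialReward:
    # A multi-channel reward, as a human teacher might implicitly give.
    approval: float    # explicit praise or criticism
    warmth: float      # emotional tone of the interaction
    attention: float   # how engaged the teacher stayed
    surprise: float    # whether the behavior was unexpected

reward = SocialReward(approval=0.8, warmth=0.6, attention=0.9, surprise=0.2)
```

An AGI raised in a sim world would have to learn to read and weigh such channels across many situations, rather than being handed a single pre-digested number.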
Yeah, I am trying to be careful to skirt the bounds of many of the fields of
AI, and not get stuck in the full complexity of any of them :} A tightrope to
walk, but I believe that if you have an AGI that can, at a minimum,
communicate effectively, then it will be OK.
And this of course does not ...
Another aspect I have had to handle is the different temporal aspects of
goals/states: immediate gains vs. short-term and long-term goals, and how
they can coexist. This is difficult to grasp as well.
In Novamente, this is dealt with by having goals explicitly refer to time-scope.
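I don't know the internal details, but the idea of goals that explicitly refer to time-scope can be sketched as follows; the structure and numbers are hypothetical, not Novamente's:

```python
from dataclasses import dataclass

@dataclass
class TimeScopedGoal:
    name: str
    horizon: float      # seconds into the future the goal's payoff is evaluated
    importance: float   # base importance, independent of time-scale

goals = [
    TimeScopedGoal("avoid immediate damage", horizon=1.0,    importance=1.0),
    TimeScopedGoal("finish current task",    horizon=3600.0, importance=0.6),
    TimeScopedGoal("acquire new skills",     horizon=3.15e7, importance=0.8),
]

def weight(goal: TimeScopedGoal, planning_horizon: float) -> float:
    # A goal only competes for attention when the current planning
    # horizon reaches its time-scope, so immediate, short-term, and
    # long-term goals coexist without fighting over a single scale.
    return goal.importance if goal.horizon <= planning_horizon else 0.0
```

Because each goal carries its own horizon, 'immediate gains vs. long-term goals' stops being one trade-off knob and becomes a question of which goals are in scope for the current deliberation.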
Hi Richard,
Once again, I have to say that this characterization ignores the
distinctions I have been making between goal-stack (GS) systems and
diffuse motivational constraint (DMC) systems. As such, it only
addresses one set of possibilities for how to drive the behavior of an AGI.
I believe that the human mind incorporates *both* a set of goal
stacks (mainly useful in deliberative thought) *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought). I suggest that functional AGI systems will have to do so,
also.
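For readers following the GS/DMC distinction, here is a toy contrast; it is purely illustrative and corresponds to neither Richard's nor Ben's actual designs:

```python
from dataclasses import dataclass

# Goal-stack (GS): deliberative thought pushes and pops explicit subgoals,
# and the top of the stack dictates the single current focus.
goal_stack = ["answer email", "compose reply", "recall Pei's argument"]
current_focus = goal_stack[-1]

@dataclass
class Action:
    name: str
    tags: frozenset
    energy_cost: float

# Diffuse motivational constraints (DMC): many weak, always-on biases
# that score any candidate action without dictating a unique next step.
constraints = [
    (0.3, lambda a: 1.0 if "polite" in a.tags else 0.0),
    (0.2, lambda a: 1.0 if a.energy_cost < 5.0 else 0.0),
    (0.5, lambda a: 1.0 if "novel" in a.tags else 0.0),
]

def dmc_score(action: Action) -> float:
    return sum(w * f(action) for w, f in constraints)

# e.g. dmc_score(Action("reply politely", frozenset({"polite"}), 2.0)) == 0.5
```

In the hybrid described here, the stack would steer deliberate reasoning while the diffuse constraints continuously shade which actions feel preferable.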
Also, I believe ...
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
I can see now that my language of 'implicit' versus 'explicit' goals is
confusing in a non-Novamente context, and actually even in a Novamente
context. Let me try to rephrase the distinction:
IMPLICIT GOAL: a ...
On 12/7/06, Ben Goertzel [EMAIL PROTECTED] wrote:
> Pei,
> As usual, comparing my views to yours reveals subtle differences in terminology!
It surely does, though this time there seems to be more than terminology.
There are two issues:
(1) the implicit goals vs. explicit goals issue --- we ...