The topic of the relation between rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...
-- Ben
***
SUPERGOALS VERSUS SUBGOALS
That sounds good so far.
Now how can we program all of that? :}
Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains versus short-term and long-term goals, and
how they can coexist. This is difficult to grasp as well.
Your baby AGI
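Here is a toy sketch of the supergoal/subgoal part of that "how can we
program all of that" question. Every name in it is invented for
illustration; it is not code from any actual system:

from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal node: subgoals are justified by the supergoals they serve."""
    name: str
    supergoal: "Goal | None" = None               # None marks a top-level supergoal
    subgoals: list = field(default_factory=list)

    def add_subgoal(self, name: str) -> "Goal":
        """Derive a subgoal whose value is purely instrumental to this goal."""
        child = Goal(name, supergoal=self)
        self.subgoals.append(child)
        return child

stay_fed = Goal("keep the organism fed")
shop = stay_fed.add_subgoal("buy groceries")
assert shop.supergoal is stay_fed   # the subgoal matters only via its supergoal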
Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains versus short-term and long-term goals, and
how they can coexist. This is difficult to grasp as well.
In Novamente, this is dealt with by having goals explicitly refer to time-scope.
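For instance (a minimal sketch under my own invented names, not actual
Novamente code), each goal can carry its time-scope as data, so immediate,
short-term, and long-term goals coexist and are simply weighted differently
when the system selects actions:

from dataclasses import dataclass

@dataclass
class TimeScopedGoal:
    name: str
    horizon: float      # explicit time-scope of the goal, in seconds
    importance: float   # how much the goal matters within its scope

goals = [
    TimeScopedGoal("avoid the obstacle", horizon=1.0, importance=0.9),
    TimeScopedGoal("finish today's task", horizon=3600.0, importance=0.7),
    TimeScopedGoal("learn the domain", horizon=30 * 86400.0, importance=0.8),
]

def near_term_weight(goal, lookahead=10.0):
    # Goals whose scope fits inside the current lookahead dominate;
    # longer-scoped goals keep a nonzero weight, so they coexist.
    return goal.importance * min(1.0, lookahead / goal.horizon)

for g in sorted(goals, key=near_term_weight, reverse=True):
    print(g.name, round(near_term_weight(g), 3))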
Ben,
Very nice --- we do need to approach this topic in a systematic manner.
In the following, I'll first make some position statements, then
comment on your email.
Position statements:
(1) The system's behaviors are driven by its existing tasks/goals.
(2) At any given time, there are
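Position statement (1) can be pictured as a minimal control loop. This is a
sketch only; every name below is invented:

def agent_step(task_queue, select, execute):
    # One cycle of a system whose behavior is driven entirely by its
    # existing tasks/goals: choose a task, act on it, and let the action
    # possibly derive new tasks.
    if not task_queue:
        return
    task = select(task_queue)
    task_queue.remove(task)
    task_queue.extend(execute(task) or [])

tasks = ["greet user"]
agent_step(tasks,
           select=lambda q: q[0],
           execute=lambda t: ["log the greeting"])
print(tasks)   # ['log the greeting']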
Ben Goertzel wrote:
The relationship between rationality and goals is fairly
subtle, and something I have been thinking about recently
Ben, as you know, I admire and appreciate your thinking but have always
perceived an inside-outness with your approach (which we have
discussed before)
Hi,
It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very narrow-AI approach and destined to fail in general
application.
I think it captures a certain portion of what occurs in the human
mind. Not a large portion, perhaps, but an important portion.
Hi Richard,
Once again, I have to say that this characterization ignores the
distinctions I have been making between goal-stack (GS) systems and
diffuse motivational constraint (DMC) systems. As such, it only
addresses one set of possibilities for how to drive the behavior of an AGI.
And I believe that the human mind incorporates *both* a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought). I suggest that functional AGI systems will have to do so,
also.
Also, I believe
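One way to picture that "both" claim, as a toy sketch (invented names,
nobody's actual architecture): the goal stack proposes candidate winners,
while diffuse constraints bias every choice, so no single goal fully
determines behavior.

class HybridMotivation:
    def __init__(self, constraints):
        self.goal_stack = []            # explicit, deliberative goals (GS side)
        self.constraints = constraints  # (weight, score_fn) pairs (DMC side)

    def push_goal(self, goal):
        self.goal_stack.append(goal)

    def choose(self, actions):
        def score(action):
            top = self.goal_stack[-1] if self.goal_stack else None
            gs = 1.0 if top in action.get("serves", []) else 0.0
            dmc = sum(w * f(action) for w, f in self.constraints)
            return gs + dmc
        return max(actions, key=score)

motiv = HybridMotivation(constraints=[(0.5, lambda a: -a.get("risk", 0.0))])
motiv.push_goal("fetch data")
best = motiv.choose([
    {"name": "query server", "serves": ["fetch data"], "risk": 0.2},
    {"name": "do nothing", "serves": [], "risk": 0.0},
])
print(best["name"])   # serving the stacked goal outweighs the risk penalty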
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
I can see now that my language of implicit versus explicit goals is
confusing in a non-Novamente context, and actually even in a Novamente
context. Let me try to rephrase the distinction
IMPLICIT GOAL: a
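On one possible reading (a guess at the general shape of the distinction,
not the actual definitions), the difference fits in a few lines: an explicit
goal is represented as data the system can inspect, while an implicit goal
is whatever its dynamics in fact tend to pursue.

class Agent:
    def __init__(self):
        self.explicit_goals = ["answer user queries"]  # represented, inspectable
        self.battery = 1.0

    def act(self):
        # The control code idles when power runs low, so "conserve energy"
        # operates as an implicit goal: pursued by the dynamics, yet it
        # appears nowhere in explicit_goals.
        if self.battery < 0.2:
            return "idle"
        self.battery -= 0.1
        return "answer"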
On 12/7/06, Ben Goertzel wrote:
Pei,
As usual, comparing my views to yours reveals subtle differences in terminology!
It surely does, though this time there seems to be more than terminology.
There are two issues:
(1) the implicit goals vs. explicit goals issue --- we
Brian Atkins wrote:
J. Storrs Hall wrote:
Actually the ability to copy skills is the key item, imho, that
separates humans from the previous smart animals. It made us a
memetic substrate. In terms of the animal kingdom, we do it very,
very well. I'm sure that AIs will be able to as well,
sam kayley wrote:
'Integrable on the other end' is a rather large issue to shove under the
carpet in five words ;)
Indeed :-)
For two AIs recently forked from a common parent, probably, but for AIs
with different 'life experiences' and resulting different conceptual
structures, why
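A toy sketch of that forking point (all names invented): copying a skill is
verbatim when two AIs share a conceptual schema, and needs an explicit
translation step when they do not, and writing that translator is exactly
the issue being shoved under the carpet.

class AI:
    def __init__(self, schema):
        self.schema = schema   # stands in for the AI's conceptual structure
        self.skills = {}

def copy_skill(name, source, target, translate=None):
    params = source.skills[name]
    if source.schema == target.schema:
        # Recently forked from a common parent: representations line up.
        target.skills[name] = dict(params)
    elif translate is not None:
        # Different 'life experiences': the skill must be re-grounded
        # in the target's own concepts.
        target.skills[name] = translate(params)
    else:
        raise ValueError("incompatible conceptual structures, no translator")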