Re: [agi] RSI - What is it and how fast?

2006-12-07 Thread Brian Atkins

sam kayley wrote:


'Integrable on the other end' is a rather large issue to shove under the 
carpet in five words ;)


Indeed :-)



For two AIs recently forked from a common parent, probably, but for AIs 
with different 'life experiences' and resulting different conceptual 
structures, why should a severed mindpart be meaningful without 
translation into a common representation, i.e. a language?


If the language could describe things that are not introspectable in 
humans, it would help, but there would still be a process of translation 
which I see no reason to think would be anything like as fast and 
lossless as copying a file.


And as Hall points out, even if direct transfer is possible, it may 
often be better not to do so to make improvement of the skill possible.




Well, one relatively easy way to get at least part way around this would be for 
the two AGIs to agree beforehand on a common format for sharing skill data. 
Such a format could specify many things, such as labeled inputs/outputs, the 
input/output formats the skill module uses, and so on. If one AGI then exported 
the skill to this format, and the other wrote an import function, then I think 
this should be plausible. Or, if an import function is too hard for some reason, 
the receiving AGI could run the exported skill on a skill-format virtual machine, 
feeding it the right inputs and collecting and using the outputs.
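
To make the proposal concrete, here is a minimal Python sketch of what such a 
pre-agreed skill-exchange format might look like. Every name, field, and 
"payload format" here is invented for illustration, and the actual 
translation/VM steps are deliberately left as placeholders, since they are the 
hard part being debated in this thread.

import json
from typing import Callable, Dict, List

# Hypothetical pre-agreed interchange format: a skill is packaged with
# declared input/output labels plus an opaque payload produced by the
# exporting AGI.
class SkillPackage:
    def __init__(self, name: str, inputs: List[str], outputs: List[str],
                 payload: bytes, payload_format: str):
        self.name = name
        self.inputs = inputs                  # labeled inputs the skill expects
        self.outputs = outputs                # labeled outputs it produces
        self.payload = payload                # the skill bits themselves
        self.payload_format = payload_format  # e.g. "skill-vm-bytecode-v1" (made up)

    def to_wire(self) -> bytes:
        """Serialize the agreed envelope for lossless transfer to the other AGI."""
        header = json.dumps({
            "name": self.name,
            "inputs": self.inputs,
            "outputs": self.outputs,
            "payload_format": self.payload_format,
            "payload_len": len(self.payload),
        }).encode()
        return header + b"\n" + self.payload

# Option 1: the receiving AGI imports the skill, translating the payload into
# a module in its own internal representation.  The translation step is left
# as an unspecified callable.
def import_skill(pkg: SkillPackage, translate: Callable[[bytes], object]) -> object:
    return translate(pkg.payload)

# Option 2: no import at all -- execute the payload on a shared "skill VM" and
# treat the skill as an external tool, feeding labeled inputs and collecting
# labeled outputs.
def run_on_skill_vm(pkg: SkillPackage,
                    vm_step: Callable[[bytes, Dict[str, object]], Dict[str, object]],
                    inputs: Dict[str, object]) -> Dict[str, object]:
    missing = set(pkg.inputs) - set(inputs)
    if missing:
        raise ValueError(f"missing declared inputs: {missing}")
    return vm_step(pkg.payload, inputs)

Option 2 sidesteps integration entirely, but leaves the skill as a black box, 
which connects to Hall's point about forgoing further improvement.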


Could such a chunk of bits also be further learned from and improved, rather than 
just used as an external tool by the new AGI? I'm not sure, but I would lean 
towards saying yes if the import code used by the second AGI takes the skill-format 
bits and uses them to generate an integrated mind module in its own special 
internal format.


I think getting too far into these technical details goes beyond my own skills, so 
I should stop here and retreat to my original idea: because the bits for a skill 
are there in the first AGI, and because two AGIs can transmit lossless data bits 
directly between themselves quickly (compared to humans), this could, at least 
hypothetically, create a "direct" skill-sharing capability which humans do not 
have.


We all have a lot of hypotheses at this point in history; I am just trying to 
err towards the cautious ones rather than those that could be dangerous if proven wrong.

--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RSI - What is it and how fast?

2006-12-07 Thread sam kayley



Brian Atkins wrote:

J. Storrs Hall wrote:


Actually the ability to copy skills is the key item, imho, that 
separates humans from the previous smart animals. It made us a 
memetic substrate. In terms of the animal kingdom, we do it very, 
very well.  I'm sure that AIs will be able to as well, but probably 
it's not quite as simple as simply copying a subroutine library from 
one computer to another.


The reason is learning. If you keep the simple-copy semantics, no 
learning happens when skills are transferred. In humans, a learning 
step is forced, contributing to the memetic evolution of the skill. 
IMO, AGIs plausibly could actually transfer full, complete skills 
including whatever learning is part of it. It's all computer bits 
sitting somewhere, and they should be transferable and then integrable 
on the other end.


If so, this is far more powerful, new, and distinct than a newbie 
tennis player watching a pro, and trying to learn how to serve that 
well over a period of years, or a math student trying to learn 
calculus. Even aside from the dramatic time scale difference, humans 
can never transfer their skills fully exactly in a lossless-esque 
fashion.
'Integrable on the other end' is a rather large issue to shove under the 
carpet in five words ;)


For two AIs recently forked from a common parent, probably, but for AIs 
with different 'life experiences' and resulting different conceptual 
structures, why should a severed mindpart be meaningful without 
translation into a common representation, i.e. a language?


If the language could describe things that are not introspectable in 
humans, it would help, but there would still be a process of translation 
which I see no reason to think would be anything like as fast and 
lossless as copying a file.


And as Hall points out, even if direct transfer is possible, it may 
often be better not to do so to make improvement of the skill possible.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

(2) the "supergoals vs. subgoals" issue --- this is where I disagree
with what you said. Even though you mentioned topics like "goal
alienation", you still suggest that to a large extent it is the
"supergoals" that determine the system's goal-oriented activities,
while I believe the system's behaviors are to a larger extent driven
by the derived goals, or your "subgoals".


No, I don't disagree with you on this point at all, I must have just
phrased things misleadingly somehow...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Pei Wang

On 12/7/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Pei,

As usual, comparing my views to yours reveals subtle differences in terminology!


It surely does, though this time there seems to be more than terminology.

There are two issues:

(1) the "implicit goals vs. explicit goals" issue --- we don't differ
too much here. Now I see what you mean, which is like the "automatic
behavior vs. deliberate behavior" distinction in psychology. In NARS,
there are also "goals" (regulated activities) that the system doesn't
represent or explicitly control.

(2) the "supergoals vs. subgoals" issue --- this is where I disagree
with what you said. Even though you mentioned topics like "goal
alienation", you still suggest that to a large extent it is the
"supergoals" that determine the system's goal-oriented activities,
while I believe the system's behaviors are to a larger extent driven
by the derived goals, or your "subgoals".

Pei




I can see now that my language of implicit versus explicit goals is
confusing in a non-Novamente context, and actually even in a Novamente
context.  Let me try to rephrase the distinction

IMPLICIT GOAL: a function that, from the perspective of an "objective
observer" that is not the system, the system **appears to be**
striving to maximize

REFLECTIVELY EXPLICIT GOAL: a function that the system conceptualizes,
to itself, as something it is trying to maximize

REPRESENTATIONALLY EXPLICIT GOAL: a function that is explicitly
represented in the system's knowledge representation, and that is (at
least weakly) an implicit goal


What you call "goals" in NARS, I believe, are what I would call
"representationally explicit goals," which are generally going to be
implicit goals only weakly.  I.e., they are cognitive phenomena that
the system

-- may or may not explicitly conceptualize to itself, via its
reflective awareness, as being goals (i.e. they may or may not be
reflectively explicit goals)

-- may or may not actually act as though it is maximizing, from a
casual external observer's perspective

-- However, the system **does** try to maximize them, from the
perspective of an observer who is looking at the dynamics going on
inside the software system.

A difference between NARS and Novamente is that in Novamente many
implicit goals will **not** be representationally explicit goals.

What Richard seems to be arguing against is the assignment of a major
role to representationally explicit goals, because he does not like
explicit knowledge representation.

I am not sure if Richard is claiming that "implicit goals" as I define
them (which may be manifested inside a system as diffuse motivational
processes -- e.g. a dog's goal of getting exercise, which it may well
never explicitly represent in its mind) are not a useful tool for
analyzing the behaviors of minds.

-- Ben

> Position statements:
>
> (1) The system's behaviors are driven by its existing tasks/goals.
> (2) At any given time, there are multiple goals in the system.
> (3) A goal is meant to be achieved by the execution of certain
> operations. Usually, a goal is never fully achieved, but to a certain
> degree.
> (4) According to their origin, there are roughly two types of goals:
> given and derived. The former either come from the system's initial
> design (genetic code), or are imposed by the environment, while the
> latter are derived from the former and the system's beliefs.
> (5) A goal G1 and a belief B derive a goal G2, if the belief B states
> that achieving G2 can help, more or less, the achieving of G1.
> (6) Derived goals are "functionally autonomous", in the sense that the
> system treats them in the same way as given goals.
> (7) Goals are not necessarily consistent with each other. Even derived
> goal G2 can turn out to be inconsistent with its parent goal G1, if
> the parent belief B is wrong (which is always possible in a system
> with insufficient knowledge).
> (8) Existing goals compete for the system's available resources. The
> system does not process goals in sequential order, but in parallel,
> distributing the resources among existing goals unevenly.
> (9) The resources (mainly time, but also space) a goal gets are
> determined by its priority, which depends on several factors and
> changes from time to time. A goal is removed when its priority is too
> low.
> (10) In general, given goals have higher priority and exist for a
> longer time, but this is not always the case.
>
> All of the above are already implemented in NARS.
>
> Now let's see our agreements and disagreements.
>
> > SUPERGOALS VERSUS SUBGOALS
> > ---
> >
> > A supergoal is defined as a goal of a system that is not a subgoal of
> > any other goal of that system, to a significant extent.
>
> This distinction is the same as my "given/derived" distinction. I
> don't use your terms because they are often interpreted by people as
> meaning that a "subgoal" is always taken to be a means to achieve a
> su

Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

Pei,

As usual, comparing my views to yours reveals subtle differences in terminology!

I can see now that my language of implicit versus explicit goals is
confusing in a non-Novamente context, and actually even in a Novamente
context.  Let me try to rephrase the distinction

IMPLICIT GOAL: a function that, from the perspective of an "objective
observer" that is not the system, the system **appears to be**
striving to maximize

REFLECTIVELY EXPLICIT GOAL: a function that the system conceptualizes,
to itself, as something it is trying to maximize

REPRESENTATIONALLY EXPLICIT GOAL: a function that is explicitly
represented in the system's knowledge representation, and that is (at
least weakly) an implicit goal


What you call "goals" in NARS, I believe, are what I would call
"representationally explicit goals," which are generally going to be
implicit goals only weakly.  I.e., they are cognitive phenomena that
the system

-- may or may not explicitly conceptualize to itself, via its
reflective awareness, as being goals (i.e. they may or may not be
reflectively explicit goals)

-- may or may not actually act as though it is maximizing, from a
casual external observer's perspective

-- However, the system **does** try to maximize them, from the
perspective of an observer who is looking at the dynamics going on
inside the software system.

A difference between NARS and Novamente is that in Novamente many
implicit goals will **not** be representationally explicit goals.

What Richard seems to be arguing against is the assignment of a major
role to representationally explicit goals, because he does not like
explicit knowledge representation.

I am not sure if Richard is claiming that "implicit goals" as I define
them (which may be manifested inside a system as diffuse motivational
processes -- e.g. a dog's goal of getting exercise, which it may well
never explicitly represent in its mind) are not a useful tool for
analyzing the behaviors of minds.

-- Ben


Position statements:

(1) The system's behaviors are driven by its existing tasks/goals.
(2) At any given time, there are multiple goals in the system.
(3) A goal is meant to be achieved by the execution of certain
operations. Usually, a goal is never fully achieved, but to a certain
degree.
(4) According to their origin, there are roughly two types of goals:
given and derived. The former either come from the system's initial
design (genetic code), or are imposed by the environment, while the
latter are derived from the former and the system's beliefs.
(5) A goal G1 and a belief B derive a goal G2, if the belief B states
that achieving G2 can help, more or less, the achieving of G1.
(6) Derived goals are "functionally autonomous", in the sense that the
system treats them in the same way as given goals.
(7) Goals are not necessarily consistent with each other. Even derived
goal G2 can turn out to be inconsistent with its parent goal G1, if
the parent belief B is wrong (which is always possible in a system
with insufficient knowledge).
(8) Existing goals compete for the system's available resources. The
system does not process goals in sequential order, but in parallel,
distributing the resources among existing goals unevenly.
(9) The resources (mainly time, but also space) a goal gets are
determined by its priority, which depends on several factors and
changes from time to time. A goal is removed when its priority is too
low.
(10) In general, given goals have higher priority and exist for a
longer time, but this is not always the case.

All of the above are already implemented in NARS.

Now let's see our agreements and disagreements.

> SUPERGOALS VERSUS SUBGOALS
> ---
>
> A supergoal is defined as a goal of a system that is not a subgoal of
> any other goal of that system, to a significant extent.

This distinction is the same as my "given/derived" distinction. I
don't use your terms because they are often interpreted by people as
meaning that a "subgoal" is always taken to be a means to achieve a
supergoal, and is therefore inferior. As I said above, this is
indeed the case at the time of derivation, but not necessarily
afterwards --- this is what "functional autonomy" means.

> With this in mind, regarding creation and erasure of goals, there are
> two aspects which I prefer to separate:
>
> 1) optimizing the set of subgoals chosen in pursuit of a given set of
> supergoals.  This is well-studied in computer science and operations
> research.  Not easy computationally or emotionally, but conceptually
> straightforward to understand.

This is the case if we ignore the insufficiency of knowledge/resources
(which we should in CS and OR, but shouldn't in AI).

> 2) optimizing the set of supergoals.  This is a far, far subtler thing.
>
> Supergoal optimization must be understood from a perspective of
> dynamical systems theory, not from a perspective of logic.

Agree. However, the set of supergoals by itself does 

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore

Ben Goertzel wrote:

Hi Richard,


Once again, I have to say that this characterization ignores the
distinctions I have been making between "goal-stack" (GS) systems and
"diffuse motivational constraint" (DMC) systems.  As such, it only
addresses one set of possibilities for how to drive the behavior of an 
AGI.


And once again, I have to say that I reject this crude binary
classification of minds ;-)


Oh, come now, that was not by any means a "crude binary classification 
of minds"!! ;-)


For one thing, I said quite clearly (later in the message) that *both* 
types are needed.  (And I am sure I have said that before, too.)


My point was, and is, that most of the discussion that I ever see about 
"AGI goals" implicitly assumes the goal stack approach.  It is still 
happening now:  virtually all the discussion on this list is about 
points that are just downright meaningless in the context of a diffuse 
constraint driven system.  My only reason for continually harping on 
about the issue is that it is frustrating to see so much wasted thought.


What I tried to show (or at least to *claim*), by going through your 
post, was that several of the statements that you made about goals just 
did not make any sense when viewed from anything except a GS 
perspective.  I am glad that you accept my point about the need for a 
diffuse constraint driven system, in your summary below, but I have to 
say that this did not seem to square with the analysis given in your 
original post.



Richard Loosemore.






By discussing goals, I was not trying to imply that all aspects of a
mind (or even most) need to, or should, operate according to an
explicit goal hierarchy.

I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought).  I suggest that functional AGI systems will have to do so,
also.

-- Ben G



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Richard Loosemore

Jef Allbright wrote:

Ben Goertzel wrote:

The relationship between rationality and goals is fairly 
subtle, and something I have been thinking about recently 


Ben, as you know, I admire and appreciate your thinking but have always
perceived an "inside-outness" with your approach (which we have
discussed before) in that your descriptions of mind always seem (to me)
to begin from a point of pre-existing awareness.  (I can think of
immediate specific objections to the preceding statement, but in the
interest of expediency in this low-bandwidth discussion medium, I would
ask that you suspend immediate objections and look for the general point
I am trying to make clear.)

It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very "narrow-AI" approach and destined to fail in general
application.  Why?  Because to conceive of a goal requires a perspective
outside of and encompassing the goal system.  We can speak in a valid
way about the goals of a system, or the goals of a person, but it is
always from a perspective outside of that system.

It seems to me that a better functional description is based on
"values", more specifically the eigenvectors and eigenvalues of a highly
multidimensional model *inside the agent* which drive its behavior in a
very simple way:  It acts to reduce the difference between the internal
model and perceived reality. [The hard part is how to evolve these
recursively self-modifying patterns of behavior, without requiring
natural evolutionary time scale.]  Goals thus emerge as third-party
descriptions of behavior, or even as post hoc internal explanations or
rationalizations of its own behavior, but don't merit the status of
fundamental drivers of the behavior.

Does this make sense to you?  I've been saying this for years, but have
never gotten even a "huh?", let alone a "duh."  ;-)

- Jef


This is identical to one of the points I was making when talking about 
diffuse constraint-driven motivational systems, though we are phrasing it 
differently.


A system can be *relaxation* driven - it changes its state according to 
a large number of constraints that are always trying to do local 
gradient descent - in such a way that it looks approximately as if it 
were pursuing a kind of goal-seeking behavior.


Thus: a Boltzmann machine does not explicitly try to retrieve a 
previously matched associate of a pattern; it just relaxes its 
constraints until the pattern comes out.
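
As a concrete illustration of relaxation with no explicit goal anywhere in the 
mechanism, here is a minimal Hopfield-style sketch in Python (a deterministic 
cousin of the Boltzmann machine mentioned above; the example itself is mine, 
not from the thread). The network recovers a stored pattern purely through 
local, constraint-driven updates:

import numpy as np

# Two stored patterns; the Hebbian weight matrix encodes them as soft
# constraints between units.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted cue and repeatedly relax: each unit flips to the
# state its local constraints prefer (local descent on the network energy).
# Nothing in the loop mentions retrieval; recovering the nearest stored
# pattern is an emergent outcome of the relaxation.
rng = np.random.default_rng(0)
state = patterns[0].copy()
state[[1, 4]] *= -1                      # corrupt two bits of pattern 0

for _ in range(5):
    for i in rng.permutation(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))   # True: the cue relaxed back

Nothing in the update loop represents "retrieve the pattern"; retrieval is only 
a description an outside observer can give of the relaxation dynamics.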


If a system had several relaxation mechanisms working simultaneously, 
each of these might seem to be a "goal".  I dislike that word, as I have 
said before, precisely because it has connotations of explicitness that 
I don't buy, and because there is something else that really is an 
explicit goal (I intend to get in the car and go home later today:  this 
is a real "goal").


Your point about people taking a perspective "outside" or "inside" the 
system is the same as saying that we should not be interpreting 
behavioral characteristics (in this case, movement towards "goals") as 
if they are directly represented inside the system by a mechanism that 
explicitly encodes the goal and explicitly tries to achieve it.


The early connectionists made this one of their big issues.  (See the 
two PDP volumes for hundreds of repetitions of the same ideological 
statement).



Richard Loosemore.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought).  I suggest that functional AGI systems will have to do so,
also.


Also, I believe that diffuse motivational constraints will often lead
to what I call "implicit goals" as emergent patterns.  Thus for
instance animals with very limited deliberative, reflective awareness
may still have powerfully clear implicit goals.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

Hi Richard,


Once again, I have to say that this characterization ignores the
distinctions I have been making between "goal-stack" (GS) systems and
"diffuse motivational constraint" (DMC) systems.  As such, it only
addresses one set of possibilities for how to drive the behavior of an AGI.


And once again, I have to say that I reject this crude binary
classification of minds ;-)

By discussing goals, I was not trying to imply that all aspects of a
mind (or even most) need to, or should, operate according to an
explicit goal hierarchy.

I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought), *and* a major role for
diffuse motivational constraints (guiding most mainly-unconscious
thought).  I suggest that functional AGI systems will have to do so,
also.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Ben Goertzel

Hi,


It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very "narrow-AI" approach and destined to fail in general
application.


I think it captures a certain portion of what occurs in the human
mind.  Not a large portion, perhaps, but an important portion.


Why?  Because to conceive of a goal requires a perspective
outside of and encompassing the goal system.  We can speak in a valid
way about the goals of a system, or the goals of a person, but it is
always from a perspective outside of that system.


But, the essence of human reflective, deliberative awareness is
precisely our capability to view ourselves from a "perspective outside
ourselves." ... and then use this view to model ourselves and
ultimately change ourselves, iteratively...


It seems to me that a better functional description is based on
"values", more specifically the eigenvectors and eigenvalues of a highly
multidimensional model *inside the agent* which drive its behavior in a
very simple way:  It acts to reduce the difference between the internal
model and perceived reality.


I wouldn't frame it in terms of eigenvectors and eigenvalues, because
I don't know how you are defining addition or scalar multiplication on
this space of "mental models."

But I agree that the operation of "acting to reduce the difference
between internal models and perceived reality" is an important part of
cognition.

It is different from explicit goal-seeking, which IMO is also important.


Goals thus emerge as third-party
descriptions of behavior, or even as post hoc internal explanations or
rationalizations of its own behavior, but don't merit the status of
fundamental drivers of the behavior.


I distinguished "explicit goals" from "implicit goals."  I believe
that in your comments you are using the term "goal" to mean what I
term "explicit goal."

I think that some human behavior is driven by explicit goals, and some is not.

I agree that the identification of the **implicit goals** of a system
S (the functions the system S acts like it is seeking to maximize) is
often best done by another system outside the system S.  Nevertheless,
I think that implicit goals are worth talking about, and can
meaningfully be placed in a hierarchy.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


[agi] RE: [extropy-chat] Criticizing One's Own Goals---Rational?

2006-12-07 Thread Jef Allbright
Ben Goertzel wrote:

> The relationship between rationality and goals is fairly 
> subtle, and something I have been thinking about recently 

Ben, as you know, I admire and appreciate your thinking but have always
perceived an "inside-outness" with your approach (which we have
discussed before) in that your descriptions of mind always seem (to me)
to begin from a point of pre-existing awareness.  (I can think of
immediate specific objections to the preceding statement, but in the
interest of expediency in this low-bandwidth discussion medium, I would
ask that you suspend immediate objections and look for the general point
I am trying to make clear.)

It seems to me that discussing AI or human thought in terms of goals and
subgoals is a very "narrow-AI" approach and destined to fail in general
application.  Why?  Because to conceive of a goal requires a perspective
outside of and encompassing the goal system.  We can speak in a valid
way about the goals of a system, or the goals of a person, but it is
always from a perspective outside of that system.

It seems to me that a better functional description is based on
"values", more specifically the eigenvectors and eigenvalues of a highly
multidimensional model *inside the agent* which drive its behavior in a
very simple way:  It acts to reduce the difference between the internal
model and perceived reality. [The hard part is how to evolve these
recursively self-modifying patterns of behavior, without requiring
natural evolutionary time scale.]  Goals thus emerge as third-party
descriptions of behavior, or even as post hoc internal explanations or
rationalizations of its own behavior, but don't merit the status of
fundamental drivers of the behavior.

Does this make sense to you?  I've been saying this for years, but have
never gotten even a "huh?", let alone a "duh."  ;-)

- Jef


>  To address the issue I will introduce a series of concepts 
> related to goals.
> 
> SUPERGOALS VERSUS SUBGOALS
> ---
> 
> A supergoal is defined as a goal of a system that is not a 
> subgoal of any other goal of that system, to a significant extent.
> 
> With this in mind, regarding creation and erasure of goals, 
> there are two aspects which I prefer to separate:
> 
> 1) optimizing the set of subgoals chosen in pursuit of a 
> given set of supergoals.  This is well-studied in computer 
> science and operations research.  Not easy computationally or 
> emotionally, but conceptually straightforward to understand.
> 
> 2) optimizing the set of supergoals.  This is a far, far 
> subtler thing.
> 
> Supergoal optimization must be understood from a perspective 
> of dynamical systems theory, not from a perspective of logic.
> 
> A strongly self-modifying AI system will be able to alter its 
> own supergoals  So can a human, to an extent, with a lot 
> of effort
> 
> EXPLICIT VERSUS IMPLICIT GOALS
> 
> 
> Next, I think it is worthwhile to distinguish two kinds of goals
> -- explicit goals: those that a system believes it is pursuing
> -- implicit goals: those a system acts like it is pursuing
> 
> Definition: a "coherent goal achiever" is one whose implicit 
> goals and explicit goals are basically the same
> 
> What is interesting, then, is the dynamics of coherent goal 
> achievers that are also strongly enough self-modifying to 
> modify their supergoals  In this case, what properties 
> control the evolution of the supergoal-set over time?  This 
> is closely related to Friendly AI, of course
> 
> META-GOALS
> 
> 
> Next, there is the notion of a "meta-goal", a supergoal  
> designed to coexist with other supergoals and to regulate the 
> process of supergoal creation/erasure/modification.
> 
> For instance, a friend of mine has a metagoal of streamlining 
> and simplifying his set of supergoals.  I have a metagoal of 
> making sure my various sometimes-contradictory supergoals all 
> cooperate with each other in an open and friendly way, rather 
> than being competitive and adversarial.
> 
> RATIONALITY AND GOALS
> ---
> 
> To me, rationality has two aspects:
> 
> 1) how effectively one achieves one's explicit goals, given 
> the constraints imposed by the resources at one's disposal.
> 
> 2) how coherent one is as a goal-achiever (implicit goals = 
> explicit goals)
> 
> IMO, revising one's supergoal set is a complex dynamic process that is
> **orthogonal** to rationality.  I suppose that Nietzsche 
> understood this, though he phrased it quite differently.  His 
> notion of "revaluation of all values" is certainly closely 
> tied to the notion of supergoal-set refinement/modification
> 
> Refining the goal hierarchy underlying a given set of 
> supergoals is a necessary part of rationality, but IMO that's 
> a different sort of process...
> 
> In general, it would seem important to be aware of when

Re: [agi] Goals and subgoals

2006-12-07 Thread Pei Wang

Ben,

Very nice --- we do need to approach this topic in a systematic manner.

In the following, I'll first make some position statements, then
comment on your email.

Position statements:

(1) The system's behaviors are driven by its existing tasks/goals.
(2) At any given time, there are multiple goals in the system.
(3) A goal is meant to be achieved by the execution of certain
operations. Usually, a goal is never fully achieved, but to a certain
degree.
(4) According to their origin, there are roughly two types of goals:
given and derived. The former either come from the system's initial
design (genetic code), or are imposed by the environment, while the
latter are derived from the former and the system's beliefs.
(5) A goal G1 and a belief B derive a goal G2, if the belief B states
that achieving G2 can help, more or less, the achieving of G1.
(6) Derived goals are "functionally autonomous", in the sense that the
system treats them in the same way as given goals.
(7) Goals are not necessarily consistent with each other. Even derived
goal G2 can turn out to be inconsistent with its parent goal G1, if
the parent belief B is wrong (which is always possible in a system
with insufficient knowledge).
(8) Existing goals compete for the system's available resources. The
system does not process goals in sequential order, but in parallel,
distributing the resources among existing goals unevenly.
(9) The resources (mainly time, but also space) a goal gets are
determined by its priority, which depends on several factors and
changes from time to time. A goal is removed when its priority is too
low.
(10) In general, given goals have higher priority and exist for a
longer time, but this is not always the case.

All of the above are already implemented in NARS.
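
For readers who want the position statements pinned down operationally, here is 
a toy Python sketch (not NARS code; all names and numbers invented) of 
statements (5)-(6) and (8)-(9): goal derivation from a goal plus a belief, 
functional autonomy of the derived goal, and uneven, priority-driven resource 
sharing with decay and removal.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    priority: float                  # drifts over time; used for resource sharing
    given: bool = False              # True for design- or environment-imposed goals
    parent: Optional["Goal"] = None  # derivation parent; never consulted at runtime

@dataclass
class Belief:
    means: str                       # "achieving <means> helps achieve <end>"
    end: str
    strength: float                  # how much it is expected to help

def derive(goal: Goal, belief: Belief) -> Goal:
    """Statement (5): goal G1 plus belief 'G2 helps G1' yields derived goal G2."""
    assert belief.end == goal.name
    return Goal(name=belief.means,
                priority=goal.priority * belief.strength,
                given=False, parent=goal)

def step(goals: list, time_budget: int) -> None:
    """Statements (8)-(9): goals are processed in parallel, sharing the budget
    unevenly in proportion to priority; priorities drift and low ones are removed."""
    total = sum(g.priority for g in goals) or 1.0
    for g in goals:
        slots = round(time_budget * g.priority / total)
        # ... spend `slots` inference steps on g (body omitted) ...
        g.priority *= 0.9
    goals[:] = [g for g in goals if g.priority > 0.05]

# Statement (6), functional autonomy: the scheduler never checks g.given,
# so a derived goal is treated exactly like a given one after derivation.
stay_healthy = Goal("stay_healthy", priority=0.9, given=True)
goals = [stay_healthy,
         derive(stay_healthy, Belief("exercise", "stay_healthy", strength=0.7))]
for _ in range(10):
    step(goals, time_budget=100)
print([(g.name, round(g.priority, 3)) for g in goals])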

Now let's see our agreements and disagreements.


SUPERGOALS VERSUS SUBGOALS
---

A supergoal is defined as a goal of a system that is not a subgoal of
any other goal of that system, to a significant extent.


This distinction is the same as my "given/derived" distinction. I
don't use your terms because they are often interpreted by people as
meaning that a "subgoal" is always taken to be a means to achieve a
supergoal, and is therefore inferior. As I said above, this is
indeed the case at the time of derivation, but not necessarily
afterwards --- this is what "functional autonomy" means.


With this in mind, regarding creation and erasure of goals, there are
two aspects which I prefer to separate:

1) optimizing the set of subgoals chosen in pursuit of a given set of
supergoals.  This is well-studied in computer science and operations
research.  Not easy computationally or emotionally, but conceptually
straightforward to understand.


This is the case if we ignore the insufficiency of knowledge/resources
(which we should in CS and OR, but shouldn't in AI).


2) optimizing the set of supergoals.  This is a far, far subtler thing.

Supergoal optimization must be understood from a perspective of
dynamical systems theory, not from a perspective of logic.


Agree. However, the set of supergoals by itself does not fully determine
the system's future motivational structure, because it also heavily
depends on the system's experience, which will determine the goals
derived from the initial goal set.

That is why I said in the AGIRI Workshop that "friendly AI" is mostly
an education issue (experience control), rather than a pure design
issue (initial goal selection).


A strongly self-modifying AI system will be able to alter its own
supergoals  So can a human, to an extent, with a lot of effort


Maybe, but since a mature AI system will mostly be driven by derived
goals, it is not always necessary to modify "supergoals", even if it is
possible.


EXPLICIT VERSUS IMPLICIT GOALS


Next, I think it is worthwhile to distinguish two kinds of goals

-- explicit goals: those that a system believes it is pursuing (in the
sense of reflective, deliberative self-knowledge)

-- implicit goals: those a system acts like it is pursuing (in the
judgment of highly intelligent, unbiased observers)


Do you mean something like intention vs. reality?


Definition: a "coherent goal achiever" is one whose implicit goals and
explicit goals are basically the same

What is interesting, then, is the dynamics of coherent goal achievers
that are also strongly enough self-modifying to modify their
supergoals  In this case, what properties control the evolution of
the supergoal-set over time?  This is closely related to Friendly AI,
of course


The definition looks fine, though I don't think such a system can be
built --- no one can prove that what he wants will be what he gets.


META-GOALS


Next, there is the notion of a "meta-goal", a supergoal  designed to
coexist with other supergoals and to regulate the process of supergoal
creation/erasure/modification.

For instance,

Re: [agi] Goals and subgoals

2006-12-07 Thread Richard Loosemore

Ben Goertzel wrote:

The topic of the relation between rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...

-- Ben


Once again, I have to say that this characterization ignores the 
distinctions I have been making between "goal-stack" (GS) systems and 
"diffuse motivational constraint" (DMC) systems.  As such, it only 
addresses one set of possibilities for how to drive the behavior of an AGI.


My point in writing this reply is to make it clear that not everyone 
thinks that cognitive systems work this way.


I have sketched some of the detailed differences below.  Sorry if they 
are too compactly worded at times.




***

SUPERGOALS VERSUS SUBGOALS
---

A supergoal is defined as a goal of a system that is not a subgoal of
any other goal of that system, to a significant extent.

With this in mind, regarding creation and erasure of goals, there are
two aspects which I prefer to separate:

1) optimizing the set of subgoals chosen in pursuit of a given set of
supergoals.  This is well-studied in computer science and operations
research.  Not easy computationally or emotionally, but conceptually
straightforward to understand.


When the target of the subgoals is defined as a set of complex 
constraints operating over a large portion of the system's knowledge, 
and when the system's knowledge is (at least sometimes) encoded in 
non-explicit ways (i.e. cannot be exactly stated in predicate form), the 
"optimization" of the subgoals becomes a type of problem that has not 
been studied by computer science and operations research.  (We are 
talking about serious amounts of complex systems behavior here).




2) optimizing the set of supergoals.  This is a far, far subtler thing.


Not only subtler, but bordering on the incoherent.  "Optimization" 
implies "making it work better," which in turn implies that the system 
has some concept of criteria that can be used to judge what counts as 
"better" and what counts as "worse".  But if you had such criteria in 
hand, you could rephrase them like this:  "I think that better behavior 
would result if I changed my aspirations consistent with these criteria" 
which could then be rephrased as "I aspire to improve my aspirations 
using these criteria" ... and THAT, of course, is just a supergoal by 
any other name.


But if that is a supergoal, is it higher than the others?  And can it be 
optimized?  And what then happens if you try to optimize that one?


The concept of optimizing a supergoal seems to me to break down.

[See also comments on "meta goal" below]



Supergoal optimization must be understood from a perspective of
dynamical systems theory, not from a perspective of logic.


In the case of the DMC approach, I don't see dynamical systems theory 
getting any traction either.



A strongly self-modifying AI system will be able to alter its own
supergoals  So can a human, to an extent, with a lot of effort


In general, probably true.  But the ease and extent of the self 
modification differs greatly between GS and DMC approaches.  This is 
very much an open question.




EXPLICIT VERSUS IMPLICIT GOALS


Next, I think it is worthwhile to distinguish two kinds of goals

-- explicit goals: those that a system believes it is pursuing (in the
sense of reflective, deliberative self-knowledge)

-- implicit goals: those a system acts like it is pursuing (in the
judgment of highly intelligent, unbiased observers)


This is something I agree with, but that is because I would see a 
cognitive system as having two separate systems:  the motivational 
system, which uses DMC ideas, and a more mundane GS type of mechanism 
that deals with the day to day process of tracking explicit goals.


In other words, I don't disagree with the need for something like a GS 
mechanism, but it plays a secondary role to the motivational system.


One problem arises, however:  there is not necessarily a sharp cutoff 
between the two, so goals can have different degrees of explicitness. 
(See next comment for an illustration).



Definition: a "coherent goal achiever" is one whose implicit goals and
explicit goals are basically the same


The "coherent goal achiever" concept would simply not make any sense in 
the DMC type of motivational system I have been describing.  The two 
types of goals are implemented in completely different ways, and cannot 
be integrated to become one thing.


However, if what is meant by this concept is a minimization of the 
inconsistencies between sets of goals (some of which are explicit and 
some less so), then, yes, a DMC system would be trying to do this as 
best it could.




What is interesting, then, is the dynamics of coherent goal achievers
that are also strongly enough self-modifying to modify their
supergoals  In this ca

Re: Re: [agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

Another aspect I have had to handle is the different temporal aspects of
goals/states, like immediate gains vs. short-term and long-term goals and
how they can coexist together.  This is difficult to grasp as well.


In Novamente, this is dealt with by having goals explicitly refer to time-scope.

But indeed, supergoals with different time-scopes are prime examples
of supergoals that may contradict each other in practice (in terms of
the subgoals they generate), though being in-principle consistent with
each other.
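
A toy Python illustration (not Novamente code; the field names are invented) of 
what "goals explicitly refer to time-scope" could mean, and of two time-scoped 
supergoals that are consistent in principle yet generate subgoals that collide 
over the same block of hours:

from dataclasses import dataclass

@dataclass(frozen=True)
class TimeScopedGoal:
    name: str
    horizon_hours: float   # explicit time-scope carried by the goal
    utility: float         # value of achieving it within that horizon

# Two supergoals that are consistent in principle...
finish_draft = TimeScopedGoal("finish paper draft", horizon_hours=12, utility=5.0)
stay_rested  = TimeScopedGoal("be well rested",     horizon_hours=24, utility=4.0)

# ...but the subgoals they generate tonight both claim the same eight hours.
subgoals = {finish_draft: "work tonight", stay_rested: "sleep tonight"}
available_hours = 8

def urgency(g: TimeScopedGoal) -> float:
    # crude arbitration: discount utility by how tight the time-scope is
    return g.utility / g.horizon_hours

winner = max(subgoals, key=urgency)
print(f"tonight's {available_hours}h go to: {subgoals[winner]}")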


Your baby AGI currently is pursuing only goals externally given to it, but
soon it would need to handle things like limited resources over time, and
deciding on better goals for a longer term vs short term, and balancing the
two.


Agree ... we are not dealing with those things yet...


Also, how is your AGI handling the reward mechanism? Is it just a simple
additive number property that you are increasing via a 'pat on the head' or
'good boy' reward mechanism, or is it something internally created?


At the moment it's just a 'good boy' reward mechanism, which rewards
concrete behaviors in the sim world.  Most of the system's internal
activities are not regulated by specific goal-achievement-seeking, but
rather by the intrinsic activities of the system's cognitive
processes.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Goals and subgoals

2006-12-07 Thread James Ratcliff
That sounds good so far.

Now how can we program all of that  :}

Another aspect I have had to handle is the different temporal aspects of 
goals/states, like immediate gains vs. short-term and long-term goals and how 
they can coexist together.  This is difficult to grasp as well.

Your baby AGI currently is pursuing only goals externally given to it, but soon 
it would need to handle things like limited resources over time, and deciding 
on better goals for a longer term vs short term, and balancing the two.

Also, how is your AGI handling the reward mechanism? Is it just a simple 
additive number property that you are increasing via a 'pat on the head' or 
'good boy' reward mechanism, or is it something internally created?

James

Ben Goertzel <[EMAIL PROTECTED]> wrote: The topic of the relation between 
rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...

-- Ben

***

SUPERGOALS VERSUS SUBGOALS
---

A supergoal is defined as a goal of a system that is not a subgoal of
any other goal of that system, to a significant extent.

With this in mind, regarding creation and erasure of goals, there are
two aspects which I prefer to separate:

1) optimizing the set of subgoals chosen in pursuit of a given set of
supergoals.  This is well-studied in computer science and operations
research.  Not easy computationally or emotionally, but conceptually
straightforward to understand.

2) optimizing the set of supergoals.  This is a far, far subtler thing.

Supergoal optimization must be understood from a perspective of
dynamical systems theory, not from a perspective of logic.

A strongly self-modifying AI system will be able to alter its own
supergoals  So can a human, to an extent, with a lot of effort

EXPLICIT VERSUS IMPLICIT GOALS


Next, I think it is worthwhile to distinguish two kinds of goals

-- explicit goals: those that a system believes it is pursuing (in the
sense of reflective, deliberative self-knowledge)

-- implicit goals: those a system acts like it is pursuing (in the
judgment of highly intelligent, unbiased observers)

Definition: a "coherent goal achiever" is one whose implicit goals and
explicit goals are basically the same

What is interesting, then, is the dynamics of coherent goal achievers
that are also strongly enough self-modifying to modify their
supergoals  In this case, what properties control the evolution of
the supergoal-set over time?  This is closely related to Friendly AI,
of course

META-GOALS


Next, there is the notion of a "meta-goal", a supergoal  designed to
coexist with other supergoals and to regulate the process of supergoal
creation/erasure/modification.

For instance, one metagoal would be to streamline and simplify one's
set of supergoals.

Another  metagoal would be to make one's various
sometimes-contradictory supergoals all cooperate with each other in an
open and friendly way, rather than being competitive and adversarial.

RATIONALITY AND GOALS
---

To me, rationality has two aspects:

1) how effectively one achieves one's explicit goals, given the
constraints imposed by the resources at one's disposal.

2) how coherent one is as a goal-achiever (implicit goals = explicit goals)

IMO, revising one's supergoal set is a complex dynamic process that is
**orthogonal** to rationality.  I suppose that Nietzsche understood
this, though he phrased it quite differently.  His notion of
"revaluation of all values" is certainly closely tied to the notion of
supergoal-set refinement/modification

Refining the goal hierarchy underlying a given set of supergoals is a
necessary part of rationality, but IMO that's a different sort of
process...

In general, it would seem important to be aware of when you are
non-rationally revising a supergoal versus "merely" rationally
modifying the set of subgoals used to achieve some supergoal.  And
yet, the two processes are very closely tied together.

SUBGOAL PROMOTION AND ALIENATION


One very common phenomenon is when a supergoal is erased, but one of
its subgoals is promoted to the level of supergoal.  For instance,
originally one may become interested in science as a subgoal of
achieving greatness, but later on one may decide seeking greatness is
childish and silly, but retain the goal of advancing science as
valuable in itself (now as a supergoal rather than a subgoal).

When subgoal promotion happens unintentionally it is called subgoal
"alienation."  This happens because minds are not fully self-aware.  A
supergoal may be erased without all subgoals that it spawned being
erased along with it.  So, e.g. even though you give up your supergoal
of drinking yourself to death, you may

[agi] Goals and subgoals

2006-12-07 Thread Ben Goertzel

The topic of the relation between rationality and goals came up on the
extropy-chat list recently, and I wrote a long post about it, which I
think is also relevant to some recent discussions on this list...

-- Ben

***

SUPERGOALS VERSUS SUBGOALS
---

A supergoal is defined as a goal of a system that is not a subgoal of
any other goal of that system, to a significant extent.

With this in mind, regarding creation and erasure of goals, there are
two aspects which I prefer to separate:

1) optimizing the set of subgoals chosen in pursuit of a given set of
supergoals.  This is well-studied in computer science and operations
research.  Not easy computationally or emotionally, but conceptually
straightforward to understand.

2) optimizing the set of supergoals.  This is a far, far subtler thing.

Supergoal optimization must be understood from a perspective of
dynamical systems theory, not from a perspective of logic.

A strongly self-modifying AI system will be able to alter its own
supergoals  So can a human, to an extent, with a lot of effort

EXPLICIT VERSUS IMPLICIT GOALS


Next, I think it is worthwhile to distinguish two kinds of goals

-- explicit goals: those that a system believes it is pursuing (in the
sense of reflective, deliberative self-knowledge)

-- implicit goals: those a system acts like it is pursuing (in the
judgment of highly intelligent, unbiased observers)

Definition: a "coherent goal achiever" is one whose implicit goals and
explicit goals are basically the same

What is interesting, then, is the dynamics of coherent goal achievers
that are also strongly enough self-modifying to modify their
supergoals  In this case, what properties control the evolution of
the supergoal-set over time?  This is closely related to Friendly AI,
of course

META-GOALS


Next, there is the notion of a "meta-goal", a supergoal  designed to
coexist with other supergoals and to regulate the process of supergoal
creation/erasure/modification.

For instance, one metagoal would be to streamline and simplify one's
set of supergoals.

Another  metagoal would be to make one's various
sometimes-contradictory supergoals all cooperate with each other in an
open and friendly way, rather than being competitive and adversarial.

RATIONALITY AND GOALS
---

To me, rationality has two aspects:

1) how effectively one achieves one's explicit goals, given the
constraints imposed by the resources at one's disposal.

2) how coherent one is as a goal-achiever (implicit goals = explicit goals)

IMO, revising one's supergoal set is a complex dynamic process that is
**orthogonal** to rationality.  I suppose that Nietzsche understood
this, though he phrased it quite differently.  His notion of
"revaluation of all values" is certainly closely tied to the notion of
supergoal-set refinement/modification

Refining the goal hierarchy underlying a given set of supergoals is a
necessary part of rationality, but IMO that's a different sort of
process...

In general, it would seem important to be aware of when you are
non-rationally revising a supergoal versus "merely" rationally
modifying the set of subgoals used to achieve some supergoal.  And
yet, the two processes are very closely tied together.

SUBGOAL PROMOTION AND ALIENATION


One very common phenomenon is when a supergoal is erased, but one of
its subgoals is promoted to the level of supergoal.  For instance,
originally one may become interested in science as a subgoal of
achieving greatness, but later on one may decide seeking greatness is
childish and silly, but retain the goal of advancing science as
valuable in itself (now as a supergoal rather than a subgoal).

When subgoal promotion happens unintentionally it is called subgoal
"alienation."  This happens because minds are not fully self-aware.  A
supergoal may be erased without all subgoals that it spawned being
erased along with it.  So, e.g. even though you give up your supergoal
of drinking yourself to death, you may involuntarily retain your
subgoal of drinking (even though you started doing it only out of a
desire to drink yourself to death).
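
A toy Python sketch of the bookkeeping behind promotion and alienation, purely 
illustrative and not drawn from any system discussed here: if erasing a 
supergoal does not also garbage-collect the subgoals it spawned, the orphaned 
subgoals simply persist as supergoals in their own right.

# Goal graph: each goal maps to the supergoals it was derived from.
# An empty set means the goal now stands as a supergoal in its own right.
parents = {
    "achieve greatness": set(),
    "advance science": {"achieve greatness"},
    "drink yourself to death": set(),
    "drink": {"drink yourself to death"},
}

def erase_supergoal(name: str, garbage_collect: bool) -> None:
    parents.pop(name, None)
    for goal, supers in list(parents.items()):
        if name in supers:
            supers.discard(name)
            if not supers and garbage_collect:
                # a fully self-aware erasure removes the orphaned subgoal too
                parents.pop(goal)
            # without garbage collection the orphan persists and now acts as
            # a supergoal in its own right: intentional -> "promotion",
            # unintentional -> "alienation"

erase_supergoal("achieve greatness", garbage_collect=False)         # promotion
erase_supergoal("drink yourself to death", garbage_collect=False)   # alienation
print(sorted(parents))   # ['advance science', 'drink'] -- both now parentless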

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303