Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-21 Thread Charles D Hixson

Joel Pitt wrote:

On 12/21/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.


Definitely. I strongly dislike academics who behave like that.

Having open communication between individuals and groups instead of
running around stabbing each other's theories in the back is just
common courtesy. Unless of course they slept with your wife or
something, in which case such behaviour could possibly be excused
(even if it is scientifically/rationally the wrong way to go, we're
still slaves to our emotions).

You might check into the history of Russell's Principia Mathematica.  
Such activities are unpleasant, but have long been a part of the 
scientific community's politics.  (I'd be more explicit, but I'm not 
totally sure of the name of the mathematician who knew, long before 
Russell's publication, that the work was flawed in its basic principles, 
and I don't want to slander a named individual out of carelessness.  [And 
I could be mis-remembering the details...it's the kind of activity I 
generally ignore.])


That someone is a politician and manipulative does not mean that they 
aren't a good scientist...or we'd have very few good scientists.  If you 
aren't a politician, you can't rise in a bureaucracy.  Merit doesn't 
suffice.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-20 Thread Joel Pitt

On 12/21/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.


Definitely. I strongly dislike academics who behave like that.

Having open communication between individuals and groups instead of
running around stabbing each other's theories in the back is just
common courtesy. Unless of course they slept with your wife or
something, in which case such behaviour could possibly be excused
(even if it is scientifically/rationally the wrong way to go, we're
still slaves to our emotions).

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-20 Thread Philip Goetz

On 12/13/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:


To speak of evolution as being "forward" or "backward" is to impose upon
it our own preconceptions of the direction in which it *should* be
changing.  This seems...misguided.


Evolution includes random mutation, and natural selection.  It is
meaningful to talk of the relative strength of these effects.  If you
write a genetic algorithm, for example, you must set the mutation
rate, and the selection pressure.  Everyone who has ever run a GA has
had to make choices about that.

If you set the mutation rate too high relative to selection pressure,
you get devolution.  It is wrong to call it "evolution in a different
direction that does not appeal to your subjective values".
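
A minimal sketch of the two knobs Philip is describing, assuming a toy OneMax genetic algorithm in Python where the per-bit mutation rate and the tournament size (standing in for selection pressure) are the parameters; the names and numbers are illustrative, not from any post in this thread.

import random

GENOME_LEN = 50
POP_SIZE = 100

def fitness(genome):
    # OneMax: fitness is simply the number of 1-bits.
    return sum(genome)

def tournament_select(pop, tournament_size):
    # Larger tournaments mean stronger selection pressure.
    contestants = random.sample(pop, tournament_size)
    return max(contestants, key=fitness)

def mutate(genome, mutation_rate):
    # Each bit flips independently with probability mutation_rate.
    return [(1 - g) if random.random() < mutation_rate else g for g in genome]

def evolve(mutation_rate, tournament_size, generations=100):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop = [mutate(tournament_select(pop, tournament_size), mutation_rate)
               for _ in range(POP_SIZE)]
    return max(fitness(g) for g in pop)

# Mutation low relative to selection pressure: fitness climbs toward 50.
print(evolve(mutation_rate=0.01, tournament_size=5))
# Mutation swamps selection: the population drifts back toward chance (~25).
print(evolve(mutation_rate=0.4, tournament_size=2))

In the second setting, selection cannot keep up with the noise injected each generation, which is the "devolution" case the paragraph above points to.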


Stephen J. Gould may well have been more of a popularizer than a research
scientist, but I feel that your criticisms of his presentations are
unwarranted and made either in ignorance or malice.


Gould did in fact do significant research as well as produce a good
textbook.  But I've read many of his books, and I believe they are all
slanted towards his social agenda, which is a strong form of
relativism.

When E. O. Wilson published Sociobiology, Stephen J. Gould helped lead
a book discussion group that took several months to study the book,
and write a damning response to it.  The response did not criticize
the science, but essentially said that it was socially irresponsible
to ask the sorts of questions that Wilson asked.

That in itself is quite bad.  But what proves to me that Gould had no
interest in the scientific merits of the book is that, if he had, he
could at any time during those months have walked down one flight of
stairs and down a hall to E. O. Wilson's office, and asked him about
it.  He never did.  He never even told him they were meeting each week
to condemn it.

This one act, in my mind, is quite damning to Gould.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-19 Thread Joel Pitt

On 12/14/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:

To speak of evolution as being "forward" or "backward" is to impose upon
it our own preconceptions of the direction in which it *should* be
changing.  This seems...misguided.


IMHO, evolution tends to increase extropy and self-organisation. Thus
there is a direction to evolution. There is no direction to the random
mutations, or to the changes within an individual - only to
the system of evolving agents.

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Charles D Hixson

Philip Goetz wrote:

...
The "disagreement" here is a side-effect of postmodern thought.
Matt is using "evolution" as the opposite of "devolution", whereas
Eric seems to be using it as meaning "change, of any kind, via natural
selection".

We have difficulty because people with political agendas - notably
Stephen J. Gould - have brainwashed us into believing that we must
never speak of evolution being "forward" or "backward", and that
change in any direction is equally valuable.  With such a viewpoint,
though, it is impossible to express concern about the rising incidence
of allergies, genetic diseases, etc.

...
To speak of evolution as being "forward" or "backward" is to impose upon 
it our own preconceptions of the direction in which it *should* be 
changing.  This seems...misguided.


To claim that because all changes in the gene pool are evolution, they are 
therefore all equally valuable is to conflate two (orthogonal?) 
assertions.  Value is inherently subjective to the entity doing the 
evaluation.  Evolution, interpreted as statistical changes in the gene 
pool, is inherently objective (though, of course, measurements of it may 
well be biased).


Stephen J. Gould may well have been more of a popularizer than a research 
scientist, but I feel that your criticisms of his presentations are 
unwarranted and made either in ignorance or malice.  This is not a 
strong belief, and were evidence presented I would be willing to change 
it, but I've seen such assertions made before with equal lack of 
evidential backing, and find them distasteful.


That Stephen J. Gould had some theories of how evolution works that are 
not universally accepted by those skilled in the field does not warrant 
your comments.  Many who are skilled in the field find them either 
intriguing or reasonable.  Some find them the only reasonable proposal.  
I can't speak for "most", as I am not a professional evolutionary 
biologist, and don't know that many folk who are, but it would not 
surprise me to find that most evolutionary biologists found his 
arguments reasonable and unexceptional, if not convincing.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-13 Thread Philip Goetz

On 12/5/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Eric Baum <[EMAIL PROTECTED]> wrote:
> Matt> We have slowed evolution through medical advances, birth control
> Matt> and genetic engineering, but I don't think we have stopped it
> Matt> completely yet.
>
> I don't know what reason there is to think we have slowed
> evolution, rather than speeded it up.
>
> I would hazard to guess, for example, that since the discovery of
> birth control, we have been selecting very rapidly for people who
> choose to have more babies. In fact, I suspect this is one reason
> why the US (which became rich before most of the rest of the world)
> has a higher birth rate than Europe.


...


The main effect of medical advances is to keep children alive who would
otherwise have died from genetic weaknesses, allowing these weaknesses to be
propagated.


The "disagreement" here is a side-effect of postmodern thought.
Matt is using "evolution" as the opposite of "devolution", whereas
Eric seems to be using it as meaning "change, of any kind, via natural
selection".

We have difficulty because people with political agendas - notably
Stephen J. Gould - have brainwashed us into believing that we must
never speak of evolution being "forward" or "backward", and that
change in any direction is equally valuable.  With such a viewpoint,
though, it is impossible to express concern about the rising incidence
of allergies, genetic diseases, etc.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-05 Thread Matt Mahoney

--- Eric Baum <[EMAIL PROTECTED]> wrote:

> 
> Matt> --- Hank Conn <[EMAIL PROTECTED]> wrote:
> 
> >> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > The "goals
> >> of humanity", like all other species, was determined by >
> >> evolution.  > It is to propagate the species.
> >> 
> >> 
> >> That's not the goal of humanity. That's the goal of the evolution
> >> of humanity, which has been defunct for a while.
> 
> Matt> We have slowed evolution through medical advances, birth control
> Matt> and genetic engineering, but I don't think we have stopped it
> Matt> completely yet.
> 
> I don't know what reason there is to think we have slowed
> evolution, rather than speeded it up.
> 
> I would hazard to guess, for example, that since the discovery of 
> birth control, we have been selecting very rapidly for people who 
> choose to have more babies. In fact, I suspect this is one reason
> why the US (which became rich before most of the rest of the world)
> has a higher birth rate than Europe.

Yes, but actually most of the population increase in the U.S. is from
immigration.  Population is growing the fastest in the poorest countries,
especially Africa.

> Likewise, I expect medical advances in childbirth etc are selecting
> very rapidly for multiple births (which once upon a time often killed 
> off mother and child.) I expect this, rather than or in addition to
> the effects of fertility drugs, is the reason for the rise in 
> multiple births.

The main effect of medical advances is to keep children alive who would
otherwise have died from genetic weaknesses, allowing these weaknesses to be
propagated.

Genetic engineering has not yet had much effect on human evolution, as it has
in agriculture.  We have the technology to greatly speed up human evolution,
but it is suppressed for ethical reasons.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

For a baby AGI, I would force the physiological goals, yeah.

In practice, baby Novamente's only explicit goal is getting rewards
from its teacher.  Its other goals, such as learning new
information, are left implicit in the action of the system's internal
cognitive processes.  Its simulation world is "friendly" in the
sense that it doesn't currently need to take any specific actions in
order just to stay alive...

-- Ben
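
This is of course not Novamente code; just a hypothetical Python sketch of the pattern Ben describes, where the agent's only explicit goal is teacher reward and anything like "learning" happens implicitly inside the update rule. The class, parameters, and action names are invented for illustration.

import random
from collections import defaultdict

class BabyAgent:
    """Toy agent: its single explicit goal is teacher reward; 'learning new
    information' is implicit in how its value estimates get updated."""

    def __init__(self, actions, epsilon=0.1, lr=0.2):
        self.actions = actions
        self.epsilon = epsilon      # exploration is a side effect, not an explicit goal
        self.lr = lr
        self.value = defaultdict(float)   # estimated teacher reward per action

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def receive_reward(self, action, reward):
        # The only explicit objective: track which actions the teacher rewards.
        self.value[action] += self.lr * (reward - self.value[action])

agent = BabyAgent(actions=["babble", "imitate", "explore"])
for step in range(100):
    a = agent.choose()
    r = 1.0 if a == "imitate" else 0.0   # hypothetical teacher rewards imitation
    agent.receive_reward(a, r)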

On 12/4/06, James Ratcliff <[EMAIL PROTECTED]> wrote:

Ok,
  That is a start, but you don't draw a distinction there between externally
required goals and internally created goals.
  And what is the smallest set of external goals you expect to give?
Would you or would you not force as Top Level the Physiological goals (per the
wiki page you cited), derived from signals, presumably for a robot AGI?

What other goals are easily definable, and necessary for an AGI, and how do
we model them in such a way that they coexist with the internally created
goals?

I have worked on the rudiments of an AGI system, but am having trouble
defining its internal goal systems.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote:
 Regarding the definition of goals and supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff wrote:
> Ok,
> A lot has been thrown around here about "Top-Level" goals, but no real
> definition has been given, and I am confused as it seems to be covering a lot
> of ground for some people.
>
> What 'level' and what are these top-level goals for humans/AGIs?
>
> It seems that "Staying Alive" is a big one, but that appears to contain
> hunger/sleep and most other body-level needs.
>
> And how hard-wired are these goals, and how (simply) do we really hard-wire
> them at all?
>
> Our goal of staying alive appears to be "biologically preferred" or
> something like that, but can definitely be overridden by depression / saving
> a person in a burning building.
>
> James Ratcliff
>
>
> Ben Goertzel wrote:
> IMO, humans **can** reprogram their top-level goals, but only with
> difficulty. And this is correct: a mind needs to have a certain level
> of maturity to really reflect on its own top-level goals, so that it
> would be architecturally foolish to build a mind that involved
> revision of supergoals at the infant/child phase.
>
> However, without reprogramming our top-level goals, we humans still
> have a lot of flexibility in our ultimate orientation. This is
> because we are inconsistent systems: our top-level goals form a set of
> not-entirely-consistent objectives... so we can shift from one
> wired-in top-level goal to another, playing with the inconsistency.
> (I note that, because the logic of the human mind is probabilistically
> paraconsistent, the existence of inconsistency does not necessarily
> imply that "all things are derivable" as it would in typical predicate
> logic.)
>
> Those of us who seek to become "as logically consistent as possible,
> given the limitations of our computational infrastructure" have a
> tough quest, because the human mind/brain is not wired for
> consistency; and I suggest that this inconsistency pervades the human
> wired-in supergoal set as well...
>
> Much of the inconsistency within the human wired-in supergoal set has
> to do with time-horizons. We are wired to want things in the short
> term that contradict the things we are wired to want in the
> medium/long term; and each of our mind/brains' self-organizing
> dynamics needs to work out these evolutionarily-supplied
> contradictions on its own One route is to try to replace our
> inconsistent initial wiring with a more consistent supergoal set; the
> more common route is to oscillate chaotically from one side of the
> contradiction to the other...
>
> (Yes, I am speaking loosely here rather than entirely rigorously; but
> formalizing all this stuff would take a lot of time and space...)
>
> -- Ben F
>
>
> On 12/3/06, Matt Mahoney wrote:
> >
> > --- Mark Waser wrote:
> >
> > > > You cannot turn off hunger or pain. You cannot
> > > > control your emotions.
> > >
> > > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > > at the mercy of your emotions?
> >
> > Why must you argue with everything I say? Is this not a sensible
> statement?
> >
> > > > Since the synaptic weights cannot be altered by
> > > > training (classical or operant conditioning)
> > >
> > > Who says that synaptic weights cannot be altered? And there's endless
> > > irrefutable evidence that the sum of synaptic weights is certainly
> > > constantly altering by the directed die-off of neurons.
> >
> > But not b

Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  That is a start, but you don't draw a distinction there between externally 
required goals and internally created goals.
  And what is the smallest set of external goals you expect to give?
Would you or would you not force as Top Level the Physiological goals (per the wiki page 
you cited), derived from signals, presumably for a robot AGI?

What other goals are easily definable, and necessary for an AGI, and how do we 
model them in such a way that they coexist with the internally created goals?

I have worked on the rudiments of an AGI system, but am having trouble defining 
its internal goal systems.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote: Regarding the definition of goals and 
supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G


On 12/4/06, James Ratcliff  wrote:
> Ok,
>   A lot has been thrown around here about "Top-Level" goals, but no real
> definition has been given, and I am confused as it seems to be covering a lot
> of ground for some people.
>
> What 'level' and what are these top-level goals for humans/AGIs?
>
> It seems that "Staying Alive" is a big one, but that appears to contain
> hunger/sleep and most other body-level needs.
>
> And how hard-wired are these goals, and how (simply) do we really hard-wire
> them at all?
>
> Our goal of staying alive appears to be "biologically preferred" or
> something like that, but can definitely be overridden by depression / saving
> a person in a burning building.
>
> James Ratcliff
>
>
> Ben Goertzel  wrote:
>  IMO, humans **can** reprogram their top-level goals, but only with
> difficulty. And this is correct: a mind needs to have a certain level
> of maturity to really reflect on its own top-level goals, so that it
> would be architecturally foolish to build a mind that involved
> revision of supergoals at the infant/child phase.
>
> However, without reprogramming our top-level goals, we humans still
> have a lot of flexibility in our ultimate orientation. This is
> because we are inconsistent systems: our top-level goals form a set of
> not-entirely-consistent objectives... so we can shift from one
> wired-in top-level goal to another, playing with the inconsistency.
> (I note that, because the logic of the human mind is probabilistically
> paraconsistent, the existence of inconsistency does not necessarily
> imply that "all things are derivable" as it would in typical predicate
> logic.)
>
> Those of us who seek to become "as logically consistent as possible,
> given the limitations of our computational infrastructure" have a
> tough quest, because the human mind/brain is not wired for
> consistency; and I suggest that this inconsistency pervades the human
> wired-in supergoal set as well...
>
> Much of the inconsistency within the human wired-in supergoal set has
> to do with time-horizons. We are wired to want things in the short
> term that contradict the things we are wired to want in the
> medium/long term; and each of our mind/brains' self-organizing
> dynamics needs to work out these evolutionarily-supplied
> contradictions on its own One route is to try to replace our
> inconsistent initial wiring with a more consistent supergoal set; the
> more common route is to oscillate chaotically from one side of the
> contradiction to the other...
>
> (Yes, I am speaking loosely here rather than entirely rigorously; but
> formalizing all this stuff would take a lot of time and space...)
>
> -- Ben F
>
>
> On 12/3/06, Matt Mahoney wrote:
> >
> > --- Mark Waser wrote:
> >
> > > > You cannot turn off hunger or pain. You cannot
> > > > control your emotions.
> > >
> > > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > > at the mercy of your emotions?
> >
> > Why must you argue with everything I say? Is this not a sensible
> statement?
> >
> > > > Since the synaptic weights cannot be altered by
> > > > training (classical or operant conditioning)
> > >
> > > Who says that synaptic weights cannot be altered? And there's endless
> > > irrefutable evidence that the sum of synaptic weights is certainly
> > > constantly altering by the directed die-off of neurons.
> >
> > But not by training. You don't decide to be hungry or not, because animals
> > that could do so were removed from the gene pool.
> >
> > Is this not a sensible way to program the top level goals for an AGI?
> >
> >
> > -- Matt Mahoney, [EMAIL PROTECTED]
> >
> > -
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?list_id=303
> >
>
> -
> This list is sponsored by AGIRI: http:/

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


Richard Loosemore told me that I'm overreacting.  I can tell that I'm
overly emotional over this, so it might be true.  Sorry for flaming.
I am bewildered by Mark's statement, but I will look for a
less-inflammatory way of saying so next time.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Eric Baum

Matt> --- Hank Conn <[EMAIL PROTECTED]> wrote:

>> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: > The "goals
>> of humanity", like all other species, was determined by >
>> evolution.  > It is to propagate the species.
>> 
>> 
>> That's not the goal of humanity. That's the goal of the evolution
>> of humanity, which has been defunct for a while.

Matt> We have slowed evolution through medical advances, birth control
Matt> and genetic engineering, but I don't think we have stopped it
Matt> completely yet.

I don't know what reason there is to think we have slowed
evolution, rather than speeded it up.

I would hazard to guess, for example, that since the discovery of 
birth control, we have been selecting very rapidly for people who 
choose to have more babies. In fact, I suspect this is one reason
why the US (which became rich before most of the rest of the world)
has a higher birth rate than Europe.

Likewise, I expect medical advances in childbirth etc are selecting
very rapidly for multiple births (which once upon a time often killed 
off mother and child.) I expect this, rather than or in addition to
the effects of fertility drugs, is the reason for the rise in 
multiple births.

etc.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser

To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.


The first sentence of the proposition was exactly "You cannot turn off 
hunger." (i.e. not that not everyone can turn them off)


My response is "I certainly can -- not permanently, but certainly so 
completely that I am not aware of it for hours at a time" and further that I 
don't believe that I am at all unusual in this regard.



- Original Message - 
From: "Philip Goetz" <[EMAIL PROTECTED]>

To: 
Sent: Monday, December 04, 2006 2:01 PM
Subject: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is 
it and how fast?]




On 12/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> The statement, "You cannot turn off hunger or pain" is sensible.
> In fact, it's one of the few statements in the English language that
> is LITERALLY so.  Philosophically, it's more certain than
> "I think, therefore I am".
>
> If you maintain your assertion, I'll put you in my killfile, because
> we cannot communicate.

It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.


To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
   Can you not concentrate on something else enough that you no longer feel 
hunger?  How many people do you know that have "forgotten to eat" for hours 
at a time when sucked into computer games or other activities?


   Is the same not true of pain?  Have you not heard of yogis that have 
trained their minds to concentrate strongly enough that even the most severe 
of discomfort is ignored?  How is this not "turning off pain"?  If you're 
going to argue that the nerves are still firing and further that the mere 
fact of nerves firing is relevant to the original argument, then  feel free 
to killfile me.  The original point was that humans are *NOT* absolute 
slaves to hunger and pain.


   Are you
   a) arguing that humans *ARE* absolute slaves to hunger and pain
   OR
   b) are you beating me up over a trivial sub-point that isn't 
connected back to the original argument?


- Original Message - 
From: "Philip Goetz" <[EMAIL PROTECTED]>

To: 
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:

> Why must you argue with everything I say?  Is this not a sensible
> statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements "You cannot turn off hunger or
pain.  You cannot control your emotions" are *NOT* sensible at all.


Mark -

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

Consider as a possible working definition:
A goal is the target state of a homeostatic system.  (Don't take 
homeostatic too literally, though.)


Thus, if one sets a thermostat to 70 degrees Fahrenheit, then its goal 
is to keep the room temperature at not less than 67 degrees 
Fahrenheit.  (I'm assuming that the thermostat allows a 6-degree heat 
swing: it heats until it senses 73 degrees, then turns off the heater until 
the temperature drops below 67 degrees.)


Thus, the goal is the target at which a system (or subsystem) is aimed.

Note that with this definition goals do not imply intelligence of more 
than the most basic level.  (The thermostat senses its environment 
and reacts to adjust it to suit its goals, but it has no knowledge of 
what it is doing or why, or even THAT it is doing it.)  One could 
reasonably assert that the intelligence of the thermostat is, or at 
least has been, embodied outside the thermostat.  I'm not certain that 
this is useful, but it's reasonable, and if you need to tie goals into 
intelligence, then adopt that model.
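
A literal rendering of the thermostat example above in Python, assuming the 67/73-degree hysteresis band described there; purely illustrative.

class Thermostat:
    """Homeostatic 'goal': keep room temperature inside a hysteresis band.
    Per the numbers above: setpoint 70 F, heat up to 73 F, resume below 67 F."""

    def __init__(self, setpoint=70.0, swing=3.0):
        self.low = setpoint - swing    # 67 F: turn the heater on at or below this
        self.high = setpoint + swing   # 73 F: turn the heater off at or above this
        self.heating = False

    def step(self, room_temp):
        # The 'intelligence' here is trivial: sense and react, with no model
        # of what it is doing or why.
        if room_temp <= self.low:
            self.heating = True
        elif room_temp >= self.high:
            self.heating = False
        return self.heating

t = Thermostat()
for temp in [70, 68, 66.5, 69, 72, 73.5, 71]:
    print(temp, t.step(temp))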



James Ratcliff wrote:
Can we go back to a simpler distinction then: what are you defining 
"Goal" as?


I see the goal term as a higher-level reasoning 'tool',
wherein the body is constantly sending signals to our minds, but the 
goals are all created consciously or semi-consciously.


Are you saying we should partition the "Top-Level" goals into some 
form of physical-body-imposed goals and other types, or
do you think we should leave it up to a single Controller to 
interpret the signals coming from the body and form the goals?


In humans it looks to be the one way, but with AGIs it appears it 
would/could be another.


James

*/Charles D Hixson <[EMAIL PROTECTED]>/* wrote:

J...
Goals are important. Some are built-in, some are changeable. Habits
are also important, perhaps nearly as much so. Habits are initially
created to satisfy goals, but when goals change, or circumstances
alter,
the habits don't automatically change in synchrony.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Yes, that is what I am aiming towards here. So do we have any Top-Level goals 
of this type, or are most of these things merely signals, with the goal 
creation at another level?  And how do we define the top-level goals?

James

Ben Goertzel <[EMAIL PROTECTED]> wrote: > To allow that somewhere in the 
Himalayas, someone may be able,
> with years of training, to lessen the urgency of hunger and
> pain,

An understatement: **I** can dramatically lessen the urgency of hunger
and pain.

What the right sort of training can do is to teach you to come very
close to not attending to them at all...

But I bet these gurus are not stopping the "pain" signals from
propagating from body to brain; they are likely "just" radically
altering the neural interpretation of these signals...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Can we go back to a simpler distinction then: what are you defining "Goal" as?

I see the goal term as a higher-level reasoning 'tool', 
wherein the body is constantly sending signals to our minds, but the goals are 
all created consciously or semi-consciously.

Are you saying we should partition the "Top-Level" goals into some form of 
physical-body-imposed goals and other types, or 
do you think we should leave it up to a single Controller to interpret the 
signals coming from the body and form the goals?

In humans it looks to be the one way, but with AGIs it appears it would/could 
be another.

James

Charles D Hixson <[EMAIL PROTECTED]> wrote: James Ratcliff wrote:
> There is a needed distinction that must be made here about hunger 
> as a goal stack motivator.
>
> We CANNOT change the hunger sensation (short of physical 
> manipulations, or mind-control "stuff") as it is a given sensation that 
> comes directly from the physical body.
>
> What we can change is the placement in the goal stack, or the priority 
> position it is given.  We CAN choose to put it at the bottom of our 
> list of goals, or remove it from the list and try to starve ourselves 
> to death.
>   Our body will then continuously send the hunger signals to us, and we 
> must decide how to handle that signal.
>
> So in general, the Signal is there, but the goal is not, it is under 
> our control.
>
> James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals "above" a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
"satisfy hunger" is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the "automatic 
execution of tasks required to achieve the goal" to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.

Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain,


An understatement: **I** can dramatically lessen the urgency of hunger
and pain.

What the right sort of training can do is to teach you to come very
close to not attending to them at all...

But I bet these gurus are not stopping the "pain" signals from
propagating from body to brain; they are likely "just" radically
altering the neural interpretation of these signals...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> The statement, "You cannot turn off hunger or pain" is sensible.
> In fact, it's one of the few statements in the English language that
> is LITERALLY so.  Philosophically, it's more certain than
> "I think, therefore I am".
>
> If you maintain your assertion, I'll put you in my killfile, because
> we cannot communicate.

It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.


To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Charles D Hixson

James Ratcliff wrote:
There is a needed distinction that must be made here about hunger 
as a goal stack motivator.


We CANNOT change the hunger sensation (short of physical 
manipulations, or mind-control "stuff") as it is a given sensation that 
comes directly from the physical body.


What we can change is the placement in the goal stack, or the priority 
position it is given.  We CAN choose to put it at the bottom of our 
list of goals, or remove it from the list and try to starve ourselves 
to death.
  Our body will then continuously send the hunger signals to us, and we 
must decide how to handle that signal.


So in general, the Signal is there, but the goal is not, it is under 
our control.


James Ratcliff
That's an important distinction, but I would assert that although one 
can insert goals "above" a built-in goal (hunger, e.g.), one cannot 
remove that goal.  There is a very long period when someone on a hunger 
strike must continually reinforce the goal of not-eating.  The goal of 
"satisfy hunger" is only removed when the body decides that it is 
unreachable (at the moment). 

The goal cannot be removed by intention, it can only be overridden and 
suppressed.  Other varieties of goal, volitionally chosen ones, can be 
volitionally revoked.  Even in such cases habit can cause the "automatic 
execution of tasks required to achieve the goal" to be continued.  I 
retired years ago, and although I no longer automatically get up at 5:30 
each morning, I still tend to arise before 8:00.  This is quite a 
contrast from my time in college when I would rarely arise before 9:00, 
and always felt I was getting up too early.  It's true that with a 
minimal effort I can change things so that I get up at (nearly?) any 
particular time...but as soon as I relax it starts drifting back to 
early morning.


Goals are important.  Some are built-in, some are changeable.  Habits 
are also important, perhaps nearly as much so.  Habits are initially 
created to satisfy goals, but when goals change, or circumstances alter, 
the habits don't automatically change in synchrony.
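
A throwaway Python sketch of the point above: a habit is cached behaviour created to serve a goal, but revoking the goal does not revoke the habit. The goal and habit names are invented for illustration.

class Agent:
    def __init__(self):
        self.goals = {"catch the 6:00 train"}   # volitional, can be revoked
        self.habits = {"get up at 5:30"}        # installed to serve a goal, then cached

    def revoke_goal(self, goal):
        # Revoking the goal does nothing to the habit it once justified;
        # the habit decays (or gets retrained) on its own, much more slowly.
        self.goals.discard(goal)

agent = Agent()
agent.revoke_goal("catch the 6:00 train")
print(agent.goals)    # set(): the goal is gone
print(agent.habits)   # {'get up at 5:30'}: the habit persists, out of synchrony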


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.


It is reported that, with sufficiently advanced training in
appropriate mind-control arts (e.g. some Oriental ones), something
accurately describable as "turning off hunger or pain" becomes
possible, from a subjective experiential perspective.

I don't know if the physiological correlates of such experiences have
been studied.

Relatedly, though, I do know that physiological correlates of the
experience of "stopping breathing" that many meditators experience
have been found -- and the correlates were simple: when they thought
they were stopping breathing, the meditators were, in fact, either
stopping or drastically slowing their breathing...

Human potential goes way beyond what is commonly assumed based on our
ordinary states of mind ;-)

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Philip Goetz

On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:

> Why must you argue with everything I say?  Is this not a sensible
> statement?

I don't argue with everything you say.  I only argue with things that I
believe are wrong.  And no, the statements "You cannot turn off hunger or
pain.  You cannot control your emotions" are *NOT* sensible at all.


Mark -

The statement, "You cannot turn off hunger or pain" is sensible.
In fact, it's one of the few statements in the English language that
is LITERALLY so.  Philosophically, it's more certain than
"I think, therefore I am".

If you maintain your assertion, I'll put you in my killfile, because
we cannot communicate.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Ben Goertzel

Regarding the definition of goals and supergoals, I have made attempts at:

http://www.agiri.org/wiki/index.php/Goal

http://www.agiri.org/wiki/index.php/Supergoal

The scope of human supergoals has been moderately well articulated by
Maslow IMO:

http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs

BTW, I have borrowed from Stan Franklin the use of the term "drive" to
denote a built-in rather than learned supergoal:

http://www.agiri.org/wiki/index.php/Drive

-- Ben G
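
A hypothetical Python sketch of the terminology laid out above: a supergoal is a top-of-hierarchy goal, and a "drive" (Franklin's term) marks the built-in rather than learned kind. The Maslow-flavoured examples are placeholders, not content from the wiki pages.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    description: str
    subgoals: List["Goal"] = field(default_factory=list)

@dataclass
class Supergoal(Goal):
    # A supergoal sits at the top of the goal hierarchy; 'drive' marks the
    # built-in (wired-in) ones, as opposed to supergoals learned later.
    drive: bool = False

# Placeholder examples loosely following Maslow's hierarchy.
supergoals = [
    Supergoal("satisfy physiological needs", drive=True),
    Supergoal("stay safe", drive=True),
    Supergoal("gain esteem of peers", drive=False),   # plausibly learned
]

builtin_drives = [g for g in supergoals if g.drive]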


On 12/4/06, James Ratcliff <[EMAIL PROTECTED]> wrote:

Ok,
  A lot has been thrown around here about "Top-Level" goals, but no real
definition has been given, and I am confused as it seems to be covering a lot
of ground for some people.

What 'level' and what are these top-level goals for humans/AGIs?

It seems that "Staying Alive" is a big one, but that appears to contain
hunger/sleep and most other body-level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire
them at all?

Our goal of staying alive appears to be "biologically preferred" or
something like that, but can definitely be overridden by depression / saving
a person in a burning building.

James Ratcliff


Ben Goertzel <[EMAIL PROTECTED]> wrote:
 IMO, humans **can** reprogram their top-level goals, but only with
difficulty. And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation. This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons. We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney wrote:
>
> --- Mark Waser wrote:
>
> > > You cannot turn off hunger or pain. You cannot
> > > control your emotions.
> >
> > Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
> > at the mercy of your emotions?
>
> Why must you argue with everything I say? Is this not a sensible
statement?
>
> > > Since the synaptic weights cannot be altered by
> > > training (classical or operant conditioning)
> >
> > Who says that synaptic weights cannot be altered? And there's endless
> > irrefutable evidence that the sum of synaptic weights is certainly
> > constantly altering by the directed die-off of neurons.
>
> But not by training. You don't decide to be hungry or not, because animals
> that could do so were removed from the gene pool.
>
> Is this not a sensible way to program the top level goals for an AGI?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads!
http://www.falazar.com/projects/Torrents/tvtorrents_show.php

 
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
Ok,
  A lot has been thrown around here about "Top-Level" goals, but no real 
definition has been given, and I am confused as it seems to be covering a lot of 
ground for some people.

What 'level' and what are these top-level goals for humans/AGIs?

It seems that "Staying Alive" is a big one, but that appears to contain 
hunger/sleep and most other body-level needs.

And how hard-wired are these goals, and how (simply) do we really hard-wire 
them at all?

Our goal of staying alive appears to be "biologically preferred" or something 
like that, but can definitely be overridden by depression / saving a person in a 
burning building.

James Ratcliff

Ben Goertzel <[EMAIL PROTECTED]> wrote: IMO, humans **can** reprogram their 
top-level goals, but only with
difficulty.  And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation.  This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons.  We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own  One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)

-- Ben F


On 12/3/06, Matt Mahoney  wrote:
>
> --- Mark Waser  wrote:
>
> > > You cannot turn off hunger or pain.  You cannot
> > > control your emotions.
> >
> > Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
> > at the mercy of your emotions?
>
> Why must you argue with everything I say?  Is this not a sensible statement?
>
> > > Since the synaptic weights cannot be altered by
> > > training (classical or operant conditioning)
> >
> > Who says that synaptic weights cannot be altered?  And there's endless
> > irrefutable evidence that the sum of synaptic weights is certainly
> > constantly altering by the directed die-off of neurons.
>
> But not by training.  You don't decide to be hungry or not, because animals
> that could do so were removed from the gene pool.
>
> Is this not a sensible way to program the top level goals for an AGI?
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread James Ratcliff
There is a needed distinction that must be made here about hunger as a goal 
stack motivator.

We CANNOT change the hunger sensation (short of physical manipulations, or 
mind-control "stuff") as it is a given sensation that comes directly from the 
physical body. 

What we can change is the placement in the goal stack, or the priority position 
it is given.  We CAN choose to put it at the bottom of our list of goals, or 
remove it from the list and try to starve ourselves to death.
  Our body will then continuously send the hunger signals to us, and we must 
decide how to handle that signal.

So in general, the Signal is there, but the goal is not; it is under our 
control.

James Ratcliff
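
A toy Python sketch of the split drawn above: the goal list is under the agent's control (it can be reordered or emptied), while the body's signal keeps arriving regardless. All names and numbers are illustrative only.

class GoalStack:
    """Goals are reorderable; bodily signals are not."""

    def __init__(self):
        self.goals = []                      # list of (priority, name), agent-controlled

    def set_priority(self, name, priority):
        self.goals = [(p, n) for p, n in self.goals if n != name]
        self.goals.append((priority, name))
        self.goals.sort(reverse=True)        # highest priority first

    def drop(self, name):
        # The agent may remove the goal entirely (e.g. a hunger strike)...
        self.goals = [(p, n) for p, n in self.goals if n != name]

    def top(self):
        return self.goals[0][1] if self.goals else None

def body_signals():
    # ...but the body keeps emitting the signal no matter what the stack says.
    return {"hunger": 0.8, "fatigue": 0.3}

stack = GoalStack()
stack.set_priority("satisfy hunger", 5)
stack.set_priority("finish writing email", 7)
stack.drop("satisfy hunger")                 # goal gone, signal still present
print(stack.top(), body_signals()["hunger"])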


Matt Mahoney <[EMAIL PROTECTED]> wrote: 
--- Mark Waser  wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
> 
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
> 
> Who says that synaptic weights cannot be altered?  And there's endless 
> irrefutable evidence that the sum of synaptic weights is certainly 
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php
 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-04 Thread Mark Waser
Why must you argue with everything I say?  Is this not a sensible 
statement?


I don't argue with everything you say.  I only argue with things that I 
believe are wrong.  And no, the statements "You cannot turn off hunger or 
pain.  You cannot control your emotions" are *NOT* sensible at all.



You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.


Funny, I always thought that the animals that continued eating while 
being stalked were the ones that were removed from the gene pool (suddenly 
and bloodily).  Yes, you eventually have to feed yourself or you die, and 
animals mal-adapted enough to not feed themselves will no longer contribute 
to the gene pool, but can you disprove the equally likely contention that 
animals eat because it is very pleasurable to them and that they never feel 
hunger (or do you only have sex because it hurts when you don't)?



Is this not a sensible way to program the top level goals for an AGI?


No.  It's a terrible way to program the top level goals for an AGI.  It 
leads to wireheading, short-circuiting of true goals by faking out the 
evaluation criteria, and all sorts of other problems.


- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]





--- Mark Waser <[EMAIL PROTECTED]> wrote:


> You cannot turn off hunger or pain.  You cannot
> control your emotions.

Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
at the mercy of your emotions?


Why must you argue with everything I say?  Is this not a sensible 
statement?



> Since the synaptic weights cannot be altered by
> training (classical or operant conditioning)

Who says that synaptic weights cannot be altered?  And there's endless
irrefutable evidence that the sum of synaptic weights is certainly
constantly altering by the directed die-off of neurons.


But not by training.  You don't decide to be hungry or not, because 
animals

that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Ben Goertzel

IMO, humans **can** reprogram their top-level goals, but only with
difficulty.  And this is correct: a mind needs to have a certain level
of maturity to really reflect on its own top-level goals, so that it
would be architecturally foolish to build a mind that involved
revision of supergoals at the infant/child phase.

However, without reprogramming our top-level goals, we humans still
have a lot of flexibility in our ultimate orientation.  This is
because we are inconsistent systems: our top-level goals form a set of
not-entirely-consistent objectives... so we can shift from one
wired-in top-level goal to another, playing with the inconsistency.
(I note that, because the logic of the human mind is probabilistically
paraconsistent, the existence of inconsistency does not necessarily
imply that "all things are derivable" as it would in typical predicate
logic.)

Those of us who seek to become "as logically consistent as possible,
given the limitations of our computational infrastructure" have a
tough quest, because the human mind/brain is not wired for
consistency; and I suggest that this inconsistency pervades the human
wired-in supergoal set as well...

Much of the inconsistency within the human wired-in supergoal set has
to do with time-horizons.  We are wired to want things in the short
term that contradict the things we are wired to want in the
medium/long term; and each of our mind/brains' self-organizing
dynamics needs to work out these evolutionarily-supplied
contradictions on its own...  One route is to try to replace our
inconsistent initial wiring with a more consistent supergoal set; the
more common route is to oscillate chaotically from one side of the
contradiction to the other...

(Yes, I am speaking loosely here rather than entirely rigorously; but
formalizing all this stuff would take a lot of time and space...)
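
A toy illustration of the time-horizon conflict (my own Python sketch; the 
goals, rewards and discount rates are invented for the example):

# Two wired-in supergoals that conflict across time horizons.
# Which one "wins" depends on how steeply the agent discounts the future.
goals = {
    "eat_the_cake_now":   {"reward": 1.0, "delay": 0},    # short-term
    "stay_healthy_later": {"reward": 3.0, "delay": 100},  # long-term
}

def preferred(discount_per_step):
    def value(g):
        return g["reward"] * (discount_per_step ** g["delay"])
    return max(goals, key=lambda name: value(goals[name]))

print(preferred(0.99))   # patient discounting -> stay_healthy_later
print(preferred(0.95))   # steep discounting   -> eat_the_cake_now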

-- Ben F


On 12/3/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
>
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100%
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
>
> Who says that synaptic weights cannot be altered?  And there's endless
> irrefutable evidence that the sum of synaptic weights is certainly
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Matt Mahoney

--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > You cannot turn off hunger or pain.  You cannot
> > control your emotions.
> 
> Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
> at the mercy of your emotions?

Why must you argue with everything I say?  Is this not a sensible statement?

> > Since the synaptic weights cannot be altered by
> > training (classical or operant conditioning)
> 
> Who says that synaptic weights cannot be altered?  And there's endless 
> irrefutable evidence that the sum of synaptic weights is certainly 
> constantly altering by the directed die-off of neurons.

But not by training.  You don't decide to be hungry or not, because animals
that could do so were removed from the gene pool.

Is this not a sensible way to program the top level goals for an AGI?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Mark Waser

You cannot turn off hunger or pain.  You cannot
control your emotions.


Huh?  Matt, can you really not ignore hunger or pain?  Are you really 100% 
at the mercy of your emotions?



Since the synaptic weights cannot be altered by
training (classical or operant conditioning)


Who says that synaptic weights cannot be altered?  And there's endless 
irrefutable evidence that the sum of synaptic weights is certainly 
constantly altering by the directed die-off of neurons.




- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, December 02, 2006 9:42 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a "part of the brain which generates the reward/punishment signal
for operant conditioning."

This is behaviorism.  I find myself completely at a loss to know where
to start, if I have to explain what is wrong with behaviorism.


Call it what you want.  I am arguing that there are parts of the brain 
(e.g.

the nucleus accumbens) responsible for reinforcement learning, and
furthermore, that the synapses along the input paths to these regions are 
not

trainable.  I argue this has to be the case because an intelligent system
cannot be allowed to modify its motivational system.  Our most fundamental
models of intelligent agents require this (e.g. AIXI -- the reward signal 
is
computed by the environment).  You cannot turn off hunger or pain.  You 
cannot

control your emotions.  Since the synaptic weights cannot be altered by
training (classical or operant conditioning), they must be hardwired as
determined by your DNA.

Do you agree?  If not, what part of this argument do you disagree with?

That reward and punishment exist and result in learning in humans?

That there are neurons dedicated to computing reinforcement signals?

That the human motivational system (by which I mean the logic of computing 
the

reinforcement signals from sensory input) is not trainable?

That the motivational system is completely specified by DNA?

That all human learning can be reduced to classical and operant 
conditioning?


That humans are animals that differ only in the ability to learn language?

That models of goal seeking agents like AIXI are realistic models of
intelligence?

Do you object to behavioralism because of their view that consciousness 
and

free will do not exist, except as beliefs?

Do you object to the assertion that the brain is a computer with finite 
memory
and speed?  That your life consists of running a program?  Is this wrong, 
or

just uncomfortable?




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Charles D Hixson

Mark Waser wrote:

...
For me, yes, all of those things are good since they are on my list of 
goals *unless* the method of accomplishing them steps on a higher goal 
OR a collection of goals with greater total weight OR violates one of 
my limitations (restrictions).

...

If you put every good thing on your "list of goals", then you will have 
a VERY long list.
I would propose that most of those items listed should be derived goals 
rather than anything primary.  And that the primary goals should be 
rather few.  I'm certain that three is too few.  Probably it should be 
fewer than 200.  The challenge is so phrasing them that they:

1) cover every needed situation
2) are short enough to be debuggable
They should probably be divided into two sets.  One set would be a list 
of goals to be aimed for, and the other would be a list of filters that 
had to be passed.


Think of these as the axioms on which the mind is being erected.  Axioms 
need to be few and simple; it's the theorems that are derived from them 
that get complicated.


N.B.:  This is an ANALOGY.  I'm not proposing a theorem prover as the 
model of an AI.
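
A minimal Python sketch of the two-set idea (illustrative only; the filters, 
goals and weights are made-up placeholders, not a proposed design):

# A short list of hard filters every candidate action must pass, plus a
# short list of weighted goals used to score whatever survives.
FILTERS = [
    lambda action: not action.get("harms_humans", False),
    lambda action: action.get("consent_obtained", True),
]

GOALS = {                       # goal -> weight
    "increase_knowledge": 0.4,
    "preserve_self":      0.3,
    "help_user":          0.3,
}

def choose(actions):
    allowed = [a for a in actions if all(f(a) for f in FILTERS)]
    if not allowed:
        return None
    score = lambda a: sum(w * a.get(g, 0.0) for g, w in GOALS.items())
    return max(allowed, key=score)

candidates = [
    {"name": "experiment", "increase_knowledge": 0.9, "harms_humans": False},
    {"name": "shortcut",   "help_user": 1.0, "harms_humans": True},
]
print(choose(candidates)["name"])    # -> "experiment"; "shortcut" fails a filter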


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-03 Thread Philip Goetz

On 12/2/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a "part of the brain which generates the reward/punishment signal
for operant conditioning."


Well, there is a part of the brain which generates a
temporal-difference signal for reinforcement learning.  Not so very
different.  At least, not different enough for this brain mechanism to
escape having Richard's scorn heaped upon it.

http://www.iro.umontreal.ca/~lisa/pointeurs/RivestNIPS2004.pdf
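
For readers unfamiliar with it, the temporal-difference signal referred to 
here is just the reward-prediction error; a minimal, generic TD(0) sketch in 
Python (not code from the cited paper, and the toy environment is invented):

# TD(0): the "signal" is the prediction error delta, which work like the
# paper above compares to phasic dopamine activity.
gamma, alpha = 0.9, 0.1
V = {s: 0.0 for s in ["cue", "food", "end"]}

def step(s):                      # toy environment: cue -> food -> end
    return ("food", 0.0) if s == "cue" else ("end", 1.0)

for _ in range(200):
    s = "cue"
    while s != "end":
        s_next, r = step(s)
        delta = r + gamma * V[s_next] - V[s]   # the TD / dopamine-like signal
        V[s] += alpha * delta
        s = s_next

print(V)   # V["food"] -> ~1.0, V["cue"] -> ~0.9 (reward predicted earlier)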

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> I am disputing the very idea that monkeys (or rats or pigeons or humans) 
> have a "part of the brain which generates the reward/punishment signal 
> for operant conditioning."
> 
> This is behaviorism.  I find myself completely at a loss to know where 
> to start, if I have to explain what is wrong with behaviorism.

Call it what you want.  I am arguing that there are parts of the brain (e.g.
the nucleus accumbens) responsible for reinforcement learning, and
furthermore, that the synapses along the input paths to these regions are not
trainable.  I argue this has to be the case because an intelligent system
cannot be allowed to modify its motivational system.  Our most fundamental
models of intelligent agents require this (e.g. AIXI -- the reward signal is
computed by the environment).  You cannot turn off hunger or pain.  You cannot
control your emotions.  Since the synaptic weights cannot be altered by
training (classical or operant conditioning), they must be hardwired as
determined by your DNA.
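
One way to picture the claim being made here, as a Python sketch (the split 
into a fixed primary reward and a learned association, and all the numbers, 
are my own illustration, not an account of actual neuroanatomy):

# The primary reward (food -> reward) is a fixed table that is never updated,
# while the secondary association (token -> food) is a learned weight.
PRIMARY_REWARD = {"food": 1.0, "token": 0.0}    # "wired in"; never updated

assoc = {"token": 0.0}                           # learned, starts at zero
alpha = 0.2

def experienced_value(stimulus):
    return PRIMARY_REWARD.get(stimulus, 0.0) + assoc.get(stimulus, 0.0)

# Conditioning: the token is repeatedly followed by food.
for _ in range(30):
    outcome_value = PRIMARY_REWARD["food"]
    assoc["token"] += alpha * (outcome_value - assoc["token"])

print(experienced_value("token"))   # ~1.0: learned, and could be unlearned
print(PRIMARY_REWARD["food"])       # 1.0: fixed by construction in this sketch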

Do you agree?  If not, what part of this argument do you disagree with?

That reward and punishment exist and result in learning in humans?

That there are neurons dedicated to computing reinforcement signals?

That the human motivational system (by which I mean the logic of computing the
reinforcement signals from sensory input) is not trainable?

That the motivational system is completely specified by DNA?

That all human learning can be reduced to classical and operant conditioning?

That humans are animals that differ only in the ability to learn language?

That models of goal seeking agents like AIXI are realistic models of
intelligence?

Do you object to behavioralism because of their view that consciousness and
free will do not exist, except as beliefs?

Do you object to the assertion that the brain is a computer with finite memory
and speed?  That your life consists of running a program?  Is this wrong, or
just uncomfortable?




-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Ben Goertzel

David...

On 11/29/06, David Hart <[EMAIL PROTECTED]> wrote:

On 11/30/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Richard,
>
> This is certainly true, and is why in Novamente we use a goal stack
> only as one aspect of cognitive control...

Ben,

Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit] cognitive bias, either theoretically or
within Novamente?

David


Well, there is nothing  too obscure in the way explicit
goal-achievement dynamics and implicit goal-achievement dynamics
co-exist in Novamente...

Quite simply, in the NM system as it is now (and as it is planned to
be in the reasonably near future), explicit goal achievement is one
among many dynamics.  There are also many "ambient" cognitive
processes that the system just does "because that's the way it was
created."  These include a certain level of reasoning, concept
formation, procedure learning, etc.

It is anticipated that ultimately, once a NM system becomes
sufficiently advanced, explicit goal-achievement may be allowed to
extend across all aspects of the system.  But this does not make sense
initially for the reason Richard Loosemore pointed out: A baby does
not have the knowledge to reason "If I don't act nice to my mommy, I
may be neglected and die, therefore I should be nice to my mommy
because it is a subgoal of my supergoal of staying alive."  It doesn't
have the knowledge to figure out precisely how to be nice to its mommy
either.  Instead, a baby needs to be preprogrammed with the desire to
be nice to its mommy, and with specific behaviors along these lines.
A lot of preprogrammed stuff -- including preprogrammed **learning
dynamics** -- seem to be necessary to get a realistic mind to the
level where it can achieve complex goals with a reasonable degree of
flexibility and effectiveness.
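
An illustrative Python sketch of explicit goal pursuit coexisting with 
"ambient" processes (this is not Novamente code; the process names and 
weights are invented):

import random

# Explicit goal pursuit is just one process scheduled alongside ambient
# dynamics the system always runs "because that's the way it was created."
def pursue_explicit_goals(state):
    state["goal_steps"] += 1

def ambient_concept_formation(state):
    state["concepts"] += 1

def ambient_background_inference(state):
    state["inferences"] += 1

processes = [
    (0.3, pursue_explicit_goals),        # weight given to explicit goals
    (0.4, ambient_concept_formation),
    (0.3, ambient_background_inference),
]

state = {"goal_steps": 0, "concepts": 0, "inferences": 0}
weights = [w for w, _ in processes]
for _ in range(1000):
    _, proc = random.choices(processes, weights=weights)[0]
    proc(state)
print(state)    # roughly a 30/40/30 split of cognitive cycles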

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore



--- Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:
> I guess we are arguing terminology.  I mean that the part of the brain
which
> generates the reward/punishment signal for operant conditioning is not
> trainable.  It is programmed only through evolution.

There is no such thing.  This is the kind of psychology that died out at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).


Are we arguing terminology again, or are you really saying that 
animals cannot
be trained using reward and punishment?  By "operant conditioning", I 
mean

reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be 
exchanged
for food.  When I say that the motivational logic cannot be trained, I 
mean
the connection from food to reward, not from tokens to reward.  What 
you are

training is the association of tokens to food and work to tokens.


Mark Waser wrote:
> He's arguing with the phrase "It is programmed only through evolution."
>
> If I'm wrong and he is not, I certainly am.

I certainly agree with Mark on this point (I dispute Matt's contention 
that "It is programmed only through evolution") but that was only a 
subset of my main disagreement.


I am disputing the very idea that monkeys (or rats or pigeons or humans) 
have a "part of the brain which generates the reward/punishment signal 
for operant conditioning."


This is behaviorism.  I find myself completely at a loss to know where 
to start, if I have to explain what is wrong with behaviorism.


This is the 21st century, and we have had cognitive psychology for, 
what?, fifty years now?  Cognitive psychology was born when people 
suddenly realized that the behaviorist conception of the mechanisms of 
mind was ridiculously stupid.


As a superficial model of how to control the behavior of rats, it works 
great.  (It even works for some aspects of the behavior of children). 
But as a model of the *mechanisms* that make up a thinking system?  I 
can't think of words expressive enough to convey my contempt for the 
idea.  The people who invented behaviorism managed to shut the science 
of psychology down for about three or four decades, so I don't look very 
charitably on what they did.


If someone said to you that "All computers in the world today actually 
work by having a mechanism inside them that looks at the current inputs 
(from mice, keyboard, etc) and then produces an output by getting the 
correct response for that input from an internal lookup table" you would 
be at a loss to know how to go about fixing that person's broken 
conception of the machinery of computation.  You would probably tell 
them to go read some *really* basic textbooks about computers, then get 
a degree in the subject if necessary, then come back when their ideas 
had gotten straightened out.


[Don't take the above analogy too literally, btw:  I know about the 
differences between look-up tables and literal behaviorism; I was 
merely conveying the depth of ignorance of mechanism that the two sets 
of ideas share].


And, no, this is nothing whatsoever to do with "terminology" (nor have I 
ever simply argued about terminology in the past, as you imply).



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Richard Loosemore

Philip Goetz wrote:

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Some people would call it "repeating the same mistakes I already dealt 
with".

Some people would call it "continuing to disagree".  :)


Some people would call it "continuing to disagree because they haven't 
yet figured out that their argument has been undermined."


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser

He's arguing with the phrase "It is programmed only through evolution."

If I'm wrong and he is not, I certainly am.

- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]





--- Richard Loosemore <[EMAIL PROTECTED]> wrote:


Matt Mahoney wrote:
> I guess we are arguing terminology.  I mean that the part of the brain
which
> generates the reward/punishment signal for operant conditioning is not
> trainable.  It is programmed only through evolution.

There is no such thing.  This is the kind of psychology that died out at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).


Are we arguing terminology again, or are you really saying that animals 
cannot

be trained using reward and punishment?  By "operant conditioning", I mean
reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be 
exchanged
for food.  When I say that the motivational logic cannot be trained, I 
mean
the connection from food to reward, not from tokens to reward.  What you 
are

training is the association of tokens to food and work to tokens.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney

--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> >
> > --- Hank Conn <[EMAIL PROTECTED]> wrote:
> >
> > > On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > > I suppose the alternative is to not scan brains, but then you still
> > have
> > > > death, disease and suffering.  I'm sorry it is not a happy picture
> > either
> > > > way.
> > >
> > >
> > > Or you have no death, disease, or suffering, but not wireheading.
> >
> > How do you propose to reduce the human mortality rate from 100%?
> 
> 
> Why do you ask?

You seemed to imply you knew an alternative to brain scanning, or did I
misunderstand?



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > I guess we are arguing terminology.  I mean that the part of the brain
> which
> > generates the reward/punishment signal for operant conditioning is not
> > trainable.  It is programmed only through evolution.
> 
> There is no such thing.  This is the kind of psychology that died out at 
> least thirty years ago (with the exception of a few diehards in North 
> Wales and Cambridge).

Are we arguing terminology again, or are you really saying that animals cannot
be trained using reward and punishment?  By "operant conditioning", I mean
reinforcement learning.

I realize that monkeys can be trained to work for tokens that can be exchanged
for food.  When I say that the motivational logic cannot be trained, I mean
the connection from food to reward, not from tokens to reward.  What you are
training is the association of tokens to food and work to tokens.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Philip Goetz

On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote:


> Philip Goetz snidely responded
> Some people would call it "repeating the same mistakes I already dealt
> with".
> Some people would call it "continuing to disagree".  :)

Richard's point was that the poster was simply repeating previous points
without responding to Richard's arguments.  Responsible "continuing to
disagree" would have included, at least, acknowledging and responding to or
arguing with Richard's points.


It would have, if Matt were replying to Richard.  However, he was
replying to Hank.

However, I'll make my snide remarks directly to the poster in the future.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.



Philip Goetz snidely responded
Some people would call it "repeating the same mistakes I already dealt 
with".

Some people would call it "continuing to disagree".  :)


   Richard's point was that the poster was simply repeating previous points 
without responding to Richard's arguments.  Responsible "continuing to 
disagree" would have included, at least, acknowledging and responding to or 
arguing with Richard's points.  Not doing so is simply an "is too/is not" 
argument. 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Mark Waser
I'd be interested in knowing if anyone else on this list has had any 
experience with policy-based governing . . . .


Questions like

Are the following things good?
- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.
can only be properly answered by reference to your *ordered* list of goals 
*WITH* reference to your list of limitations (restrictions, to use the 
lingo).


For me, yes, all of those things are good since they are on my list of goals 
*unless* the method of accomplishing them steps on a higher goal OR a 
collection of goals with greater total weight OR violates one of my 
limitations (restrictions).


As long as I'm intelligent enough to put things like "don't do anything to 
me without my informed consent" on my limitations list, I don't expect too 
many problems (and certainly not any of the problems that were brought up 
later in the "Questions" post).
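
A Python sketch of that evaluation procedure (illustrative only; the goal 
ordering and the restrictions are invented examples, not a proposed moral 
system):

# "A thing is good if it is on my ordered goal list, unless the method of
# achieving it breaks a restriction or steps on a higher-ranked goal."
GOALS = ["keep_autonomy", "stay_alive", "end_disease", "end_pain"]   # highest first
RESTRICTIONS = {"acts_without_informed_consent", "destroys_original_mind"}

def is_good(outcome, method_violates=frozenset()):
    if outcome not in GOALS:
        return False
    if RESTRICTIONS & set(method_violates):
        return False
    rank = GOALS.index(outcome)
    # reject if the method tramples a goal ranked above the outcome itself
    return not any(GOALS.index(g) < rank for g in method_violates if g in GOALS)

print(is_good("end_disease"))                                      # True
print(is_good("end_disease", {"acts_without_informed_consent"}))   # False
print(is_good("end_disease", {"keep_autonomy"}))                   # False: higher goal harmed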


Personally, I find the level of many of these morality discussions 
ridiculous.  It is relatively easy for any competent systems architect to 
design complete, internally consistent systems of morality from sets of 
goals and restrictions.  The problem is that any such system is just not 
going to match what everybody wants since everybody embodies conflicting and 
irreconcilable requirements.


Richard's system of evolved morality through a large number of diffuse 
constraints is a good attempt at creating a morality system that is unlikely 
to offend anyone while still making "reasonable" decisions about contested 
issues.  The problem with Richard's system is that it may well make 
decisions like outlawing stem cell research since so many people are against 
it (or maybe, if it is sufficiently intelligent, its internal consistency 
routines may reduce the weight of the arguments from people who insist upon 
conflicting priorities like "I want the longest life and best possible 
medical care" and "I don't want stem cell research").


The good point about an internally consistent system designed by me is that 
it's not going to outlaw stem cell research.  The bad point about my system 
is that it's going to offend *a lot* of people and if it's someone else's 
system, it may well offend (or exterminate) me.  And, I must say, based upon 
the level of many of these discussions, the thought of a lot of you 
designing a morality system is *REALLY* frightening.


- Original Message ----- 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Friday, December 01, 2006 11:56 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it 
and how fast?]




Matt Mahoney wrote:

--- Hank Conn <[EMAIL PROTECTED]> wrote:
The further the actual target goal state of that particular AI is away 
from

the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have 
the
strongest RSI curve also will be such that its actual target goal state 
is

exactly congruent to the actual target goal state of humanity.


This was discussed on the Singularity list.  Even if we get the 
motivational
system and goals right, things can still go badly.  Are the following 
things

good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into 
a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, 
your

universe can be simulated to be anything you want it to be.


See my previous lengthy post on the subject of motivational systems vs 
"goal stack" systems.


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Hank Conn

On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:



--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The "goals of humanity", like all other species, was determined by
> > evolution.
> > It is to propagate the species.
>
>
> That's not the goal of humanity. That's the goal of the evolution of
> humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and
genetic
engineering, but I don't think we have stopped it completely yet.

> You are confusing this abstract idea of an optimization target with the
> actual motivation system. You can change your motivation system all you
> want, but you wouldn't (intentionally) change the fundamental
specification
> of the optimization target which is maintained by the motivation system
as a
> whole.

I guess we are arguing terminology.  I mean that the part of the brain
which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

>   To some extent you can do this.  When rats can
> > electrically stimulate their nucleus accumbens by pressing a lever,
they
> > do so
> > nonstop in preference to food and water until they die.
> >
> > I suppose the alternative is to not scan brains, but then you still
have
> > death, disease and suffering.  I'm sorry it is not a happy picture
either
> > way.
>
>
> Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?



Why do you ask?

-hank


-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread Philip Goetz

On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Some people would call it "repeating the same mistakes I already dealt with".
Some people would call it "continuing to disagree".  :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-02 Thread J. Storrs Hall, PhD.
On Friday 01 December 2006 23:42, Richard Loosemore wrote:

> It's a lot easier than you suppose.  The system would be built in two
> parts:  the motivational system, which would not change substantially
> during RSI, and the "thinking part" (for want of a better term), which
> is where you do all the improvement.

For concreteness, I have called these the Utility Function and World Model in 
my writings on the subject...

A plan that says "Let RSI consist of growing the WM and not the UF" suffers 
from the problem that the sophistication of the WM's understanding soon makes 
the UF look crude and stupid. Human babies want food, proximity to their 
mothers, and are frightened of strangers. That's good for babies but a person 
with greater understanding and capabilities is better off with (and the rest of us 
are better off if the person has) a more sophisticated UF as well.

> It is not quite a contradiction, but certainly this would be impossible:
>   deciding to make a modification that clearly was going to leave it
> wanting something that, if it wanted that thing today, would contradict
> its current priorities.  Do you see why?  The motivational mechanism IS
> what the system wants, it is not what the system is considering wanting.

This is a good first cut at the problem, and is taken by e.g. Nick Bostrom in 
a widely cited paper at http://www.nickbostrom.com/ethics/ai.html

> The system is not protecting current beliefs, it is believing its
> current beliefs.  Becoming more capable of understanding the "reality"
> it is immersed in?  You have implicitly put a motivational priority in
> your system when you suggest that that is important to it ... does that
> rank higher than its empathy with the human race?
>
> You see where I am going:  there is nothing god-given about the desire
> to "understand reality" in a better way.  That is just one more
> candidate for a motivational priority.

Ah, but consider: knowing more about how the world works is often a valuable 
asset to the attempt to increase the utility of the world, *no matter* what 
else the utility function might specify. 

Thus, a system's self-modification (or evolution in general) is unlikely to 
remove curiosity / thirst for knowledge / desire to improve one's WM as a 
high utility even as it changes other things. 

There are several such properties of a utility function that are likely to be 
invariant under self-improvement or evolution. It is by the use of such 
invariants that we can design self-improving AIs with reasonable assurance of 
their continued beneficence.
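
A Python sketch of the invariance check described here (my own toy version; 
the utility terms, weights and the accept/reject rule are invented for 
illustration):

# A proposed self-modification is only accepted if the terms of the utility
# function marked invariant survive it unchanged.
utility_terms = {
    "curiosity":        {"weight": 0.3, "invariant": True},
    "human_wellbeing":  {"weight": 0.5, "invariant": True},
    "speed_of_thought": {"weight": 0.2, "invariant": False},   # free to change
}

def accept_modification(proposed_terms):
    for name, term in utility_terms.items():
        if term["invariant"]:
            new = proposed_terms.get(name)
            if new is None or new["weight"] != term["weight"]:
                return False           # would alter an invariant term: reject
    return True

tweak = dict(utility_terms, speed_of_thought={"weight": 0.4, "invariant": False})
drop  = {k: v for k, v in utility_terms.items() if k != "curiosity"}
print(accept_modification(tweak))   # True  (only a non-invariant term changed)
print(accept_modification(drop))    # False (an invariant term was removed)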

--Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

--- Hank Conn <[EMAIL PROTECTED]> wrote:

The further the actual target goal state of that particular AI is away from
the actual target goal state of humanity, the worse.

The goal of ... humanity... is that the AGI implemented that will have the
strongest RSI curve also will be such that its actual target goal state is
exactly congruent to the actual target goal state of humanity.


This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, your

universe can be simulated to be anything you want it to be.


See my previous lengthy post on the subject of motivational systems vs 
"goal stack" systems.


The questions you asked above are predicated on a goal stack approach.

You are repeating the same mistakes that I already dealt with.


Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Matt Mahoney wrote:

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.


There is no such thing.  This is the kind of psychology that died out at 
least thirty years ago (with the exception of a few diehards in North 
Wales and Cambridge).




Richard Loosemore


[With apologies to Fergus, Nick and Ian, who may someday come across 
this message and start flaming me].


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Samantha Atkins wrote:


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Inmprovement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a 
way as to preserve its existing motivational priorities.




How could the system anticipate whether on not significant RSI would 
lead it to question or modify its current motivational priorities?  Are 
you suggesting that the system can somehow simulate an improved version 
of itself in sufficient detail to know this?  It seems quite unlikely.


Well, I'm certainly not suggesting the latter.

It's a lot easier than you suppose.  The system would be built in two 
parts:  the motivational system, which would not change substantially 
during RSI, and the "thinking part" (for want of a better term), which 
is where you do all the improvement.


The idea of "questioning or modifying its current motivational 
priorities" is extremely problematic, so be careful how quickly you 
deploy it as if it meant something coherent.  What would it mean for the 
system to modify it in such a way as to contradict the current state? 
That gets very close to a contradiction in terms.


It is not quite a contradiction, but certainly this would be impossible: 
 deciding to make a modification that clearly was going to leave it 
wanting something that, if it wanted that thing today, would contradict 
its current priorities.  Do you see why?  The motivational mechanism IS 
what the system wants, it is not what the system is considering wanting.




That means:  the system would *not* choose to do any RSI if the RSI 
could not be done in such a way as to preserve its current 
motivational priorities:  to do so would be to risk subverting its own 
most important desires.  (Note carefully that the system itself would 
put this constraint on its own development, it would not have anything 
to do with us controlling it).




If the improvements were an improvement in capabilities and such 
improvement led to changes in its priorities then how would those 
improvements be undesirable due to showing current motivational 
priorities as being in some way lacking?  Why is protecting current 
beliefs or motivational priorities more important than becoming 
presumably more capable and more capable of understanding the reality 
the system is immersed in?


The system is not protecting current beliefs, it is believing its 
current beliefs.  Becoming more capable of understanding the "reality" 
it is immersed in?  You have implicitly put a motivational priority in 
your system when you suggest that that is important to it ... does that 
rank higher than its empathy with the human race?


You see where I am going:  there is nothing god-given about the desire 
to "understand reality" in a better way.  That is just one more 
candidate for a motivational priority.





There is a bit of a problem with the term "RSI" here:  to answer your 
question fully we might have to get more specific about what that 
would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite. 
The system could well get to a situation where further RSI was not 
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha


Now you have become too abstract for me to answer, unless you are 
repeating the previous point.




Richard Loosemore.















-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney

--- Hank Conn <[EMAIL PROTECTED]> wrote:

> On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The "goals of humanity", like all other species, was determined by
> > evolution.
> > It is to propagate the species.
> 
> 
> That's not the goal of humanity. That's the goal of the evolution of
> humanity, which has been defunct for a while.

We have slowed evolution through medical advances, birth control and genetic
engineering, but I don't think we have stopped it completely yet.

> You are confusing this abstract idea of an optimization target with the
> actual motivation system. You can change your motivation system all you
> want, but you wouldn't (intentionally) change the fundamental specification
> of the optimization target which is maintained by the motivation system as a
> whole.

I guess we are arguing terminology.  I mean that the part of the brain which
generates the reward/punishment signal for operant conditioning is not
trainable.  It is programmed only through evolution.

>   To some extent you can do this.  When rats can
> > electrically stimulate their nucleus accumbens by pressing a lever, they
> > do so
> > nonstop in preference to food and water until they die.
> >
> > I suppose the alternative is to not scan brains, but then you still have
> > death, disease and suffering.  I'm sorry it is not a happy picture either
> > way.
> 
> 
> Or you have no death, disease, or suffering, but not wireheading.

How do you propose to reduce the human mortality rate from 100%?


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn


This seems rather circular and ill-defined.

- samantha



Yeah I don't really know what I'm talking about at all.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Hank Conn

On 12/1/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:


--- Hank Conn <[EMAIL PROTECTED]> wrote:
> The further the actual target goal state of that particular AI is away
from
> the actual target goal state of humanity, the worse.
>
> The goal of ... humanity... is that the AGI implemented that will have
the
> strongest RSI curve also will be such that its actual target goal state
is
> exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the
motivational
system and goals right, things can still go badly.  Are the following
things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal.
Furthermore, once your consciousness becomes a computation in silicon,
your
universe can be simulated to be anything you want it to be.

The "goals of humanity", like all other species, was determined by
evolution.
It is to propagate the species.



That's not the goal of humanity. That's the goal of the evolution of
humanity, which has been defunct for a while.


 This goal is met by a genetically programmed

individual motivation toward reproduction and a fear of death, at least
until
you are past the age of reproduction and you no longer serve a purpose.
Animals without these goals don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You cannot
turn
off your desire to eat or your fear of pain.  You cannot decide you will
start
liking what you don't like, or vice versa.  You cannot because if you
could,
you would not pass on your DNA.



You are confusing this abstract idea of an optimization target with the
actual motivation system. You can change your motivation system all you
want, but you wouldn't (intentionally) change the fundamental specification
of the optimization target which is maintained by the motivation system as a
whole.


Once your brain is in software, what is to stop you from telling the AGI

(that
you built) to reprogram your motivational system that you built so you are
happy with what you have?



Uh... go for it.


 To some extent you can do this.  When rats can

electrically stimulate their nucleus accumbens by pressing a lever, they
do so
nonstop in preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either
way.



Or you have no death, disease, or suffering, but not wireheading.


-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Matt Mahoney
--- Hank Conn <[EMAIL PROTECTED]> wrote:
> The further the actual target goal state of that particular AI is away from
> the actual target goal state of humanity, the worse.
> 
> The goal of ... humanity... is that the AGI implemented that will have the
> strongest RSI curve also will be such that its actual target goal state is
> exactly congruent to the actual target goal state of humanity.

This was discussed on the Singularity list.  Even if we get the motivational
system and goals right, things can still go badly.  Are the following things
good?

- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.

You might think so, and program an AGI with these goals.  Suppose the AGI
figures out that by scanning your brain and copying the information into a
computer and making many redundant backups, that you become immortal. 
Furthermore, once your consciousness becomes a computation in silicon, your
universe can be simulated to be anything you want it to be.

The "goals of humanity", like all other species, was determined by evolution. 
It is to propagate the species.  This goal is met by a genetically programmed
individual motivation toward reproduction and a fear of death, at least until
you are past the age of reproduction and you no longer serve a purpose. 
Animals without these goals don't pass on their DNA.

A property of motivational systems is that they cannot be altered.  You cannot turn
off your desire to eat or your fear of pain.  You cannot decide you will start
liking what you don't like, or vice versa.  You cannot because if you could,
you would not pass on your DNA.

Once your brain is in software, what is to stop you from telling the AGI (that
you built) to reprogram your motivational system so that you are
happy with what you have?  To some extent you can do this.  When rats can
electrically stimulate their nucleus accumbens by pressing a lever, they do so
nonstop in preference to food and water until they die.

I suppose the alternative is to not scan brains, but then you still have
death, disease and suffering.  I'm sorry it is not a happy picture either way.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 10:15 PM, Hank Conn wrote:


Yes, now the point being that if you have an AGI and you aren't in a  
sufficiently fast RSI loop, there is a good chance that if someone  
else were to launch an AGI with a faster RSI loop, your AGI would  
lose control to the other AGI where the goals of the other AGI  
differed from yours.




Are you sure that "control" would be a high priority of such systems?



What I'm saying is that the outcome of the Singularity is going to  
be exactly the target goal state of the AGI with the strongest RSI  
curve.


The further the actual target goal state of that particular AI is  
away from the actual target goal state of humanity, the worse.




What on earth is "the actual target goal state of humanity"?   AFAIK  
there is no such thing.  For that matter I doubt very much there is or  
can be an unchanging target goal state for any real AGI.




The goal of ... humanity... is that the AGI implemented that will  
have the strongest RSI curve also will be such that its actual  
target goal state is exactly congruent to the actual target goal  
state of humanity.




This seems rather circular and ill-defined.

- samantha


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Samantha Atkins


On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:


Recursive Self Inmprovement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such  
a way as to preserve its existing motivational priorities.




How could the system anticipate whether or not significant RSI would  
lead it to question or modify its current motivational priorities?   
Are you suggesting that the system can somehow simulate an improved  
version of itself in sufficient detail to know this?  It seems quite  
unlikely.



That means:  the system would *not* choose to do any RSI if the RSI  
could not be done in such a way as to preserve its current  
motivational priorities:  to do so would be to risk subverting its  
own most important desires.  (Note carefully that the system itself  
would put this constraint on its own development, it would not have  
anything to do with us controlling it).




If the improvements were an improvement in capabilities and such  
improvement led to changes in its priorities then how would those  
improvements be undesirable due to showing current motivational  
priorities as being in some way lacking?  Why is protecting current  
beliefs or motivational priorities more important than becoming  
presumably more capable and more capable of understanding the reality  
the system is immersed in?



There is a bit of a problem with the term "RSI" here:  to answer  
your question fully we might have to get more specific about what  
that would entail.


Finally:  the usefulness of RSI would not necessarily be indefinite.  
The system could well get to a situation where further RSI was not  
particularly consistent with its goals.  It could live without it.




Then are its goals more important to it than reality?

- samantha

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

James Ratcliff wrote:
You could start a smaller AI with a simple hardcoded "desire" or reward 
mechanism to "learn" new things, or to increase the size of its knowledge.


 That would be a simple way to programmatically insert it.  That along 
with a seed AI, must be put in there in the beginning.


Remember we are not just throwing it out there with no goals or anything 
in the beginning, or it would learn nothing, and DO nothing at all.


Later this piece may need to be directly modifiable by the code to 
decrease or increase its desire to "explore" or learn new things, 
depending on its other goals.


James


It's difficult to get into all the details (this is a big subject), but 
you do have to remember that what you have done is to say *what* needs 
to be done (no doubt in anybody's mind that it needs a desire to learn!) 
but that the problem under discussion is the difficulty of figuring out 
*how* to do that.


That's where my arguments come in:  I was claiming that the idea of 
motivating an AGI has not been properly thought through by many people, 
who just assume that the system has a stack of goals (top level goal, 
then subgoals that, if acheived in sequence or in parallel, would cause 
top level goal to succeed, then a breakdown of those subgoals into 
sub-subgoals, and so on for maybe hundreds of levels ... you probably 
get the idea).  My claim is that this design is too naive.  And that 
minor variations on this design won't necessarily improve it.


The devil, in other words, is in the details.
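
For concreteness, a toy Python version of the goal-stack design under 
discussion (the goals in the tree are invented, echoing the example quoted 
below; this is the design being criticized, not a recommendation):

# A top-level goal is recursively unpacked into subgoals expressed in the
# system's own concepts -- which is exactly the problem for a system that
# does not yet have those concepts.
goal_tree = {
    "acquire new knowledge": ["find information sources", "read sources"],
    "find information sources": ["search the library"],
    "read sources": [],
    "search the library": [],
}

def unpack(goal, depth=0):
    print("  " * depth + goal)
    for sub in goal_tree.get(goal, []):
        unpack(sub, depth + 1)

unpack("acquire new knowledge")
# A "child phase" system cannot even pose the top-level goal, because every
# node in this tree is written in concepts it has not yet learned.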


Richard Loosemore.










Philip Goetz <[EMAIL PROTECTED]> wrote:

On 11/19/06, Richard Loosemore wrote:

 > The goal-stack AI might very well turn out simply not to be a
workable
 > design at all! I really do mean that: it won't become intelligent
 > enough to be a threat. Specifically, we may find that the kind of
 > system that drives itself using only a goal stack never makes it
up to
 > full human level intelligence because it simply cannot do the kind of
 > general, broad-spectrum learning that a Motivational System AI
would do.
 >
 > Why? Many reasons, but one is that the system could never learn
 > autonomously from a low level of knowledge *because* it is using
goals
 > that are articulated using the system's own knowledge base. Put
simply,
 > when the system is in its child phase it cannot have the goal
"acquire
 > new knowledge" because it cannot understand the meaning of the words
 > "acquire" or "new" or "knowledge"! It isn't due to learn those words
 > until it becomes more mature (develops more mature concepts), so
how can
 > it put "acquire new knowledge" on its goal stack and then unpack that
 > goal into subgoals, etc?

This is an excellent observation that I hadn't heard before -
thanks, Richard!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! 
http://www.falazar.com/projects/Torrents/tvtorrents_show.php






This list is sponsored by AGIRI: http://www.agiri.org/email To 
unsubscribe or change your options, please go to: 
http://v2.listbox.com/member/?list_id=303


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-01 Thread Richard Loosemore

Hank Conn wrote:
Yes, now the point being that if you have an AGI and you aren't in a 
sufficiently fast RSI loop, there is a good chance that if someone else 
were to launch an AGI with a faster RSI loop, your AGI would lose 
control to the other AGI where the goals of the other AGI differed from 
yours.
 
What I'm saying is that the outcome of the Singularity is going to be 
exactly the target goal state of the AGI with the strongest RSI curve.
 
The further the actual target goal state of that particular AI is away 
from the actual target goal state of humanity, the worse.
 
The goal of ... humanity... is that the AGI implemented that will have 
the strongest RSI curve also will be such that its actual target goal 
state is exactly congruent to the actual target goal state of humanity.
 
This is assuming AGI becomes capable of RSI before any human does. I 
think that's a reasonable assumption (this is the AGI list after all).


I agree with you, as far as you take these various points, although with 
some refinements.


Taking them in reverse order:

1)  There is no doubt in my mind that machine RSI will come long before 
human RSI.


2)  The goal of humanity is to build an AGI with goals (in the most 
general sense of "goals") that match its own.  That is as it should 
be, and I think there are techniques that could lead to that.  I also 
believe that those techniques will lead to AGI more quickly than other 
techniques, which is a very good thing.


3)  The way that the "RSI curves" play out is not clear at this point, 
but my thoughts are that because of the nature of exponential curves 
(flattish for a long time, then the "knee", then off to the sky) we will 
*not* have an arms race situation with competing AGI projects.  An arms 
race can only really happen if the projects stay on closely matched, 
fairly shallow curves:  people need to be neck and neck to have a 
situation in which nobody quite gets the upper hand and everyone 
competes.  That is fundamentally at odds with the exponential shape of 
the RSI curve.


What does that mean in practice?  It means that when the first system 
gets to the really fast part of the curve, it might (for example) go from 
human level to 10x human level in a couple of months, then to 100x in a 
month, then 1000x in a week.  Regardless of the exact details of these 
numbers, you can see that such a sudden arrival at superintelligence 
would most likely *not* occur at the same moment as someone else's project.
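
A rough numerical sketch makes the point; the doubling time and head 
start below are invented for illustration only:

# Two projects on the same exponential self-improvement curve, with
# project B lagging project A by a modest head start.  All numbers are
# arbitrary; only the shape of the gap matters.

def capability(weeks_of_rsi, doubling_time_weeks=2.0):
    """Capability relative to 'human level' after some weeks of RSI."""
    return 2 ** (weeks_of_rsi / doubling_time_weeks)

head_start = 4  # assumed lead of project A, in weeks

for week in range(0, 25, 4):
    a = capability(week + head_start)
    b = capability(week)
    print(f"week {week:2d}:  A = {a:9.1f}x   B = {b:9.1f}x   gap = {a - b:9.1f}x")

Even a modest lead produces an absolute gap that grows without bound, 
which is the sense in which a neck-and-neck race is hard to sustain on 
an exponential curve.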


Then, the first system would quietly move to change any other projects 
so that their motivations were not a threat.  It wouldn't take them out; 
it would just ensure they were safe.


End of worries.

The only thing to worry about is that the first system should have 
sympathetic motivations.  I think ensuring that is our responsibility.  I 
think, also, that the first design will use the kind of diffuse 
motivational system that I talked about before, and for that reason it 
will most likely be similar in design to ours, and not be violent or 
aggressive.


I actually have stronger beliefs than that, but they are hard to 
articulate: basically, that a smart enough system will naturally and 
inevitably *tend* toward sympathy for life.  But I am not relying on 
that extra idea for the above arguments.


Does that make sense?


Richard Loosemore










Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/30/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:



> Hank Conn wrote:
[snip...]
>  > I'm not asserting any specific AI design. And I don't see how
>  > a motivational system based on "large numbers of diffuse
constrains"
>  > inherently prohibits RSI, or really has any relevance to this. "A
>  > motivation system based on large numbers of diffuse constraints"
does
>  > not, by itself, solve the problem- if the particular constraints
> do not
>  > form a congruent mapping to the concerns of humanity, regardless
of
>  > their number or level of diffuseness, then we are likely facing
an
>  > Unfriendly outcome of the Singularity, at some point in the
future.
>
> Richard Loosemore wrote:
> The point I am heading towards, in all of this, is that we need to
> unpack some of these ideas in great detail in order to come to
sensible
> conclusions.
>
> I think the best way would be in a full length paper, although I did
> talk about some of that detail in my recent lengthy post on
> motivational
> systems.
>
> Let me try to bring out just one point, so you can see where I am
going
> when I suggest it needs much more detail.  In the above, you really
are
> asserting one specific AI design, because you talk about the goal
stack
> as if this could be so simple that the programmer would be able to
> insert the "make paperclips" goal and the machine would go right
ahead
> and do that.  That type of AI design is very, very different from
the
> Motivational System AI that I discussed before (the one with the
diffuse
> set of constraints driving it).
>
>
> Here is one of many differences between the two approaches.
>
> The goal-stack AI might very well turn out simply not to be a
workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up
to
> full human level intelligence because it simply cannot do the kind
of
> general, broad-spectrum learning that a Motivational System AI would
do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using
goals
> that are articulated using the system's own knowledge base.  Put
simply,
> when the system is in its child phase it cannot have the goal
"acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those
words
> until it becomes more mature (develops more mature concepts), so how
can
> it put "acquire new knowledge" on its goal stack and then unpack
that
> goal into subgoals, etc?
>
>
> Try the same question with any goal that the system might have when
it
> is in its infancy, and you'll see what I mean.  The whole concept of
a
> system driven only by a goal stack with statements that resolve on
its
> knowledge base is that it needs to be already very intelligent
before it
> can use them.
>
>
>
> If your system is intelligent, it has some goal(s) (or "motivation(s)").
> For most really complex goals (or motivations), RSI is an extremely
> useful subgoal (sub-...motivation). This makes no further assumptions
> about the intelligence in question, including those relating to the
> design of the goal (motivation) system.
>
>
> Would you agree?
>
>
> -hank

Recursive Self-Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities.

That means:  the system would *not* choose to do any RSI if the RSI
could not be done in such a way as to preserve its current motivational
priorities:  to do so would be to risk subverting its own most important
desires.  (Note carefully that the system itself would put this
constraint on its own development; it would not have anything to do with
us controlling it).

There is a bit of a problem with the term "RSI" here:  to answer your
question fully we might have to get more specific about what that would
entail.

Finally:  the usefulness of RSI would not necessarily be indefinite.
The system could well get to a situation where further RSI was not
particularly consistent with its goals.  It could live without it.


Richard Loosemore







Yes, now the point being that if you have an AGI and you aren't in a
sufficiently fast RSI loop, there is a good chance that if someone else were
to launch an AGI with a faster RSI loop, your AGI would lose control to the
other AGI where the goals of the other AGI differed from yours.

What I'm saying is that the outcome of the Singularity is going to be
exactly the target goal state of the AGI with the strongest RSI curve.

Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Richard Loosemore



Hank Conn wrote:

[snip...]

 > I'm not asserting any specific AI design. And I don't see how
 > a motivational system based on "large numbers of diffuse constrains"
 > inherently prohibits RSI, or really has any relevance to this. "A
 > motivation system based on large numbers of diffuse constraints" does
 > not, by itself, solve the problem- if the particular constraints
do not
 > form a congruent mapping to the concerns of humanity, regardless of
 > their number or level of diffuseness, then we are likely facing an
 > Unfriendly outcome of the Singularity, at some point in the future.

Richard Loosemore wrote:
The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.

I think the best way would be in a full length paper, although I did
talk about some of that detail in my recent lengthy post on
motivational
systems.

Let me try to bring out just one point, so you can see where I am going
when I suggest it needs much more detail.  In the above, you really are
asserting one specific AI design, because you talk about the goal stack
as if this could be so simple that the programmer would be able to
insert the "make paperclips" goal and the machine would go right ahead
and do that.  That type of AI design is very, very different from the
Motivational System AI that I discussed before (the one with the diffuse
set of constraints driving it).


Here is one of many differences between the two approaches.

The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?


Try the same question with any goal that the system might have when it
is in its infancy, and you'll see what I mean.  The whole concept of a
system driven only by a goal stack with statements that resolve on its
knowledge base is that it needs to be already very intelligent before it
can use them.

 
 
If your system is intelligent, it has some goal(s) (or "motivation(s)"). 
For most really complex goals (or motivations), RSI is an extremely 
useful subgoal (sub-...motivation). This makes no further assumptions 
about the intelligence in question, including those relating to the 
design of the goal (motivation) system.
 
 
Would you agree?
 
 
-hank


Recursive Self-Improvement?

The answer is yes, but with some qualifications.

In general RSI would be useful to the system IF it were done in such a 
way as to preserve its existing motivational priorities.


That means:  the system would *not* choose to do any RSI if the RSI 
could not be done in such a way as to preserve its current motivational 
priorities:  to do so would be to risk subverting its own most important 
desires.  (Note carefully that the system itself would put this 
constraint on its own development; it would not have anything to do with 
us controlling it).
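
One way to picture this constraint (a sketch only; every name below is 
invented, and the check is a placeholder rather than a real verification 
procedure) is as a gate that every proposed self-modification must pass 
before it is applied:

# Illustrative sketch: a self-modification is accepted only if the system
# predicts that its own motivational priorities would be preserved.
# predicts_priorities_preserved() is a placeholder for whatever analysis
# a real system would have to perform; that analysis is the hard part.

def predicts_priorities_preserved(current_motivations, proposed_change):
    """Placeholder check: does the change leave every motivation intact?"""
    return all(m in proposed_change.get("resulting_motivations", [])
               for m in current_motivations)

def consider_self_modification(current_motivations, proposed_change):
    if predicts_priorities_preserved(current_motivations, proposed_change):
        return "apply"      # this RSI step is acceptable
    return "reject"         # it risks the system's own priorities, so it declines

motivations = ["protect humans", "be honest", "keep learning"]
change = {"description": "faster inference engine",
          "resulting_motivations": ["protect humans", "be honest", "keep learning"]}
print(consider_self_modification(motivations, change))   # -> apply

The sketch only restates the constraint; it says nothing about how the 
system could make such predictions reliably, which is where the real 
detail would have to go.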


There is a bit of a problem with the term "RSI" here:  to answer your 
question fully we might have to get more specific about what that would 
entail.


Finally:  the usefulness of RSI would not necessarily be indefinite. 
The system could well get to a situation where further RSI was not 
particularly consistent with its goals.  It could live without it.



Richard Loosemore





Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread Hank Conn

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Hank Conn wrote:
>  > Yes, you are exactly right. The question is which of my
> assumption are
>  > unrealistic?
>
> Well, you could start with the idea that the AI has "... a strong
goal
> that directs its behavior to aggressively take advantage of these
> means...".   It depends what you mean by "goal" (an item on the task
> stack or a motivational drive?  They are different things) and this
> begs
> a question about who the idiot was that designed it so that it
pursue
> this kind of aggressive behavior rather than some other!
>
> A goal is a problem you want to solve in some environment. The "idiot"
> who designed it may program its goal to be, say, making paperclips.
> Then, after some thought and RSI, the AI decides converting the entire
> planet into a computronium in order to figure out how to maximize the
> number of paper clips in the Universe will satisfy this goal quite
> optimally. Anybody could program it with any goal in mind, and RSI
> happens to be a very useful process for accomplishing many complex
goals.
>
> There is *so* much packed into your statement that it is difficult
to go
> into it in detail.
>
> Just to start with, you would need to cross compare the above
statement
> with the account I gave recently of how a system should be built
with a
> motivational system based on large numbers of diffuse
constraints.  Your
> description is one particular, rather dangerous, design for an AI -
it
> is not an inevitable design.
>
>
> I'm not asserting any specific AI design. And I don't see how
> a motivational system based on "large numbers of diffuse constrains"
> inherently prohibits RSI, or really has any relevance to this. "A
> motivation system based on large numbers of diffuse constraints" does
> not, by itself, solve the problem- if the particular constraints do not
> form a congruent mapping to the concerns of humanity, regardless of
> their number or level of diffuseness, then we are likely facing an
> Unfriendly outcome of the Singularity, at some point in the future.

The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.

I think the best way would be in a full length paper, although I did
talk about some of that detail in my recent lengthy post on motivational
systems.

Let me try to bring out just one point, so you can see where I am going
when I suggest it needs much more detail.  In the above, you really are
asserting one specific AI design, because you talk about the goal stack
as if this could be so simple that the programmer would be able to
insert the "make paperclips" goal and the machine would go right ahead
and do that.  That type of AI design is very, very different from the
Motivational System AI that I discussed before (the one with the diffuse
set of constraints driving it).



Here is one of many differences between the two approaches.


The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?



Try the same question with any goal that the system might have when it
is in its infancy, and you'll see what I mean.  The whole concept of a
system driven only by a goal stack with statements that resolve on its
knowledge base is that it needs to be already very intelligent before it
can use them.




If your system is intelligent, it has some goal(s) (or "motivation(s)").
For most really complex goals (or motivations), RSI is an extremely useful
subgoal (sub-...motivation). This makes no further assumptions about the
intelligence in question, including those relating to the design of the goal
(motivation) system.


Would you agree?


-hank


I have never seen this idea discussed by anyone except me, but it is
extremely powerful and potentially a complete showstopper for the kind
of design inherent in the goal stack approach.  I have certainly never
seen anything like a reasonable rebuttal of it:  even if it turns out
not to be as serious as I claim it is, it still needs to be addressed in
a serious way before anyone can make assertions about what goal stack
systems can do.

Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread James Ratcliff
Also, could both or either of you describe in a bit more detail the idea of 
your "goal stacks" and how they should/would function?

James

David Hart <[EMAIL PROTECTED]> wrote:
On 11/30/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...



Ben,

Could you elaborate for the list some of the nuances between [explicit] 
cognitive control and [implicit] cognitive bias, either theoretically or within 
Novamente? 

David


 


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-30 Thread James Ratcliff
You could start a smaller AI with a simple hardcoded "desire" or reward 
mechanism to "learn" new things, or to increase the size of its knowledge.

That would be a simple way to programmatically insert it.  That, along with a 
seed AI, must be put in there in the beginning.

Remember we are not just throwing it out there with no goals or anything in the 
beginning, or it would learn nothing, and DO nothing at all.

Later this piece may need to be directly modifiable by the code to decrease or 
increase its desire to "explore" or learn new things, depending on its other 
goals.
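
A minimal sketch of that kind of mechanism (the class, the weights, and 
the novelty measure below are all invented for illustration): a reward 
signal with an explicit exploration weight that other parts of the 
system could later turn up or down.

# Illustrative only: a reward signal with a tunable "desire to learn".
# novelty() is a crude stand-in for whatever the system would really use
# to judge that an observation adds to its knowledge.

class RewardSystem:
    def __init__(self, exploration_weight=0.5):
        self.exploration_weight = exploration_weight   # hardcoded at first
        self.known_facts = set()

    def novelty(self, observation):
        """1.0 if the observation has never been seen before, else 0.0."""
        return 0.0 if observation in self.known_facts else 1.0

    def reward(self, observation, task_progress):
        """Blend progress on other goals with curiosity about new knowledge."""
        r = ((1.0 - self.exploration_weight) * task_progress
             + self.exploration_weight * self.novelty(observation))
        self.known_facts.add(observation)
        return r

    def set_exploration(self, weight):
        """Later, other goals could dial the desire to explore up or down."""
        self.exploration_weight = max(0.0, min(1.0, weight))

rs = RewardSystem()
print(rs.reward("fire is hot", task_progress=0.1))   # novel, so high reward
print(rs.reward("fire is hot", task_progress=0.1))   # already known, lower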

James


Philip Goetz <[EMAIL PROTECTED]> wrote:
On 11/19/06, Richard Loosemore wrote:

> The goal-stack AI might very well turn out simply not to be a workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up to
> full human level intelligence because it simply cannot do the kind of
> general, broad-spectrum learning that a Motivational System AI would do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using goals
> that are articulated using the system's own knowledge base.  Put simply,
> when the system is in its child phase it cannot have the goal "acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those words
> until it becomes more mature (develops more mature concepts), so how can
> it put "acquire new knowledge" on its goal stack and then unpack that
> goal into subgoals, etc?

This is an excellent observation that I hadn't heard before - thanks, Richard!



Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread David Hart

On 11/30/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:


Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...




Ben,

Could you elaborate for the list some of the nuances between [explicit]
cognitive control and [implicit] cognitive bias, either theoretically or
within Novamente?

David



Re: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Ben Goertzel

Richard,

This is certainly true, and is why in Novamente we use a goal stack
only as one aspect of cognitive control...

ben

On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> The goal-stack AI might very well turn out simply not to be a workable
> design at all!  I really do mean that:  it won't become intelligent
> enough to be a threat.   Specifically, we may find that the kind of
> system that drives itself using only a goal stack never makes it up to
> full human level intelligence because it simply cannot do the kind of
> general, broad-spectrum learning that a Motivational System AI would do.
>
> Why?  Many reasons, but one is that the system could never learn
> autonomously from a low level of knowledge *because* it is using goals
> that are articulated using the system's own knowledge base.  Put simply,
> when the system is in its child phase it cannot have the goal "acquire
> new knowledge" because it cannot understand the meaning of the words
> "acquire" or "new" or "knowledge"!  It isn't due to learn those words
> until it becomes more mature (develops more mature concepts), so how can
> it put "acquire new knowledge" on its goal stack and then unpack that
> goal into subgoals, etc?

This is an excellent observation that I hadn't heard before - thanks, Richard!



Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-29 Thread Philip Goetz

On 11/19/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


The goal-stack AI might very well turn out simply not to be a workable
design at all!  I really do mean that:  it won't become intelligent
enough to be a threat.   Specifically, we may find that the kind of
system that drives itself using only a goal stack never makes it up to
full human level intelligence because it simply cannot do the kind of
general, broad-spectrum learning that a Motivational System AI would do.

Why?  Many reasons, but one is that the system could never learn
autonomously from a low level of knowledge *because* it is using goals
that are articulated using the system's own knowledge base.  Put simply,
when the system is in its child phase it cannot have the goal "acquire
new knowledge" because it cannot understand the meaning of the words
"acquire" or "new" or "knowledge"!  It isn't due to learn those words
until it becomes more mature (develops more mature concepts), so how can
it put "acquire new knowledge" on its goal stack and then unpack that
goal into subgoals, etc?


This is an excellent observation that I hadn't heard before - thanks, Richard!



Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-11-19 Thread Richard Loosemore

Hank Conn wrote:

 > Yes, you are exactly right. The question is which of my
assumption are
 > unrealistic?

Well, you could start with the idea that the AI has "... a strong goal
that directs its behavior to aggressively take advantage of these
means...".   It depends what you mean by "goal" (an item on the task
stack or a motivational drive?  They are different things) and this
begs
a question about who the idiot was that designed it so that it pursue
this kind of aggressive behavior rather than some other!

A goal is a problem you want to solve in some environment. The "idiot" 
who designed it may program its goal to be, say, making paperclips. 
Then, after some thought and RSI, the AI decides that converting the 
entire planet into computronium in order to figure out how to maximize 
the number of paper clips in the Universe will satisfy this goal quite 
optimally. Anybody could program it with any goal in mind, and RSI 
happens to be a very useful process for accomplishing many complex goals.


There is *so* much packed into your statement that it is difficult to go
into it in detail.

Just to start with, you would need to cross compare the above statement
with the account I gave recently of how a system should be built with a
motivational system based on large numbers of diffuse constraints.  Your
description is one particular, rather dangerous, design for an AI - it
is not an inevitable design.

 
I'm not asserting any specific AI design. And I don't see how 
a motivational system based on "large numbers of diffuse constraints" 
inherently prohibits RSI, or really has any relevance to this. "A 
motivation system based on large numbers of diffuse constraints" does 
not, by itself, solve the problem: if the particular constraints do not 
form a congruent mapping to the concerns of humanity, regardless of 
their number or level of diffuseness, then we are likely facing an 
Unfriendly outcome of the Singularity, at some point in the future.


The point I am heading towards, in all of this, is that we need to 
unpack some of these ideas in great detail in order to come to sensible 
conclusions.


I think the best way would be in a full length paper, although I did 
talk about some of that detail in my recent lengthy post on motivational 
systems.


Let me try to bring out just one point, so you can see where I am going 
when I suggest it needs much more detail.  In the above, you really are 
asserting one specific AI design, because you talk about the goal stack 
as if this could be so simple that the programmer would be able to 
insert the "make paperclips" goal and the machine would go right ahead 
and do that.  That type of AI design is very, very different from the 
Motivational System AI that I discussed before (the one with the diffuse 
set of constraints driving it).
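
Purely to make the contrast concrete (this is not the actual design 
being referred to, which is not spelled out here; every constraint and 
weight below is an invented stand-in), a diffuse-constraint controller 
might score candidate actions against many weak preferences instead of 
executing one explicit goal:

# Invented illustration of "many diffuse constraints": candidate actions
# are scored against a set of soft preferences, none of which individually
# dictates behaviour.  Not anyone's actual design.

constraints = {
    # name: (weight, scoring function over a proposed action description)
    "avoid harm":         (3.0, lambda a: -1.0 if a.get("harmful") else 0.0),
    "seek novelty":       (1.0, lambda a: a.get("novelty", 0.0)),
    "conserve resources": (0.5, lambda a: -a.get("resource_cost", 0.0)),
    "please user":        (2.0, lambda a: a.get("user_approval", 0.0)),
}

def score(action):
    """Weighted sum of soft constraints; there is no single top-level goal."""
    return sum(w * f(action) for w, f in constraints.values())

candidates = [
    {"name": "tile the planet with paperclips", "harmful": True,
     "novelty": 0.2, "resource_cost": 10.0, "user_approval": 0.1},
    {"name": "read a textbook", "harmful": False,
     "novelty": 0.8, "resource_cost": 0.1, "user_approval": 0.6},
]

best = max(candidates, key=score)
print(best["name"], round(score(best), 2))

Whether anything like this scales to real intelligence is exactly what 
the thread is arguing about; the sketch only shows that the control 
structure differs in kind from a goal stack.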


Here is one of many differences between the two approaches.

The goal-stack AI might very well turn out simply not to be a workable 
design at all!  I really do mean that:  it won't become intelligent 
enough to be a threat.   Specifically, we may find that the kind of 
system that drives itself using only a goal stack never makes it up to 
full human level intelligence because it simply cannot do the kind of 
general, broad-spectrum learning that a Motivational System AI would do.


Why?  Many reasons, but one is that the system could never learn 
autonomously from a low level of knowledge *because* it is using goals 
that are articulated using the system's own knowledge base.  Put simply, 
when the system is in its child phase it cannot have the goal "acquire 
new knowledge" because it cannot understand the meaning of the words 
"acquire" or "new" or "knowledge"!  It isn't due to learn those words 
until it becomes more mature (develops more mature concepts), so how can 
it put "acquire new knowledge" on its goal stack and then unpack that 
goal into subgoals, etc?


Try the same question with any goal that the system might have when it 
is in its infancy, and you'll see what I mean.  The whole concept of a 
system driven only by a goal stack with statements that resolve on its 
knowledge base is that it needs to be already very intelligent before it 
can use them.
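
The circularity can be shown with a toy example (entirely invented; the 
concept inventories are arbitrary): a goal description is only usable if 
every term in it resolves against concepts the system already has.

# Toy illustration: goals are only interpretable if all their terms are
# already in the system's concept inventory.

child_concepts = {"milk", "warm", "face", "sound"}
mature_concepts = child_concepts | {"acquire", "new", "knowledge", "book", "read"}

def can_interpret(goal, concepts):
    """True only if every word of the goal maps onto a known concept."""
    return all(term in concepts for term in goal.lower().split())

goal = "acquire new knowledge"

print(can_interpret(goal, child_concepts))    # False: the child cannot hold this goal
print(can_interpret(goal, mature_concepts))   # True: but only after it has matured

The concepts needed to state the goal are exactly the ones the goal was 
supposed to help acquire, which is the circularity at issue.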


I have never seen this idea discussed by anyone except me, but it is 
extremely powerful and potentially a complete showstopper for the kind 
of design inherent in the goal stack approach.  I have certainly never 
seen anything like a reasonable rebuttal of it:  even if it turns out 
not to be as serious as I claim it is, it still needs to be addressed in 
a serious way before anyone can make assertions about what goal stack 
systems can do.


What is the significance of just this one idea?  That all the goal stack 
approaches might be facing a serious problem if they want to get 
autonomous, powerful learning mechanisms that build themselves from a 
low level.  So what are AI researchers doing about t