On Sat, Aug 30, 2008 at 11:15 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> (1) Whether "goal drift" (I call "task alienation" in
>> http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0)
>> is always undesired --- your paper treats it as obviously bad.
>
> It's not always undesirable ... but I think we should seek to avoid it in
> dealing with
> **top level goals** in the context of the creation of AI systems more
> powerful than ourselves

The concept of "top-level goals" (or "supergoals") in this discussion
is often ambiguous. It can mean (1) the initial (given or built-in)
goal(s) from which all the other goals are derived, or (2) the
dominant goal when conflicts arise among goals. Many people
implicitly assume the two are the same, but they are usually different
in the human mind, and there is no reason to assume they will be the
same in AGIs. Which do you mean?
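For concreteness, the distinction can be sketched in a few lines of
code (a toy illustration of my own, not taken from NARS or OpenCog;
the class and method names are invented):

```python
# Toy sketch of the two senses of "top-level goal" discussed above.
# (My own illustration; not actual NARS or OpenCog code.)

class GoalSystem:
    def __init__(self, initial_goals):
        # Sense (1): the given/built-in goals from which the others are
        # derived. This set is fixed at construction time and never changes.
        self.initial_goals = tuple(initial_goals)
        # Priorities start uniform but are adjusted by experience.
        self.priority = {g: 1.0 for g in initial_goals}

    def reinforce(self, goal, delta):
        # Experience shifts priorities among goals (derived goals could
        # be added and reinforced the same way).
        self.priority[goal] = self.priority.get(goal, 0.0) + delta

    def dominant_goal(self):
        # Sense (2): the goal that wins when conflicts happen.
        # This can change over time even though initial_goals cannot.
        return max(self.priority, key=self.priority.get)


gs = GoalSystem(["survive", "learn"])
gs.reinforce("learn", 0.5)
assert gs.initial_goals == ("survive", "learn")  # sense (1): unchanged
assert gs.dominant_goal() == "learn"             # sense (2): experience-dependent
```

In such a system the two senses coincide only at the start; after any
learning, the dominant goal need not be an initial goal at all.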

> Goal drift among **subgoals** is just fine and can be a source of valued
> creativity, of course ...
> but goal drift among top-level goals seems less necessary

The "top-level goals" in sense (1) never change, while those in
sense (2) change as a function of the system's experience. Unless
the system's experience is fully predictable (in which case the
system would not be intelligent), there is no way to fully and
accurately bound those changes.

> In the case that a subgoal drifts, it can still be tested as to whether it
> fulfills the top-level goals or not

How much testing is enough? In human history, many initially
benign-looking ideas have led to long-term trouble. I don't think
there are ways to reach conclusive answers, except in special domains.

>> (1) This phenomenon is a root of many valuable properties, including
>> originality, creativity, and flexibility, and it explains many
>> things, including art appreciation, aimless play, even scientific
>> exploration. Without it, human beings would just be like other
>> animals, driven only by their built-in biological goals.
>
> Agree ... but humans don't have a structured, top-down goal system
> in the sense that a system like NM or OpenCog can.  We can build
> such goal systems in our minds and use them to partially govern our
> behavior, but these are running on top of our primordial biological
> goal systems... whose goals are concrete rather than abstract..

I don't think any AGI system can maintain a top-down goal system in
which the child-goals are logically consistent with the parent-goals,
unless the world/environment is assumed to be closed and fully
predictable.

For a concrete example: working on OpenCog is a subgoal derived from
the goal of building AGI, according to your knowledge. Nobody can
really "prove" (or, symmetrically, "disprove") the logical consistency
of these goals in the near future. If we demanded such a proof before
every action, we could do almost nothing. We derive sub-goals
according to our knowledge/beliefs, in the hope that they will serve
as means to achieve certain ends (the parent-goals, which may be
"top-level", or sub-goals of other goals), though we know for sure
that some of these hopes will fail.
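The structure of such a derivation might be sketched as follows (again
a toy illustration of mine, with invented names; the belief text and
confidence value are made-up examples):

```python
# Toy illustration (mine, not actual NARS/OpenCog code): a subgoal record
# keeps the belief that justifies it. In an open world that belief's
# confidence is always below 1, so the link is a hope, not a proof.

def derive_subgoal(parent, action, belief, confidence):
    assert confidence < 1.0, "in an open world no derivation is certain"
    return {"parent": parent,
            "subgoal": action,
            "belief": belief,
            "confidence": confidence}

link = derive_subgoal(
    parent="build AGI",
    action="work on OpenCog",
    belief="an open-source framework will accelerate AGI research",
    confidence=0.8)

def update_confidence(link, evidence_weight, success):
    # Testing the subgoal against the parent goal can raise or lower
    # confidence, but can never push it to exactly 1.0 in finite
    # experience, so some derived subgoals must eventually fail.
    c = link["confidence"]
    target = 1.0 if success else 0.0
    link["confidence"] = c + evidence_weight * (target - c)
    return link["confidence"]
```

No amount of calling `update_confidence` with successes makes the
confidence reach 1.0, which is the point: testing reduces, but cannot
eliminate, the inconsistency between a subgoal and its parent.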

>> (2) It is impossible to completely avoid this phenomenon in a truly
>> intelligent system, whether we like it or not. Your solution won't
>> change the big picture, even though it may help in some special cases.
>
> I agree due to the irreducible complexity of the environment, as noted
> above...
>
> However, the big picture is VERY BIG in this context ...

Not really. As soon as you agree that the system in principle has
insufficient knowledge and resources, it directly follows that the
system cannot be absolutely sure whether a "subgoal" derived according
to its current beliefs will indeed lead to the satisfaction of the
"supergoal" that produced it. What the system does may reduce this
inconsistency, but cannot eliminate it. This is the "big picture" I
was talking about.

If you propose your solution as one way to increase the consistency of
goal-derivation, I have no problem with it. It is just that in the
"AGI ethics" discussion, some believe that AGI systems can be designed
with guaranteed "friendliness" by carefully choosing the "supergoals"
and making all the "subgoals" consistent with them, which, to me, is a
completely wrong idea (though I respect the motivation).

Pei


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now