Kaj Sotala wrote:
On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Kaj Sotala wrote:
 > Well, the basic gist was this: you say that AGIs can't be constructed
 > with built-in goals, because a "newborn" AGI hasn't yet built up
 > the concepts needed to represent the goal. Yet humans seem to
 > have built-in goals (using the term a bit loosely, as not all goals
 > manifest in everyone), despite the fact that newborn humans
 > haven't yet built up the concepts needed to represent those goals.
 >
Oh, complete agreement here.  I am only saying that the idea of a
 "built-in goal" cannot be made to work in an AGI *if* one decides to
 build that AGI using a "goal-stack" motivation system, because the
 latter requires that any goals be expressed in terms of the system's
 knowledge.  If we step away from that simplistic type of GS system, and
 instead use some other type of motivation system, then I believe it is
 possible for the system to be motivated in a coherent way, even before
 it has the explicit concepts to talk about its motivations (it can
 pursue the goal "seek Momma's attention" long before it can explicitly
 represent the concept of [attention], for example).

Alright. But previously, you said that Omohundro's paper, which to me
seemed to be a general analysis of the behavior of *any* minds with
(more or less) explicit goals, looked like it was based on a
'goal-stack' motivation system. (I believe this has also been the
basis of your critique for e.g. some SIAI articles about
friendliness.) If built-in goals *can* be built into
motivational-system AGIs, then why do you seem to assume that AGIs
with built-in goals are goal-stack ones?

I seem to have caused lots of confusion earlier on in the discussion, so let me backtrack and try to summarize the structure of my argument.

1) Conventional AI does not have a concept of a "Motivational-Emotional System" (MES), the way that I use that term, so when I criticised Omohundro's paper for referring only to a "Goal Stack" control system, I was really saying no more than that he was assuming the AI would be driven by the system that all conventional AIs are supposed to have. These two ways of controlling an AI are radically different designs.

2) Not only are MES and GS different classes of drive mechanism, they also make very different assumptions about the general architecture of the AI. When I try to explain how an MES works, I often get tangled up in the problem of explaining the general architecture that lies behind it (which does, I admit, cause much confusion). I sometimes use the terms "molecular" or "sub-symbolic" to describe that architecture.

2(a) I should say something about the architecture difference. In a sub-symbolic architecture you would find that the significant "thought events" are the result of clouds of sub-symbolic elements interacting with one another across a broad front. This is to be contrasted with the way that symbols interact in a regular symbolic AI, where symbols are single entities that get plugged into well-defined mechanisms like deduction operators. In a sub-symbolic system, operations are usually the result of several objects *constraining* one another in a relatively weak manner, not the result of a very small number of objects slotting into a precisely defined, rigid mechanism. There is a flexibility inherent in the sub-symbolic architecture that is completely lacking in the conventional symbolic system.
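
To make the contrast concrete, here is a toy sketch in Python (purely illustrative; the names, data format and update rule are mine, invented for this message, not taken from any real design):

# Symbolic style: a few symbols slot into one rigid, all-or-nothing rule.
def deduce(rule, known_facts):
    # Fires only if every premise matches exactly; otherwise nothing happens.
    if set(rule["premises"]) <= known_facts:
        return rule["conclusion"]
    return None

# Sub-symbolic style: many elements weakly nudge each other's activations,
# and the "thought event" is whatever pattern the cloud settles into.
def relax(activations, constraints, steps=50, rate=0.1):
    for _ in range(steps):
        for (a, b), weight in constraints.items():
            # Each link weakly pulls element a toward (weight > 0) or away
            # from (weight < 0) the current activation of element b.
            activations[a] = min(1.0, max(0.0,
                activations[a] + rate * weight * activations[b]))
    return activations

In the second case no single element or rule decides the outcome; the result is over-determined by many weak constraints, which is exactly where the flexibility comes from.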

3) It is important to understand that in an AI that uses the MES drive system, there is *also* a goal stack, quite similar to what is found in a GS-driven AI, but this goal stack is entirely subservient to the MES, and it plays a role only in the day to day (and moment to moment) thinking of the system.
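
As a cartoon of that relationship (again just an illustration, with invented names and numbers, not a real implementation):

motivations = {"curiosity": 0.7, "social_approval": 0.9}   # diffuse MES state

goal_stack = [("explore the toy box",    {"curiosity": 1.0}),
              ("seek Momma's attention", {"social_approval": 1.0})]

def reprioritise(stack, mes):
    # Score each goal by how strongly it resonates with the current
    # motivations, then sort so the most relevant goal sits on top (last).
    score = lambda affinities: sum(mes.get(m, 0.0) * w
                                   for m, w in affinities.items())
    stack.sort(key=lambda goal: score(goal[1]))
    return stack

reprioritise(goal_stack, motivations)   # "seek Momma's attention" ends up on top

The stack is still there and still does the moment-to-moment bookkeeping, but it never gets to drive the system on its own: the MES continually re-orders it.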

4) I plead guilty to saying things like "... Goal-Stack motivation system..." when what I should do is use the word "motivation" only in the context of an MES system. A better wording would have been "... Goal-Stack *drive* system...". Or perhaps "... Goal-Stack *control* system...".

5) The main thrust of my attack on GS-driven AIs is that goal stacks were invented in the context of planning problems, and were never intended to be used as the global control system for an AI that is capable of long-range development. So you will find me saying things like "A GS drive system is appropriate for handling goals like 'Put the red pyramid on top of the green block', but it makes no sense in the context of goals like 'Be friendly to humans'". Most AI people assume that a GS control system *must* be the way to go, but I would argue that they are in denial about its uselessness for that global role. They accept the GS largely because they see no alternative ... and they see no alternative because the architecture used by most conventional AI does not easily admit of any other type of drive system.
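
To show what I mean about goal stacks being at home in planning, here is a minimal goal-stack planner in the classic STRIPS spirit (my own toy encoding; the operator format and names are invented for this message):

def plan(state, stack, ops):
    # state: set of facts; stack: goals and actions; ops: fact -> operator.
    actions = []
    while stack:
        item = stack.pop()
        if isinstance(item, str):         # a goal: a fact to make true
            if item in state:
                continue                  # already satisfied
            op = ops[item]                # operator whose effect is this fact
            stack.append(("apply", op))   # apply it after its preconditions
            stack.extend(op["pre"])       # sub-goal on each precondition
        else:                             # ("apply", op): execute it now
            _, op = item
            state |= set(op["add"])
            state -= set(op["del"])
            actions.append(op["name"])
    return actions

ops = {"on(red_pyramid, green_block)": {
           "name": "stack(red_pyramid, green_block)",
           "pre":  ["clear(green_block)", "holding(red_pyramid)"],
           "add":  ["on(red_pyramid, green_block)"],
           "del":  ["holding(red_pyramid)"]}}
state = {"clear(green_block)", "holding(red_pyramid)"}
plan(state, ["on(red_pyramid, green_block)"], ops)
# -> ['stack(red_pyramid, green_block)']

Every goal here bottoms out in a concrete, checkable fact. Nothing remotely like that is available for a goal such as 'Be friendly to humans'.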

6) With regard to whether the drive system (MES or GS) has any "built-in" goals/motivations, I am not really trying to say that either type of drive system can or cannot have built-in goals/motivations. What I would say is that the whole idea of a GS-type AI becomes incoherent if we ask what happens when such an AI is raised from neonate form and given the task of acquiring most of its symbols by itself (instead of being hand-stuffed with symbols in Cyc-fashion). Under those circumstances the AI must have global goals that are extremely abstract (e.g. "Imitate Mommy"), but because of the nature of GS systems, the system can do nothing until it unpacks the abstract goals into subgoals using its knowledge of what those abstract goals "mean" .... and by assumption, the neonate AI has no idea what an abstract goal like "Imitate" actually means. It *must* use its knowledge store to reduce the "Imitate Mommy" goal to subgoals, but its knowledge store is practically empty. There are various strategies that could be used to fix this problem, but I think that if you investigated them you would find that they eventually become so complicated that they morph into something equivalent to the Motivational-Emotional System that I am proposing.
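
In code terms the incoherence is as blunt as this (a caricature, with invented names):

knowledge_store = {}   # the neonate AI: almost nothing in it yet

def unpack(goal, knowledge):
    # A GS system can only reduce a goal via what it already knows.
    if goal not in knowledge:
        raise LookupError("cannot reduce %r: no knowledge of what it means" % goal)
    return knowledge[goal]       # the list of subgoals

unpack("Imitate Mommy", knowledge_store)
# -> LookupError: the goal stack stalls on day one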

7) When I criticise GS-type systems, I also say that they are deeply unstable (and in this respect I partially agree with Omohundro and many others). But while other people see a danger in this, I see something different. A GS-type AI will not, I believe, ever become stable enough to make it to full, human-level intelligence. Omohundro makes the blanket assumption that it would be superintelligent AND controlled by a Goal Stack. I say: how is it ever going to become superintelligent in the first place if it is controlled by something that will make it fall apart during its infancy? This is a very important point which deserves more attention than I can give it in a short message, but even from this sketch of an argument you can probably see that there is an issue here: I think that conventional-AI people are trying to have their cake and eat it too. They want to argue that an AI could be extremely unpredictable if it were controlled by a Goal Stack, but at the same time they want to assume that it will be stable enough to make it through a long and intellectually strenuous childhood and become superintelligent.

So now:  does that answer the specific question you asked above?


 The way to get around that problem is to notice two things.  One is that
 the sex drives can indeed be there from the very beginning, but in very
 mild form, just waiting to be kicked into high gear later on.  I think
 this accounts for a large chunk of the explanation (there is evidence
 for this:  some children are explicitly engaged in sex-related
 activities at the age of three, at least).  The second part of the
 explanation is that, indeed, the human mind *does* have trouble making
 an easy connection to those later concepts: sexual ideas do tend to get
 attached to the most peculiar behaviors.  Perhaps this is a sign that
 the hook-up process is not straightforward.

This sounds like the beginnings of the explanation, yes.


I am in a very busy phase right now, but as part of what I am doing I may get time to write out a full description of the sub-symbolic architecture and the MES. I'll post these when they are (at least half) done.



Richard Loosemore
