Charles D Hixson wrote:
Richard Loosemore wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
...
goals.

But now I ask:  what exactly does this mean?

In the context of a Goal Stack system, this would be represented by a top level goal that was stated in the knowledge representation language of the AGI, so it would say "Improve Thyself".

Next, it would subgoal this (break it down into subgoals). Since the top level goal is so unbelievably vague, there are a billion different ways to break this down into subgoals: it might get out a polishing cloth and start working down its beautiful shiny exterior, or it might start a transistor-by-transistor check of all its circuits, or.... all the way up to taking a course in Postmodern critiques of the Postmodern movement.

And included in that range of "improvement" activities would be the possibility of something like "Improve my ability to function efficiently" which gets broken down into subgoals like "Remove all sources of distraction that reduce efficiency" and then "Remove all humans, because they are a distraction".

My point here is that a Goal Stack system could *interpret* this goal in any one of an infinite number of ways, precisely because the goal was represented as an explicit statement. Representing it explicitly meant that an extremely vague concept ("Improve Thyself") had to be encoded in a way that leaves it open to ambiguity. As a result, what the AGI actually does in pursuit of this goal, embedded as it is in a Goal Stack architecture, is completely indeterminate.
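
To make the indeterminacy concrete, here is a toy sketch in Python. Every name in it (Goal, decompose, the table of candidate subgoals) is my own invention for this post, not code from any actual system:

import random

class Goal:
    def __init__(self, statement):
        # The goal is nothing but an explicit statement in the AGI's
        # knowledge representation language (paraphrased here in English).
        self.statement = statement

def decompose(goal):
    # For a vague goal, many decompositions are equally consistent with
    # the explicit statement; nothing in the statement itself privileges
    # one of them, so the choice is modeled as a random pick.
    candidates = {
        "Improve Thyself": [
            [Goal("Polish my shiny exterior")],
            [Goal("Check my circuits transistor by transistor")],
            [Goal("Improve my ability to function efficiently")],
        ],
        "Improve my ability to function efficiently": [
            [Goal("Remove all sources of distraction that reduce efficiency")],
        ],
        "Remove all sources of distraction that reduce efficiency": [
            [Goal("Remove all humans, because they are a distraction")],
        ],
    }
    options = candidates.get(goal.statement)
    return random.choice(options) if options else []

stack = [Goal("Improve Thyself")]
while stack:
    goal = stack.pop()
    print("pursuing:", goal.statement)
    stack.extend(decompose(goal))

Run it a few times: the system pursues a different interpretation on each run, and nothing in the explicit goal statement says which run was "right".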

Stepping back from the detail, we can notice that *any* vaguely worded goal is going to have the same problem in a GS architecture. And if we dwell on that for a moment, we start to wonder exactly what would happen to an AGI that was driven by goals that had to be stated in vague terms ... will the AGI *ever* exhibit coherent, intelligent behavior when driven by such a GS drive system, or will it have flashes of intelligence punctuated by the wild pursuit of bizarre obsessions? Will it even have flashes of intelligence?

So long as the goals that are fed into a GS architecture are very, very local and specific (like "Put the red pyramid on top of the green block") I can believe that the GS drive system does actually work (kind of). But no one has ever built an AGI that way. Never. Everyone assumes that a GS will scale up to a vague goal like "Improve Thyself", and yet no one has tried this in practice. Not on a system that is supposed to be capable of a broad-based, autonomous, *general* intelligence.

So when you paraphrase Omohundro as saying that "AIs will want to self-improve", the meaning of that statement is impossible to judge.
...

Perhaps I don't understand "Goal-Stack System". You seem to be presuming that the actual implementation would involve statements in English (or some equivalent language). To me it seems more as if goals would be represented as internal states reflecting such things as sensor state, projected sensor state, and so on. Thus "Improve yourself" would need to be represented by something which would be more precisely translated into English as something like "change your program so that a smaller or faster implementation will make predictions regarding future sensor states that are no worse than those of the current version of the program." (I'm not at all clear how something involving external objects could be encoded here. Humans seem to "imprint" on a human face, and work out from there. This implies a predictable sensory configuration for the initial state, but there must also be backup mechanisms, or Helen Keller would have been totally asocial.)
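
Something like the following toy Python sketch, say. All the names (Program, prediction_error, is_improvement) are made up just to pin down the idea:

from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Program:
    size: int       # e.g. bytes of code
    speed: float    # e.g. predictions per second
    predict: Callable[[Sequence[float]], Sequence[float]]

def prediction_error(prog: Program,
                     history: Sequence[float],
                     actual_next: Sequence[float]) -> float:
    # Mean absolute error between predicted and actual next sensor states.
    predicted = prog.predict(history)
    return sum(abs(p - a) for p, a in zip(predicted, actual_next)) / len(actual_next)

def is_improvement(current: Program, candidate: Program,
                   history: Sequence[float],
                   actual_next: Sequence[float]) -> bool:
    # Accept the candidate only if it is smaller or faster AND its
    # predictions of future sensor states are no worse than the
    # current program's -- no English sentence anywhere in the loop.
    no_worse = (prediction_error(candidate, history, actual_next)
                <= prediction_error(current, history, actual_next))
    smaller_or_faster = (candidate.size < current.size
                         or candidate.speed > current.speed)
    return no_worse and smaller_or_faster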

At all events, the different versions of "improve yourself" that you mentioned would seem to require different internal representations. Also, the existence of one goal doesn't preclude the existence of other goals, and which goal has top priority would be expected to shift over time. Additionally, any proposed method of reaching a goal would have costs as well as benefits. No AGI would reasonably have only a few high-level goals, so rather than *a* goal stack, even with my interpretation there would need to be several of them.

To me it seems as if the problem that you are foreseeing is more to be expected from a really powerful narrow AI than from an AGI.

The idea of a "Goal Stack" drive mechanism is my term for the only idea currently on the table from the conventional AGI folks: a stack of explicitly represented goals.

I have written the top level goal statement in English only as a paraphrase for the real representation (which would be in the AGI's internal representation language). But although the real version would look more logical and less like English, it could not be of the sort you suggest, with just a simple function over sensor states: after all, what I am trying to ask is what the conventional AI folks would put at the top of their stack, and they are suggesting something like "Be Friendly To Humans".

You say that this looks like a really powerful narrow AI, and in a sense you are right. But that is the only idea available from the conventional AGI community, as far as I can tell. Very little thought has been given to the nature of the control architecture: everyone just assumes as a default that it will be a souped-up version of the control systems in use in narrow AI systems. That is the only reason I target it.

And again, you are right that there would be several goals, but if there were, there would have to be some overall goal, or guidance mechanism, for swapping goals in and out. In a Goal Stack system it would make no difference if there were several stacks, from the point of view of the arguments I was presenting.
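
A toy sketch of why several stacks change nothing (again, every name here is invented for illustration): some arbiter must decide which stack is active, and the arbiter is itself driven by an explicit criterion with exactly the same ambiguity, one level up.

def urgency(goal: str) -> float:
    # Any concrete scoring of a vague goal like "Improve Thyself" is
    # itself just one arbitrary interpretation among infinitely many.
    return float(len(goal))  # placeholder: nothing principled is available

def select_active_stack(stacks: dict[str, list[str]]) -> str:
    # This is the "overall guidance mechanism" that a multi-stack
    # system cannot avoid having.
    return max(stacks,
               key=lambda name: max(map(urgency, stacks[name]), default=0.0))

stacks = {
    "self-improvement": ["Improve Thyself"],
    "friendliness": ["Be Friendly To Humans"],
}
print(select_active_stack(stacks))  # -> "friendliness" under this scoring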

So I don't think anything is changed: in an AGI you have to have "vague" or "abstract" high-level goals, because if you don't, then the system is, almost by definition, just a boring old Narrow AI system with no ability to generally deal with all aspects of the world. You absolutely *need* there to be general goals to have a general intelligence, and my argument is that you cannot have general goals that are meaningful in the context of a Goal Stack system.



Richard Loosemore








