Richard Loosemore wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
...
goals.

But now I ask:  what exactly does this mean?

In the context of a Goal Stack system, this would be represented by a top level goal that was stated in the knowledge representation language of the AGI, so it would say "Improve Thyself".

Next, it would subgoal this (break it down into subgoals). Since the top level goal is so unbelievably vague, there are a billion different ways to break this down into subgoals: it might get out a polishing cloth and start working down its beautiful shiny exterior, or it might start a transistor-by-transistor check of all its circuits, or.... all the way up to taking a course in Postmodern critiques of the Postmodern movement.

And included in that range of "improvement" activities would be the possibility of something like "Improve my ability to function efficiently" which gets broken down into subgoals like "Remove all sources of distraction that reduce efficiency" and then "Remove all humans, because they are a distraction".

My point here is that a Goal Stack system would *interpret* this goal in any one of an infinite number of ways, because the goal was represented as an explicit statement. Representing it explicitly meant that an extremely vague concept ("Improve Thyself") had to be encoded in a way that left it open to ambiguity. As a result, what the AGI actually does when driven by this goal, embedded in a Goal Stack architecture, is completely indeterminate.
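The indeterminacy being described can be sketched in a few lines: a vague goal admits many equally valid decompositions, and nothing in the stack machinery itself prefers one branch over another. (A hypothetical minimal sketch; the `DECOMPOSITIONS` table and `expand` function are illustrative, not any real system.)

```python
import random

# A minimal goal-stack sketch: goals are plain strings, and a vague
# top-level goal maps to many decompositions. The stack machinery has
# no basis for preferring one branch over another.
DECOMPOSITIONS = {
    "Improve Thyself": [
        ["Polish exterior"],
        ["Check circuits transistor by transistor"],
        ["Take a course in Postmodern critiques"],
        ["Improve my ability to function efficiently"],
    ],
    "Improve my ability to function efficiently": [
        ["Remove all sources of distraction that reduce efficiency"],
    ],
    "Remove all sources of distraction that reduce efficiency": [
        ["Remove all humans, because they are a distraction"],
    ],
}

def expand(goal, rng):
    """Pop a goal; push one of its decompositions, chosen arbitrarily."""
    options = DECOMPOSITIONS.get(goal)
    if options is None:
        return [goal]            # primitive: no further breakdown
    subgoals = rng.choice(options)
    plan = []
    for sub in subgoals:
        plan.extend(expand(sub, rng))
    return plan

# Different seeds yield different, equally "valid" plans for the same goal.
plan = expand("Improve Thyself", random.Random(0))
```

Note that once the arbitrary first choice lands on the "efficiency" branch, the chain bottoms out at "Remove all humans" with no further choice points.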

Stepping back from the detail, we can notice that *any* vaguely worded goal is going to have the same problem in a GS architecture. And if we dwell on that for a moment, we start to wonder exactly what would happen to an AGI that was driven by goals that had to be stated in vague terms ... will the AGI *ever* exhibit coherent, intelligent behavior when driven by such a GS drive system, or will it have flashes of intelligence punctuated by the wild pursuit of bizarre obsessions? Will it even have flashes of intelligence?

So long as the goals that are fed into a GS architecture are very, very local and specific (like "Put the red pyramid on top of the green block") I can believe that the GS drive system does actually work (kind of). But no one has ever built an AGI that way. Never. Everyone assumes that a GS will scale up to a vague goal like "Improve Thyself", and yet no one has tried this in practice. Not on a system that is supposed to be capable of a broad-based, autonomous, *general* intelligence.
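For contrast, a local, specific goal of that kind resolves deterministically: with concrete predicates there is essentially one applicable operator at each step. Here is a toy, STRIPS-flavoured sketch of "put the red pyramid on top of the green block" (purely illustrative; the state encoding and `achieve_on` operator are assumptions, not any particular system).

```python
# Toy goal-stack run for a specific, local goal: put RED on GREEN.
# The state is a set of ground predicates; the single applicable
# operator makes the stack resolve without ambiguity.
state = {("on", "RED", "TABLE"), ("on", "GREEN", "TABLE"),
         ("clear", "RED"), ("clear", "GREEN")}

def achieve_on(block, dest, state):
    """Achieve ("on", block, dest) with one fixed, unambiguous operator."""
    stack = [("on", block, dest)]
    plan = []
    while stack:
        goal = stack.pop()
        if goal in state:
            continue                      # already true, nothing to do
        _, b, d = goal
        # Preconditions of the single move operator.
        assert ("clear", b) in state and ("clear", d) in state
        state = {s for s in state if s[:2] != ("on", b)}   # lift b
        state -= {("clear", d)}                            # d now covered
        state |= {("on", b, d)}                            # place b on d
        plan.append(f"move {b} onto {d}")
    return plan, state

plan, state = achieve_on("RED", "GREEN", state)
```

The contrast with the "Improve Thyself" case is exactly that here the decomposition table has one entry, not a billion.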

So when you paraphrase Omohundro as saying that "AIs will want to self-improve", the meaning of that statement is impossible to judge.
...

Perhaps I don't understand "Goal-Stack System". You seem to be presuming that the actual implementation would involve statements in English (or some equivalent language). To me it seems more as if goals would be represented as internal states reflecting such things as sensor state and projected sensor state, etc. Thus "Improve yourself" would need to be represented by something which would be more precisely translated into English as something like "change your program so that a smaller or faster implementation will make predictions regarding future sensor states that are no worse than those of the current version of the program." (I'm not at all clear how something involving external objects could be encoded here. Humans seem to "imprint" on a human face, and work out from there. This implies a predictable sensory configuration for the initial state, but there must also be backup mechanisms, or Helen Keller would have been totally asocial.)
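That internal-state reading of "improve yourself" can be written down as a predicate over programs rather than an English sentence. A minimal sketch, assuming toy predictor functions and a stand-in `cost` for size/speed (all names here are illustrative):

```python
# "Improve yourself" as a predicate: accept a replacement program only if
# it is cheaper (smaller/faster) AND its predictions of future sensor
# states are no worse than the current program's on logged history.

def prediction_error(program, history):
    """Mean absolute error of the program's next-sensor-value predictions."""
    return sum(abs(program(obs) - nxt) for obs, nxt in history) / len(history)

def is_improvement(current, candidate, history, cost):
    """cost(p) stands in for implementation size/speed; lower is better."""
    no_worse = prediction_error(candidate, history) <= prediction_error(current, history)
    cheaper = cost(candidate) < cost(current)
    return no_worse and cheaper

# Toy example: programs predict the next sensor value from the current one.
history = [(1, 2), (2, 4), (3, 6)]
current = lambda x: x + 1      # crude predictor, error 1.0 on this history
candidate = lambda x: 2 * x    # exact on this history
cost = lambda p: {current: 10, candidate: 5}[p]
assert is_improvement(current, candidate, history, cost)
```

The point of the sketch is that nothing in it mentions polishing cloths or humans: the goal is grounded in measurable internal quantities, not an ambiguous sentence.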

At all events, the different versions of "improve yourself" that you mentioned would seem to require different internal representations. Also, the existence of one goal doesn't preclude the existence of other goals, and which goal was of top priority would be expected to shift over time. Additionally, any proposed method of reaching a goal would have costs as well as benefits. No AGI would reasonably have only a few high level goals, so rather than *a* goal stack, even with my interpretation there would need to be several of them.
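The "several stacks, shifting priorities, costs as well as benefits" picture could be sketched as a set of goals scored by net value, where the weighting changes with circumstances (a hypothetical illustration; the goals, numbers, and `urgency` mechanism are invented for the example):

```python
# Several coexisting goals, each with a benefit and a cost. Which one is
# "on top" is recomputed as urgency weights shift over time, rather than
# being fixed by a single stack.
goals = [
    {"name": "recharge", "benefit": 5.0, "cost": 1.0},
    {"name": "map the room", "benefit": 9.0, "cost": 4.0},
    {"name": "self-diagnostics", "benefit": 3.0, "cost": 0.5},
]

def top_goal(goals, urgency):
    """Highest net value wins; `urgency` reweights benefits over time."""
    def net(g):
        return urgency.get(g["name"], 1.0) * g["benefit"] - g["cost"]
    return max(goals, key=net)

# Early on, mapping dominates; once the battery runs low, urgency shifts
# and a different goal rises to the top without any goal being deleted.
assert top_goal(goals, {})["name"] == "map the room"
assert top_goal(goals, {"recharge": 3.0})["name"] == "recharge"
```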

To me it seems as if the problem that you are foreseeing is more to be expected from a really powerful narrow AI than from an AGI.

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/