Pei,
> The concept of "top-level goals" (or "super goals") in this discussion
> is often ambiguous. It can mean (1) the initial (given or built-in)
> goal(s) from which all the other goals are derived, or (2) the
> dominating goal when conflicts happen among goals. Many people
> implicitly assume they are the same, but they usually are different in
> the human mind, and there is no reason to assume that in AGIs they will
> be the same. Which do you mean?

I mean (2) ... however, it is also the case that most of the system's
other goals *will* be derived from the top-level goal in sense (2), in
NM/OpenCog's goal system.

> How much testing is enough? In human history, many initially
> benign-looking ideas led to long-term troubles. I don't think there
> are ways to reach conclusive conclusions, except in special domains.

Agreed.

> I don't think any AGI system can maintain a top-down goal system in
> the sense that the child-goals are logically consistent with the
> parent-goals, unless the world/environment is assumed to be closed and
> fully predictable.

True ... but it can try to maintain such consistency probabilistically!

> Not really. As soon as you agree that the system in principle has
> insufficient knowledge and resources, it directly follows that the
> system cannot be absolutely sure whether a "subgoal" derived according
> to the system's current beliefs will indeed lead to the satisfaction
> of the "supergoal" that produced it. What the system does may reduce
> this inconsistency, but cannot avoid it. This is the "big picture" I
> talked about.

I agree, and the NM/OpenCog inference approach is all about uncertain,
probabilistic inference ... not absolute certainty.

> If you propose your solution as one way to increase the consistency in
> goal-derivation, I have no problem.
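To make "maintaining consistency probabilistically" slightly more concrete, here is a toy Python sketch -- purely illustrative, with invented function names and made-up numbers, not actual NM/OpenCog code. The idea is just that each goal-derivation step carries an estimated probability that achieving the child goal serves its parent, and the system can flag derivation chains whose estimated consistency with the supergoal drops too low:

```python
# Toy illustration (NOT NM/OpenCog code): each derivation step has an
# estimated probability that achieving the child goal satisfies its parent.

def chain_consistency(step_probs):
    """Rough estimate of the probability that a subgoal chain serves the
    supergoal, assuming (unrealistically) that the steps are independent."""
    p = 1.0
    for q in step_probs:
        p *= q
    return p

def flag_weak_chains(chains, threshold=0.5):
    """Return the derivation chains whose estimated consistency falls
    below the threshold, as candidates for re-derivation."""
    return [c for c in chains if chain_consistency(c) < threshold]

# Two hypothetical derivation chains with per-step confidence estimates:
chains = [[0.9, 0.95, 0.9], [0.9, 0.6, 0.7]]
print(flag_weak_chains(chains))  # only the second chain falls below 0.5
```

The point of the toy is only that consistency becomes a graded, revisable quantity rather than a logical guarantee, which is exactly why it can never be absolute under insufficient knowledge and resources.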
Yes, that is the nature of my proposal.

> It is just that in the "AGI ethics" discussion, there are beliefs that
> AGI systems can be designed with guaranteed "friendliness" by
> carefully choosing the "supergoal", and making all the "subgoals"
> consistent with them, which, to me, is a completely wrong idea (though
> I respect the motivation).

Perhaps there are those beliefs, but they are not MY beliefs ;-)

ben

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
