On Thu, Jun 12, 2008 at 6:30 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> A very diplomatic reply, it's appreciated.
>
> However, I have no desire (or time) to argue people into my point of view. I
> especially have no time to argue with people over what they did or didn't
> understand. And if someone wishes to state that I misunderstood what he
> understood, fine. If he wishes to go into detail about specifics of his idea
> that explain empirical facts that mine don't, I'm all ears. Otherwise, I have
> code to debug...
>

Haven't we all? ;-)

The classic argument for this point: if you don't want people to die,
you won't take a pill that makes you want to kill people, because
taking it would result in people dying.

U(x), like the whole physical makeup of the AI, is also part of the
territory, and its properties are among the things evaluated by U(x).
The message I tried to convey in the first post is that, for example,
the rationality of the AI's beliefs, which are a part of the AI, is a
rather important goal for the AI. Likewise, keeping U(x) from being
replaced by something wrong is a very important goal (which Jiri said
explicitly). You estimate value with your current utility function,
not with the modified one. If, before the modification is made, it
turns out that switching to a nirvana-class utility is undesirable
according to the current utility, the modification will be rejected.

Before you actually accept the new utility, its strange properties,
such as driving you into a do-nothing-and-be-happy attractor, don't
apply to you. The properties of the new utility function are just
elements of the new world-state x, to be evaluated by the current
utility function.

-- 
Vladimir Nesov
[EMAIL PROTECTED]

