Mark Waser wrote:
>> I think here we need to consider A. Maslow's hierarchy of needs.
>> That an AGI won't have the same needs as a human is, I suppose,
>> obvious, but I think it's still true that it will have a "hierarchy"
>> (which isn't strictly a hierarchy). I.e., it will have a large set
>> of motives, and which one it is seeking to satisfy at any moment will
>> shift as the previously most urgent motive becomes satisfied.
> I agree with all of this.
>> If it were a human, we could say that breathing was the most urgent
>> need... but usually it's so well satisfied that we don't even think
>> about it. Motives, then, will have satisficing as their aim. Only
>> aberrant mental functions will attempt to increase the satisfaction of
>> some particular goal without limit. (Note that some drives in humans
>> do seem to occasionally go into that "satisfy increasingly without
>> limit" mode, like the quest for wealth or power, but in most sane
>> people these are reined in. This seems to indicate that there is a
>> real danger here... and also that it can be avoided.)
> I agree with this, except that I believe humans *frequently* aim to
> optimize rather than satisfice (frequently to their detriment -- in
> terms of happiness as well as in the real costs of continuing the
> search past a simple satisfaction point).
> The quest for pleasure (a.k.a. addiction) is also distressingly
> frequent in humans.
>> Do you think that any of this contradicts what I've written thus far?
> I don't immediately see any contradictions.
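To pin down the picture I think we agree on, here is a rough Python
sketch (all the names are mine, purely illustrative): each motive
satisfices against a setpoint, urgency is the shortfall below that
setpoint, and attention goes to whichever motive is currently most
urgent. A well-satisfied motive (breathing, usually) contributes no
urgency at all.

    class Motive:
        def __init__(self, name, setpoint, level=0.0):
            self.name = name          # e.g. "oxygen", "energy", "curiosity"
            self.setpoint = setpoint  # "satisfied enough" threshold
            self.level = level        # current degree of satisfaction

        def urgency(self):
            # Shortfall below the setpoint.  The clamp to zero is the
            # satisficing part: a satisfied motive stops pulling.  An
            # "aberrant" drive would be one missing this clamp, so it
            # keeps optimizing past its setpoint without limit.
            return max(0.0, self.setpoint - self.level)

    def most_urgent(motives):
        # Which motive is pursued changes as satisfaction levels change.
        return max(motives, key=lambda m: m.urgency())

    motives = [Motive("oxygen", setpoint=1.0, level=1.0),
               Motive("energy", setpoint=1.0, level=0.4),
               Motive("curiosity", setpoint=0.6, level=0.5)]
    print(most_urgent(motives).name)  # -> "energy"; oxygen never surfaces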
If the motives depend on satisficing, and questing for unlimited
fulfillment is avoided, then the danger is limited. The universe
won't be converted into toothpicks if part of setting the goal
"toothpicks!" is limiting the quantity of toothpicks. (Limiting it
reasonably might almost be a definition of friendliness... or at least
of neutral behavior.)
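Concretely (and again this is only a sketch, with names of my own
invention), the limit could be made part of the statement of the goal
itself, so that an unbounded goal simply can't be expressed:

    class Goal:
        def __init__(self, description, quantity_limit):
            # The limit belongs to the statement of the problem; a goal
            # with no finite bound can't even be constructed.
            if quantity_limit is None or quantity_limit <= 0:
                raise ValueError("a goal must state a finite, positive limit")
            self.description = description
            self.quantity_limit = quantity_limit

        def satisfied(self, produced):
            return produced >= self.quantity_limit

    toothpicks = Goal("make toothpicks", quantity_limit=10_000)
    print(toothpicks.satisfied(10_000))  # True -- and then we stop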
And, though I'm not clear on how this should be set up, this
"limitation" should be a built-in primitive, i.e. not something subject
to removal, but only to strengthening or weakening via learning. It
should antedate the recognition of visual images. But it needs to have
a slightly stronger residual limitation than it does in people. Or
perhaps its initial appearance needs to be during the formulation of the
statement of the problem; i.e., a solution to a problem can't be sought
without knowing its limits. People seem to manage that via a dynamic
sensing approach, which sometimes suffers from inadequate feedback
mechanisms (for saying "Enough!").
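One way such a primitive might be sketched (speculatively, with
invented names and numbers): the hard bounds are wired in, learning can
only move a strength parameter between them, and the dynamic sensing
shows up as a feedback signal that can say "Enough!" at any time:

    HARD_FLOOR, HARD_CEILING = 0.1, 10.0  # wired in; learning can't touch these

    class LimitPrimitive:
        def __init__(self, scale=1.0):
            self.scale = scale  # learned strength of the limitation

        def adjust(self, delta):
            # Learning may strengthen or weaken the limit, but only within
            # the hard bounds; there is no path that removes it entirely.
            self.scale = min(HARD_CEILING, max(HARD_FLOOR, self.scale + delta))

        def enough(self, produced, goal_limit, feedback_says_enough):
            # Stop at the scaled limit, or as soon as the environment says
            # "Enough!" -- the feedback mechanism people sometimes lack.
            return produced >= goal_limit * self.scale or feedback_says_enough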
(It's not clear to me that this differs from what you are saying, but it
does seem to address a part of what you were addressing, and I wasn't
really clear on how you intended the satisfaction of goals to be
limited.)