Tom McCabe wrote:
The problem isn't that the AGI will violate its
original goals; it's that the AGI will eventually do
something that will destroy something really important
in such a way as to satisfy all of its constraints. By
setting constraints on the AGI, you're trying to think
of everything bad the AGI might possibly do in
advance, and that isn't going to work.
- Tom
What if one of the goals were "minimize the amount of destruction that
you do"?
I'll grant you that that particular goal might result in a rather
useless AI, but it could be quite useful if you adjusted the strength of
that sub-goal properly with respect to its other goals.
Note that this isn't a constraint (i.e., a part of the problem); rather,
it is a part of what the AI considers to be its "core being".
Presumably any strong AI will be presented with several problems. Each
one will have constraints appropriate to that problem, the solution to
that problem will have a certain value, and each potential solution will
have its associated cost, as will postponing the solution to the
problem. But these will be transient, and not a part of what the AI
thinks of as "myself".
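
To make that concrete, here is a rough Python sketch of what I mean.
Everything in it (the names, the numbers, the simple linear scoring) is
just my own illustration, not a claim about how a real AGI would be
built. The point is only where things live: the destruction_weight sits
in the persistent Agent object (the "core being"), while the
constraints, value, and costs arrive packaged with each transient
Problem and are thrown away afterwards.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Candidate:
    """One potential solution to the current problem."""
    description: str
    cost: float          # resources spent carrying it out
    destruction: float   # estimated damage it does along the way
    satisfies: bool      # does it meet this problem's constraints?


@dataclass
class Problem:
    """Transient: handed to the AI, then discarded once solved."""
    value_of_solution: float    # what solving it is worth
    cost_of_postponing: float   # penalty for doing nothing for now
    candidates: List[Candidate] = field(default_factory=list)


class Agent:
    """The persistent part, what the AI thinks of as its core being."""

    def __init__(self, destruction_weight: float):
        # Strength of the core "minimize destruction" sub-goal.
        # Set it too high and the agent is uselessly timid;
        # set it near zero and Tom's worry applies in full.
        self.destruction_weight = destruction_weight

    def score(self, problem: Problem, c: Candidate) -> float:
        if not c.satisfies:
            return float("-inf")   # per-problem hard constraint
        return (problem.value_of_solution
                - c.cost
                - self.destruction_weight * c.destruction)

    def choose(self, problem: Problem) -> str:
        # Postponing is always an option, priced at the cost of waiting.
        best_desc, best_score = "postpone", -problem.cost_of_postponing
        for c in problem.candidates:
            s = self.score(problem, c)
            if s > best_score:
                best_desc, best_score = c.description, s
        return best_desc


if __name__ == "__main__":
    agent = Agent(destruction_weight=10.0)
    problem = Problem(
        value_of_solution=100.0,
        cost_of_postponing=5.0,
        candidates=[
            Candidate("bulldoze the obstacle", cost=10.0,
                      destruction=20.0, satisfies=True),
            Candidate("route around it", cost=30.0,
                      destruction=0.5, satisfies=True),
        ],
    )
    # With destruction_weight=10 the careful plan wins (score 65 vs. -110);
    # drop the weight to 0 and the bulldozer wins instead (90 vs. 70).
    print(agent.choose(problem))

Tuning destruction_weight is exactly the "adjust the strength of that
sub-goal" knob I mentioned above.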