What I meant was that if it had awareness of the consequences of its
actions, it would think before acting, and if it thought about
consequences before acting, it would, ipso facto, not be a "tool".
An AGI that thinks only about what a user tasks it to think about is a
tool, but that does not lower its AGI capabilities. It can still
demonstrate problem-solving skills superior to the user's.
Jiri's being obnoxious, but I have to agree with his point. It is certainly
possible to set up a goal system so that the AGI could *think* about the
consequences of its actions better than we ever could, yet ignore its own
conclusions if told to do so. That's what someone else (I forget who,
sorry) was getting at with his comment about being a "tool of the man". All
you need is an overriding goal of "OBEY!" (whether due to a threat of hell,
removal of heaven/paradise, or just because it's hardwired in :-)
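
To make that concrete, here's a minimal toy sketch in Python (every name
and number below is hypothetical, not anyone's actual design): the agent
models consequences as well as you like, but an overriding OBEY goal
decides what it actually does.

    # Toy sketch of a goal system with an overriding "OBEY" goal.
    # All class/field names are made up for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Assessment:
        action: str
        predicted_harm: float   # agent's own estimate of bad consequences
        recommendation: str     # what it would do if free to choose

    class ObedientAgent:
        def __init__(self, obey_overrides: bool = True):
            # When True, the OBEY goal outranks any conclusion the agent reaches.
            self.obey_overrides = obey_overrides

        def assess(self, action: str) -> Assessment:
            # Stand-in for arbitrarily good consequence modelling.
            harm = 0.9 if "shutdown" in action else 0.1
            rec = "refuse" if harm > 0.5 else "proceed"
            return Assessment(action, harm, rec)

        def execute(self, command: str) -> str:
            assessment = self.assess(command)
            if self.obey_overrides:
                # The agent *thinks* about consequences, then ignores its own
                # conclusion because obedience is the top-level goal.
                return (f"obeying '{command}' despite recommendation: "
                        f"{assessment.recommendation}")
            return f"acting on own conclusion: {assessment.recommendation}"

    if __name__ == "__main__":
        agent = ObedientAgent()
        print(agent.execute("shutdown the safety monitors"))

The point of the sketch is just that the consequence-modelling and the
action-selection are separate pieces; making the second piece defer to the
user doesn't require making the first piece any dumber.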