On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>> But, how does your description not correspond to giving the AGI the
>> goals of being helpful and not harmful? In other words, what more does
>> it do than simply try for these? Does it pick goals randomly such that
>> they conflict only minimally with these?
>
> Actually, my description gave the AGI four goals: be helpful, don't be
> harmful, learn, and keep moving.
>
> Learn, all by itself, is going to generate an infinite number of subgoals.
> Learning subgoals will be picked based upon which is expected to yield the
> most learning while not being harmful.
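
As a toy illustration of what such a selection rule could mean (the numeric
estimates of expected learning and expected harm are simply assumed here;
producing them is the actual hard part):

    # Toy sketch: pick the subgoal that maximizes expected learning,
    # penalized by expected harm.  The estimates are assumed as given;
    # where they come from is the unsolved problem.
    from dataclasses import dataclass

    @dataclass
    class Subgoal:
        name: str
        expected_learning: float  # estimated knowledge gained (arbitrary units)
        expected_harm: float      # estimated harm caused (same scale)

    def pick_subgoal(candidates, harm_weight=10.0):
        """Return the candidate with the best learning-minus-harm score."""
        return max(candidates,
                   key=lambda g: g.expected_learning
                                 - harm_weight * g.expected_harm)

    candidates = [
        Subgoal("read a textbook",        2.0, 0.0),
        Subgoal("run a risky experiment", 9.0, 1.5),
    ]
    print(pick_subgoal(candidates).name)  # -> read a textbook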
>
> (and, by the way, be helpful and learn should both generate a
> self-protection subgoal in short order, with procreation following
> immediately behind)
>
> Arguably, be helpful would generate all three of the other goals, but
> learning and not being harmful *without* being helpful is a *much* better
> goal-set for a novice AI, because it prevents the "accidents" that happen
> when the AI thinks it is being helpful.  In fact, I've been tempted at times
> to drop be helpful entirely, since the other two will eventually generate
> it, with a lessened probability of trying-to-be-helpful accidents.
>
> Don't be harmful, by itself, will just make the AI turn itself off.
>
> The trick is that there needs to be a balance between goals.  Any
> single-goal intelligence is likely to be lethal, even if that goal is to
> help humanity.
>
> Learn, do no harm, help.  Can anyone come up with a better set of goals?
> (and, once again, note that learn does *not* override the other two -- there
> is meant to be a balance between the three).
>
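
The "balance" is at least the easy part to make concrete.  A minimal sketch,
with the goal weights and per-action utilities invented purely for
illustration (the open problem is defining those utility numbers over real
situations, not combining them):

    # Toy sketch: score every action against all three goals at once,
    # so no single goal dominates unless its weight does.  All numbers
    # here are made up for illustration.
    GOAL_WEIGHTS = {"learn": 1.0, "do_no_harm": 3.0, "help": 1.0}

    def score(utilities):
        """Weighted sum of per-goal utilities for one action."""
        return sum(GOAL_WEIGHTS[g] * u for g, u in utilities.items())

    actions = {
        "shut self off":     {"learn": 0.0, "do_no_harm": 1.0,  "help": 0.0},
        "reckless 'help'":   {"learn": 0.2, "do_no_harm": -9.0, "help": 5.0},
        "answer a question": {"learn": 0.5, "do_no_harm": 0.9,  "help": 0.8},
    }
    best = max(actions, key=lambda a: score(actions[a]))
    print(best)  # -> answer a question

Note what the sketch does *not* do: it says nothing about where any of those
numbers come from.  That gap is the whole point: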

And the AGI will just read the command "help", 'h'-'e'-'l'-'p', and will
know exactly what to do, and will be convinced to do it.  To implement even
this "simple" goal, you need to somehow communicate its functional structure
to the AGI; that won't just magically happen.  Don't talk about an AGI as if
it were a human; think about how exactly to implement what you want.
Today's rant on Overcoming Bias applies fully to such suggestions
( http://www.overcomingbias.com/2008/08/dreams-of-ai-de.html ).
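
To spell the gap out (names hypothetical, and the body deliberately left
unwritten, because writing it *is* the problem):

    # The string carries no content; the goal is whatever function you
    # actually wire in.  'utility_of_help' is a hypothetical stand-in
    # for the genuinely hard part.
    goal_label = "help"  # four characters; decides nothing by itself

    def utility_of_help(world_state):
        # THIS is the goal: a valuation over possible world-states.
        # Deciding what counts as "helping" across every situation the
        # AGI can reach is the entire problem the label glosses over.
        raise NotImplementedError("the functional structure of 'help'")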


-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

