Vlad,

This is called a straw-man argument: you make up a ridiculous claim about what I meant and then proceed to shoot it down. Eliezer has done it for years and has single-handedly been responsible for an incredible number of people simply giving up in disgust.

I said nothing and assumed nothing about implementation. Jumping to implementation at this stage is just plain wrong. Maybe you should analyze exactly why you have such a need to prove people wrong that you have to put words into their mouths and ideas into their heads in order to do so.


----- Original Message ----- From: "Vladimir Nesov" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Wednesday, August 27, 2008 1:31 PM
Subject: Re: AGI goals (was Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment))


On Wed, Aug 27, 2008 at 8:32 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
But, how does your description not correspond to giving the AGI the
goals of being helpful and not harmful? In other words, what more does
it do than simply try for these? Does it pick goals randomly such that
they conflict only minimally with these?

Actually, my description gave the AGI four goals: be helpful, don't be
harmful, learn, and keep moving.

Learn, all by itself, is going to generate an infinite number of subgoals. Learning subgoals will be picked based upon which is likely to produce the most learning while not being harmful.

(and, by the way, be helpful and learn should both generate a self-protection subgoal in short order, with procreation following immediately behind)

Arguably, be helpful would generate all three of the other goals, but learning and not being harmful without being helpful is a *much* better goal-set for a novice AI, since it prevents "accidents" when the AI thinks it is being helpful. In fact, I've been tempted at times to drop be helpful entirely, since the other two will eventually generate it with a lessened probability of trying-to-be-helpful accidents.

Don't be harmful, all by itself, will just lead the AI to turn itself off.

The trick is that there needs to be a balance between goals. Any single-goal intelligence is likely to be lethal, even if that goal is to help humanity.

Learn, do no harm, help.  Can anyone come up with a better set of goals?
(and, once again, note that learn does *not* override the other two -- there
is meant to be a balance between the three).
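
To make the "balance" point concrete, here is a minimal sketch in Python. It assumes hypothetical estimator functions (estimate_learning, estimate_harm, estimate_help) that nobody has yet specified how to build; the only point it illustrates is that candidate subgoals get scored against all three goals at once, rather than optimizing any single one of them.

def score_subgoal(subgoal, weights, estimate_learning, estimate_harm, estimate_help):
    """Score a candidate subgoal against all three goals at once.

    The estimators are hypothetical stand-ins for whatever machinery
    actually predicts learning value, expected harm, and helpfulness.
    """
    return (weights["learn"] * estimate_learning(subgoal)
            - weights["no_harm"] * estimate_harm(subgoal)
            + weights["help"] * estimate_help(subgoal))

def pick_subgoal(candidates, weights, estimate_learning, estimate_harm, estimate_help):
    """Pick the candidate that best balances learning, harmlessness, and helpfulness."""
    return max(candidates,
               key=lambda g: score_subgoal(g, weights, estimate_learning,
                                           estimate_harm, estimate_help))

The weights are the balance: let any one of them dominate and you get exactly the single-goal failure mode described above.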


And the AGI will just read the command "help", 'h'-'e'-'l'-'p', and will
know exactly what to do, and will be convinced to do it. To implement
this "simple" goal, you need to somehow communicate its functional
structure to the AGI; that won't just magically happen. Don't talk
about AGI as if it were a human; think about how exactly to implement
what you want. Today's rant on Overcoming Bias applies fully to such
suggestions ( http://www.overcomingbias.com/2008/08/dreams-of-ai-de.html
).


--
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/

