On 8/25/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> On Mon, Aug 25, 2008 at 1:07 PM, Valentina Poletti <[EMAIL PROTECTED]>
> wrote:
> > In other words, Vladimir, you are suggesting that an AGI must be at some
> > level controlled from humans, therefore not 'fully-embodied' in order to
> > prevent non-friendly AGI as the outcome.
>
> Controlled in the Friendliness sense of the word. (I still have no idea
> what "embodied" refers to, now that you, me and Terren have used it in
> different senses, and I recall reading a paper about 6 different
> meanings of this word in the academic literature, none of them very
> useful).


Agreed.

> > Therefore humans must somehow be able to control its goals, correct?
> >
> > Now, what if controlling those goals would entail not being able to
> > create an AGI? Would you suggest we should not create one, in order to
> > avoid the disastrous consequences you mentioned?
>
> Why would anyone suggest creating a disaster, as you pose the question?


Also agreed. As far as you know, has anyone, including Eliezer, suggested any
method or approach (however theoretical or complicated it may be) to solving
this problem? I'm asking because the Singularity Institute is confident it can
create a self-improving AGI in the next few decades, and, assuming they have
no intention of creating the above-mentioned disaster, I figure someone must
have found some way to approach this problem.



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
