On Nov 18, 2007 3:19 PM, Dennis Gorelik <[EMAIL PROTECTED]> wrote:

> Stefan,
>
> Thanks, but it seems that the "ensuring that AGI is human-friendly"
> problem is not really a problem we need to solve at the moment.
>
> Currently it is sufficient to test whether whatever system we develop:
> 1) Is useful for us.
> 2) Is not too harmful to us.
>
> At later stages of AGI development it may become useful to find
> some algorithms that would constrain advanced AGI behavior, but
> currently such a constraint would simply kill an AGI prototype.
>
> Still, if you had a short list of tips on how to design & apply such
> an "AGI safety constraint", that would be useful, but your article is
> considerably longer and far more abstract than that.
>

Thank you for your feedback. Simply put, I am confident, based on my
research, that friendliness will emerge without needing to be designed in.
-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=66322918-2cb277
