My own view is that our state of knowledge about AGI is, at this point, far too weak for us to make detailed plans about how to **ensure** AGI safety.

What we can do is conduct experiments designed to gather data about AGI goal systems and AGI dynamics. These data can lead us to more robust AGI theories, which in turn can lead us to detailed plans for ensuring AGI safety (or, pessimistically, to detailed knowledge of why this is not possible).

However, it must of course be our intuition that guides these experiments. My intuition tells me that a system with probabilistic-logic-based goal-achievement at its core is much more likely to ultimately lead to a safe AI than a system with neural net dynamics at its core. But vague statements like the one in the previous sentence are of limited use; their main value is in leading to more precise formulations and experiments...

-- Ben G

On Sun, May 25, 2008 at 6:26 AM, Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
> What is your approach on ensuring AGI safety/Friendliness on this project?



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

