It is not at all sensible.  Today we have no real idea how to build a working 
AGI.

Right. The Friendly AI work is aimed at a future system. Fermi and
company planned against meltdown _before_ they let their reactor go
critical.

...spontaneously ...
People are working on an AGI that can do things spontaneously.  It
does not yet exist.

...concept extraction and learning ... algorithms and ...come understand
software and hardware in depth ...and develop a will to be better greater
than all other
If these are the best ways to achieve its goal, and if it is _truly_
intelligent, then of course that is what it would do. How long it
takes researchers to create such an AGI or whether they manage to help
it avoid the dangers I mention is another question.

By the way, the standard example of a seemingly harmless but potentially
deadly AGI goal is making paper clips. I mentioned theorem proving for
variety, though the difference between goals that do and don't affect the
material world might be worth some thought.

I just read that even before their first airplane, the Wright Brothers
thought not only about the basics of heavier-than-air powered flight but
also about safety, specifically about stability. This kept them ahead of
competing planes--which could fly, but only straight and in still
air--for a few years.

Why not at least ponder the safety of inventions before they exist?

Joshua

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8