On May 28, 2007, at 1:11 PM, Joshua Fox wrote:

It is not at all sensible. Today we have no real idea how to build a working AGI.

Right. The Friendly AI work is aimed at a future system. Fermi and
company planned against meltdown _before_ they let their reactor go
critical.

The analogy is not a good one. A fission reaction can be kept from going critical relatively easily, and the general way to produce a fission reaction dangerous enough to raise that concern was already known. A chain reaction was also guaranteed, absent controls, once enough fissionable material was brought together.


...spontaneously ...
People are working on an AGI that can do things spontaneously.  It
does not yet exist.

The scenario in question also suggested that a much more limited, specialized AI would arrive at such abilities, which is ridiculous.


...concept extraction and learning ... algorithms and ...come understand software and hardware in depth ...and develop a will to be better greater than all other
If these are the best ways to achieve its goal, and if it is _truly_
intelligent, then of course that is what it would do. How long it
takes researchers to create such an AGI or whether they manage to help
it avoid the dangers I mention is another question.


Go back and reread.



By the way, the standard example of seemingly harmless but potentially
deadly AGI goal is making paper-clips. I mentioned theorem proving for
variety, although the difference between goals that do and don't
affect the material world might be worth some thought.


That one is even sillier.

- samantha
