Matt Mahoney wrote:
> An AGI will not design its goals. It is up to humans to define the
> goals of an AGI, so that it will do what we want it to do.
Are you certain that this is the optimal approach? To me it seems more
promising to design the motives, and to allow the AGI to design its own
goals to satisfy those motives. This provides less fine-grained control
over the AGI, but I feel that fine-grained control would be
counter-productive.
To me the difficulty is designing the motives of the AGI in such a way
that they will facilitate human life, when they must be implanted in an
AGI that currently has no concept of an external universe, much less of
any particular classes of inhabitant therein. The only (partial)
solution that I've been able to come up with so far (i.e., identify, not
design) is based around imprinting. This is fine for the first
generation (probably, if everything is done properly), but it's not
clear that it would be fine for the second generation et seq. For this
reason RSI (recursive self-improvement) is very important: it allows all
succeeding generations to be derived from the first by cloning, which
would preserve the initial imprints.
> Unfortunately, this is a problem. We may or may not be successful in
> programming the goals of AGI to satisfy human goals. If we are not
> successful, ... unpleasant because it would result in a different state.
> -- Matt Mahoney, [EMAIL PROTECTED]
Failure is an extreme danger, but failure to design safely is not the
only danger. Failure to design a successful AGI at all could be nearly
as great a danger: society has become too complex to be safely managed
by the current approaches, and things aren't getting any simpler.
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/