Joshua Fox wrote:
[snip]
When you understand the following, you will have surpassed most AI experts in understanding the risks: If the first AGI is given or decides to try for almost any goal, including a simple "harmless" goal like being as good as possible at proving theorems, then humanity will be wiped out by accident.

This is not true.

You assume a general intelligence, but then you also assume that this
smart-as-a-human AGI is driven by a motivational system so incredibly
stupid that it is barely above the level of a pocket calculator.

Almost certainly, such a system would not actually work.  With a
motivational system as bad as that, it would never get to be an AGI in
the first place.  Hence your assertion that "humanity will be wiped out
by accident" is untenable.



Richard Loosemore
