Richard,

>  http://susaro.com/

1) the "safety" stuff.. - For a while, AGI will IMO be vulnerable to
abuse, i.e. only as safe or unsafe as those who control it.

2) "[AI] does not devalue us".. - agreed.. My view: Problems for AIs,
work for robots, feelings for us. Qualia - that's where the value is.

3) AI thinking in "our way" - I agree that it's possible. Not always
practical/desirable, though.

4) AI not "aggressive, selfish, greedy, jealous": IMO that depends on
the design & the data it's fed.

5) our AI's desires depend on us - agreed.

6) "So after the first safe AI is built, the situation will stabilize
completely and any further change will always occur in a controlled
way that is consistent with the original design." - IMO highly
unlikely, unless "safe" means that the AI will control *everything*.
Would you abandon your project if Ben demonstrated a well-working NM
tomorrow?

7) "AIs [..] not [..] relegating humanity to extinction" - right, if
well designed & safely used.

8) Many AIs/robots used as "instruments of power" by different
govs/corps would require "aggressive" AI(s). - No, the aggressiveness
could lie solely on the govs/corps side.

9) "No 'non-human and mechanized' future." - Webster defines "human"
as a bipedal primate mammal. I hope the future holds something better
for us, and that the world will be more mechanized & controlled by our
technology.

Sorry if I read too quickly and missed something important. I have
tons of AGI stuff to catch up with after months of non-AI captivity.

Regards,
Jiri Jelinek

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/