Jiri Jelinek wrote:
Richard,
> http://susaro.com/
1) the "safety" stuff.. - For a while, AGI will IMO be
abuse-vulnerable = as safe/unsafe as those who control it.
2) "[AI] does not devalue us".. - agreed.. My view: Problems for AIs,
work for robots, feelings for us. Qualia - that's where the value is.
3) AI thin
Bob Mottram wrote:
This made me chuckle:
"So after the first safe AI is built, the situation will stabilize
completely and any further change will always occur in a controlled
way that is consistent with the original design."
Okay, O Wicked-Tongued One, how would you have summarized an extreme
---
agi
Archives: http://www.listbox.com/member/ar
I have just written a new blog post that is the beginning of a daily
series this week and next, in which I will be launching a few broadsides
against the orthodoxy and explaining where I am going with my work.
http://susaro.com/
Richard Loosemore