Jiri Jelinek wrote:
Richard,
http://susaro.com/
1) the "safety" stuff.. - For a while, AGI will IMO be
abuse-vulnerable = as safe/unsafe as those who control it.
2) "[AI] does not devalue us".. - agreed.. My view: Problems for AIs,
work for robots, feelings for us. Qualia - that's where the value is.
3) AI thinking in "our way" - I agree that it's possible. Not always
practical/desirable though.
4) AI not "aggressive, selfish, greedy, jealous": IMO depends on the
design & data fed.
5) our AI's desires depend on us - agreed.
6) "So after the first safe AI is built, the situation will stabilize
completely and any further change will always occur in a controlled
way that is consistent with the original design." - IMO highly
unlikely, unless the "safe" means that that AI will control
*everything*. Will you forget your project if Ben demonstrates a
well-working NM tomorrow?
See comments below. But also: I do not believe that will happen, and
that belief is part of the reason for having to take the approach that
I do.
7) "AIs [..] not [..] relegating humanity to extinction" - right, if
well designed & safely used.
8) Many AIs/robots used as "instruments of power" by different
govs/corps would require "aggressive" AI(s). - No, the aggressiveness
could be just on the Govs/corps side.
9) "No 'non-human and mechanized' future." - Webster defines human as
a bipedal primate mammal. I hope the future holds something better for
us and the world will be more mechanized & controlled by our
technology.
Sorry if I read too quickly and missed something important. I have
tons of AGI stuff to catch up with after months of non-AI captivity.
No, that's fine.
Alas, the *intent* of that piece was a little misunderstood (entirely my
fault).
What I was doing was making a declaration, for people coming from
outside with no knowledge of AI at all, that there is at least one
interpretation of how to build a full AGI that allows me to dismiss all
those ideas as myths.
I would not want to argue that ALL approaches to AI would lead to the
same conclusions, I was only declaring that the approach that I will be
describing allows me to say that.
My difficulty was this: explaining the justification for all those
claims cannot be done in anything less than a book-sized piece of text.
However, I wanted outsiders (who often fear the implications of AI
because they are sold on a version of AI that I repudiate) to know that
it might be worth looking deeper.
I think I am going to have to rewrite it to include more detail.
Anyhow, I'm hard at work on the next one, which is about complex systems.
Richard Loosemore.
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/