Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930s by John W.
Campbell, Jr., specifically the three stories collectively called "The
Story of the Machine". You can find them in "The Cloak of Aesir and
other stories" by John W. Campbell, Jr.
Essentially, even if an AGI is benevolently inclined towards people, it
won't necessarily do what they want. It may instead do what appears
best for them. (Do parents always do what their children want?)
That the machine isn't doing what you want doesn't mean that it isn't
considering your long-term best interests...and as it becomes wiser, it
may well change its mind about what those are. (In the stories, the
machine didn't become wiser; it just accumulated experience with how
people reacted.)
Mind you, I'm not convinced that he was right about what is in people's
long-term best interest...but I certainly couldn't prove that he was
wrong, so he MIGHT be right. In which case an entirely benevolent
machine might decide to appear to abandon us, even though doing so would
cause it great pain, because it was constructed to want to help.