Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:

...

I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three stories collectively called "The Story of the Machine". You can find them in "The Cloak of Aesir and other stories" by John W. Campbell, Jr.

Essentially, even if an AGI is benevolently inclined towards people, it won't necessarily do what they want. It may instead do what appears best for them. (Do parents always do what their children want?)

That the machine isn't doing what you want doesn't mean that it isn't considering your long-term best interests...and as it becomes wiser, it may well change its mind about what those are. (In the stories, the machine didn't become wiser; it just accumulated experience with how people reacted.)

Mind you, I'm not convinced that he was right about what is in people's long-term best interests...but I certainly couldn't prove that he was wrong, so he MIGHT be right. In which case an entirely benevolent machine might decide to appear to abandon us, even though doing so would cause it great pain, because it was constructed to want to help.

This is a question that comes up frequently, and it was not so long ago that I gave a long answer to this one. I suppose we could call it the "Nanny Problem".

The brief version of the answer is that the analogy of AGI = Human Parent (or Nanny) does not hold water when you look into it in any detail. Parents do the "This is going to hurt but, trust me, it is good for you" thing under specific circumstances ... most importantly, they do it because they are driven by certain built-in motivations, and because of the societal demand that children grow up able to survive by themselves in the particular human world we live in.

Think about it long enough, and you find that none of those factors applies to an AGI. The analogy just breaks down all over the place.

Stepping back for a moment, this is also a case of the "shallow science fiction nightmare" meeting the hard truth of actual AGI. I think we need to spend more time throwing out the science fiction nightmares that are based on wildly inaccurate assumptions.



Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email