Charles D Hixson wrote:
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard, in your November 02, 2007 11:15 AM post you stated:

...

I think you should read some stories from the 1930s by John W. Campbell, Jr., specifically the three stories collectively called "The Story of the Machine". You can find them in "The Cloak of Aesir and other stories" by John W. Campbell, Jr.

Essentially, even if an AGI is benevolently inclined towards people, it won't necessarily do what they want. It may instead do what appears best for them. (Do parents always do what their children want?)

That the machine isn't doing what you want doesn't mean that it isn't considering your long-term best interests...and as it becomes wiser, it may well change its mind about what those are. (In the stories, the machine didn't become wiser, it just accumulated experience with how people reacted.)

Mind you, I'm not convinced that he was right about what is in people's long-term best interest...but I certainly couldn't prove that he was wrong, so he MIGHT be right. In which case an entirely benevolent machine might decide to appear to abandon us, even though doing so would cause it great pain, because it was constructed to want to help.

This is a question that comes up frequently, and it was not so long ago that I gave a long answer to this one. I suppose we could call it the "Nanny Problem".

The brief version of the answer is that the analogy of AGI = Human Parent (or Nanny) does not hold water when you look into it in any detail. Parents do the "This is going to hurt but, trust me, it is good for you" thing under specific circumstances ... most importantly, they do it because they are driven by certain built-in motivations, and because of the societal demands of ensuring that their children can survive by themselves in the particular human world we live in.

Think about it long enough, and none of those factors applies to an AGI. The analogy just breaks down all over the place.

Stepping back for a moment, this is also a case of the "shallow science fiction nightmare" meeting the hard truth of actual AGI. We definitely need to spend more time, I think, throwing out the science fiction nightmares that are based on wildly inaccurate assumptions.



Richard Loosemore
It's not exactly a matter of an analogy; it's a matter of what the logical answer to the problem is. The logical answer RESULTS in parents saying "Trust me...",

No: you are assuming *motives* on the part of the parent or AGI, and those assumptions can (easily) be challenged.

Because of those assumptions, the answer given by the parent is not "logical"; it is a consequence of the motives that you assume to be present in the parent.

In parents, sure, those motives exist.

But in an AGI there is no earthly reason to assume that the same motives exist. At the very least, the outcome depends completely on what motives you *assume* to be in the AGI, and you are in fact assuming the motive "Do what is 'best' for humans in the long run (whatever that means) even if they do not appreciate it".

You may not agree with me when I say that that would be a really, really dumb motivation to give to an AGI, but surely you agree that the outcome depends on which motivations you choose?

If the circumstances are such that no "nannying" motivation is present in any of the AGIs, then the scenario you originally mentioned would be impossible. There is nothing logically necessary about that scenario UNLESS specific motivations are inserted into the AGI.

Which is why I said that it is only an analogy to human parenting behavior.



Richard Loosemore



but the same logic might apply in other circumstances. If something is designed to further your "long term best interests", then when it becomes wiser than you are, you won't be able to predict what it will choose to do. This is only a nightmare if you believe that, because it does things that aren't what you want, it has "turned against you" rather than just being able to predict further ahead.

A long answer isn't any better than a short one unless it can explicitly say why something that is doing what it was designed to do should have its actions be predictable by someone less wise than it is. I don't believe that such predictions are feasible, except in very constrained situations.

(And science fiction stories, as opposed to movies, are often quite insightful when read at the appropriate level of abstraction. Equally, of course, they often aren't. Frequently a story is insightful along one axis and rather silly along several others. Writing an entertaining thought problem is difficult...the movies generally don't even seem to realize that that's what good science fiction is about; they just notice which titles are popular. [This may be the distinction between fantasy and science fiction...at least in my lexicon.])
