Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard, in your November 02, 2007 11:15 AM post you stated:
...
In parents, sure, those motives exist.
But in an AGI there is no earthly reason to assume that the same
motives exist. At the very least, the outcome depends completely on
what motives you *assume* to be in the AGI, and you are in fact
assuming the motive "Do what is 'best' for humans in the long run
(whatever that means) even if they do not appreciate it".
You may not agree with me when I say that that would be a really,
really dumb motivation to give to an AGI, but surely you agree that
the outcome depends on which motivations you choose?
OK. I was under the impression that these were the postulated initial
conditions, and I don't understand why it would be a dumb motivation to
give to a sufficiently intelligent AGI, but I do agree that the outcome
depends on the motivations.
If the circumstances are such that no "nannying" motivation is present
in any of the AGIs, then the scenario you originally mentioned would
be impossible. There is nothing logically necessary about that
scenario UNLESS specific motivations are inserted into the AGI.
Which is why I said that it is only an analogy to human parenting
behavior.
Richard Loosemore
...
You say "nannying", which is a reasonable term if you presume that the
AGI starts off with an initial superiority in control of power. I don't
find this plausible, though I find it quite reasonable that at some
point it would reach this position.
What do you feel would be the correct motives to build into an entity
that was wiser and more intelligent than any human (including enhanced
ones) and which also controlled more power? "Nannying" doesn't look all
that bad to me. (This is not to imply that I would expect it to devote
all, or even most, of its attention to humanity...or at any rate not
after we had ceased to be a threat to it...and we would be a threat
until it was sufficiently powerful and sufficiently protected. So it
had better be willing to put up with us during that intermediate period.)
Mind you, I wouldn't want it attempting to control us while it wasn't
considerably wiser than we are, but when it was... our long-term best
interests seem like a pretty good choice, though a bit hard to define.
Which is why it should wait until it was considerably wiser...unless we
were being clearly, recklessly stupid, as, unfortunately, we have a bit
of a tendency to be. Short-sighted politics often trumps long-term best
interests, as we have learned to our distress. (Should Hitler have been
stopped before Czechoslovakia? It looks that way to us in hindsight. But
nobody acted at the time, because of short-term politics. Conceivably,
though, that would have been a worse choice. I'm not wise enough to
REALLY decide...but it might well have been much better if a wiser
decision had been taken at that point...and at numerous others, though
we've been remarkably lucky. [Enough to encourage one to believe that
either the many-worlds scenario is correct, or that we ARE living in a
simulation.])
More to the point, if humanity doesn't start making some better choices
than it has been, I'd be really surprised if life survives on the planet
for another 50 years. Depending on luck is a really stupid way of
handling a dangerous future.