Shane Legg wrote:
Aleksei Riikonen wrote:
You claimed that an AGI is "critically unstable due to evolutionary
pressure" unless it "is primarily interested in its own self
preservation".
To me, "I did not claim that a primary interest in self preservation
was a necessary feature" seems to directly contradict this.
I see no contradiction at all. You could build your AGI that didn't
have self preservation as a primary goal. I have no problem with
that, indeed it's probably a good thing. Clearly then, I don't consider
this to be a strictly necessary feature. My concern is that your AGI
won't be long term stable. Where's the contradiction?
In the context of your blog entry, I got the impression that you
thought of "critically unstable due to evolutionary pressure" as a
property that should be avoided. You said "I suspect that the only
stable long term possibility is a super AGI that is primarily
interested in its own self preservation." which I took to mean that
this kind of AGI is what you think should be built.
Ok, I might have misinterpreted you on this part, sorry -- I didn't
consider the possibility that the systems you want to build would be
the ones you see as critically unstable, rather than the stable ones.
So we are in agreement on the point that the AGIs we build should (at
least probably) be non-maximally-self-preserving. The disagreement
remains with your claim that these AGIs would necessarily be
critically unstable.
Ben mentioned one of the best counterarguments to this: if the first
AGI system to achieve superintelligence is
non-maximally-self-preserving, it might nevertheless be able to
prevent other entities from ever reaching superintelligence because of
its head start (which it could use to gain close control of all
yet-to-be-finished AGI research projects and to set up a very
extensive surveillance network), and thus it would never face any real
competitors capable of exerting evolutionary pressure on it.
Such a non-maximally-self-preserving entity could also switch to a
self-preservation-maximizing mode in the surprising event that it
encounters, for example, sufficiently threatening extraterrestrial
superintelligences. I'm unable to think of any source of evolutionary
pressure that would make such a head-start-equipped entity critically
unstable.
It would not resist scenarios where its destruction is necessary for
the happiness of humankind, which I see as a nice feature.
It certainly is a nice feature. However, the fact that the AGI is willing to
destroy itself in situations where an AGI primarily interested in its own
self-preservation wouldn't seems to support my argument rather than
yours?
This point takes support away from none of my arguments, and whatever
support it lends your critical-instability argument is removed by the
argument presented earlier in this message (and in Ben's message).
As a separate note, I'd like to mention that I was mistaken in
thinking that the arguments of yours I criticized indicated that you
were unaccustomed to ethical thinking. There was a mistake in the
arguments I commented on, but it was not really an ethical one.
--
Aleksei Riikonen - http://www.iki.fi/aleksei