Aleksei Riikonen wrote:
> In the context of your blog entry, I got the impression that you
> thought of "critically unstable due to evolutionary pressure" as a
> property that should be avoided. You said, "I suspect that the only
> stable long term possibility is a super AGI that is primarily
> interested in its own self preservation," which I took to mean that
> this kind of AGI is what you think should be built.
>
> Ok, I might have misinterpreted you on this part, sorry -- I didn't
> think of the possibility that the systems you want to build would be
> those you saw as critically unstable, rather than the stable ones.
>
> So we are in agreement on the point that the AGIs we build should (at
> least probably) be non-maximally-self-preserving. The disagreement
> remains with regard to your claim that these AGIs would necessarily
> be critically unstable.
Why exactly is being maximally self-preserving incompatible with being a
desirable AGI? What does the "maximal" part mean? Human beings are
relatively hard-wired toward self-preservation. That does not mean that
this goal is never superseded, nor that self-preservation is
incompatible with ethical behavior. Rational self-interest can even be
posited as a better guide to ethical behavior than other, more
"unselfish" notions. Are we reifying an old debate from human ethical
philosophy onto AGIs?
> Ben mentioned one of the best counterarguments to this: if the first
> AGI system to achieve superintelligence is
> non-maximally-self-preserving, it might nevertheless be able to
> prevent other entities from ever reaching superintelligence because of
> its head start (which it could use to obtain close control of all
> yet-to-be-finished AGI research projects, and to set up a very
> extensive surveillance network), and thus it would never face any real
> competitors that would be able to exert evolutionary pressure.
This prevention of other intelligences is not at all a desirable outcome
in my opinion. I do not believe that any intelligence can be all things
within itself. Nor do I believe that these "evolutionary" arguments are
very enlightening when applied to a radically different type of
intelligent being that is largely responsible for its own change over
time, or to systems of such beings.
> Such a non-maximally-self-preserving entity could also switch to
> self-preservation-maximization mode in the surprising event that it,
> for example, encounters sufficiently threatening extraterrestrial
> superintelligences. I'm unable to think of sources of evolutionary
> pressure that would make such a head-start-equipped entity critically
> unstable.
>
> It would not resist scenarios where its destruction is necessary for
> the happiness of humankind, which I see as a nice feature.
I do not see this as an axiomatically good feature. Considering the
limited intelligence and very fickle ways of humans, I consider this a
great threat to the viability of any greater-than-human intelligence. I
don't think it would be at all rational to consider human happiness,
whatever that may be, as more important than the very existence of a
much greater intelligence.
> As a separate note, I'd like to mention that I was mistaken in
> thinking that the arguments of yours that I criticized were indicative
> of you being unaccustomed to ethical thinking. There was a mistake in
> the arguments I commented on, but that mistake was not really an
> ethical one.
The mistake was coming out with rather harsh judgments in the context of
this discussion.
- samantha