Samantha said: Ultimately it will be determined by its own programming. So
one very interesting part is what is required in the "seed" to enable such
growth.
-----
I like this thought about the seed. The answer to understanding anything
alive lies in this 'seed.' E.g., one can't understand where trees come from
by looking at a tree or a forest. You have to go back to where it started
and look at it in microcosm... break the seed down into its components and
operations. The same is true of AI:
Where does knowledge come from? Where does creativity and innovation come
from? Where does intention come from? Where do problems come from? Where
do questions come from?
If one doesn't know where things like this come from, how can one pretend to
know where they are going? It amazes me that, as advanced as we have become,
we still don't know where knowledge comes from. In most circles this is
still seen as largely a mystery, or worse, an accident.
It's very similar in its challenges to the touchy nature of genetics.
Controlled, it can produce badly needed food; out of control, it can destroy
all life. All of the power of genetics is in the microcosm, not the
product. Key question, then: What is the DNA of knowledge?
I think the answer to this is the catalyst both for the singularity and for
coping with it as an outcome.
Kind Regards,
Bruce LaDuke
Managing Director
Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com
----Original Message Follows----
From: Samantha Atkins <[EMAIL PROTECTED]>
To: Aleksei Riikonen <[EMAIL PROTECTED]>
CC: [email protected]
Subject: Re: [singularity] Re: Is Friendly AI Bunk?
Date: Sun, 10 Sep 2006 22:16:01 -0700
On Sep 10, 2006, at 1:56 PM, Aleksei Riikonen wrote:
samantha <[EMAIL PROTECTED]> wrote:
Why is being maximally self-preserving incompatible with being a
desirable AGI exactly? What is the "maximal" part?
In this discussion, maximal self-preservation includes, e.g., that the
entity wouldn't allow itself to be destroyed under any circumstances, which
I see as an unnecessarily problematic and limiting feature, and one I
wouldn't want to include in an AGI.
a) No one would;
b) This isn't how self-preservation works;
c) It is an often-exploded idea that we can rigidly impose a particular
goal/behavior such as this seemingly problematic version of
self-preservation.
Such a feature would
prevent, e.g., the inclusion of safety measures of the kind where the
AGI automatically shuts off if it finds catastrophic bugs in its own
code.
Sometimes I think it would be nice if humans had this feature. Of course I
would get lonely before also shutting down.
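(As a purely illustrative aside: the shutdown-on-catastrophic-bug safety
measure being discussed can be pictured as a simple watchdog loop. The
Python sketch below is hypothetical in every detail -- the SelfCheck/Agent
names and the deliberately failing toy invariant are made up for
illustration, and say nothing about how a real AGI would actually be built.)

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SelfCheck:
    """A named invariant the agent verifies about its own state."""
    name: str
    passes: Callable[[], bool]


@dataclass
class Agent:
    """Toy agent that refuses to keep running once a self-check fails."""
    checks: List[SelfCheck] = field(default_factory=list)
    running: bool = True

    def step(self) -> None:
        """One unit of ordinary work, guarded by the self-checks."""
        for check in self.checks:
            if not check.passes():
                # Catastrophic fault detected: shut down instead of continuing.
                print(f"self-check failed: {check.name}; shutting down")
                self.running = False
                return
        # ... ordinary goal-directed work would go here ...


if __name__ == "__main__":
    # A deliberately failing invariant, just to demonstrate the shutdown path.
    agent = Agent(checks=[SelfCheck("goal-model consistent", lambda: False)])
    while agent.running:
        agent.step()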
Human beings are relatively hard-wired toward self-preservation.
That does not mean that this goal is never superseded, nor that
self-preservation is incompatible with ethical behavior.
Yes, in the case of humans the goal of self-preservation can be
superseded, and hence humans are not maximally self-preserving in the
sense of the word that was used in this discussion.
Then the discussion posits some type of "self-preservation" never seen
before and very unlikely to arise.
Rational self-interest can even be posited as a better guide to ethical
behavior than other more "unselfish" notions. Are we reifying an old
debate from human ethical philosophy onto AGIs?
It seems that we aren't, at least not yet.
It seems to me that we may be circling such.
Ben mentioned one of the best counterarguments to this: if the first
AGI system to achieve superintelligence is
non-maximally-self-preserving, it might nevertheless be able to
prevent other entities from ever reaching superintelligence because of
its head start (which it could use to obtain close control of all
yet-to-be-finished AGI research projects, and to set up a very
extensive surveillance network), and thus it would never face any real
competitors that would be able to exert evolutionary pressure.
This prevention of other intelligences is not at all a desirable outcome
in my opinion. I do not believe that any intelligence can be all things
within itself.
I am not advocating the prevention of all other intelligences (or
even, all other superintelligences), only the prevention (or
limitation) of superintelligences that would want to prevent the first
superintelligence from doing what we'd want it to do.
What makes us think that we get to tell an entity a million times smarter
than ourselves what to do? If we make the entity aggressive in
stopping/aborting all other entities whose goals do not seem compatible,
have we done a good thing?
I meant to show
that it is in principle possible, in some scenarios, for the first
superintelligence to prevent all those other human-created
superintelligences that we'd deem undesirable.
I do not see that this is a desirable outcome. The first AGI becomes the
ultimate snoop on everything in order to prevent any other AI from arising
that might challenge it or its current view of its goals. It seems like
the ultimate dictator, doesn't it? Given these hypothetical abilities and
goals, anything that does go wrong would be utterly guaranteed to be
catastrophic, since the AGI is posited as perfectly able to prevent anything
from interfering with its goals.
I also do not believe that these "evolutionary" arguments are very
enlightening when applied to a radically different type of intelligent
being, one largely responsible for its own change over time, or to systems
of such beings.
My "evolutionary argument" was that in some specific scenarios, no
significant outside evolutionary pressure will ever be exerted on the
first superintelligence. I do not see how the necessary differences
between this superintelligence and humans would take away
from the validity of this argument.
Ah. So in the scenario where the AGI is set up to strangle other AGIs in
the crib, there is no pressure for it ever to change. Great super-stasis,
if we thought that evolution was the only way. Many argue that it is not.
How the superintelligence chooses to evolve (or make variations/copies
of itself to form a larger system) could be called "interior
evolutionary pressure" -- anyway, something else than outside
evolutionary pressure, which was what my "evolutionary argument" was
concerned with. "Interior evolutionary pressure" will be determined by
how we choose to program the first superintelligence.
Ultimately it will be determined by its own programming. So one very
interesting part is what is required in the "seed" to enable such growth.
It would not resist scenarios where its destruction is necessary for
the happiness of humankind, which I see as a nice feature.
I do not see this as an axiomatically good feature. Considering the
limited intelligence and very fickle ways of humans, I consider this a
great threat to the viability of any greater-than-human intelligence.
Would you like to present an example of a scenario where this feature
would be a problem?
It is very easy to see humanity getting paranoid and put out that they are
no longer top dog intelligence-wise. The first time significant human
group goals/actions were thwarted by the AGI, there would be a great hue and
cry to end this oppression by destroying it. Hell, most of humanity will
consider such an entity a monstrous, evil abomination simply for the fact
of its creation. These people are obviously not going to be happy.
In most cases, ensuring the happiness of humans should be a very easy
minor task for a proper superintelligence, one that it wouldn't need
to devote a significant part of its attention to, and one that
wouldn't impose noticeable limits on it or its growth and more
interesting pursuits.
This seems extremely unlikely, in that we largely have no idea what would
make us happy and are very quarrelsome. Part of our fond notion of what
makes us happy is that we are in charge. But our limited intelligence and
famously limited rationality make this rather incompatible with our
continued well-being. How would the AGI resolve such problems, and do so
easily? Sounds a bit like pie in the sky in the sweet by-and-by to me.
(Many/most humans would probably want a superintelligence to augment
them to be more-than-human. It is a somewhat more complicated problem how
a nice superintelligence should deal with augmented humans, who
aren't necessarily very simple creatures.)
Under your earlier scenario, the AGI would limit how much it augmented
humans, or any intelligence, to avoid possible conflicts with its goals. I
doubt humans would be very happy with this. If you look at it that way,
humans *are* a severely flawed intelligence, but one that has no intention
of ending itself when the flaw becomes obvious.
I don't think it would be at all rational to consider human happiness,
whatever that may be, as more important than the very existence of a
much greater intelligence.
I find rationality to be value-independent in the sense that all
internally consistent value systems are equally rational.
Huh? In my value system, greater intelligence is *better*. In such a value
system, making the existence of greater intelligence dependent on the whim
(happiness) of a lesser intelligence is clearly sub-optimal. I think we both
share this value generally. Thus it does not appear constructive to go off
into the weeds of moral relativism at this juncture.
- samantha