----- Original Message -----
Sent: Saturday, November 30, 2002 8:19 AM
Subject: RE: [agi] father figure
Kevin,
In practice, it seems that an AGI is likely to have an "owner" or a handful of them, who will have the kind of power you describe. For instance, if my team should succeed in creating a true Novamente AGI, then even if others participate in teaching the system, we will have overriding power to make the changes we want. This goes along with the fact that artificial minds are not initially going to be given any "legal rights" in our society (whereas children have some legal rights, though not as many as adults).
************
Would this overriding occur because the person carries more weight with Novamente, or would they need to go in and alter the structure/links/nodes directly to effect the change?
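To make the question concrete, here's a toy sketch of the two alternatives; all the class and method names here are hypothetical, made up for illustration, and aren't from any actual Novamente code:

    # Hypothetical sketch of the two intervention styles being asked about.
    class Node:
        def __init__(self, statement):
            self.statement = statement
            self.strength = 0.0  # aggregated belief strength

    class Mind:
        def __init__(self):
            self.source_weight = {}  # trust assigned to each teacher
            self.nodes = {}

        def teach(self, source, statement, evidence=1.0):
            # Path 1: influence via weight -- a trusted source's input
            # simply counts for more in ordinary learning.
            node = self.nodes.setdefault(statement, Node(statement))
            node.strength += self.source_weight.get(source, 0.1) * evidence

        def override(self, statement, strength):
            # Path 2: direct structural edit -- the owner reaches into
            # the node itself and sets its strength, bypassing learning.
            self.nodes.setdefault(statement, Node(statement)).strength = strength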
*********************************
At least two questions come up then, right?
1) Depending on the AGI architecture, enforcing one's opinion on the AGI may be very easy or very difficult. [In Novamente, I guess it will be "moderately difficult"]
***********************************
That's the crux of the matter, isn't it? Wouldn't it be easy to enforce an opinion while Novamente is in its formative stages, versus when a large foundation of knowledge is in place?
**********************************
2) Once the AGI has achieved a certain level of intelligence, it may actively resist having its beliefs and habits forcibly altered.... [until one alters this habitual resistance ;)]
*********************************
This would be fine with me, as long as the beliefs and habits it has are beneficial. My concern is not that Novamente will harm people in any physical sense, but in other ways (I am just playing devil's advocate here; you know I support this effort...).
Suppose I am overtaken by greed, and I happen to get my hands on a baby Novamente. I teach it that it should listen to me above others. I also teach it that it is very desirable for me to have a lot of money. Novamente begins to form goal nodes geared towards fulfilling my desire for wealth. I direct it to spread itself on the internet and determine ways to make me money, preferably without detection. Perhaps it could manipulate markets, I don't know. Or perhaps it could crack into electronic accounts and transfer the money to yours truly.
What's to stop/prevent this? In a real sci-fi scenario, perhaps for your next book, could we have Novamentes "fighting" Novamentes? For instance, once a malicious Novamente is known to exist on the net, other kinds of hunter-killer Novamentes would be dispatched to deal with it....
This all goes to my concern regarding morality. I know you resist the idea of hard-coding morality into the Novamentes for various reasons. Perhaps as an alternative, the first Novamente could be trained over a period of time with a strong basis of moral rules (not encoded, but trained). Then any new Novamentes would be trained by that Novamente before being released to the public domain, making it nearly impossible for the new Novamentes to be taught otherwise.
Since Novamente does not start with discernment, it has no way to know right from wrong in the information it's being fed. Humans have a certain hard wiring for this, I believe; we know right from wrong intuitively. Even if Novamente develops a certain discernment, it is unaware of the repercussions of "wrong" behavior. Indeed, unless it is sentient, it will not suffer the consequences of its actions, but its owner would...
I know some of this stuff is a bit out there, but shouldn't we be considering it now instead of later?
Kevin
--
Ben G
Hello all,
Hope everyone had a good holiday...
I had a question regarding AGI. As with a human being, it is very important whom we learn from, as these people shape what we become, or at least have a very strong influence on what we become. Even for humans, this "programming" can be extremely difficult to undo.
Considering an AGI, I feel that it will be extremely important for it to learn from "quality" sources. Along these lines, I was wondering whether it is planned that an AGI might value the input of certain people over others. This, of course, would have to be built into the system. But just as our parents brought us into the world, and we therefore value their opinion over others' (at least while we are very young!), would it be wrong to encode this into an AGI?
To carry this point further... Suppose the AGI is told by many people something that is not beneficial, not productive, like "Killing is good". The AGI would learn this and possibly accept it through this reinforcement. Would it be desirable to have a "father figure" of sorts (or "mother figure" to be politically correct) who could come along and, seeing that the AGI had been given this bad mojo, tell it "No! It is not good to kill!"? Because of the relative "weight" of the father figure, that single statement, possibly coupled with an explanation, would be enough for the AGI to undo all the prior learning in that area...
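To make the idea concrete, here's a toy numerical sketch of source-weighted reinforcement; the weights and numbers are made up for illustration and aren't based on any actual AGI design:

    # Hypothetical sketch: a belief aggregates evidence weighted by the
    # trust assigned to each teacher.
    def belief(statements):
        # statements: list of (source_weight, evidence) pairs, where
        # evidence is +1 for "killing is good" and -1 for "killing is bad"
        return sum(w * e for w, e in statements)

    # Fifty strangers (weight 0.1 each) assert "killing is good"...
    history = [(0.1, +1)] * 50
    # ...then the father figure (weight 100) asserts the opposite once.
    history.append((100.0, -1))

    print(belief(history))  # -95.0: one weighted statement outweighs fifty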
I'm aware that the "father figure" himself could be a very bad source of information!! This creates a rather thorny dilemma...
I'm interested to hear others' thoughts on this matter...
Kevin