Hi,

The problem, Ben, is that your response amounts to "I don't see why that
would work", but without any details.

The problem, Richard, is that you did not give any details as to why
you think your proposal will "work" (in the sense of delivering a
system whose Friendliness can be very confidently known).

The central claim was that because the behavior of the system is
constrained by a large number of connections that go from motivational
mechanism to thinking mechanism, the latter is tightly governed.

But this claim, as stated, seems not to be true....  The existence of
a large number of constraints does not intrinsically imply "tight
governance."

Of course, though, one can posit the existence of a large number of
constraints that DOES provide tight governance.

But the question then becomes whether this set of constraints can
simultaneously provide

a) the tightness of governance needed to guarantee Friendliness

b) the flexibility of governance needed to permit general, broad-based learning

You don't present any argument as to why this is going to be the case....

I just wonder whether, in the sort of architecture you describe, it is
really possible to guarantee Friendliness without hampering creative
learning.  Maybe it is possible, but you don't give an argument on
this point.

Actually, I suspect that it probably **is** possible to make a
reasonably benevolent AGI according to the sort of NN architecture you
suggest ... (as well as according to a bunch of other sorts of
architectures).

However, your whole argument seems to assume an AGI with a fixed level
of intelligence, rather than a constantly self-modifying and improving
AGI.  If an AGI is rapidly increasing its hardware infrastructure and
its intelligence, then I maintain that guaranteeing its Friendliness
is probably impossible ... and your argument gives no way of getting
around this.

In a radically self-improving AGI built according to your
architecture, the set of constraints would constantly be increasing in
number and complexity ... in a pattern based on stimuli from the
environment as well as internal stimuli ... and it seems to me you
have no way to guarantee, based on the smaller **initial** set of
constraints, that the eventual larger set of constraints is going to
preserve "Friendliness" or any other criterion.

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email