> I am in sympathy with some aspects of Mark's position, but I also see a
> serious problem running through the whole debate: everyone is making
> statements based on unstated assumptions about the motivations of AGI
> systems.
Bummer. I thought that I had been clearer about my assumptions. Let me try
to point them out concisely once again, and then see if you can show me
where I'm making additional assumptions that I'm not aware of (which I
would appreciate very much).
Assumption - The AGI will be a goal-seeking entity.
And I think that is it. :-)
> EVERYTHING depends on what assumptions you make, and yet each voice in
> this debate is talking as if their own assumption can be taken for
> granted.
I agree with you and am really trying to avoid this. I will address your
specific examples below and would appreciate any others that you can point
out.
> The three most common of these assumptions are:
> 1) That it will have the same motivations as humans, but with a tendency
> toward the worst that we show.
I don't believe that I'm doing this. I believe that goal-seeking in
general tends to be optimized by certain behaviors (the Omohundro drives),
and that humans show many of these behaviors because they are relatively
optimal compared to the alternatives (and because humans are relatively
optimal). But I also believe that the AGI will have dramatically different
motivations from humans wherever the human motivations were evolved
stepping stones, necessary and optimal on the path through one
environment, that haven't been eliminated even though they are now
unnecessary and sub-optimal in the current environment/society (Richard's
"the worst that we show").
> 2) That it will have some kind of "Gotta Optimize My Utility Function"
> motivation.
I agree with the statement but I believe that it is a logical follow-on to
my assumption that the AGI is a goal-seeking entity (i.e. it's an Omohundro
drive). Would you agree, Richard?
> 3) That it will have an intrinsic urge to increase the power of its own
> computational machinery.
Again, I agree with the statement but I believe that it is a logical
follow-on to my single initial assumption (i.e. it's another Omohundro
drive). Wouldn't you agree?
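To make the "logical follow-on" claim concrete, here's a minimal toy
sketch in Python. Everything in it is invented for illustration (the
achievement model, the numbers, the action names); it is not a model of
any real AGI design. It just shows the shape of the argument: an agent
whose ONLY terminal motivation is achieving its goal still ranks resource
acquisition and self-improvement highly, because they raise the odds of
achieving almost any goal.

    # Toy sketch of instrumental convergence (the Omohundro drives).
    # The agent's only terminal motivation is "achieve my goal"; the
    # "drives" fall out of comparing expected achievement, not out of
    # any programmed-in urge.

    def p_achieve_goal(capability: float, resources: float) -> float:
        """Hypothetical model: the odds of eventually achieving the
        goal grow with capability and resources."""
        return 1.0 - 1.0 / (1.0 + capability * resources)

    def rank_actions(capability: float, resources: float) -> list[str]:
        # Each action's effect on (capability, resources).
        outcomes = {
            "pursue_goal_directly": (capability, resources),
            "acquire_resources": (capability, resources + 1.0),
            "improve_own_machinery": (capability + 1.0, resources),
        }
        # Rank actions purely by how well they serve the goal.
        return sorted(outcomes,
                      key=lambda a: p_achieve_goal(*outcomes[a]),
                      reverse=True)

    # Neither "acquire_resources" nor "improve_own_machinery" was ever
    # stated as a motivation, yet both outrank direct pursuit here:
    print(rank_actions(capability=1.0, resources=1.0))

Obviously the achievement function is made up, and a real analysis would
have to argue that almost any monotone achievement model behaves this
way; the sketch is only meant to show why I treat (2) and (3) as
derivations rather than as extra assumptions.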
> There are other assumptions, but these seem to be the big three.
And I would love to go through all of them, actually (or debate one of my
answers above).
> So what I hear is a series of statements <snip> (Except, of course, that
> nobody is actually coming right out and saying what color of AGI they
> assume.)
I thought that I pretty explicitly was . . . . :-(
> In the past I have argued strenuously that (a) you cannot divorce a
> discussion of friendliness from a discussion of what design of AGI you are
> talking about,
And I have reached the conclusion that you are somewhat incorrect. I
believe that goal-seeking entities OF ANY DESIGN of sufficient intelligence
(goal-achieving ability) will see an attractor in my particular vision of
Friendliness (which I'm deriving by *assuming* the attractor and working
backwards from there -- which I guess you could call a second assumption if
you *really* had to ;-).
> and (b) some assumptions about AGI motivation are extremely incoherent.
If you perceive me as incoherent, please point out where. My primary AGI
motivation is "self-interest" (defined as achievement of *MY* goals -- which
directly derives from my assumption that "the AGI will be a goal-seeking
entity"). All other motivations are clearly logically derived from that
primary motivation. If you see an example where this doesn't appear to be
the case, *please* flag it for me (since I need to fix it :-).
> And yet, in spite of all the efforts I have made, there seems to be no
> acknowledgement of the importance of these two points.
I think that I've acknowledged both in the past and will continue to do so
(despite the fact that I am now somewhat debating the first point -- more
the letter than the spirit :-).