I find myself totally bemused by the recent discussion of AGI friendliness.
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI
systems. EVERYTHING depends on what assumptions you make, and yet each
voice in this debate is talking as if their own assumption can be taken
for granted.
The three most common of these assumptions are:
1) That it will have the same motivations as humans, but with a
tendency toward the worst that we show.
2) That it will have some kind of "Gotta Optimize My Utility
Function" motivation.
3) That it will have an intrinsic urge to increase the power of its
own computational machinery.
There are other assumptions, but these seem to be the big three.
So what I hear is a series of statements that are analogous to:
"Well, since the AGI will be bright yellow, it will clearly
do this and this and this.............."
"Well, since the AGI will be a dull sort of Cambridge blue,
it will clearly do this and this and this.............."
"Well, since the AGI will be orange, it will clearly do this
and this and this.............."
(Except, of course, that nobody is actually coming right out and saying
what color of AGI they assume.)
In the past I have argued strenuously that (a) you cannot divorce a
discussion of friendliness from a discussion of what design of AGI you
are talking about, and (b) some assumptions about AGI motivation are
extremely incoherent.
And yet, in spite of all my efforts, there seems to be no
acknowledgement of the importance of these two points.
Richard Loosemore