From: Matt Mahoney <[EMAIL PROTECTED]>, in reply to Mark Waser:
>You seem to be giving special status to Homo Sapiens.  How does this
>arise out of your dynamic?  I know you can program an initial bias,
>but how is it stable?

Keep in mind that Mark made a subsequent reply saying he isn't giving
a special status to Homo Sapiens.

Unlike Mark, I do like to give a special status to Homo Sapiens (which
I'll call humans, to save syllables).  One problem with humans is that
you can't prove anything about them.  If you want to say your AI won't
kill everybody, then you have to prove that the humans won't all be
suicidal, or you have to accept that the AI might refuse to do
something easy that all of the humans want.  I don't like either of
those options -- the net effect of this and similar scenarios is that
I can't make, much less prove, any general statements about the
behavior of the AI that are both interesting and useful.

The only path forward here that I can see is to compare the Friendly
AI with human society, rather than with some mathematical ideal.  I
can't prove anything about human society, but I can make plausible
arguments that human society has some failure modes that the AI does
not.  But if the AI acts much more quickly than human society, we
might still see a failure fairly soon, simply because the AI explores
the state space that much faster; so even this comparison is cold
comfort.

I don't see any hope of proving anything about the behavior of an AI
that meaningfully takes human desires into account.  Any ideas?

Hmm, it could collect some training data about what humans want right
now, and act on that forever without collecting any more training
data.  Right now the vast majority of humans are not suicidal, so if
the AI isn't buggy, it wouldn't kill everybody.  Unfortunately that
would limit us forever to the
stupidities of the past.  Right now, most humans claim to be motivated
by considerations of what will happen to them in a supernatural
afterlife, for example.  I don't want to find out what a powerful AI
would do about that.
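
To make the "freeze the training data" idea concrete, here is a toy
sketch in Python.  Every name in it is hypothetical illustration, not
a real design: the agent fits a value model to a one-time snapshot of
stated human preferences and then acts on the frozen model forever,
never collecting more data.

# Toy sketch of "collect training data once, act on it forever".
# All names here are hypothetical; an illustration, not a design.

def collect_human_preferences():
    """One-time snapshot of what humans say they want, at time zero."""
    return [("kill everybody", -1.0), ("cure disease", +1.0)]

def fit_value_model(snapshot):
    """Fit a fixed utility table from the snapshot; never updated again."""
    return dict(snapshot)

def act(value_model, candidate_actions):
    """Forever after, choose actions using only the frozen model."""
    return max(candidate_actions, key=lambda a: value_model.get(a, 0.0))

snapshot = collect_human_preferences()   # gathered once, then frozen
frozen = fit_value_model(snapshot)       # no further training data, ever
print(act(frozen, ["kill everybody", "cure disease"]))  # -> cure disease

The lock-in problem is visible even in the toy: whatever stupidities
are in the time-zero snapshot stay in the utility table forever.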

-- 
Tim Freeman               http://www.fungible.com           [EMAIL PROTECTED]
