--- rg <[EMAIL PROTECTED]> wrote:
> Matt: Why will an AGI be friendly ?

The question only makes sense if you can define friendliness, which we can't.

Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness.  If by "friendly" you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge.  This should be true until just before the singularity.

The question is more complicated when the technology to simulate and reprogram
your brain is developed.  With a simple code change, you could be put in an
eternal state of bliss and you wouldn't care about anything else.  Would you
want this?  If so, would an AGI be friendly if it granted or denied your
request?  Alternatively you could be inserted into a simulated fantasy world,
disconnected from reality, where you could have anything you want.  Would this
be friendly?  Or you could alter your memories so that you had a happy
childhood, or you had to overcome great obstacles to achieve your current
position, or you lived the lives of everyone on earth (with real or made-up
histories).  Would this be friendly?

Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
work when brains are altered.  I prefer to investigate the question of what
we will do, not what we should do.  In that context, I don't believe CEV will
be implemented, because it predicts what we would want in the future if we
knew more, but people want what they want right now.


-- Matt Mahoney, [EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now