Ok, see my responses below.
Matt Mahoney wrote:
> --- rg <[EMAIL PROTECTED]> wrote:
>> Matt: Why will an AGI be friendly?
> The question only makes sense if you can define friendliness, which we can't.
We could say "behavior that is acceptable in our society," then. In your mail you suggested they would be friendly, so I ask: why would they behave in a way acceptable to us?
> Initially I believe that a distributed AGI will do what we want it to do
> because it will evolve in a competitive, hostile environment that rewards
> usefulness.
If it evolves in a competitive, hostile environment, it would only do what is best for itself. How would that coincide with what is best for mankind? Why would it? And if it is driven by an artificial reward system, one day it will realize that it is just such a system, designed to evolve it in a particular direction. What happens then?
> If by "friendly" you mean that it does what you want it to do,
> then it should be friendly as long as humans are the dominant source of
> knowledge. This should be true until just before the singularity.
> The question is more complicated when the technology to simulate and reprogram
> your brain is developed. With a simple code change, you could be put in an
> eternal state of bliss and you wouldn't care about anything else. Would you
> want this? If so, would an AGI be friendly if it granted or denied your
> request? Alternatively, you could be inserted into a simulated fantasy world,
> disconnected from reality, where you could have anything you want. Would this
> be friendly? Or you could alter your memories so that you had a happy
> childhood, or you had to overcome great obstacles to achieve your current
> position, or you lived the lives of everyone on earth (with real or made-up
> histories). Would this be friendly?
I simply ask: why would it fit into our society? At the point where it no longer has to, why would it care to?
> Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
> work when brains are altered. I prefer to investigate the question of what
> will we do, not what should we do. In that context, I don't believe CEV will
> be implemented because it predicts what we would want in the future if we knew
> more, but people want what they want right now.
> -- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com