Hi
You said friendliness was an AGI's motivations being locked into empathy towards mankind.
How can you make an AGI feel this?
How did we humans get empathy?
Is it not very likely that we have empathy because
it turned out to be an advantage during our evolution,
ensuring the survival of groups of humans?
So if an AGI is supposed to feel true empathy for a
human, must it not evolve in an environment where
feeling empathy for a human is an advantage?
And how can one possibly do this?
Unless you build a virtual environment, simulating
generation after generation of AGIs coexisting
with simulated humans, while simultaneously making it
an advantage for the AGIs to display empathy
towards said simulated humans..
Now what happens when you then allow these AGIs
to interact with the real world, and they realize
they have evolved in a virtual world designed to make
them behave in a certain way?
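The multi-generation setup described above can be caricatured as a toy selection loop. This is a minimal sketch, not a proposal: it assumes each agent is reduced to a single "empathy gene" and that the simulated environment's fitness function rewards helping simulated humans. Every name and constant here is hypothetical.

```python
import random

random.seed(0)  # make the toy run repeatable

POP_SIZE = 50
GENERATIONS = 40

def fitness(empathy_gene):
    # Hypothetical environment: an agent's survival advantage grows with
    # how much its behavior benefits the simulated humans it coexists
    # with, minus a cost to itself, plus environmental noise.
    help_given = empathy_gene          # empathic acts benefit humans...
    cost_to_self = 0.3 * empathy_gene  # ...at some cost to the agent
    return help_given - cost_to_self + random.gauss(0, 0.05)

def evolve():
    # Start from a random population of "empathy" genes in [0, 1].
    population = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half reproduces, with small mutations.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: POP_SIZE // 2]
        population = [
            min(1.0, max(0.0, p + random.gauss(0, 0.02)))
            for p in parents for _ in (0, 1)  # two offspring each
        ]
    return sum(population) / len(population)

mean_empathy = evolve()
print(f"mean empathy after selection: {mean_empathy:.2f}")
```

The point of the toy is only this: whatever trait the environment's reward structure favors is what the population drifts toward, which is exactly why the question of what happens when the evolved agents leave that environment matters.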
Richard Loosemore wrote:
Matt Mahoney wrote:
--- rg <[EMAIL PROTECTED]> wrote:
Matt: Why will an AGI be friendly ?
The question only makes sense if you can define friendliness, which
we can't.
Wrong.
*You* cannot define friendliness for reasons of your own. Others may
well be able to do so.
It would be fine to state "I cannot see a way to define friendliness"
but it is not correct to state this as a general fact.
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
There are possible mechanisms to do this: those mechanisms are being
studied right now (by me, at the very least, and possibly by others too).
[For anyone reading this who is not familiar with Matt's style: he
has a preference for stating his opinions as if they are established
fact, when in fact the POV that he sets out is not broadly accepted by
the community as a whole. I, in particular, strongly disagree with
his position on these matters, so I feel obliged to step in when he
makes these declarations.]
Richard Loosemore
Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness. If by "friendly" you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge. This should be true until just before the singularity.
The question is more complicated when the technology to simulate and
reprogram your brain is developed. With a simple code change, you could
be put in an eternal state of bliss and you wouldn't care about anything
else. Would you want this? If so, would an AGI be friendly if it granted
or denied your request? Alternatively, you could be inserted into a
simulated fantasy world, disconnected from reality, where you could have
anything you want. Would this be friendly? Or you could alter your
memories so that you had a happy childhood, or had to overcome great
obstacles to achieve your current position, or lived the lives of
everyone on earth (with real or made-up histories). Would this be
friendly?
Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't
seem to work when brains are altered. I prefer to investigate the
question of what we will do, not what we should do. In that context, I
don't believe CEV will be implemented, because it predicts what we would
want in the future if we knew more, but people want what they want right
now.
-- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com