From: James Ratcliff
Sent: Friday, June 09, 2006 4:13 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
>> Hmm, now what again is your goal? I am confused.

To maximally increase volition actualization/wish fulfillment (Axiom 1).
>> You said there is a possible formula that will make an AI "friendly" but unfriendly toward others. How will that benefit anyone then? That's horrible.

I said absolutely nothing of the sort and would strongly argue against anything like that.
I said that friendliness includes a number of effects that (currently unfriendly) others will decry as undesirable, horrible, or immoral (like preventing them from killing infidels, etc.). Note that without understanding exactly what friendliness is, YOU may be unfriendly without even realizing it (and I would contend that all of us are unfriendly to some extent, and to a much greater degree than we realize).
The point is to make the AI equally friendly to all
(including itself).
>> Now, something that appeals to the friendliness of everyone 'sounds' better, but hasn't that already been tried with Socialism, Communism, and Democracy? With less than spectacular results?

Yes, it 'sounds' better (because I would contend that it is better -- and the fact that humanity is constantly striving towards it tends to support my contention).
Yes, it has been attempted with a variety of different implementation methods (Socialism, Communism, and Democracy) with less than spectacular results. Of course, flying humans were also attempted a number of times with a variety of different implementation methods, with less than spectacular results, before Orville and Wilbur succeeded and society continued to advance to the flying humans that we have today.
>> There still would be abortion/no-abortion, X law/no X law debates that would be deemed unfriendly.

No. There still would be abortion/no-abortion, X law/no X law debates that would be decried by some as undesirable, horrible, or immoral. The point of a precise, logical, extensible formulation of friendliness is that it will be obvious what is friendly and what is not, and -- once a sufficient number of lawmakers are friendly -- only friendly laws will be implemented. My point is meant to be much larger than just a friendly AI. WE (and our society) need to become friendly.
>> On another tack, I am looking at using some sort of general goodness or friendliness equation as a decider for motivation of my AI. It takes into account many 'selfish' values such as personal wealth, but will also have a 'world' value that determines if the world is in a better state, i.e. preventing death where possible and making other people happy.
Hmmm. This statement makes me believe that I'm not expressing myself well enough to convey what I mean to you (personally), and that our ideas are actually much closer than you believe, because what you describe looks to me like a fairly close approximation to a simplified version of my thesis. If you are willing to accept "maximally increase volition actualization/wish fulfillment" as your decider for motivation for your AI, the AI is certainly allowed to have and act upon 'selfish' wishes for personal wealth while still acting in accordance with society's goals of increasing everyone's volition (which generally means preventing death and making other people happy).
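
To make the correspondence concrete, here is a minimal sketch of what such a decider might look like. Everything in it (the Outcome fields, the weights, the function names) is my own hypothetical illustration of the shape of the thing, not your actual equation:

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        """Predicted consequences of one candidate action (hypothetical fields)."""
        personal_wealth: float   # a 'selfish' value
        deaths_prevented: float  # a 'world' value component
        happiness_delta: float   # a 'world' value component

    def world_value(o: Outcome) -> float:
        # Is the world in a better state? (preventing death, making others happy)
        return 2.0 * o.deaths_prevented + 1.0 * o.happiness_delta

    def goodness(o: Outcome, w_self: float = 0.3, w_world: float = 0.7) -> float:
        # Combined decider: 'selfish' values are allowed, but they are weighed
        # against the world-state term -- i.e. against everyone else's volition.
        return w_self * o.personal_wealth + w_world * world_value(o)

    def choose(actions: dict) -> str:
        # Pick the candidate action whose predicted outcome scores highest.
        return max(actions, key=lambda a: goodness(actions[a]))

    actions = {
        "hoard": Outcome(personal_wealth=10.0, deaths_prevented=0.0, happiness_delta=-1.0),
        "help": Outcome(personal_wealth=2.0, deaths_prevented=1.0, happiness_delta=3.0),
    }
    print(choose(actions))   # "help" -- the 'world' term outweighs pure wealth

With w_world comfortably larger than w_self, the AI can still pursue wealth, but never at a net cost to everyone else's volition.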
>> Now the values on this in an AI can switch around, in just the same way as in humans, and they could become selfish or homicidal as well.
<BUZZER> NO! This is ABSOLUTELY what we have to design to prevent. An AI with a flipped value system could easily become the end of the human race. If your AI's value system can flip, then I will go to war to prevent its being built, and I would be perfectly logically correct in doing so.
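
As a toy illustration (and nothing more -- this is a hypothetical sketch, not a real solution to value stability, since a self-modifying AI could route around it), the difference is between value weights held as mutable state and value weights that simply cannot be reassigned:

    from dataclasses import dataclass

    @dataclass(frozen=True)   # frozen: attribute reassignment raises an error
    class ValueSystem:
        w_self: float = 0.3
        w_world: float = 0.7

    values = ValueSystem()
    # values.w_world = -0.7   # the "flip" raises dataclasses.FrozenInstanceError
    #                         # instead of silently succeeding

The real design problem is making the analogous property hold under learning and self-modification, which is exactly why a flippable value system is unacceptable.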
Note too that it would be a far better world if society's persuasion of human beings not to flip their value systems were far more effective.
Mark
