James: So are you separating 'undesirable, horrible, or immoral' from the term 'friendliness'?
I am removing the requirement from friendliness that it match everybody's opinion on 'undesirable, horrible, or immoral' since that is clearly an impossible undertaking. However, friendly is certainly NOT going to include anything that is considered 'undesirable, horrible, or immoral' by a majority of people.
>> One way or the other, I consider abortion/no-abortion to be very unfriendly, and it will make me and others unhappy and angry.
Yep, that's exactly my point. If there is a choice either way, it is going to make some people unhappy and angry. Yet, a set of choices (abortion is allowed under x circumstances/abortion is not allowed under y circumstances) DOES need to be both made and enforced by society to avoid worse consequences. A "friendly" way of making these choices would go a long way toward promoting human happiness.
>> So can you just separate this entirely from your friendliness argument?
No, I can't, because it is the entire point of developing friendliness.
>> Likewise, how can we ever have laws that are only friendly?
You're still conflating "friendly" with "makes everyone happy". As per axiom 1, friendly is maximizing volition actualization/wish fulfillment. Laws that make people pay taxes make many/most people unhappy, yet any rational person will acknowledge that taxes, when spent correctly, are the current best way to get many things done and promote long-term happiness. Any law that doesn't maximize volition actualization/wish fulfillment (i.e. isn't friendly) is a bad law in either intent or design. Isn't it a tragedy that we have laws that aren't friendly?
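
To make axiom 1 concrete, here's a toy sketch in Python of what "maximize volition actualization/wish fulfillment, with each individual weighted equally (axiom 2)" could look like as a decision rule. All of the names and the 0-to-1 fulfillment scores are my own illustrative assumptions, not anything that falls out of the axioms themselves:

from typing import Dict, List

# fulfillment[action][person] = how well `action` satisfies `person`'s
# wishes, scored on an arbitrary 0.0-1.0 scale for the sake of the sketch.
Fulfillment = Dict[str, Dict[str, float]]

def choose_friendly_action(actions: List[str], fulfillment: Fulfillment) -> str:
    """Return the action with the highest equally-weighted total fulfillment."""
    def average(action: str) -> float:
        scores = fulfillment[action]
        # Axiom 2: every individual's score counts the same (no weighting).
        return sum(scores.values()) / len(scores)
    return max(actions, key=average)

# The taxes example from above: "pay taxes" makes most individuals less
# happy right now but does better on aggregate long-term wish fulfillment.
fulfillment = {
    "no_taxes":  {"alice": 0.9, "bob": 0.9, "carol": 0.1},
    "pay_taxes": {"alice": 0.6, "bob": 0.6, "carol": 0.8},
}
print(choose_friendly_action(["no_taxes", "pay_taxes"], fulfillment))  # pay_taxes

The point of the sketch is only that "friendly" is a property of the aggregate outcome, not of any single person's happiness -- which is exactly why a friendly law can still make many individuals unhappy.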
>> This seems inherently impossible from a democratic point of view.
Unfriendly people create unfriendly laws (and I've already contended that we're all unfriendly to a greater degree than we believe). Democracy is a great form of government in that it suppresses many of our unfriendly impulses -- but it is still vulnerable to our unrecognized unfriendlinesses and, more importantly, to hijacking and gross corruption when we can't clearly state what is friendly so as to block unfriendly actions. A democracy with a majority of friendly people will ALWAYS create friendly laws BY DEFINITION. Being able to clearly define friendliness so the average person can understand it and use it would be a tremendous boon to the world.
>> Many issues have two sides that people believe are correct, good, and friendly to the world.
The problem is that most people don't have an orderly and consistent system of values by which they logically determine what is good and what is not. Believing that something is correct, good, or friendly is not the same as it actually being correct, good, or friendly according to set criteria and derived from specific rules. People need to get over the belief that their individual opinions are necessarily better than someone else's (see axiom 2, but don't assume that I mean that certain people's opinions aren't "better" -- because they are if the person is more logical and friendly).
>> It doesn't seem like we can ever have a single position/formula/track that represents this.
>> It doesn't seem possible/good that an AGI would not change these values.
Stick with axiom 1. Come up with a case where maximizing volition actualization/wish fulfillment is a bad idea (following the constraints of axiom 2). I don't mean to pick on you personally, but "It doesn't seem . . . " isn't a useful argument because it doesn't give me anything concrete to address.
>> If not, it would be stuck forever with its first created beliefs, which, looking back on the human race, definitely does not seem to be a good idea.
Yes, I DEFINITELY want the AGI to be stuck forever with its first created beliefs that maximizing volition actualization/wish fulfillment is the ultimate goal and that each separate individual's volitions/wishes are of equal value. I believe that this is fair to me and fair to the AI (because I believe that I and everyone else should be stuck with these beliefs). Why, specifically, is this a bad idea?
>> For anything to truly grow, it needs to consider its actions and motivations, and change them where necessary.
This does not seem to be in conflict with what I am saying.
>> Otherwise what you are saying is that we need a perfect human to create this perfect formula that the computer will use forever.
No, I am not saying that we need a perfect human to create this perfect formula. I am saying that the presented axioms and a few derivation rules are the "perfect formula" that the computer (and everyone else) should use forever.
>> A small number of years ago, America would have created one that would have permitted slavery in perpetuity.
Live and learn. A small number of years ago, America didn't get that axiom 2 was necessary (and most people STILL don't get it).
>> Currently we believe that corporal punishment is good, but what if there were a better alternative?
What do you mean by "good"? If it is the most effective feasible alternative under the current conditions at the current time, then it is friendly. If there is a better alternative, then it is not friendly.
>> The AGI would never use this alternative because it would be hard-coded not to change.
No. The AGI will always use the best possible alternative because its hard-coded goal of maximizing volition actualization/wish fulfillment will force it to do so.
>> And when it comes down to it in the end, there is NO way of making one that cannot change itself, if we are talking about a software AI.
You are 100% absolutely correct in saying that there is NO way of making one that CANNOT change itself. But that is almost entirely irrelevant because the AGI won't WANT to change itself in any way that is contrary to its current goals (i.e. maximizing volition actualization/wish fulfillment). The ONLY way in which that statement is relevant is that the AGI COULD make an error and inadvertently change its current goal -- but we will make the barrier to that as high as possible, and the AGI will always WANT to help us do so (and this is not abusive to the AGI because it is what is best for the AGI as well as us).
>> I don't actually have a solution (of course).
I don't believe in your "of course". Please keep trying to show me specific examples of where I'm going wrong. It's a fun debate and if it gets us anywhere near a real solution . . . .
>> I would of course take ALL initial precautions to enact Asimov's laws and make human safety first.
Asimov's three laws suck. Asimov knew it and relished the opportunity to prove it in story after story. His point was that laws like that were the first thing that humans were going to try and that they wouldn't work (or, at least, wouldn't work in the specific details). Note, however, that his robots eventually evolved to be things that were VERY friendly, to the extent of hiding themselves from common human view because they were "unhelpful" to the human psyche, yet still intervening to assist whenever they could do so invisibly (in stark contrast to the early AI in the movie "I, Robot" -- which wasn't Asimov's but which I believe was true to his intent and would have tickled his fancy).
>> It is impossible to foresee all complications there.
Thus the need for simplicity, simplicity, simplicity. Mathematical inductive proofs start with very small seeds and very limited rules yet can grow to incredibly complex things while still maintaining the property of being able to have things proved about them. The Mandelbrot set has infinite complexity from a very short, succinct statement (the Mandelbrot set is the set of all complex numbers z for which the sequence defined by the iteration z(0) = z, z(n+1) = z(n)^2 + z, n = 0, 1, 2, ... remains bounded). I don't care about foreseeing all of the complications -- I just want to prevent some very specific ones . . . .
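
For what it's worth, that succinct statement is short enough to run directly. Here's a minimal sketch in Python; the escape radius of 2 is a standard fact about the set, while the iteration cap is my own conventional choice for approximating "remains bounded":

def in_mandelbrot(z0: complex, max_iter: int = 100) -> bool:
    """Approximate membership test: does the iteration z(0) = z0,
    z(n+1) = z(n)^2 + z0 stay bounded?  Any orbit that ever exceeds
    |z| = 2 is guaranteed to escape to infinity, so that serves as the
    boundedness test; max_iter caps the approximation."""
    z = z0
    for _ in range(max_iter):
        if abs(z) > 2:
            return False
        z = z * z + z0
    return True

print(in_mandelbrot(0))   # True: the orbit of 0 stays at 0
print(in_mandelbrot(1))   # False: 1 -> 2 -> 5 -> 26 -> ... escapes
print(in_mandelbrot(-1))  # True: the orbit oscillates between -1 and 0

A handful of lines, infinite boundary complexity -- which is exactly the property I want from a small set of axioms and derivation rules.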
Mark
