James: There still would be abortion/noabortion xlaw/no xlaw that would be deemed unfriendly. 

Mark: No.  There still would be abortion/noabortion xlaw/no xlaw that would be decried by some as undesirable, horrible, or immoral. ... once a sufficient number of lawmakers are friendly -- only friendly laws will be implemented.

James: So are you separating 'undesirable, horrible, or immoral' from the term friendliness?
I have trouble with that; one way or the other I consider abortion/noabortion to be very unfriendly, and it will make me and others unhappy and angry.  So can you just separate this entirely from your friendliness argument?
Likewise, how can we ever have laws that are only friendly?  This seems inherently impossible from a democratic point of view. Many issues have two sides that people believe are correct, good, and friendly to the world.  It doesn't seem like we can ever have a single position/formula/track that represents this.

You say we must understand what 'friendly' is; I must say I now don't understand it.  Do you have a concise definition for it?

>> Switching values of a "goodness formula"
It doesn't seem possible/good that an AGI would not change these values.  If not, it would be stuck forever with its first created beliefs, which, looking back on the human race, definitely does not seem to be a good idea.  For anything to truly grow it needs to consider its actions and motivations, and change them where necessary.  Otherwise what you are saying is that we need a perfect human to create this perfect formula that the computer will use forever.  That seems silly and unrealistic.   A small number of years ago, America would have created one that would have permitted slavery in perpetuity.  Currently we believe that corporal punishment is good, but what if there were a better alternative?  The AGI would never use this alternative because it would be hard-coded not to change.

Now changing tacks, I AGREE with you on a number of thoughts.  It is TRULY a scary thought that someday it will flip around and say, ok, humans are harming the earth and should all be locked up. I understand the doomsday propositions, and they are scary.  But I don't think you can so easily just lock into position against them.
   And when it comes down to it in the end, there is NO way of making one that cannot change itself, if we are talking about a software AI.  Any software has the ability to write to a file, shut down, and restart; that is a simple idea that can be done now.  Now give that software an AGI, and if it determines it wants something changed, it can change it.  Period.  And any hacker can go in and change it as well.  Now if we went the hardware route and said, ok, these rules are hard-written in titanium, that's all good and well, until the first one downloads itself into another machine that is not hard-coded, and then you have a free AGI again.

  I don't actually have a solution (of course), but I do see that we have to have a flexible, free AGI, because that is what will happen in the end.  I am not really naive enough to say it will be 'good' forever just because it started that way, or that it will not rampage and run over the world.  But I don't see any easy way to prevent that.
  I would of course take ALL initial precautions to enact Asimov's laws and make human safety first, but as you've seen, even all of those are totally dependent on all the others, and many other factors.  It is impossible to foresee all complications there.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]>
wrote:
From: James Ratcliff
To: [email protected]
Sent: Friday, June 09, 2006 4:13 PM
Subject: Re: [agi] Two draft papers: AI and existential risk; heuristics and biases
 
>> Hmm, now what again is your goal, I am confused?
 
    To maximally increase Volition actualization/wish fulfillment (Axiom 1).
 
>> You [said?] there is a possible formula that will make an AI "friendly"
but unfriendly toward others; how will that benefit anyone then?
 
    That's horrible.  I said absolutely nothing of the sort and would strongly argue against anything like that. 
 
    I said that friendly included a number of effects that (currently unfriendly) others will decry as undesirable, horrible, or immoral (like preventing them from killing infidels, etc.).  Note that without understanding exactly what friendly is, YOU may be unfriendly without even realizing it (and I would contend that all of us are unfriendly to some extent and to a much greater degree than we realize).
 
    The point is to make the AI equally friendly to all (including itself).

>> Now, something that appeals to the friendliness of everyone 'sounds' better, but hasn't that already been tried with Socialism, Communism,
and Democracy?  With less than spectacular results?
 
    Yes, it 'sounds' better (because I would contend that it is better -- and the fact that humanity is constantly striving towards it tends to support my contention). 
 
    Yes, it has been attempted with a variety of different implementation methods (Socialism, Communism, and Democracy) with less than spectacular results.  Of course, flying humans were also attempted a number of times with a variety of different implementation methods with less than spectacular results before Orville and Wilbur succeeded and society continued to advance to the flying humans that we have today.
 
>> There still would be abortion/noabortion xlaw/no xlaw that would be deemed unfriendly.
 
    No.  There still would be abortion/noabortion xlaw/no xlaw that would be decried by some as undesirable, horrible, or immoral.  The point of a precise, logical, extensible formulation of friendliness is that it will be obvious what is friendly and what is not and -- once a sufficient number of lawmakers are friendly -- only friendly laws will be implemented.  My point is meant to be much larger than just a friendly AI.  WE (and our society) need to become friendly.

>> On another tack, I am looking at using some sort of general goodness or friendliness equation as a decider for the motivation of my AI.  It takes into account many 'selfish' values, such as personal wealth, but will also have a 'world' value that determines if the world is in a better state, i.e. preventing death where possible and making other people happy.
 
    Hmmm.  This statement makes me believe that I'm not expressing myself well enough to convey what I mean to you (personally) and that our ideas are actually quite closer than you believe because it looks to me like a fairly close approximation to a simplified version of my thesis.  If you are willing to accept "maximally increase volition actualization/wish fulfillment" as your decider for motivation for your AI, the AI is certainly allowed to have and act upon 'selfish' wishes for personal wealth while still acting in accordance with society's goals of increasing everyone's volition (which generally means preventing death and making other people happy).
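The kind of decider James describes could be sketched as a simple weighted sum over 'selfish' and 'world' terms.  This is only an illustration, not either poster's actual design; every name, weight, and value here is hypothetical:

```python
# Minimal sketch of a "goodness equation" decider: each candidate action
# is scored by a weighted sum of a 'selfish' term (personal wealth) and a
# 'world' term (deaths prevented, happiness created).  All field names
# and weights are hypothetical and assume scores normalized to [0, 1].

def goodness(action, w_selfish=0.4, w_world=0.6):
    selfish = action["wealth_gain"]
    world = (action["deaths_prevented"] + action["happiness_gain"]) / 2.0
    return w_selfish * selfish + w_world * world

def choose(actions):
    # Pick the candidate action with the highest goodness score.
    return max(actions, key=goodness)

actions = [
    {"name": "hoard", "wealth_gain": 1.0, "deaths_prevented": 0.0, "happiness_gain": 0.1},
    {"name": "share", "wealth_gain": 0.3, "deaths_prevented": 0.8, "happiness_gain": 0.9},
]
print(choose(actions)["name"])  # "share" under these weights
```

Note that the weights themselves encode the value system being debated here: shift w_selfish high enough and the same machinery picks "hoard", which is exactly the value-flipping worry discussed below.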
 
>> Now the values on this in an AI can switch around, in just the same way as humans, and they could become selfish, or homicidal as well.
 
    <BUZZER> NO!  This is ABSOLUTELY what we have to design to prevent.  An AI with a flipped value system could easily become the end of the human race.  If your AI's value system can flip, then I will go to war to prevent its being built, and I would be perfectly logically correct in doing so.
 
    Note too that it would be a far better world if society's persuasion on human beings to not flip their value systems were far more effective.
 
        Mark

 

To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]



Thank You
James Ratcliff
http://FallsTown.com - Local Wichita Falls Community Website
http://Falazar.com - Personal Website
Hosting Starting at $9.95
Dialups Accounts - $8.95



