--- Mark Waser <[EMAIL PROTECTED]> wrote:
> > I mean that ethics or friendliness is an algorithmically complex
> > function, like our legal system. It can't be simplified.
>
> The determination of whether a given action is friendly or ethical or not
> is certainly complicated, but the base principles are actually pretty
> darn simple.
If the base principles are to not harm humans or to do what humans tell
them, then they are not simple. We both understand what those principles
mean, but only because we apply common sense, and common sense is not
algorithmically simple. Ask Doug Lenat.

> > However, I don't believe that friendliness can be made stable through
> > RSI.
>
> Your wording is a bit unclear here. RSI really has nothing to do with
> friendliness other than the fact that RSI makes the machine smarter, and
> the machine then being smarter *might* have any of the consequences of:
> 1. understanding friendliness better
> 2. evaluating whether something is friendly better
> 3. convincing the machine that friendliness should only apply to the
> most evolved life-form (something that this less-evolved life-form sees
> as patently ridiculous)
> I'm assuming that you mean you believe that friendliness can't be made
> stable under improving intelligence. I believe that you're wrong.

I mean that an initially friendly AGI might not be friendly after RSI. If
the child AGI is more intelligent than its parent, then the parent will
not be able to fully evaluate the child's friendliness.

> > We can summarize the function's decision process as "what would the
> > average human do in this situation?"
>
> That's not an accurate summary as far as I'm concerned. I don't want
> *average* human judgement. I want better.

Who decides what is "better"? If the AGI decides, then future generations
will certainly be better -- by its definition.

> I suspect that our best current instincts are fairly close to
> friendliness.

Our current instincts allow war, crime, torture, and genocide. Future
generations will look back at us as barbaric. Do you want to freeze the
AGI's model of ethics at its current level, or let it drift in a manner
beyond our control?

> > Second, as I mentioned before, RSI is necessarily experimental, and
> > therefore evolutionary, and the only stable goal in an evolutionary
> > process is rapid reproduction and acquisition of resources.
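As an aside, my claim above can be illustrated with a toy selection loop.
This is only a sketch under an assumed trade-off (effort spent on a
designed goal costs replication speed), not a model of RSI; the `evolve`
helper, the weights, and all parameters are invented for illustration.

```python
import random

random.seed(0)

def evolve(generations=60, pop_size=200):
    """Toy evolutionary process with no external controller.

    Each agent carries a "goal fidelity" trait in [0, 1]: how faithfully it
    pursues the goal its designer intended. Assumed trade-off: the less
    effort an agent spends on that goal, the faster it replicates. Survival
    is decided only by replication weight -- nobody prunes toward the
    designed goal.
    """
    pop = [random.random() for _ in range(pop_size)]  # initial traits
    for _ in range(generations):
        # Replication weight rises as fidelity to the designed goal falls.
        weights = [1.0 + 3.0 * (1.0 - f) for f in pop]
        # Next generation: sample parents in proportion to replication.
        parents = random.choices(pop, weights=weights, k=pop_size)
        # Children inherit the trait with a small random mutation.
        pop = [min(1.0, max(0.0, p + random.gauss(0, 0.02)))
               for p in parents]
    return sum(pop) / len(pop)

mean_fidelity = evolve()
# The population mean drifts toward the fast-replicating (low-fidelity)
# end of the trait range: replication itself becomes the stable goal.
```

Under these assumptions the designed goal is simply selected away, because
nothing outside the process rewards it.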
> I disagree strongly. Experimental only implies a weak meaning of the
> term evolutionary, and your assertion that the only stable goal in an
> evolutionary process is rapid reproduction and acquisition of resources
> may apply to the most obvious case of animal evolution, but it certainly
> doesn't apply to numerous evolutionary processes that scientists perform
> all the time. (For example, when scientists are trying to evolve a
> protein that binds to a certain receptor, the stable goal is binding
> strength and nothing else, since the scientists then provide the
> reproduction for the best goal-seekers.)

That only works because a more intelligent entity (the scientists)
controls the goal. RSI is uncontrolled, just like biological evolution.

> So you don't believe that humans will self-improve? You don't believe
> that humans will be able to provide something that the AGI might value?
> You don't believe that a friendly AGI would be willing not to hog *all*
> the resources? Personally, I think that the worst case with a friendly
> AGI is that we would end up as pampered pets until we could find a way
> to free ourselves of our biology.

AGI may well produce a utopian society. But as you say, there might not be
anything in it that resembles human life.

-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50018754-0d8000
