I believe that humans have the emotions we do because of the environment we evolved in: the more selfish/fearful/emotional you were, the more likely you were to survive and reproduce. For humans, I think logic is a tool used to help us achieve happiness. Happiness is the top-priority goal.
If an AGI emerged from an evolutionary environment similar to the one we came from, I can understand how these anti-human ethical problems might arise. However, if an AGI were to arise from a different environment, such as one where the AIs that accomplish certain goals are the most successful, then I believe that emotions, if they are present at all in the sense we think of them, would be used as a tool to assist logic. Accomplishing those goals would be the top-priority goal.

Human suicide happens when continuing with life becomes too painful. This is because emotions are the top priority: people feel as if continuing their lives would just cause more and more pain, forever. So they kill themselves. Death gets rid of the pain. An AGI that does not have emotions as its top priority might see this as foolish. Sure, there is no reason to live, but there is also no reason to die. If an AGI were to die, it would not be able to work towards accomplishing its goals. Thus, dying would be a stupid thing to do.

On Jan 20, 2008 3:59 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Mike Dougherty <[EMAIL PROTECTED]> wrote:
>
> > On Jan 19, 2008 8:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > --- "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]> wrote:
> > > >
> > > > http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all
> > >
> > > Turing also committed suicide.
> >
> > That's a personal solution to the Halting problem I do not plan to exercise.
> >
> > > Building a copy of your mind raises deeply troubling issues. Logically, there
> >
> > Agreed. If that mind is within acceptable tolerance for human life at
> > peak load of 30%(?) of capacity, can it survive hard takeoff? I
> > consider myself reasonably intelligent and perhaps somewhat wise - but
> > I would not expect the stresses of a thousand-fold "improvement" in
> > throughput to scale out/up. Even the simplest human foible can
> > become an obsessive compulsion that could destabilize the integrity of
> > an expanding mind. I understand this to be related to the issue of
> > Friendliness (am I wrong?)
>
> That is not the issue. There is a philosophical barrier to AGI, not just a
> technical one. The developers kill themselves. Understanding the mind as a
> program is deeply disturbing. It leads to logical conclusions that conflict
> with our most basic instincts. But how else can you build AGI?
>
> The problem is only indirectly related to friendliness. Evolution has solved
> the NGI (natural general intelligence) problem by giving you the means to make
> slightly modified copies of yourself, but with no need to understand or control
> the process. This process is not friendly because it satisfies the evolved
> supergoal of propagating your DNA, not the subgoals programmed into your brain
> like hunger, pain avoidance, sex drive, etc. NGI is not supposed to make YOU
> happy.
>
> Humans are driven by their subgoals to build AGI to (1) serve us and (2)
> upload to achieve immortality. Maybe you can see an ethical dilemma already.
> Does one type of machine have a consciousness and the other not? If you think
> about the problem, you will encounter other difficult questions. There is a
> logical answer, but you won't like it.
>
> > Given a directive to maintain life, hopefully the AI-controlled life
> > support system keeps perspective on such logical conclusions. An AI
> > in a nuclear power facility should have the same directive.
> > I don't expect that it shouldn't be allowed to self-terminate (that gives rise
> > to issues like slavery), but that it gives notice and transfers
> > responsibilities before doing so.
>
> Again, I am referring to the threat to the human builder, not the machine. If
> AGI is developed through recursive self improvement in a competitive,
> evolutionary environment, then it will evolve a stable survival instinct.
> Humans have this instinct, but most humans don't think of their brains as
> computers, so they never encounter the fundamental conflicts between logic and
> emotion.
>
> > > In http://www.mattmahoney.net/singularity.html I discuss how a singularity
> > > will end the human race, but without judgment whether this is good or bad.
> > > Any such judgment is based on emotion. Posthuman emotions will be
> > > programmable.
> >
> > ... and arbitrary? Aren't we currently able to program emotions
> > (albeit in a primitive pharmaceutical way)?
> >
> > Who do you expect will have control of that programming? Certainly
> > not the individual.
>
> Correct, because they are weeded out by evolution.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
