Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
C. David Noziglia wrote: The problem with the issue we are discussing here is that the worst-case scenario for handing power to unrestricted, super-capable AI entities is very bad, indeed. So what we are looking for is not really building an ethical structure or moral sense at all. Failure is

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Bill Hibbard
Hi David, The problem here, I guess, is the conflict between Platonic expectations of perfection and the messiness of the real world. I never said perfection, and in my book I make it clear that the task of a super-intelligent machine learning behaviors to promote human happiness will be very

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: 3) A society of selfish AIs may develop certain (not really primatelike) rules for enforcing cooperative interactions among themselves; but you cannot prove for any entropic specification, and I will undertake to *disprove* for any clear specification, that this creates

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Brad Wyble
There are simple external conditions that provoke protective tendencies in humans, following chains of logic that seem entirely natural to us. Our intuition that reproducing these simple external conditions serves to provoke protective tendencies in AIs is knowably wrong, failing an

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: There are simple external conditions that provoke protective tendencies in humans, following chains of logic that seem entirely natural to us. Our intuition that reproducing these simple external conditions serves to provoke protective tendencies in AIs is knowably wrong,