[agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben, you and I have a long-standing disagreement on a certain issue which impacts the survival of all life on Earth. I know you're probably bored with it by now, but I hope you can understand why, given my views, I keep returning to it, and find a little tolerance for my doing so. The issue

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg
Hi Cliff, So Solomonoff induction, whatever that precisely is, depends on a somehow compressible universe. Do the AIXI theorems *prove* something along those lines about our universe? AIXI and related work does not prove that our universe is compressible. Nor do they need to. The sun seems
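As background for this exchange, here is a minimal sketch of the quantity under discussion, in standard notation rather than anything taken from the original messages. Solomonoff's prior assigns a binary string x the weight

  M(x) = \sum_{p : U(p) = x*} 2^{-\ell(p)}

where U is a universal monotone Turing machine, the sum ranges over programs p whose output begins with x, and \ell(p) is the length of p in bits; prediction then uses the conditional M(x_{t+1} \mid x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}). Nothing in the construction requires the data to be compressible: an incompressible sequence simply receives weight on the order of 2^{-\ell(x)}, comparable to a uniform prior. That is consistent with Shane's point that the theorems neither prove nor need a compressible universe; the error bounds are merely uninteresting unless the environment has a short description.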

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
I can spot the problem in AIXI because I have practice looking for silent failures, because I have an underlying theory that makes it immediately obvious which useful properties are formally missing from AIXI, and because I have a specific fleshed-out idea for how to create moral systems

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Your intuitions say... I am trying to summarize my impression of your viewpoint, please feel free to correct me... AI morality is a matter of experiential learning, not just for the AI, but for the programmers. Also, we plan to start Novamente off with some initial goals embodying ethical

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Hi, 2) If you get the deep theory wrong, there is a strong possibility of a silent catastrophic failure: the AI appears to be learning everything just fine, and both you and the AI are apparently making all kinds of fascinating discoveries about AI morality, and everything seems to be

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Eliezer S. Yudkowsky wrote: 1) AI morality is an extremely deep and nonobvious challenge which has no significant probability of going right by accident. 2) If you get the deep theory wrong, there is a strong possibility of a silent catastrophic failure: the AI appears to be learning

[agi] AI Morality -- a hopeless quest

2003-02-12 Thread Arthur T. Murray
Alois Schicklgruber and his wife Klara probably did not give much thought to possible future aberrations when unser kleine Adi was born to them on 20 April 1889. Our little Adolf Hitler was probably cute and cuddly like any other baby. No one could be expected to know whether he would grow into

[agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
Okay, let's see, I promised: An intuitively fair, physically realizable challenge, with important real-world analogues, formalizable as a computation which can be fed either a tl-bounded uploaded human or an AIXI-tl, for which the human enjoys greater success measured strictly by total reward
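For readers joining the thread here, the t and l refer to Hutter's resource-bounded variant of AIXI, sketched roughly (and only roughly) as follows: AIXI-tl restricts attention to policies computed by programs of length at most l, each allowed at most t time steps per interaction cycle, and acts on the candidate whose claimed value survives proof verification,

  p^{*} \in \arg\max \{\, w(p) : \ell(p) \le l,\ \mathrm{time}(p) \le t \,\}

where w(p) denotes, in this sketch's notation, the provably justified lower bound on expected reward that program p asserts for itself; the per-cycle computation cost is of order t \cdot 2^{l}. The challenge described above pits this agent, scored strictly by total reward over time, against a tl-bounded uploaded human.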

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Arthur, On Wed, 12 Feb 2003, Arthur T. Murray wrote: . . . Since the George and Barbara Bushes of this world are constantly releasing their little monsters onto the planet, why should we creators of Strong AI have to take any more precautions with our Moravecian Mind Children than human

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. We are all kept in check by our lack of power, the competition of our fellow humans, the laws of society, and the instructions of our peers. Remove

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Arthur T. Murray wrote: [snippage] why should we creators of Strong AI have to take any more precautions with our Moravecian Mind Children than human parents do with their human babies? Here are three reasons I can think of, Arthur: 1) Because we know in advance that 'Strong AI', as you

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself possessing a significant measure of it. Anyone doubting their ability to 'resist corruption' should not IMO be working in AGI, but should be doing some serious

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
On Wed, 12 Feb 2003, Arthur T. Murray wrote: The quest is as hopeless as it is with human children. Although Bill Hibbard singles out the power of super-intelligence as the reason why we ought to try to instill morality and friendliness in our AI offspring, such offspring are made in our own

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AIs, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its weapons and agents. They train soldiers

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: Tell me this, have you ever killed an insect because it bothered you? In other words, posthumanity doesn't change the goal posts. Being human should still confer human rights, including the right not to be enslaved, eaten, etc. But perhaps being posthuman will confer

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Kevin
Hello All.. After reading all this wonderful debate on AI morality and Eliezer's people-eating AGI concerns, I'm left wondering this: Am I the *only* one here who thinks that the *most* likely scenario is that such a thing as a universe-devouring AGI is utterly impossible? Everyone here seems to

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread C. David Noziglia
- Original Message - From: Philip Sutton To: [EMAIL PROTECTED] Sent: Wednesday, February 12, 2003 2:55 PM Subject: Re: [agi] AI Morality -- a hopeless quest Brad, Maybe what you said below is the key to friendly GAI I don't think any human

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
On Wed, 12 Feb 2003, Brad Wyble wrote: I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AIs, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
As has been pointed out on this list before, the military IS interested in AGI, and primarily for information integration rather than directly weapons-related purposes. See http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf for example. -- Ben G I can't imagine the military would be

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: Your intuitions say... I am trying to summarize my impression of your viewpoint, please feel free to correct me... AI morality is a matter of experiential learning, not just for the AI, but for the programmers. To teach an AI morality you must give it the right feedback

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
Shane Legg wrote: Eliezer, Yes, this is a clever argument. This problem with AIXI has been thought up before but only appears, at least as far as I know, in material that is currently unpublished. I don't know if anybody has analysed the problem in detail as yet... but it certainly is a very

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
This is slightly off-topic but no more so than the rest of the thread... 1) That it is selfishly pragmatic for a superintelligence to deal with humans economically rather than converting them to computronium. For convenience, let's rephrase this: the majority of arbitrarily generated

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Shane Legg
Eliezer S. Yudkowsky wrote: Has the problem been thought up just in the sense of "What happens when two AIXIs meet?" or in the formalizable sense of "Here's a computational challenge C on which a tl-bounded human upload outperforms AIXI-tl"? I don't know of anybody else considering human upload

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: Under the ethical code you describe, the AGI would swat them like a bug with no more concern than you swatting a mosquito. I did not describe an ethical code; I described two scenarios about a human (myself), then suggested the non-bug-swatting scenario was possible,

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Bill Hibbard
Hi Eliezer, An intuitively fair, physically realizable challenge, with important real-world analogues, formalizable as a computation which can be fed either a tl-bounded uploaded human or an AIXI-tl, for which the human enjoys greater success measured strictly by total reward over time, due

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
Daniel, For a start look at the IPTO web page and links from: http://www.darpa.mil/ipto/research/index.html DARPA has a variety of offices which sponsor AI-related work, but IPTO is now being run by Ron Brachman, the former president of the AAAI. When I listened to the talk he gave at Cycorp in

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
Steve, Ben, do you have any gauge as to what kind of grants are hot right now or what kind of narrow AI projects with AGI implications have recently been funded through military agencies? The list would be very long. Just look at the DARPA IPTO website for starters...

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Jonathan Standley wrote: Now here is my question, it's going to sound silly but there is quite a bit behind it: Of what use is computronium to a superintelligence? If the superintelligence perceives a need for vast computational resources, then computronium would indeed be very useful.

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg
Cliff Stabbert wrote: [On a side note, I'm curious whether and if so, how, lossy compression might relate. It would seem that in a number of cases a simpler algorithm than one that expresses exactly the behaviour could be valuable in that it expresses 95% of the behaviour of the environment being
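One standard way to make the side question precise, offered as an editorial gloss here rather than as anything from Shane's reply, is the two-part code of MDL: describe the data x by a model H plus whatever is needed to correct H's predictions,

  K(x) \le K(H) + K(x \mid H) + O(\log),

so a 'lossy' description is one that keeps the cheap first part and writes the second part off as prediction error. Solomonoff's mixture has no lossy mode as such, but it behaves similarly in practice: a short model that captures 95% of the behaviour dominates the posterior until its accumulated prediction errors outweigh the extra description length of a more exact hypothesis.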

[agi] Re: [META] Moderation warning.

2003-02-12 Thread Alan Grimes
Ben Goertzel wrote: you really test my tolerance as list moderator. My apologies. Please, please, no personal insults. And no anti-Semitism or racism of any kind. ACK. I guess that your reference to Eliezer as the rabbi may have been meant as amusing, It is not at all amusing, nor