Re: [agi] Re: [META] Moderation warning.

2003-02-12 Thread RSbriggs
In a message dated 2/12/2003 10:49:23 PM Mountain Standard Time, [EMAIL PROTECTED] writes: It fits almost perfectly. The other two parts of the definition work very well when applied to him and his sheepish following. Just unsub me from this list - I've had all of this I can stand, thanks.

[agi] Re: [META] Moderation warning.

2003-02-12 Thread Alan Grimes
Ben Goertzel wrote: > you really test my tolerance as list moderator. My apologies. > Please, please, no personal insults. And no anti-Semitism or racism of > any kind. ACK. > I guess that your reference to Eliezer as "the rabbi" may have been > meant as amusing, It is not at all amusing

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Alan, With comments like this > I want this list to be useful to me and not have to skim through > hundreds of e-mails watching the rabbi drive conversation into useless > spirals as he works on the implementation details of the real problems. > Really, I'm getting dizzy from all of this. Lets s

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg
Cliff Stabbert wrote: [On a side note, I'm curious whether, and if so how, lossy compression might relate. It would seem that in a number of cases an algorithm simpler than one that expresses the behaviour exactly could be valuable, in that it expresses 95% of the behaviour of the environment being studied
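
Since the question above is truncated, here is one hedged way to make the lossy-compression intuition concrete (an illustration added here, not something taken from Cliff's or Shane's messages): compare the bit-cost of losslessly compressing a noisy sequence against the bit-cost of a tiny approximate model of it, and measure how much of the behaviour the cheap model still captures.

```python
# Hedged sketch of the lossy-compression intuition: a short approximate model
# can capture most of a signal's behaviour for far fewer bits than an exact
# (lossless) encoding. Illustrative only; the data and model are made up.
import random
import struct
import zlib

random.seed(0)
n = 1000
signal = [0.5 * t + random.gauss(0.0, 1.0) for t in range(n)]  # trend + noise

# "Exact" description: losslessly compress the raw samples.
raw = b"".join(struct.pack("d", x) for x in signal)
exact_bits = 8 * len(zlib.compress(raw))

# "Lossy" description: a two-parameter linear model (slope, intercept).
mean_t = sum(range(n)) / n
mean_x = sum(signal) / n
slope = sum((t - mean_t) * (x - mean_x) for t, x in enumerate(signal)) / \
        sum((t - mean_t) ** 2 for t in range(n))
intercept = mean_x - slope * mean_t
model_bits = 8 * 2 * 8  # two 8-byte doubles

# How much of the signal's variability does the cheap model explain?
residual = sum((x - (slope * t + intercept)) ** 2 for t, x in enumerate(signal))
total = sum((x - mean_x) ** 2 for x in signal)
print(f"exact: {exact_bits} bits, model: {model_bits} bits, "
      f"variance explained: {1 - residual / total:.1%}")
```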

[agi] Reinforcement learning

2003-02-12 Thread Ben Goertzel
Hi all, As a digression from the recent threads on the Friendliness or otherwise of certain uncomputable, unimplementable AI systems, I thought I'd post something on some fascinating practical AI algorithms. These are narrow-AI at present, but they definitely have some AGI relevanc
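
Ben's post is cut off before it names the algorithms, so as a hedged stand-in for the kind of practical reinforcement-learning method being discussed, here is a minimal tabular Q-learning sketch on a toy chain environment (a standard narrow-AI algorithm, not necessarily the one Ben goes on to describe).

```python
# Minimal tabular Q-learning sketch. Illustrative only: a generic example of
# the practical RL family under discussion, with a made-up toy environment.
import random

N_STATES, ACTIONS = 5, (-1, +1)        # small chain world; reward at the right end
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the chain; reward 1.0 only on reaching the last state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # standard Q-learning update toward the bootstrapped target
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

print(sorted(Q.items()))
```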

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
> Jonathan Standley wrote: > > Now here is my question, it's going to sound silly but there is >> quite a bit behind it: > > "Of what use is computronium to a superintelligence?" > If the superintelligence perceives a need for vast computational > resources, then computronium would indeed be very

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Jonathan Standley
> Now here is my question, it's going to sound silly but there is quite a > bit behind it: > > "Of what use is computronium to a superintelligence?" > If the superintelligence perceives a need for vast computational resources, then computronium would indeed be very useful. Assuming said SI

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
> Steve, Ben, do you have any gauge as to what kind of grants are hot > right now or what kind of narrow AI projects with AGI implications have > recently been funded through military agencies? The list would be very long. Just look at the DARPA IPTO website for starters... http://www.darpa.mi

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
Daniel, For a start look at the IPTO web page and links from: http://www.darpa.mil/ipto/research/index.html DARPA has a variety of offices which sponsor AI-related work, but IPTO is now being run by Ron Brachman, the former president of the AAAI. When I listened to the talk he gave at Cycorp in

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Daniel Colonnese
>I believe that as evidence of AGI (e. g. software that can learn >from reading) becomes widely known: (1) the military will provide abundant >funding - possibly in excess of what commercial firms could do without a >consortium (2) public outcry will assure that military AGI development >has civi

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Goertzel the good wrote: > Perhaps living in Washington has made me a little paranoid, but I am > continually aware of the increasing threats posed by technology to > humanity's survival. I often think of humanity's near-term future as a > race between destructive and constructive technologies.

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Bill Hibbard
Hi Eliezer, > An intuitively fair, physically realizable challenge, with important > real-world analogues, formalizable as a computation which can be fed > either a tl-bounded uploaded human or an AIXI-tl, for which the human > enjoys greater success measured strictly by total reward over time, du

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: > > Under the ethical code you describe, the AGI would swat > them like a bug with no more concern than you swatting a mosquito. > I did not describe an ethical code, I described two scenarios about a human (myself) then suggested the non-bug-swatting scenario was possible, analo

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Shane Legg
Eliezer S. Yudkowsky wrote: Has the problem been thought up just in the sense of "What happens when two AIXIs meet?" or in the formalizable sense of "Here's a computational challenge C on which a tl-bounded human upload outperforms AIXI-tl?" I don't know of anybody else considering "human uplo

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
This is slightly off-topic but no more so than the rest of the thread... > 1) That it is selfishly pragmatic for a superintelligence to deal with > humans economically rather than converting them to computronium. For convenience, let's rephrase this: "the majority of arbitrarily generated s

RE: [agi] Breaking AIXI-tl

2003-02-12 Thread Ben Goertzel
Eliezer, > A (selfish) human upload can engage in complex cooperative > strategies with > an exact (selfish) clone, and this ability is not accessible to AIXI-tl, > since AIXI-tl itself is not tl-bounded and therefore cannot be simulated > by AIXI-tl, nor does AIXI-tl have any means of abstractly

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
Shane Legg wrote: Eliezer, Yes, this is a clever argument. This problem with AIXI has been thought up before but only appears, at least as far as I know, in material that is currently unpublished. I don't know if anybody has analysed the problem in detail as yet... but it certainly is a very i

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Alan Grimes wrote: > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I hope > Ben will see through this deception and press ahead with no

Re: [agi] Breaking AIXI-tl

2003-02-12 Thread Shane Legg
Eliezer, Yes, this is a clever argument. This problem with AIXI has been thought up before but only appears, at least as far as I know, in material that is currently unpublished. I don't know if anybody has analysed the problem in detail as yet... but it certainly is a very interesting question

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: > >> Your intuitions say... I am trying to summarize my impression of your >> viewpoint, please feel free to correct me... "AI morality is a >> matter of experiential learning, not just for the AI, but for the >> programmers. To teach an AI morality you must give it the right >

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
As has been pointed out on this list before, the military IS interested in AGI, and primarily for information integration rather than directly weapons-related purposes. See http://www.darpa.mil/body/NewsItems/pdf/iptorelease.pdf for example. -- Ben G > > I can't imagine the military would b

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ben Goertzel
> Hey, I'm a relatively new subscriber. I know that we are talking about > how to make an AI system friendly but has anyone considered the opposite > -- building AI weapons. > > In the US anyway, a significant portion of AI research funding comes > from the military, and this current administr

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Stephen Reed
On Wed, 12 Feb 2003, Brad Wyble wrote: > > I can't imagine the military would be interested in AGI, by its very > definition. The military would want specialized AI's, constructed > around a specific purpose and under their strict control. An AGI goes > against everything the military wants from

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread C. David Noziglia
- Original Message - From: Philip Sutton To: [EMAIL PROTECTED] Sent: Wednesday, February 12, 2003 2:55 PM Subject: Re: [agi] AI Morality -- a hopeless quest Brad, Maybe what you said below is the key to friendly GAI > I don't think any human a

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
> > Brad Wyble wrote: > > > > Tell me this, have you ever killed an insect because it bothered you? > > > > Well, of course. But there's bothering and BOTHERING. Let me give you > an example: I will often squash a deer-fly or a mosquito because a) they > hurt and b) they can infect me with dise

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Daniel Colonnese wrote: > > Hey, I'm a relatively new subscriber. I know that we are talking > about how to make an AI system friendly but has anyone considered the > opposite -- building AI weapons. > Short answer: Yes. Slightly longer answer: Most of the people I have discussed this with are e

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: > > Tell me this, have you ever killed an insect because it bothered you? > Well, of course. But there's bothering and BOTHERING. Let me give you an example: I will often squash a deer-fly or a mosquito because a) they hurt and b) they can infect me with disease. However I liv

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Daniel, On Wed, 12 Feb 2003, Daniel Colonnese wrote: > Bill Hibbard wrote: > > >We better not make them in our own image. We can make > >them with whatever reinforcement values we like, rather > >than the ones we humans were born with. Hence my often > >repeated suggestion that they reinforce

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
> > 1) matter manipulating machines of such a grand scale are not possible > 2) mmm's are possible, but never actually do such a thing > 3) mmm's are possible and they created this current universe as a simulation > ala The Matrix. > As unlikely as it may be, we have to consider #4: that we're t

Occam's Razor (was Re: [agi] AIXI and Solomonoff induction)

2003-02-12 Thread James Rogers
On Wed, 2003-02-12 at 06:46, Cliff Stabbert wrote: > That is to say, the "the simplest explanation is right" heuristic > tends to break down in the presence of life -- and the more so the > more the life is conscious. Because among the things that it is > conscious of is that it *can be* and *is*
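
For readers who want the formal object behind the heuristic under debate, the quantity usually meant by "simplest explanation first" in this context is Solomonoff's universal prior (stated here as standard background, not as a claim from James's or Cliff's posts): a string $x$ is weighted by every program that produces it on a universal prefix machine $U$,

$$M(x) \;=\; \sum_{p\,:\,U(p)\,=\,x*} 2^{-\ell(p)},$$

where $\ell(p)$ is the length of program $p$ in bits and $U(p) = x*$ means $p$ outputs a string beginning with $x$; the shortest programs, i.e. the simplest explanations, dominate the sum.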

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Kevin
Hello All.. After reading all this wonderful debate on AI morality and Eliezer's people-eating-AGI concerns, I'm left wondering this: "Am I the *only* one here who thinks that the *most* likely scenario is that such a thing as a "universe devouring" AGI is utterly impossible?" Everyone here seems

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Ed Heflin
Arthur, I disagree with your 'strong' dismissal of a viable AGI Morality and Ethics System, i.e. an 'AGI MES'. My reasoning is that, by definition, AGI systems must be developed in a complex environment to solve complex problems, or risk complete irrelevance altogether. So, if we assume that t

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
> Brad, > > Maybe what you said below is the key to friendly GAI > > > I don't think any human alive has the moral and ethical underpinnings > > to allow them to resist the corruption of absolute power in the long > > run. We are all kept in check by our lack of power, the competition > > of

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: > > Tell me this, have you ever killed an insect because it bothered you? "In other words, posthumanity doesn't change the goal posts. Being human should still confer human rights, including the right not to be enslaved, eaten, etc.. But perhaps being posthuman will confer post

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Philip Sutton
Brad, Maybe what you said below is the key to friendly GAI > I don't think any human alive has the moral and ethical underpinnings > to allow them to resist the corruption of absolute power in the long > run.  We are all kept in check by our lack of power, the competition > of our fe

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I can't imagine the military would be interested in AGI, by its very definition. The military would want specialized AI's, constructed around a specific purpose and under their strict control. An AGI goes against everything the military wants from its weapons and agents. They train soldiers

RE: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Daniel Colonnese
Bill Hibbard wrote: >We better not make them in our own image. We can make >them with whatever reinforcement values we like, rather >than the ones we humans were born with. Hence my often >repeated suggestion that they reinforce behaviors >according to human happiness. Hey, I'm a relatively new s

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
On Wed, 12 Feb 2003, Arthur T. Murray wrote: > The quest is as hopeless as it is with human children. > Although Bill Hibbard singles out "the power of super-intelligence" > as the reason why we ought to try to instill morality and friendliness > in our AI offspring, such offspring are made in our

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
> > I am exceedingly glad that I do not share your opinion on this. Human > altruism *is* possible, and indeed I observe myself possessing a > significant measure of it. Anyone doubting their ability to 'resist > corruption' should not IMO be working in AGI, but should be doing some > serious in

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Arthur T. Murray
On Wed, 12 Feb 2003, Michael Roy Ames wrote: > Arthur T. Murray wrote: > > > > [snippage] > > why should we creators of Strong AI have to take any > > more precautions with our Moravecian "Mind Children" > > than human parents do with their human babies? > > > > Here are three reasons I can thin

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Brad Wyble wrote: > I don't think any human alive has the moral and ethical underpinnings > to allow them to resist the corruption of absolute power in the long > run. > I am exceedingly glad that I do not share your opinion on this. Human altruism *is* possible, and indeed I observe myself posse

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Michael Roy Ames
Alan Grimes wrote: > > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I > hope Ben will see through this deception and press ahead with >

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Michael Roy Ames
Arthur T. Murray wrote: > > [snippage] > why should we creators of Strong AI have to take any > more precautions with our Moravecian "Mind Children" > than human parents do with their human babies? > Here are three reasons I can think of, Arthur: 1) Because we know in advance that 'Strong AI', as

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Brad Wyble
I don't think any human alive has the moral and ethical underpinnings to allow them to resist the corruption of absolute power in the long run. We are all kept in check by our lack of power, the competition of our fellow humans, the laws of society, and the instructions of our peers. Remove a

Re: [agi] AI Morality -- a hopeless quest

2003-02-12 Thread Bill Hibbard
Hi Arthur, On Wed, 12 Feb 2003, Arthur T. Murray wrote: > . . . > Since the George and Barbara Bushes of this world > are constantly releasing their little monsters onto the planet, > why should we creators of Strong AI have to take any > more precautions with our Moravecian "Mind Children" > tha

[agi] Breaking AIXI-tl

2003-02-12 Thread Eliezer S. Yudkowsky
Okay, let's see, I promised: An intuitively fair, physically realizable challenge, with important real-world analogues, formalizable as a computation which can be fed either a tl-bounded uploaded human or an AIXI-tl, for which the human enjoys greater success measured strictly by total reward o
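
For those following the formal side of the challenge, a brief sketch of the standard definitions (roughly as in Hutter's papers; offered as background, not as part of Eliezer's argument): AIXI picks actions by expectimax over the universal prior,

$$\dot a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where $m$ is the horizon, the $o_i r_i$ are observation-reward pairs, $U$ is a universal machine, and $\ell(q)$ is the length of program $q$. AIXI-tl is the computable variant that searches only over candidate policies of length at most $l$ and per-step runtime at most $t$; since AIXI-tl itself is not tl-bounded, it cannot be simulated by the programs it considers, which is the property the challenge below exploits.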

[agi] AI Morality -- a hopeless quest

2003-02-12 Thread Arthur T. Murray
Alois Schicklgruber and his wife Klara probably did not give much thought to possible future aberrations when "unser kleine Adi" was born to them on 20 April 1889. "Our little Adolf" Hitler was probably cute and cuddly like any other baby. No one could be expected to know whether he would grow i

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Alan, > You have not shown this at all. From everything you've said it seems > that you are trying to trick Ben into having so many misgivings about > his own work that he holds it up while you create your AI first. I > hope Ben will see through this deception and press ahead with > novamente. --

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Eliezer S. Yudkowsky wrote: > 1) AI morality is an extremely deep and nonobvious challenge which has > no significant probability of going right by accident. > 2) If you get the deep theory wrong, there is a strong possibility of > a silent catastrophic failure: the AI appears to be learning e

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> So it's not the case that we intend to rely ENTIRELY on experiential > learning; we intend to rely on experiential learning from an engineering > initial condition, not from a complete tabula rasa. > > -- Ben G "engineered" initial condition, I meant, oops [typed in even more of a hurry as I ge

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
Hi, > 2) If you get the deep theory wrong, there is a strong possibility of a > silent catastrophic failure: the AI appears to be learning > everything just > fine, and both you and the AI are apparently making all kinds of > fascinating discoveries about AI morality, and everything seems to be

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> Your intuitions say... I am trying to summarize my impression of your > viewpoint, please feel free to correct me... "AI morality is a matter of > experiential learning, not just for the AI, but for the programmers. Also, we plan to start Novamente off with some initial goals embodying ethical

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> I can spot the problem in AIXI because I have practice looking for silent > failures, because I have an underlying theory that makes it immediately > obvious which useful properties are formally missing from AIXI, and > because I have a specific fleshed-out idea for how to create > moral system

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> Your intuitions say... I am trying to summarize my impression of your > viewpoint, please feel free to correct me... "AI morality is a matter of > experiential learning, not just for the AI, but for the programmers. To > teach an AI morality you must give it the right feedback on moral > quest

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Eliezer, Thanks for being clear at last about what the deep issue is that you were driving at. Now I can start getting my head around what you are trying to talk about. Cheers, Philip

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Eliezer S. Yudkowsky
Shane Legg wrote: Hi Cliff, And if the "compressibility of the Universe" is an assumption, is there a way we might want to clarify such an assumption, i.e., aren't there numerical values that attach to the *likelihood* of gravity suddenly reversing direction; numerical values attaching to the li

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Cliff Stabbert
Wednesday, February 12, 2003, 3:34:31 AM, Shane Legg wrote: Shane, thanks for the explanation of Kolmogorov complexity and its relation to the matter at hand. [On a side note, I'm curious whether, and if so how, lossy compression might relate. It would seem that in a number of cases a simpler al

Re[2]: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Bill Hibbard
Hi Cliff, > I should add, the example you gave is what raised my questions: it > seems to me an essentially untrainable case because it presents a > *non-repeatable* scenario. > > If I were to give to an AGI a 1,000-page book, and on the first 672 > pages was written the word "Not", it may predict
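
As a hedged worked example for Cliff's book scenario (using Laplace's rule of succession purely as an illustration; the message above is truncated before its own answer): after 672 pages that all read "Not", the simplest induction rule predicts that the next page also reads "Not" with probability

$$P(\text{"Not"} \mid 672 \text{ of } 672) \;=\; \frac{672+1}{672+2} \;=\; \frac{673}{674} \;\approx\; 0.9985,$$

so a purely frequency-based learner will confidently extrapolate the pattern, which is the sense in which a one-shot, non-repeatable deviation is hard to train for.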

Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Shane Legg
Eliezer, I suppose my position is similar to Ben's in that I'm more worried about working out the theory of AI than about morality because until I have a reasonable idea of how an AI is going to actually work I don't see how I can productively think about something as abstract as AI morality. I

Re: [agi] AIXI and Solomonoff induction

2003-02-12 Thread Shane Legg
Hi Cliff, So "Solomonoff induction", whatever that precisely is, depends on a somehow compressible universe. Do the AIXI theorems *prove* something along those lines about our universe? AIXI and related work does not prove that our universe is compressible. Nor do they need to. The sun seems

[agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben, you and I have a long-standing disagreement on a certain issue which impacts the survival of all life on Earth. I know you're probably bored with it by now, but I hope you can understand why, given my views, I keep returning to it, and find a little tolerance for my doing so. The issue is