Re: [agi] who is this "Bill Hubbard" I keep reading about?

2003-02-14 Thread Philip Sutton
Bill, Gulp... who was the Yank who said ... it was I ??? Johnny Appleseed or something? Well, it's my turn to fess up. I'm pretty certain that it was my slip of the keyboard that started it all. Sorry. :) My only excuse is that in my area of domain knowledge King Hubbard is very famous. H

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: But if this isn't immediately obvious to you, it doesn't seem like a top priority to try and discuss it... Argh. That came out really, really wrong and I apologize for how it sounded. I'm not very good at agreeing to disagree. Must... sleep... -- Eliezer S. Yudk

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: > > I'll read the rest of your message tomorrow... > >> But we aren't *talking* about whether AIXI-tl has a mindlike >> operating program. We're talking about whether the physically >> realizable challenge, which definitely breaks the formalism, also >> breaks AIXI-tl in practi

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
Hmmm My friend, I think you've pretty much convinced me with this last batch of arguments. Or, actually, I'm not sure if it was your excellently clear arguments or the fact that I finally got a quiet 15 minutes to really think about it (the three kids, who have all been out sick from school

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
I'll read the rest of your message tomorrow... > But we aren't *talking* about whether AIXI-tl has a mindlike operating > program. We're talking about whether the physically realizable > challenge, > which definitely breaks the formalism, also breaks AIXI-tl in practice. > That's what I origina

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: > >> AIXI-tl *cannot* figure this out because its control process is not >> capable of recognizing tl-computable transforms of its own policies >> and strategic abilities, *only* tl-computable transforms of its own >> direct actions. Yes, it simulates entities who know this; it

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
Hi, > You appear to be thinking of AIXI-tl as a fuzzy little harmless baby being > confronted with some harsh trial. Once again, your ability to see into my mind proves extremely flawed ;-) You're right that my statement "AIXItl is slow at learning" was ill-said, though. It is very inefficien

Re: [agi] who is this "Bill Hubbard" I keep reading about?

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: Strange that there would be someone on this list with a name so similar to mine. I apologize, dammit! I whack myself over the head with a ballpeen hammer! Now let me ask you this: Do you want to trade names? -- Eliezer S. Yudkowsky http://singins

[agi] who is this "Bill Hubbard" I keep reading about?

2003-02-14 Thread Bill Hibbard
Strange that there would be someone on this list with a name so similar to mine. Cheers, Bill -- Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI 53706 [EMAIL PROTECTED] 608-263-4427 fax: 608-263-6738 http://www.ssec.wisc.edu/~billh/vis

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Michael Roy Ames
Eliezer S. Yudkowsky asked Ben Goertzel: > > Do you have a non-intuitive mental simulation mode? > LOL --#:^D It *is* a valid question, Eliezer, but it makes me laugh. Michael Roy Ames [Who currently estimates his *non-intuitive mental simulation mode* to contain about 3 iterations of 5 variab

Re: [agi] unFriendly Hibbard SIs

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: Hey Eliezer, my name is Hibbard, not Hubbard. *Argh* sorry. On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: *takes deep breath* This is probably the third time you've sent a message to me over the past few months where you make some remark like this to indicate that y

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: It *could* do this but it *doesn't* do this. Its control process is such that it follows an iterative trajectory through chaos which is forbidden to arrive at a truthful solution, though it may converge to a stable attractor.

Re: [agi] unFriendly Hubbard SIs

2003-02-14 Thread Bill Hibbard
Hey Eliezer, my name is Hibbard, not Hubbard. On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: > Bill Hibbard wrote: > > > > I never said perfection, and in my book make it clear that > > the task of a super-intelligent machine learning behaviors > > to promote human happiness will be very messy.

[agi] unFriendly Hubbard SIs

2003-02-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: I never said perfection, and in my book make it clear that the task of a super-intelligent machine learning behaviors to promote human happiness will be very messy. That's why it needs to be super-intelligent. The problem with laws is that they are inevitably ambiguous. They

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: >> Even if a (grown) human is playing PD2, it outperforms AIXI-tl >> playing PD2. > > Well, in the long run, I'm not at all sure this is the case. You > haven't proved this to my satisfaction. PD2 is very natural to humans; we can take for granted that humans excel at PD2. Th
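[As context for the PD2 point argued above: in this thread PD2 appears to be a Prisoner's Dilemma played against an exact copy of oneself. The sketch below is a minimal Python illustration of the advantage being claimed for a player that recognizes the symmetry; the payoff numbers and the agent names symmetric_reasoner / best_responder are illustrative assumptions, not a model of AIXI-tl's actual control process.]

    # Toy clone Prisoner's Dilemma ("PD2"): each player faces an exact copy
    # of itself. Standard PD payoffs assumed: T=5, R=3, P=1, S=0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def symmetric_reasoner():
        # Knows its opponent is a copy of itself, so only the diagonal
        # outcomes (C,C) and (D,D) are reachable; picks the better diagonal.
        return max('CD', key=lambda m: PAYOFF[(m, m)][0])

    def best_responder():
        # Models the opponent as an independent unknown and plays the
        # dominant action of the one-shot PD: defect.
        return 'D'

    for agent in (symmetric_reasoner, best_responder):
        m = agent()
        print(agent.__name__, 'plays', m, '-> payoff', PAYOFF[(m, m)][0])
    # symmetric_reasoner scores 3 against its clone; best_responder scores 1.

The only point of the sketch is that recognizing "my opponent computes the same move I do" changes the optimization problem; it says nothing about how hard that recognition is for a given formalism.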

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: >> There are simple external conditions that provoke protective >> tendencies in humans following chains of logic that seem entirely >> natural to us. Our intuition that reproducing these simple external >> conditions serves to provoke protective tendencies in AIs is knowably >> w

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Brad Wyble
> > There are simple external conditions that provoke protective tendencies in > humans following chains of logic that seem entirely natural to us. Our > intuition that reproducing these simple external conditions serves to > provoke protective tendencies in AIs is knowably wrong, failing an >

Re: [agi] Reply to Bill Hubbard

2003-02-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote: In even plainer language: If you rely on groups of AIs to police themselves you *will* get killed unless a miracle happens. A miracle m may be defined as a complex event which we have no Bayesian reason to expect, ergo, having probability 2^-K(m). You have to grou
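[The 2^-K(m) figure quoted above is the usual algorithmic-information reading of "no Bayesian reason to expect." One standard formalization, assuming a Solomonoff-style universal prior with a fixed universal prefix machine U and program length ℓ(p) in bits, is:]

    % Up to a machine-dependent multiplicative constant (coding theorem):
    P(m) \;\approx\; 2^{-K(m)}, \qquad
    K(m) \;=\; \min\{\, \ell(p) : U(p) = m \,\}

so the more complex the "miracle" m, the more steeply its prior probability falls off.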

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
Brad Wyble wrote: > >> 3) A society of selfish AIs may develop certain (not really >> primatelike) rules for enforcing cooperative interactions among >> themselves; but you cannot prove for any entropic specification, and >> I will undertake to *disprove* for any clear specification, that this >>

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
> Even if a (grown) human is playing PD2, it outperforms AIXI-tl playing > PD2. Well, in the long run, I'm not at all sure this is the case. You haven't proved this to my satisfaction. In the short run, it certainly is the case. But so what? AIXI-tl is damn slow at learning, we know that. Th

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Brad Wyble
> That *still* doesn't work. > > 1) "Hard-wired" rules are a pipe dream. It consists of mixing > mechanomorphism ("machines only do what they're told to do") with > anthropomorphism ("I wish those slaves down on the plantation would stop > rebelling"). The only hard-wired level of organiza

Re: [agi] Democracy / the happiness of all humans AND......

2003-02-14 Thread Bill Hibbard
Hi Philip, I am aware of the problem you raise about the happiness of animals, but don't have a clear answer. My preference is that human happiness will depend on animal happiness, especially as the productivity of super-intelligent machines gives humans more wealth and education. Explicitly writi

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: OK. Rather than responding point by point, I'll try to say something compact ;) You're looking at the interesting scenario of an iterated prisoners' dilemma between two AIXI-tl's, each of which has a blank operating program at the start of the iterated prisoners' dilemma. (In

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Bill Hibbard
Hi David, > The problem here, I guess, is the conflict between Platonic expectations of > perfection and the messiness of the real world. I never said perfection, and in my book make it clear that the task of a super-intelligent machine learning behaviors to promote human happiness will be very m

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Eliezer S. Yudkowsky
C. David Noziglia wrote: The problem with the issue we are discussing here is that the worst-case scenario for handing power to unrestricted, super-capable AI entities is very bad, indeed. So what we are looking for is not really building an ethical structure or moral sense at all. Failure is n

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread C. David Noziglia
This is to extract these statements and reply to them. > > I think the happiness/unhappiness of all humans is one good stepping off > > point for learning values. But there may be some values that are not > > shared strongly as major motivators by all humans which might be > > important values. > > >

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
> Really, when has a computer (with the exception of certain Microsoft > products) ever been able to disobey its human masters? > > It's easy to get caught up in the romance of "superpowers", but come on, > there's nothing to worry about. > > -Daniel Hi Daniel, Clearly there is nothing to worry

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
OK. Rather than responding point by point, I'll try to say something compact ;) You're looking at the interesting scenario of an iterated prisoners' dilemma between two AIXI-tl's, each of which has a blank operating program at the start of the iterated prisoners' dilemma. (In parts of my last rep
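[A side note on the symmetry this scenario relies on: two identical deterministic agents started from the same blank state, each seeing the joint history from its own side, make identical moves at every round, so joint play never leaves the (C,C)/(D,D) diagonal. The sketch below uses a stand-in deterministic policy (roughly tit-for-tat), chosen only for illustration and not as a model of AIXI-tl's actual operating program.]

    # Two copies of one deterministic agent in an iterated PD, both starting
    # from the same blank state. The policy is a placeholder assumption.
    def policy(history):
        # history: list of (my_move, their_move) pairs seen so far.
        if not history:
            return 'C'
        return history[-1][1]      # copy the opponent's previous move

    hist_a, hist_b = [], []
    for _ in range(10):
        a, b = policy(hist_a), policy(hist_b)
        hist_a.append((a, b))
        hist_b.append((b, a))

    # Identical programs + identical inputs give identical outputs each
    # round, so every joint move lies on the payoff-matrix diagonal.
    assert all(my == theirs for my, theirs in hist_a)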

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Daniel Colonnese
> There is a lot of variation in human > psychology, and some humans are pretty damn dangerous. Also there is the > maxim "power corrupts, and absolute power corrupts absolutely" which tells > you something about human psychology. A human with superintelligence and > superpowers could be a great

RE: [agi] Breaking AIXI-tl

2003-02-14 Thread Ben Goertzel
Hi Eliezer Some replies to "side points": > This is a critical class of problem for would-be implementors of > Friendliness. If all AIs, regardless of their foundations, did sort of > what humans would do, given that AI's capabilities, the whole world would > be a *lot* safer. Hmmm. I don't

[agi] Democracy / the happiness of all humans AND......

2003-02-14 Thread Philip Sutton
Bill, I agree that, over the long haul, and admitting all its limitations, there is no better system than democracy. And it will be interesting to see how humans cope with admitting very intelligent AGIs into that democracy! On another matter, I think there may be a way to deal with the needs

Re: [agi] Reply to Bill Hubbard's post: Mon, 10 Feb 2003

2003-02-14 Thread Bill Hibbard
Hi Philip, > I was talking about ethics as being the top level goals because I was > trying to think about AGI ethics in the context of the Novamente > structure. > > I can imagine values being expressed as value statements: > > x is good/bad > y is desirable/undesirable > > But these can be turne

Re: [agi] Breaking AIXI-tl

2003-02-14 Thread Bill Hibbard
On Fri, 14 Feb 2003, Eliezer S. Yudkowsky wrote: > Ben Goertzel wrote: > . . . > >> Lee Corbin can work out his entire policy in step (2), before step > >> (3) occurs, knowing that his synchronized other self - whichever one > >> he is - is doing the same. > > > > OK -- now, if AIXItl were st