Re: [agi] What should we do to be prepared?

2008-03-12 Thread Maksym Taran
I understand it would be complicated and tedious to describe your information-theoretical argument yourself; however, I'm guessing that others besides Vladimir are curious. I for one would like to understand what your argument entails, and I would be the first one to admit I don't know as much

Re: [agi] What should we do to be prepared?

2008-03-12 Thread Mark Waser
I understand it would be complicated and tedious to describe your information-theoretical argument yourself; however, I'm guessing that others besides Vladimir are curious. I for one would like to understand what your argument entails, and I would be the first one to admit I don't know

Re: [agi] What should we do to be prepared?

2008-03-12 Thread Vladimir Nesov
On Wed, Mar 12, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote: From: Vladimir Nesov [EMAIL PROTECTED] "I give up." with or without conceding the point (or declaring that I've convinced you enough that you are now unsure but not enough that you're willing to concede it just yet --

Re: [agi] What should we do to be prepared?

2008-03-11 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 4:47 AM, Mark Waser [EMAIL PROTECTED] wrote: I can't prove a negative but if you were more familiar with Information Theory, you might get a better handle on why your approach is ludicrously expensive. Please reformulate what you mean by my approach independently
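
The information-theoretic cost argument itself is only alluded to in these excerpts, never stated. As a generic back-of-envelope illustration of the kind of claim at stake (that exact-fidelity description of a physical environment is astronomically expensive while coarse models are cheap), here is a sketch in which every constant is an invented assumption, not a figure from the thread:

```python
# Back-of-envelope sketch, not the thread's argument: compare the bits needed
# to track an environment particle-by-particle against a coarse-grained model.
# AVOGADRO is a physical constant; BITS_PER_PARTICLE and the cell counts are
# purely illustrative assumptions.

AVOGADRO = 6.022e23          # particles in one mole
BITS_PER_PARTICLE = 100      # assumed bits to record one particle's state

def bits_full_fidelity(moles: float) -> float:
    """Bits to record every particle's state individually."""
    return moles * AVOGADRO * BITS_PER_PARTICLE

def bits_coarse_model(cells: int, bits_per_cell: int = 32) -> float:
    """Bits for a model that only tracks aggregate properties per cell."""
    return cells * bits_per_cell

full = bits_full_fidelity(1.0)        # one mole of matter, tracked exactly
coarse = bits_coarse_model(10**6)     # same region, a million coarse cells
print(f"full fidelity: {full:.3e} bits")
print(f"coarse model:  {coarse:.3e} bits")
print(f"ratio:         {full / coarse:.3e}")   # roughly 1.9e18
```

On these assumed numbers the exact description is about eighteen orders of magnitude larger, which is the flavor of "ludicrously expensive" the exchange gestures at; none of this settles whether a virtual environment can be "close" in capability, which is the actual point under dispute in this thread.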

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser [EMAIL PROTECTED] wrote: 1) If I physically destroy every other intelligent thing, what is going to threaten me? Given the size of the universe, how can you possibly destroy every other intelligent thing (and be sure that no others ever

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Stan Nilsen
Mark Waser wrote: Part 4. ... Eventually, you're going to get down to "Don't mess with anyone's goals", be forced to add the clause "unless absolutely necessary", and then have to fight over what "absolutely necessary" means. But what we've got here is what I would call the goal of a

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser [EMAIL PROTECTED] wrote: I can destroy all Earth-originated life if I start early enough. If there is something else out there, it can similarly be hostile and try to destroy me if it can, without listening to any friendliness prayer. All

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser [EMAIL PROTECTED] wrote: Information Theory is generally accepted as correct and clearly indicates that you are wrong. Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think that I'm asserting that virtual environment can be *exactly* as capable as physical environment? No, I think that you're asserting that the virtual environment is close

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser [EMAIL PROTECTED] wrote: Note that you are trying to use a technical term in a non-technical way to fight a non-technical argument. Do you really think that I'm asserting that virtual environment can be *exactly* as capable as physical

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
errata: On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov [EMAIL PROTECTED] wrote: "I'm sure that for computational efficiency it should be a very strict limitation." It *shouldn't* be a very strict limitation -- Vladimir Nesov [EMAIL PROTECTED]

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser [EMAIL PROTECTED] wrote: How do we get from here to there? Without a provable path, it's all just magical hand-waving to me. (I like it but it's ultimately an unsatisfying illusion) It's an independent statement. No, it

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
My second point that you omitted from this response doesn't need there to be universal substrate, which is what I mean. Ditto for significant resources. I didn't omit your second point, I covered it as part of the difference between our views. You believe that certain tasks/options are

Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Part 5. The nature of evil or The good, the bad, and the evil Since we've got the (slightly revised :-) goal of a Friendly individual and the Friendly society -- Don't act contrary to anyone's goals unless absolutely necessary -- we now can evaluate actions as good or bad in relation to that

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 2:09 AM, Mark Waser [EMAIL PROTECTED] wrote: What is different in my theory is that it handles the case where the dominant theory turns unfriendly. The core of my thesis is that the particular Friendliness that I/we are trying to reach is an attractor --

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
Sure! Friendliness is a state which promotes an entity's own goals; therefore, any entity will generally voluntarily attempt to return to that (Friendly) state since it is in its own self-interest to do so. In my example it's also explicitly in dominant structure's self-interest to

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 8:13 PM, Mark Waser [EMAIL PROTECTED] wrote: Sure! Friendliness is a state which promotes an entity's own goals; therefore, any entity will generally voluntarily attempt to return to that (Friendly) state since it is in its own self-interest to do so. In my

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Tim Freeman
From: Mark Waser [EMAIL PROTECTED]: Hmm. Bummer. No new feedback. I wonder if a) I'm still in Well duh land, b) I'm so totally off the mark that I'm not even worth replying to, or c) I hope being given enough rope to hang myself. :-) I'll read the paper if you post a URL to the finished

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have a concrete proposal written up somewhere in a reasonably compact format, I'll read it and comment -- Ben G On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman [EMAIL PROTECTED] wrote: From: Mark Waser [EMAIL PROTECTED]: Hmm.

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
My impression was that your friendliness-thing was about the strategy of avoiding being crushed by next big thing that takes over. My friendliness-thing is that I believe that a sufficiently intelligent self-interested being who has discovered the f-thing or had the f-thing explained to it

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
OK. Sorry for the gap/delay between parts. I've been doing a substantial rewrite of this section . . . . Part 4. Despite all of the debate about how to *cause* Friendly behavior, there's actually very little debate about what Friendly behavior looks like. Human beings actually have had the

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 12:35 AM, Mark Waser [EMAIL PROTECTED] wrote: Because you're *NEVER* going to be sure that you're in a position where you can prevent that from ever happening. That's a current point of disagreement then. Let's iterate from here. I'll break it up this way: 1) If I

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
1) If I physically destroy every other intelligent thing, what is going to threaten me? Given the size of the universe, how can you possibly destroy every other intelligent thing (and be sure that no others ever successfully arise without you crushing them too)? Plus, it seems like an

Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote: 1) If I physically destroy every other intelligent thing, what is going to threaten me? Given the size of the universe, how can you possibly destroy every other intelligent thing (and be sure that no others ever successfully arise

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Nathan Cravens
Pack your bags, folks, we're headed toward damnation and hellfire! haha! Nathan On Sun, Mar 9, 2008 at 7:10 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote: 1) If I physically destroy every other intelligent thing, what is going to

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
This raises another point for me though. In another post (2008-03-06 14:36) you said: It would *NOT* be Friendly if I have a goal that I not be turned into computronium (which I hereby state that I do), even with your clause. Yet, if I understand our recent exchange correctly, it is possible for

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Vladimir Nesov
On Sat, Mar 8, 2008 at 6:30 PM, Mark Waser [EMAIL PROTECTED] wrote: This sounds like magic thinking, sweeping the problem under the rug of 'attractor' word. Anyway, even if this trick somehow works, it doesn't actually address the problem of friendly AI. The problem with unfriendly AI is

Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
What is different in my theory is that it handles the case where the dominant theory turns unfriendly. The core of my thesis is that the particular Friendliness that I/we are trying to reach is an attractor -- which means that if the dominant structure starts to turn unfriendly, it is
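
The attractor claim is a dynamical-systems metaphor: represent the system's disposition as a point in state space, and "self-correcting" means trajectories perturbed away from the Friendly region flow back into it. A minimal sketch of that property, with a one-dimensional state and invented dynamics (the thread proposes no such model):

```python
# Toy attractor: a 1-D state x whose update pulls it toward the fixed point
# X_STAR = 1.0 (standing in for "Friendly"). The dynamics are an invented
# illustration of "self-correcting", not a model from the thread.

X_STAR = 1.0  # the attractor

def step(x: float, rate: float = 0.2) -> float:
    """One update: move a fraction of the remaining distance to X_STAR."""
    return x + rate * (X_STAR - x)

# Perturb the system well away from the Friendly state and iterate.
x = -0.5
for _ in range(30):
    x = step(x)
print(f"after 30 steps: {x:+.4f}")  # converges toward +1.0000
```

Whether real goal systems have this property is exactly what Nesov contests; the sketch only pins down what the word "attractor" commits the thesis to: convergence back to the state after perturbation, for every starting point in its basin.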

Re: [agi] What should we do to be prepared?

2008-03-07 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote: On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: The scenario takes on an entirely different tone if you replace weed out some wild carrots with kill all the old people who are economically

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Whether humans conspire to weed out wild carrots impacts whether humans are classified as Friendly (or, it would if the wild carrots were sentient). Why does it matter what word we/they assign to this situation? My vision of Friendliness places many more constraints on the behavior

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good idea for us. Hopefully then, the attractor takes over.

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good idea for us.

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Mark Waser [EMAIL PROTECTED] wrote: How do you propose to make humans Friendly? I assume this would also have the effect of ending war, crime, etc. I don't have such a proposal but an obvious first step is defining/describing Friendliness and why it might be a good

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote: Reprogramming humans doesn't appear to be an option. We do it all the time. It is called school. Less commonly, the mentally ill are forced to take drugs or treatment for their own good. Most notably, this includes drug addicts. Also, it is common

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen
Matt Mahoney wrote: --- Stan Nilsen [EMAIL PROTECTED] wrote: Reprogramming humans doesn't appear to be an option. We do it all the time. It is called school. I might be tempted to call this manipulation rather than programming. The results of schooling are questionable while programming

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Comments seem to be dying down and disagreement appears to be minimal, so let me continue . . . . Part 3. Fundamentally, what I'm trying to do here is to describe an attractor that will appeal to any goal-seeking entity (self-interest) and be beneficial to humanity at the same time

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: TAKE-AWAY: Having the statement The goal of Friendliness is to promote the goals of all Friendly entities allows us to make considerable progress in describing and defining Friendliness. How does an agent know if another agent is Friendly or not,

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 08:09 AM, Mark Waser wrote: There is one unique attractor in state space. No. I am not claiming that there is one unique attractor. I am merely saying that there is one describable, reachable, stable attractor that has the characteristics that we want. There are *clearly*

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How does an agent know if another agent is Friendly or not, especially if the other agent is more intelligent? An excellent question but I'm afraid that I don't believe that there is an answer (but, fortunately, I don't believe that this has any effect on my thesis).

Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.
On 03/07/2008 03:20 PM, Mark Waser wrote: For there to be another attractor F', it would of necessity have to be an attractor that is not desirable to us, since you said there is only one stable attractor for us that has the desired characteristics. Uh, no. I am not claiming that there is

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 5:24 PM, Mark Waser [EMAIL PROTECTED] wrote: The core of my thesis is that the particular Friendliness that I/we are trying to reach is an attractor -- which means that if the dominant structure starts to turn unfriendly, it is actually a self-correcting situation.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Hmm. Bummer. No new feedback. I wonder if a) I'm still in Well duh land, b) I'm so totally off the mark that I'm not even worth replying to, or c) I hope being given enough rope to hang myself. :-) Since I haven't seen any feedback, I think I'm going to divert to a section that I'm not

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Stephen Reed
From: Mark Waser [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, March 6, 2008 9:01:53 AM Subject: Re: [agi] What should we do to be prepared? Hmm. Bummer. No new feedback. I wonder if a) I'm still in Well duh land, b) I'm so totally off the mark that I'm not even worth replying to, or c) I

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: And thus, we get back to a specific answer to jk's second question. *US* should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define *us* because I DECLARE that it should be assumed to include current

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
To: agi@v2.listbox.com Sent: Thursday, March 06, 2008 10:01 AM Subject: Re: [agi] What should we do to be prepared? Hmm. Bummer. No new feedback. I wonder if a) I'm still in Well duh land, b) I'm so totally off the mark that I'm not even worth replying to, or c) I hope being given enough rope

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Or should we not worry about the problem because the more intelligent agent is more likely to win the fight? My concern is that evolution could favor unfriendly behavior, just as it has with humans. I don't believe that evolution favors unfriendly behavior. I believe that evolution is

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote: TAKE-AWAY: Friendliness is an attractor because it IS equivalent to enlightened self-interest -- but it only works where all entities involved are Friendly. Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the
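
"Enlightened self-interest among Friendly entities" has a standard toy formalization that the thread leaves implicit: the iterated prisoner's dilemma, where reciprocating cooperators prosper against each other but gain no protection against a committed defector. A sketch with the conventional Axelrod payoffs (the mapping of "Friendly" to "tit-for-tat" is this illustration's assumption, not Waser's or Hall's definition):

```python
# Iterated prisoner's dilemma sketch. Reciprocal cooperators score best
# against each other, illustrating "it only works where all entities
# involved are Friendly". Payoffs are the textbook values.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # (my move, their move) -> my score
          ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT  vs TFT: ", play(tit_for_tat, tit_for_tat))      # (300, 300)
print("TFT  vs ALLD:", play(tit_for_tat, always_defect))    # (99, 104)
print("ALLD vs ALLD:", play(always_defect, always_defect))  # (100, 100)
```

Mutual cooperators outscore mutual defectors (300 vs 100 per pair), which is the attractor-as-self-interest intuition; but the defector still beats the cooperator head-to-head (104 vs 99), which is why the "only works where all entities involved are Friendly" qualifier matters.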

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal of killing the FAI. Should the FAI show empathy by helping the UFAI achieve its goal? Hopefully this concern was answered by my last post but . . . . Being Friendly *certainly* doesn't mean fatally overriding your

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 8:27 PM, Mark Waser [EMAIL PROTECTED] wrote: Now, I've just attempted to sneak a critical part of the answer right past everyone with my plea . . . . so let's go back and review it in slow-motion. :-) Part of our environment is that we have peers. And peers become

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 08:32 AM, Matt Mahoney wrote: --- Mark Waser [EMAIL PROTECTED] wrote: And thus, we get back to a specific answer to jk's second question. *US* should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define *us* because I DECLARE

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Mark, how do you intend to handle the friendliness obligations of the AI towards vastly different levels of intelligence (above the threshold, of course)? Ah. An excellent opportunity for continuation of my previous post rebutting my personal conversion to computronium . . . . First off,

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 11:23 PM, Mark Waser [EMAIL PROTECTED] wrote: Friendliness must include reasonable protection for sub-peers or else there is no enlightened self-interest or attractor-hood to it -- since any rational entity will realize that it could *easily* end up as a sub-peer.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/05/2008 05:04 PM, Mark Waser wrote: And thus, we get back to a specific answer to jk's second question. *US* should be assumed to apply to any sufficiently intelligent goal-driven intelligence. We don't need to define *us* because I DECLARE that it should be assumed to include current

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal of killing the FAI. Should the FAI show empathy by helping the UFAI achieve its goal? Hopefully this concern was answered by my last post but . . . . Being

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: A Friendly entity does *NOT* snuff out (objecting/non-self-sacrificing) sub-peers simply because it has decided that it has a better use for the resources that they represent/are. That way lies death for humanity when/if we become sub-peers (aka

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I wonder if this is a substantive difference with Eliezer's position though, since one might argue that 'humanity' means 'the [sufficiently intelligent and sufficiently ...] thinking being' rather than 'homo sapiens sapiens', and the former would of course include SAIs and intelligent alien

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would it be Friendly to turn you into computronium if your memories were preserved and the newfound computational power was used to make you immortal in a simulated world of your choosing, for example, one without suffering, or where you had a magic genie or super powers or enhanced

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote: This is different from what I replied to (comparative advantage, which J Storrs Hall also assumed), although you did state this point earlier. I think this one is a package deal fallacy. I can't see how whether humans conspire

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I think this one is a package deal fallacy. I can't see how whether humans conspire to weed out wild carrots or not will affect decisions made by future AGI overlords. ;-) Whether humans conspire to weed out wild carrots impacts whether humans are classified as Friendly (or, it would if the

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would an acceptable response be to reprogram the goals of the UFAI to make it friendly? Yes -- but with the minimal possible changes to do so (and preferably done by enforcing Friendliness and allowing the AI to resolve what to change to resolve integrity with Friendliness -- i.e. don't mess

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
And more generally, how is this all to be quantified? Does your paper go into the math? All I'm trying to establish and get agreement on at this point are the absolutes. There is no math at this point because it would be premature and distracting. but, a great question . . . . :-

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:48 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote: This is different from what I replied to (comparative advantage, which J Storrs Hall also assumed), although you did state this point earlier.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:46 AM, Mark Waser [EMAIL PROTECTED] wrote: I think this one is a package deal fallacy. I can't see how whether humans conspire to weed out wild carrots or not will affect decisions made by future AGI overlords. ;-) Whether humans conspire to weed out wild

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here are some thoughts that I think follow from what I understand of your position so far. But I may be wildly mistaken. Please correct my mistakes. There is one unique attractor in state space. Any individual of a species that

Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
On 03/06/2008 02:18 PM, Mark Waser wrote: I wonder if this is a substantive difference with Eliezer's position though, since one might argue that 'humanity' means 'the [sufficiently intelligent and sufficiently ...] thinking being' rather than 'homo sapiens sapiens', and the former would of

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote: My argument doesn't need 'something of a completely different kind'. Society and human is fine as substitute for human and carrot in my example, only if society could extract profit from replacing humans with 'cultivated humans'. But

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote: My argument doesn't need 'something of a completely different kind'. Society and human is fine as substitute for human and carrot in my example, only

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
Hi again. I stress that I am not saying we should try to stop development (I do not think we can). But what is wrong with thinking about the possible outcomes and trying to be prepared? To try to affect the development and steer it in better directions, to take smaller steps to wherever we are going.

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Anthony George
On Wed, Mar 5, 2008 at 2:46 AM, rg [EMAIL PROTECTED] wrote: Anthony: Do not sociopaths understand the rules and the justice system ? Two responses come to mind. Both will be unsatisfactory probably, but oh well... 1. There's a difference between understanding rules and the justice system

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
rg wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless of what we decide. I am asking for what we should do to

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- rg [EMAIL PROTECTED] wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Initially I believe that a distributed AGI will do what we want it to do because it will evolve in a competitive, hostile environment that rewards

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
Matt Mahoney wrote: --- rg [EMAIL PROTECTED] wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Wrong. *You* cannot define friendliness for reasons of your own. Others may well be able to do so. It would be fine to

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
ok see my responses below.. Matt Mahoney wrote: --- rg [EMAIL PROTECTED] wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. We could say behavior that is acceptable in our society then.. In your mail you

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Friendliness, briefly, is a situation in which the motivations of the AGI are locked into a state of empathy with the human race as a whole. Which is fine as long as there is a sharp line dividing human from non-human. When that line goes away,

Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg
Hi You said friendliness was AGIs locked in empathy towards mankind. How can you make them feel this? How did we humans get empathy? Is it not very likely that we have empathy because it turned out to be an advantage during our evolution ensuring the survival of groups of humans. So if an AGI

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote: Matt Mahoney wrote: --- Richard Loosemore [EMAIL PROTECTED] wrote: Friendliness, briefly, is a situation in which the motivations of the AGI are locked into a state of empathy with the human race as a whole. Which is fine as long as

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- rg [EMAIL PROTECTED] wrote: ok see my responses below.. Matt Mahoney wrote: --- rg [EMAIL PROTECTED] wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. We could say behavior that is

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
--- rg [EMAIL PROTECTED] wrote: Matt: Why will an AGI be friendly ? The question only makes sense if you can define friendliness, which we can't. Why Matt, thank you for such a wonderful opening . . . . :-) Friendliness *CAN* be defined. Furthermore, it is my contention that

Re: [agi] What should we do to be prepared?

2008-03-05 Thread j.k.
On 03/05/2008 12:36 PM, Mark Waser wrote: snip... The obvious initial starting point is to explicitly recognize that the point of Friendliness is that we wish to prevent the extinction of the *human race* and/or to prevent many other horrible nasty things that would make *us* unhappy.

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore
rg wrote: Hi I made some responses below. Richard Loosemore wrote: rg wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
1. How will the AI determine what is in the set of horrible nasty thing[s] that would make *us* unhappy? I guess this is related to how you will define the attractor precisely. 2. Preventing the extinction of the human race is pretty clear today, but *human race* will become increasingly

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless of

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mark Waser
Subject: Re: [agi] What should we do to be prepared? On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
From: Anthony George [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, March 04, 2008 2:47 PM Subject: Re: [agi] What should we do to be prepared? On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote: Hi Is anyone discussing what to do in the future when we have

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Vladimir Nesov
On Tue, Mar 4, 2008 at 9:53 PM, rg [EMAIL PROTECTED] wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mike Tintner
Vlad: How to survive a zombie attack? I really like that thought :). You're right: we should seriously consider that possibility. But personally, I don't think we need to be afraid ... I'm sure they will be friendly zombies...

Re: [agi] What should we do to be prepared?

2008-03-04 Thread Matt Mahoney
--- rg [EMAIL PROTECTED] wrote: Hi Is anyone discussing what to do in the future when we have made AGIs? I thought that was part of why the singularity institute was made ? Note, that I am not saying we should not make them! Because someone will regardless of what we decide. I am