RE: [agi] Friendliness toward humans

2003-01-10 Thread Gary Miller
, rude lout. -Original Message- From: EGHeflin, Sent: Thursday, January 09, 2003 2:06 PM, Subject: Re: [agi] Friendliness toward humans. Kevin et al., Fascinating set of observations, conjectures

Re: [agi] Friendliness toward humans

2003-01-10 Thread Alan Grimes
Ben Goertzel wrote: Since I'm too busy studying neuroscience, I simply don't have any time for learning operating systems. I will therefore either use the systems I know or the systems that require the least amount of effort to learn, regardless of their features. Alan, that sounds like a

RE: [agi] Friendliness toward humans

2003-01-10 Thread Ben Goertzel
Ben Goertzel wrote: Since I'm too busy studying neuroscience, I simply don't have any time for learning operating systems. I will therefore either use the systems I know or the systems that require the least amount of effort to learn, regardless of their features. Alan, that sounds

Re: [agi] Friendliness toward humans

2003-01-10 Thread Alan Grimes
I say this as someone who just burned half a week setting up a Linux network in his study. Ditto... The Windows 3.11 machine took 10 minutes. The Leenooks machine took 3 days... Yeah, that stuff is a pain. But compared to designing, programming and testing a thinking machine, it's cake,

Re: [agi] Friendliness toward humans

2003-01-09 Thread C. David Noziglia
It strikes me that what many of the messages refer to as ethical stances toward life, the earth, etc., are actually simply extensions of self-interest. In fact, ethical systems of cooperation are really, on a very simplistic level, ways of improving the lives of individuals. And this is not true

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
David Noziglia wrote: In fact, ethical systems of cooperation are really, on a very simplistic level, ways of improving the lives of individuals. And this is not true because of strictures from on high, but for reasons of real-world self-interest. Thus, the Nash Equilibrium, or the results
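The Nash equilibrium reference above is easy to make concrete. Below is a minimal Python sketch (not from the original thread; the payoff numbers are invented for illustration) that enumerates the pure-strategy Nash equilibria of a two-player cooperate-or-defect game, showing that mutual cooperation can itself be a stable outcome of pure self-interest:

    # Minimal sketch: pure-strategy Nash equilibria of a 2x2 game.
    # Payoff numbers are invented (a stag-hunt-style game).
    # payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
    ACTIONS = ["cooperate", "defect"]

    payoffs = {
        ("cooperate", "cooperate"): (4, 4),
        ("cooperate", "defect"):    (0, 3),
        ("defect",    "cooperate"): (3, 0),
        ("defect",    "defect"):    (2, 2),
    }

    def is_nash(row_action, col_action):
        """True if neither player gains by unilaterally switching actions."""
        row_pay, col_pay = payoffs[(row_action, col_action)]
        best_row = max(payoffs[(a, col_action)][0] for a in ACTIONS)
        best_col = max(payoffs[(row_action, a)][1] for a in ACTIONS)
        return row_pay == best_row and col_pay == best_col

    equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
    print(equilibria)  # [('cooperate', 'cooperate'), ('defect', 'defect')]

Under these made-up payoffs both mutual cooperation and mutual defection are equilibria, which is roughly the "ethics as real-world self-interest" argument: cooperation can be stable without any stricture from on high.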

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
Rational self-interest does not stop us from knocking down forests to build cities, in spite of all the ants and squirrels that are rendered homeless or dead as a consequence. My point being that maybe it should. Our destruction of the environment can be seen as not just ethically

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
Hi, I think that to suggest that evolutionary wiring is the root of our problems is suspect at best. There are many great beings who have walked this earth who were subject to the same evolution, yet were not at the whim of the destructive emotions. Causality is a very subtle notion

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
I agree with your ultimate objective; the big question is *how* to do it. What is clear is that no one has any idea that seems to be guaranteed to work in creating an AGI with these qualities. We are currently resigned to "let's build it and see what happens." Which is quite scary for some,

Re: [agi] Friendliness toward humans

2003-01-09 Thread maitri
... Kevin - Original Message - From: Ben Goertzel, Sent: Thursday, January 09, 2003 12:08 PM, Subject: RE: [agi] Friendliness toward humans. I agree with your ultimate objective; the big question is *how* to do it. What is clear is that no one has

Re: [agi] Friendliness toward humans

2003-01-09 Thread Tim Barnard
3) an intention to implement a careful AGI sandbox that we won't release our AGI from until we're convinced it is genuinely benevolent -- Ben Unfortunately, what one says and what one's intent is can be two completely different things. It's unlikely, to my mind, that the sandbox restriction

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
Eliezer, I certainly remember all those discussions on the SL4 list. I did not mean to imply that the AGI sandbox would be a perfect mechanism. Like everything else I mentioned, it is an imperfect mechanism. Of course, there is a nonzero chance that the AGI will turn evil and escape from the
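The "imperfect but layered mechanisms" argument can be put in rough numbers. The sketch below uses invented per-safeguard failure probabilities and assumes, unrealistically, that the safeguards fail independently; it only illustrates why stacking unreliable mechanisms lowers, but never eliminates, the overall risk:

    # Back-of-the-envelope sketch: layered, imperfect safeguards.
    # The probabilities are invented, and independence between safeguards
    # is assumed -- a strong simplification for real systems.
    failure_probs = {
        "sandbox containment": 0.10,
        "goal-system review":  0.20,
        "human oversight":     0.15,
    }

    p_all_fail = 1.0
    for name, p in failure_probs.items():
        p_all_fail *= p  # all safeguards must fail for the AGI to escape

    print(f"chance that every safeguard fails: {p_all_fail:.3%}")
    # -> 0.300% with these numbers: far below any single safeguard's
    #    failure rate, but still the "nonzero chance" conceded above.

The point matches the message above: no single mechanism is perfect, but the combination can be made much safer than any piece alone.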

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
Subject: Re: [agi] Friendliness toward humans. 3) an intention to implement a careful AGI sandbox that we won't release our AGI from until we're convinced it is genuinely benevolent -- Ben Unfortunately, what one says and what one's intent is can be two

Re: [agi] Friendliness toward humans

2003-01-09 Thread Eliezer S. Yudkowsky
maitri wrote: I agree with your ultimate objective; the big question is *how* to do it. What is clear is that no one has any idea that seems to be guaranteed to work in creating an AGI with these qualities. We are currently resigned to "let's build it and see what happens." Which is quite scary

Re: [agi] Friendliness toward humans

2003-01-09 Thread Alan Grimes
This type of training should be given to the AGI as early as it is understandable, in order to ensure proper consideration of the welfare of its creators. Not so simple: The human brain has evolved a special agent-modeling circuit that exists in the frontal lobe. (probably having a

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
This type of training should be given to the AGI as early as it is understandable, in order to ensure proper consideration of the welfare of its creators. Not so simple: The human brain has evolved a special agent-modeling circuit that exists in the frontal lobe. (probably having a

Re: [agi] Friendliness toward humans

2003-01-09 Thread maitri
Sent: January 09, 2003 10:59 AM, Subject: Re: [agi] Friendliness toward humans. Interesting thoughts... I have often conjectured that an AGI that was supposedly superior to humans would naturally gravitate towards benevolence and compassion. I am not certain this would be the case

Re: [agi] Friendliness toward humans

2003-01-09 Thread C. David Noziglia
Superior in intelligence doesn't necessarily mean superior in wisdom ... there are plenty of examples of that in human history. Intelligence in the wrong hands is the most dangerous thing... we are seeing that right now in our govt, IMO. And just WHERE do you see evidence of intelligence?

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
In stating that evil is the natural result of a strong sense of self, I was hoping to avoid detailed discussion about good and evil, and instead propose a possible direction by which a solution can be found. Namely, do not instill a strong sense of self into the AGI... This is a very

Re: [agi] Friendliness toward humans

2003-01-09 Thread C. David Noziglia
I still hold that *if* an AGI has a sense of self, without the concomitant wisdom needed, it *will* develop the destructive emotions... I agree that it will develop SOME destructive emotions, and I think that any mind necessarily will develop SOME destructive emotions -- which it then

Re: [agi] Friendliness toward humans

2003-01-09 Thread Damien Sullivan
On Thu, Jan 09, 2003 at 11:24:14AM -0500, Ben Goertzel wrote: I think the issues that are problematic have to do with the emotional baggage that humans attach to the self/other distinction. Which an AGI will most likely *not* have, due to its lack of human evolutionary wiring...

Re: [agi] Friendliness toward humans

2003-01-09 Thread Alan Grimes
Damien Sullivan wrote: You _MIGHT_ be able to produce a proof of concept that way... However, a practical working AI, such as the one which could help me design my next body, would need to be quite a bit more. =\ Why? Why should such a thing require replacing the original

Re: [agi] Friendliness toward humans

2003-01-09 Thread Damien Sullivan
On Thu, Jan 09, 2003 at 10:57:41PM -0800, Alan Grimes wrote: It would be a service-driven motivation system, but I would expect a much more sophisticated implementation of agency beyond a Windows shell or something. Quite possibly. But my point is that the evolutionary root _and_ guiding

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
Alan Grimes wrote: My position is that you don't really need friendly AI; you simply need to neglect to include the take-over-the-world motivator... I think that is a VERY bad approach!!! I don't want a superhuman AGI to destroy us by accident or through indifference... which are possibilities

RE: [agi] Friendliness toward humans

2003-01-09 Thread Ben Goertzel
-Original Message- From: Damien Sullivan, Sent: Thursday, January 09, 2003 10:27 PM, Subject: Re: [agi] Friendliness toward humans. On Thu, Jan 09, 2003 at 10:23:07PM -0800, Alan Grimes wrote: You _MIGHT_ be able

Re: [agi] Friendliness toward humans

2003-01-09 Thread Alan Grimes
Ben Goertzel wrote: I think that is a VERY bad approach!!! I don't want a superhuman AGI to destroy us by accident or through indifference... which are possibilities just as real as aggression. Positive action requires positive motivation. -- Linux programmers: the only people in the

Re: [agi] Friendliness toward humans

2003-01-09 Thread Damien Sullivan
On Thu, Jan 09, 2003 at 11:18:36PM -0800, Alan Grimes wrote: Damien Sullivan wrote: Quite possibly. But my point is that the evolutionary root _and_ guiding principle would be that of a (Unix, ahem) shell. Are you nuts? Unix is the most user-hostile system still in common use! PUKE!!!