, rude lout.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of EGHeflin
Sent: Thursday, January 09, 2003 2:06 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Friendliness toward humans
Kevin et al.,
Fascinating set of observations, conjectures
Ben Goertzel wrote:
Since I'm too busy studying neuroscience, I simply don't have any
time for learning operating systems. I will therefore either use the
systems I know or the systems that require the least amount of effort
to learn, regardless of their features.
Alan, that sounds like a
I say this as someone who just burned half a week setting up a Linux
network in his study.
Ditto...
The Windows 3.11 machine took 10 minutes.
The Leenooks machine took 3 days...
Yeah, that stuff is a pain. But compared to designing,
programming and testing a thinking machine, it's cake,
It strikes me that what many of the messages refer to as ethical stances toward life, the earth, etc., are actually simply extensions of self-interest.
David Noziglia wrote:
In fact, ethical systems of cooperation are really, on a very simplistic
level, ways of improving the lives of individuals. And this is not true
because of strictures from on high, but for reasons of real-world
self-interest. Thus, the Nash Equilibrium, or the results
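The game-theoretic point can be made concrete with a toy example. Here is a minimal Python sketch, using made-up prisoner's-dilemma payoffs (the numbers are illustrative, not from the discussion): mutual defection is the unique Nash equilibrium even though mutual cooperation pays both players more, which is exactly why cooperation needs ethical or institutional support.

from itertools import product

# Payoffs for a one-shot prisoner's dilemma (illustrative numbers).
# PAYOFFS[(row, col)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
STRATEGIES = ("cooperate", "defect")

def is_nash(row, col):
    # A profile is a Nash equilibrium if neither player can gain by
    # unilaterally switching strategies.
    row_pay, col_pay = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(alt, col)][0] <= row_pay for alt in STRATEGIES)
    col_ok = all(PAYOFFS[(row, alt)][1] <= col_pay for alt in STRATEGIES)
    return row_ok and col_ok

for row, col in product(STRATEGIES, STRATEGIES):
    if is_nash(row, col):
        print("Nash equilibrium:", row, col, PAYOFFS[(row, col)])

Running it prints only (defect, defect): individually rational play leaves both players worse off than mutual cooperation would have.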
Rational self-interest does not stop us from knocking down forests to build cities, in spite of all the ants and squirrels that are rendered homeless or dead as a consequence.
My point being that maybe it should. Our destruction of the environment can be seen as not just ethically
Hi,
I think that to suggest that evolutionary wiring is the root of our problems is suspect at best. There are many great beings who have walked this earth who were subject to the same evolution, yet not at the whim of the destructive emotions.
Causality is a very subtle notion
I agree with your ultimate objective; the big question is *how* to do it.
What is clear is that no one has any idea that seems to be guaranteed to
work in creating an AGI with these qualities. We are currently resigned to
"let's build it and see what happens," which is quite scary for some,
...
Kevin
----- Original Message -----
From: Ben Goertzel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, January 09, 2003 12:08 PM
Subject: RE: [agi] Friendliness toward humans
I agree with your ultimate objective; the big question is *how* to do it.
What is clear is that no one has
3) an intention to implement a careful AGI sandbox that we won't release
our AGI from until we're convinced it is genuinely benevolent
-- Ben
Unfortunately, what one says and what one's intent is can be two completely
different things. It's unlikely, to my mind, that the sandbox restriction
Eliezer,
I certainly remember all those discussions on the SL4 list.
I did not mean to imply that the AGI sandbox would be a perfect mechanism.
Like everything else I mentioned, it is an imperfect mechanism.
Of course, there is a nonzero chance that the AGI will turn evil and escape from the sandbox.
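For what it's worth, the weakest version of the idea is easy to sketch. The following is a minimal Python illustration of the principle only, not of any actual Novamente mechanism: the AGI runs as a child process with its resources deliberately capped and its only channel out being stdout. The file name agi_main.py is a placeholder, and this assumes a POSIX system.

import resource
import subprocess

def restrict():
    # Runs in the child after fork(), before exec(): cap CPU time and
    # memory, and forbid spawning further processes.
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))        # 60 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (2**30, 2**30))   # 1 GiB address space
    resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))        # no fork()

# 'agi_main.py' is a hypothetical entry point for the untrusted program.
result = subprocess.run(
    ["python3", "agi_main.py"],
    preexec_fn=restrict,   # apply the limits inside the child
    capture_output=True,   # the only channels out are stdout/stderr
    timeout=120,           # wall-clock kill switch
)
print(result.stdout.decode())

Even this toy version shows where the imperfection lives: the limits constrain resources, not persuasion, and the output channel itself is the escape hatch the SL4 arguments target.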
maitri wrote:
I agree with your ultimate objective; the big question is *how* to do it.
What is clear is that no one has any idea that seems to be guaranteed to
work in creating an AGI with these qualities. We are currently resigned to
"let's build it and see what happens," which is quite scary
This type of training should be given to the AGI as early as it is
understandable in order to ensure proper consideration of the welfare
of its creators.
Not so simple:
The human brain has evolved a special agent modeling circuit that
exists in the frontal lobe. (probably having a
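The structural point can be caricatured in a few lines. Here is a toy Python sketch, with entirely made-up names and numbers (not a claim about any real AGI design), of what a dedicated agent-modeling component buys you: the system keeps an explicit model of each other agent's predicted welfare and folds it into action selection, rather than relying on trained-in rules alone.

from dataclasses import dataclass

@dataclass
class AgentModel:
    name: str
    # Predicted change in this agent's welfare for each candidate action.
    welfare_effects: dict

def choose_action(candidates, self_model, others, other_weight=1.0):
    # Pick the action with the best combined welfare score.
    # other_weight > 0 encodes "consideration of the welfare of its
    # creators"; other_weight = 0 is the purely self-interested agent.
    def score(action):
        own = self_model.welfare_effects.get(action, 0.0)
        rest = sum(o.welfare_effects.get(action, 0.0) for o in others)
        return own + other_weight * rest
    return max(candidates, key=score)

me = AgentModel("agi", {"hoard_resources": 2.0, "share_resources": 1.0})
creator = AgentModel("creator", {"hoard_resources": -3.0, "share_resources": 1.0})

print(choose_action(["hoard_resources", "share_resources"], me, [creator]))
# -> 'share_resources'; with other_weight=0.0 it picks 'hoard_resources'

Dial other_weight down to zero and the same machinery reproduces the purely self-interested agent, which is the worry about training without the modeling circuit to hang it on.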
Sent: Thursday, January 09, 2003 10:59 AM
Subject: Re: [agi] Friendliness toward humans
Interesting thoughts... I have often conjectured that an AGI that was supposedly superior to humans would naturally gravitate towards benevolence and compassion. I am not certain this would be the case
Superior in intelligence doesn't necessarily mean superior in wisdom ...
there are plenty of examples of that in human history.
Intelligence in the wrong hands is one of the most dangerous things... we are seeing that right now in our govt, IMO.
And just WHERE do you see evidence of intelligence?
In stating that evil is the natural result of a strong sense of self, I was hoping to avoid detailed discussion about good and evil, and instead propose a possible direction by which a solution can be found. Namely, do not instill a strong sense of self into the AGI...
This is a very
I still hold that *if* an AGI has a sense of self, without the concomitant wisdom needed, it *will* develop the destructive emotions...
I agree that it will develop SOME destructive emotions, and I think that any mind necessarily will develop SOME destructive emotions -- which it then
On Thu, Jan 09, 2003 at 11:24:14AM -0500, Ben Goertzel wrote:
I think the issues that are problematic have to do with the emotional baggage that humans attach to the self/other distinction, which an AGI will most likely *not* have, due to its lack of human evolutionary wiring...
Damien Sullivan wrote:
You _MIGHT_ be able to produce a proof of concept that way...
However, a practical working AI, such as the one which could help me design my next body, would need to be quite a bit more. =\
Why? Why should such a thing require replacing the original
On Thu, Jan 09, 2003 at 10:57:41PM -0800, Alan Grimes wrote:
It would be a service-driven motivation system, but I would expect a much more sophisticated implementation of agency beyond a Windows shell or something.
Quite possibly. But my point is that the evolutionary root _and_ guiding
Alan Grimes wrote:
My position is that you don't really need friendly AI; you simply need to neglect to include the take-over-the-world motivator...
I think that is a VERY bad approach!!!
I don't want a superhuman AGI to destroy us by accident or through indifference... which are possibilities just as real as aggression.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Damien Sullivan
Sent: Thursday, January 09, 2003 10:27 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Friendliness toward humans
On Thu, Jan 09, 2003 at 10:23:07PM -0800, Alan Grimes wrote:
You _MIGHT_ be able to produce a proof of concept that way...
Ben Goertzel wrote:
I think that is a VERY bad approach!!!
I don't want a superhuman AGI to destroy us by accident or through indifference... which are possibilities just as real as aggression.
Positive action requires positive motivation.
--
Linux programmers: the only people in the
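Alan's position reduces to something like the following toy loop. This is a deliberately dumb Python sketch with hypothetical command handlers, a purely service-driven motivation system: no standing goals, no background pursuit, action only on explicit request. Ben's objection is visible by omission: nothing in the loop represents the side effects of fulfilling a request, and nothing here would positively motivate the system to protect anyone either.

# Hypothetical command handlers; a real system would do real work here.
HANDLERS = {
    "summarize": lambda arg: "summary of %r" % arg,
    "translate": lambda arg: "translation of %r" % arg,
}

def service_loop():
    # Shell-style driver: no request, no action. There is no background
    # goal pursuit, hence no "take over the world" motivator to omit --
    # but also no motivation to do anything good unprompted.
    while True:
        line = input("agi> ").strip()
        if line in ("quit", "exit"):
            break
        command, _, arg = line.partition(" ")
        handler = HANDLERS.get(command)
        print(handler(arg) if handler else "unknown command: " + command)

if __name__ == "__main__":
    service_loop()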
On Thu, Jan 09, 2003 at 11:18:36PM -0800, Alan Grimes wrote:
Damien Sullivan wrote:
Quite possibly. But my point is that the evolutionary root _and_ guiding principle would be that of a (Unix, ahem) shell.
Are you nuts?
Unix is the most user-hostile system still in common use! PUKE!!!