I understand it would be complicated and tedious to describe your
information-theoretical argument by yourself; however, I'm guessing that
others are curious besides Vladimir. I for one would like to understand what
your argument entails, and I would be the first one to admit I don't know as
much
On Wed, Mar 12, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:
From: Vladimir Nesov [EMAIL PROTECTED]
I give up.
with or without conceding the point (or declaring that I've convinced you
enough that you are now unsure but not enough that you're willing to concede
it just yet --
On Tue, Mar 11, 2008 at 4:47 AM, Mark Waser [EMAIL PROTECTED] wrote:
I can't prove a negative but if you were more familiar with Information
Theory, you might get a better handle on why your approach is ludicrously
expensive.
Please reformulate what you mean by "my approach" independently
On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser [EMAIL PROTECTED] wrote:
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever
Mark Waser wrote:
Part 4.
... Eventually, you're going to get down to "Don't mess with
anyone's goals", be forced to add the clause "unless absolutely
necessary", and then have to fight over what "absolutely necessary"
means. But what we've got here is what I would call the goal of a
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
I can destroy all Earth-originated life if I start early enough. If
there is something else out there, it can similarly be hostile and try to
destroy me if it can, without listening to any friendliness prayer.
All
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser [EMAIL PROTECTED] wrote:
Information Theory is generally accepted as
correct and clearly indicates that you are wrong.
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that a virtual environment can be *exactly* as capable as a
physical environment?
No, I think that you're asserting that the virtual environment is close
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser [EMAIL PROTECTED] wrote:
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that a virtual environment can be *exactly* as capable as a
physical
errata:
On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'm sure that
for computational efficiency it should be a very strict limitation.
it *shouldn't* be a very strict limitation
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser [EMAIL PROTECTED] wrote:
How do we get from here to there? Without a provable path, it's all just
magical hand-waving to me. (I like it but it's ultimately an unsatisfying
illusion)
It's an independent statement.
No, it
My second point that you omitted from this response doesn't need there
to be a universal substrate, which is what I mean. Ditto for
significant resources.
I didn't omit your second point, I covered it as part of the difference
between our views.
You believe that certain tasks/options are
Part 5. The nature of evil, or The good, the bad, and the evil
Since we've got the (slightly revised :-) goal of a Friendly individual and the
Friendly society -- "Don't act contrary to anyone's goals unless absolutely
necessary" -- we now can evaluate actions as good or bad in relation to that
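To make that evaluation rule concrete, here is a minimal illustrative sketch (my own, not from Mark's posts) of how an agent might classify an action against the stated goal. The Entity/goal representation and the absolutely_necessary flag are hypothetical placeholders for whatever the full write-up would actually define:

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        goals: set = field(default_factory=set)   # goals this entity is pursuing

    def thwarted(action_counters: set, entity: Entity) -> set:
        """Goals of `entity` that the action acts contrary to."""
        return entity.goals & action_counters

    def classify(action_counters: set, others: list, absolutely_necessary: bool) -> str:
        """Evaluate an action against: 'Don't act contrary to anyone's goals
        unless absolutely necessary.'  Purely a toy illustration."""
        harmed = [e.name for e in others if thwarted(action_counters, e)]
        if not harmed:
            return "good: no one's goals are acted against"
        if absolutely_necessary:
            return "bad but permitted: necessary, harms goals of " + ", ".join(harmed)
        return "evil: unnecessarily harms goals of " + ", ".join(harmed)

    alice = Entity("Alice", {"stay alive", "keep my resources"})
    bob = Entity("Bob", {"stay alive"})
    print(classify({"keep my resources"}, [alice, bob], absolutely_necessary=False))

Running the toy example prints an "evil" classification, since the action works against Alice's goal without being necessary. Everything interesting, of course, hides inside what counts as "absolutely necessary".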
On Sun, Mar 9, 2008 at 2:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
What is different in my theory is that it handles the case where the
dominant structure turns unfriendly. The core of my thesis is that the
particular Friendliness that I/we are trying to reach is an
attractor --
Sure! Friendliness is a state which promotes an entity's own goals;
therefore, any entity will generally voluntarily attempt to return to that
(Friendly) state since it is in its own self-interest to do so.
In my example it's also explicitly in the dominant structure's
self-interest to
On Sun, Mar 9, 2008 at 8:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
Sure! Friendliness is a state which promotes an entity's own goals;
therefore, any entity will generally voluntarily attempt to return to that
(Friendly) state since it is in its own self-interest to do so.
In my
From: Mark Waser [EMAIL PROTECTED]:
Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well,
duh" land, b) I'm so totally off the mark that I'm not even worth
replying to, or c) I'm hopefully being given enough rope to hang myself.
:-)
I'll read the paper if you post a URL to the finished
Agree... I have not followed this discussion in detail, but if you have
a concrete proposal written up somewhere in a reasonably compact
format, I'll read it and comment
-- Ben G
On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman [EMAIL PROTECTED] wrote:
From: Mark Waser [EMAIL PROTECTED]:
Hmm.
My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by the next big thing that takes over.
My friendliness-thing is that I believe that a sufficiently intelligent
self-interested being who has discovered the f-thing or had the f-thing
explained to it
OK. Sorry for the gap/delay between parts. I've been doing a substantial
rewrite of this section . . . .
Part 4.
Despite all of the debate about how to *cause* Friendly behavior, there's
actually very little debate about what Friendly behavior looks like. Human
beings actually have had the
On Mon, Mar 10, 2008 at 12:35 AM, Mark Waser [EMAIL PROTECTED] wrote:
Because you're *NEVER* going to be sure that you're in a position where you
can prevent that from ever happening.
That's a current point of disagreement then. Let's iterate from here.
I'll break it up this way:
1) If I
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever successfully arise
without you crushing them too)?
Plus, it seems like an
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever successfully arise
Pack your bags, folks, we're headed toward damnation and hellfire! haha!
Nathan
On Sun, Mar 9, 2008 at 7:10 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
1) If I physically destroy every other intelligent thing, what is
going to
This raises another point for me though. In another post (2008-03-06
14:36) you said:
It would *NOT* be Friendly if I have a goal that I not be turned into
computronium even if your clause (which I hereby state that I do)
Yet, if I understand our recent exchange correctly, it is possible for
On Sat, Mar 8, 2008 at 6:30 PM, Mark Waser [EMAIL PROTECTED] wrote:
This sounds like magic thinking, sweeping the problem under the rug of
the word 'attractor'. Anyway, even if this trick somehow works, it doesn't
actually address the problem of friendly AI. The problem with
unfriendly AI is
What is different in my theory is that it handles the case where the
dominant structure turns unfriendly. The core of my thesis is that the
particular Friendliness that I/we are trying to reach is an
attractor --
which means that if the dominant structure starts to turn unfriendly, it is
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote:
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:
The scenario takes on an entirely different tone if you replace "weed out
some wild carrots" with "kill all the old people who are economically
Whether humans conspire to weed out wild carrots impacts whether humans are
classified as Friendly (or, it would if the wild carrots were sentient).
Why does it matter what word we/they assign to this situation?
My vision of Friendliness places many more constraints on the behavior
How do you propose to make humans Friendly? I assume this would also have the
effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is
defining/describing Friendliness and why it might be a good idea for us.
Hopefully then, the attractor takes over.
--- Mark Waser [EMAIL PROTECTED] wrote:
How do you propose to make humans Friendly? I assume this would also have the
effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is
defining/describing Friendliness and why it might be a good idea for us.
Matt Mahoney wrote:
--- Mark Waser [EMAIL PROTECTED] wrote:
How do you propose to make humans Friendly? I assume this would also have the
effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is
defining/describing Friendliness and why it might be a good
--- Stan Nilsen [EMAIL PROTECTED] wrote:
Reprogramming humans doesn't appear to be an option.
We do it all the time. It is called school.
Less commonly, the mentally ill are forced to take drugs or treatment for
their own good. Most notably, this includes drug addicts. Also, it is common
Matt Mahoney wrote:
--- Stan Nilsen [EMAIL PROTECTED] wrote:
Reprogramming humans doesn't appear to be an option.
We do it all the time. It is called school.
I might be tempted to call this manipulation rather than programming.
The results of schooling are questionable while programming
Comments seem to be dying down and disagreement appears to be minimal, so let
me continue . . . .
Part 3.
Fundamentally, what I'm trying to do here is to describe an attractor that will
appeal to any goal-seeking entity (self-interest) and be beneficial to humanity
at the same time
--- Mark Waser [EMAIL PROTECTED] wrote:
TAKE-AWAY: Having the statement "The goal of Friendliness is to promote the
goals of all Friendly entities" allows us to make considerable progress in
describing and defining Friendliness.
How does an agent know if another agent is Friendly or not,
On 03/07/2008 08:09 AM, Mark Waser wrote:
There is one unique attractor in state space.
No. I am not claiming that there is one unique attractor. I am
merely saying that there is one describable, reachable, stable
attractor that has the characteristics that we want. There are
*clearly*
How does an agent know if another agent is Friendly or not, especially if the
other agent is more intelligent?
An excellent question but I'm afraid that I don't believe that there is an
answer (but, fortunately, I don't believe that this has any effect on my
thesis).
On 03/07/2008 03:20 PM, Mark Waser wrote:
For there to be another attractor F', it would of necessity have to be
an attractor that is not desirable to us, since you said there is only
one stable attractor for us that has the desired characteristics.
Uh, no. I am not claiming that there is
On Fri, Mar 7, 2008 at 5:24 PM, Mark Waser [EMAIL PROTECTED] wrote:
The core of my thesis is that the
particular Friendliness that I/we are trying to reach is an attractor --
which means that if the dominant structure starts to turn unfriendly, it is
actually a self-correcting situation.
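The "attractor" language is a dynamical-systems metaphor, and the self-correction claim has a simple formal reading: if the Friendly state maximizes an entity's own payoff, then a purely self-interested update rule pulls a perturbed (partly unfriendly) state back toward it. A toy sketch of that reading (my illustration; the quadratic payoff is an invented stand-in, not anything from the thesis):

    def payoff(f: float) -> float:
        """Hypothetical payoff to the entity itself as a function of how
        Friendly it is (f in [0, 1]); assumed here to peak at f = 1."""
        return -(f - 1.0) ** 2

    def self_interested_step(f: float, lr: float = 0.2, eps: float = 1e-4) -> float:
        """Nudge behaviour in whichever direction improves the entity's own payoff."""
        grad = (payoff(f + eps) - payoff(f - eps)) / (2 * eps)
        return min(1.0, max(0.0, f + lr * grad))

    f = 0.3                      # a structure that has started to 'turn unfriendly'
    for _ in range(20):
        f = self_interested_step(f)
    print(f"friendliness after 20 self-interested steps: {f:.3f}")   # approaches 1.0

Whether real payoffs actually peak at the Friendly state is exactly the point under dispute in this thread; the sketch only shows what "attractor" would mean if they do.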
Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well, duh" land,
b) I'm so totally off the mark that I'm not even worth replying to, or c) I'm
hopefully being given enough rope to hang myself. :-)
Since I haven't seen any feedback, I think I'm going to divert to a section
that I'm not
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 6, 2008 9:01:53 AM
Subject: Re: [agi] What should we do to be prepared?
Hmm. Bummer. No new feedback. I
wonder if a) I'm still in "Well, duh" land, b) I'm so totally off the mark
that I'm not even worth replying to, or c) I
--- Mark Waser [EMAIL PROTECTED] wrote:
And thus, we get back to a specific answer to jk's second question. *US*
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence. We don't need to define *us* because I DECLARE that it
should be assumed to include current
To: agi@v2.listbox.com
Sent: Thursday, March 06, 2008 10:01 AM
Subject: Re: [agi] What should we do to be prepared?
Hmm. Bummer. No new feedback. I wonder if a) I'm still in "Well, duh" land,
b) I'm so totally off the mark that I'm not even worth replying to, or c) I'm
hopefully being given enough rope
Or should we not worry about the problem because the more intelligent agent is
more likely to win the fight? My concern is that evolution could favor
unfriendly behavior, just as it has with humans.
I don't believe that evolution favors unfriendly behavior. I believe that
evolution is
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote:
TAKE-AWAY: Friendliness is an attractor because it IS equivalent
to enlightened self-interest -- but it only works where all entities
involved are Friendly.
Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the
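The "enlightened self-interest, but only among Friendly entities" point is essentially the familiar iterated-game observation. A minimal sketch (my own illustration with the usual textbook payoff numbers, not taken from Beyond AI or from the posts): reciprocating cooperators prosper against each other, while a cooperator paired with a pure defector gets exploited.

    # One-round payoffs for the row player: C = cooperate/Friendly, D = defect/unfriendly.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(opponent_history):     # Friendly-ish: cooperate first, then reciprocate
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):   # unfriendly
        return "D"

    def play(strategy_a, strategy_b, rounds=50):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strategy_a(hist_b), strategy_b(hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))     # (150, 150): mutual Friendliness pays
    print(play(tit_for_tat, always_defect))   # (49, 54): the Friendly side is exploited

This is only an analogy for the TAKE-AWAY above, not a model of AGI interaction; it just shows why "it only works where all entities involved are Friendly" is the crux.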
My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal of
killing the FAI. Should the FAI show empathy by helping the UFAI achieve its
goal?
Hopefully this concern was answered by my last post but . . . .
Being Friendly *certainly* doesn't mean fatally overriding your
On Thu, Mar 6, 2008 at 8:27 PM, Mark Waser [EMAIL PROTECTED] wrote:
Now, I've just attempted to sneak a critical part of the answer right past
everyone with my plea . . . . so let's go back and review it in slow-motion.
:-)
Part of our environment is that we have peers. And peers become
On 03/06/2008 08:32 AM, Matt Mahoney wrote:
--- Mark Waser [EMAIL PROTECTED] wrote:
And thus, we get back to a specific answer to jk's second question. *US*
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence. We don't need to define *us* because I DECLARE
Mark, how do you intend to handle the friendliness obligations of the AI
towards vastly different levels of intelligence (above the threshold, of
course)?
Ah. An excellent opportunity for continuation of my previous post rebutting
my personal conversion to computronium . . . .
First off,
On Thu, Mar 6, 2008 at 11:23 PM, Mark Waser [EMAIL PROTECTED] wrote:
Friendliness must include reasonable protection for sub-peers or else there
is no enlightened self-interest or attractor-hood to it -- since any
rational entity will realize that it could *easily* end up as a sub-peer.
On 03/05/2008 05:04 PM, Mark Waser wrote:
And thus, we get back to a specific answer to jk's second question.
*US* should be assumed to apply to any sufficiently intelligent
goal-driven intelligence. We don't need to define *us* because I
DECLARE that it should be assumed to include current
--- Mark Waser [EMAIL PROTECTED] wrote:
My concern is what happens if a UFAI attacks a FAI. The UFAI has the goal of
killing the FAI. Should the FAI show empathy by helping the UFAI achieve its
goal?
Hopefully this concern was answered by my last post but . . . .
Being
--- Mark Waser [EMAIL PROTECTED] wrote:
A Friendly entity does *NOT* snuff
out (objecting/non-self-sacrificing) sub-peers simply because it has decided
that it has a better use for the resources that they represent/are. That
way lies death for humanity when/if we become sub-peers (aka
I wonder if this is a substantive difference with Eliezer's position
though, since one might argue that 'humanity' means 'the [sufficiently
intelligent and sufficiently ...] thinking being' rather than 'homo
sapiens sapiens', and the former would of course include SAIs and
intelligent alien
Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you immortal
in a simulated world of your choosing, for example, one without suffering,
or where you had a magic genie or super powers or enhanced
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
This is different from what I replied to (comparative advantage, which
J Storrs Hall also assumed), although you did state this point
earlier.
I think this one is a package deal fallacy. I can't see how whether
humans conspire
I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)
Whether humans conspire to weed out wild carrots impacts whether humans are
classified as Friendly (or, it would if the
Would an acceptable response be to reprogram the goals of the UFAI to make it
friendly?
Yes -- but with the minimal possible changes to do so (and preferably done
by enforcing Friendliness and allowing the AI to resolve what to change to
resolve integrity with Friendliness -- i.e. don't mess
And more generally, how is this all to be quantified? Does your paper go
into the math?
All I'm trying to establish and get agreement on at this point are the
absolutes. There is no math at this point because it would be premature and
distracting.
But a great question . . . . :-
On Fri, Mar 7, 2008 at 1:48 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
This is different from what I replied to (comparative advantage, which
J Storrs Hall also assumed), although you did state this point
earlier.
On Fri, Mar 7, 2008 at 1:46 AM, Mark Waser [EMAIL PROTECTED] wrote:
I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)
Whether humans conspire to weed out wild
At the risk of oversimplifying or misinterpreting your position, here
are some thoughts that I think follow from what I understand of your
position so far. But I may be wildly mistaken. Please correct my mistakes.
There is one unique attractor in state space. Any individual of a
species that
On 03/06/2008 02:18 PM, Mark Waser wrote:
I wonder if this is a substantive difference with Eliezer's position
though, since one might argue that 'humanity' means 'the
[sufficiently intelligent and sufficiently ...] thinking being'
rather than 'homo sapiens sapiens', and the former would of
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
My argument doesn't need 'something of a completely different kind'.
'Society and human' is fine as a substitute for 'human and carrot' in my
example, only if society could extract profit from replacing humans
with 'cultivated humans'. But
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
My argument doesn't need 'something of a completely different kind'.
'Society and human' is fine as a substitute for 'human and carrot' in my
example, only
Hi
Again I stress that I am not saying we should
try to stop development (I do not think we can).
But what is wrong with thinking about the
possible outcomes and trying to be prepared?
To try to affect the development and steer it
in better directions, to take smaller steps to
wherever we are going.
On Wed, Mar 5, 2008 at 2:46 AM, rg [EMAIL PROTECTED] wrote:
Anthony: Do sociopaths not understand the
rules and the justice system?
Two responses come to mind. Both will be unsatisfactory probably, but oh
well...
1. There's a difference between understanding rules and the justice system
rg wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will regardless of what we decide.
I am asking for what we should do to
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Wrong.
*You* cannot define friendliness for reasons of your own. Others may
well be able to do so.
It would be fine to
OK, see my responses below...
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
We could say "behavior that is acceptable in our society" then...
In your mail you
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
Which is fine as long as there is a sharp line dividing human from non-human.
When that line goes away,
Hi
You said friendliness was AGIs locked in empathy towards mankind.
How can you make them feel this?
How did we humans get empathy?
Is it not very likely that we have empathy because
it turned out to be an advantage during our evolution,
ensuring the survival of groups of humans?
So if an AGI
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
Which is fine as long as
--- rg [EMAIL PROTECTED] wrote:
OK, see my responses below...
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
We could say behavior that is
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Why Matt, thank you for such a wonderful opening . . . . :-)
Friendliness *CAN* be defined. Furthermore, it is my contention that
On 03/05/2008 12:36 PM, Mark Waser wrote:
snip...
The obvious initial starting point is to explicitly recognize that the
point of Friendliness is that we wish to prevent the extinction of the
*human race* and/or to prevent many other horrible nasty things that
would make *us* unhappy.
rg wrote:
Hi
I made some responses below.
Richard Loosemore wrote:
rg wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will
1. How will the AI determine what is in the set of horrible nasty
thing[s] that would make *us* unhappy? I guess this is related to how you
will define the attractor precisely.
2. Preventing the extinction of the human race is pretty clear today, but
*human race* will become increasingly
On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will regardless of
Subject: [agi] What should we do to be prepared?
On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying
*From:* Anthony George [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Tuesday, March 04, 2008 2:47 PM
*Subject:* Re: [agi] What should we do to be prepared?
On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote:
Hi
Is anyone discussing what to do in the future when we
have
On Tue, Mar 4, 2008 at 9:53 PM, rg [EMAIL PROTECTED] wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will regardless
Vlad: How to survive a zombie attack?
I really like that thought :). You're right: we should seriously consider
that possibility. But personally, I don't think we need to be afraid ... I'm
sure they will be friendly zombies...
--- rg [EMAIL PROTECTED] wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will regardless of what we decide.
I am