Re: [agi] What should we do to be prepared?

2008-03-12 Thread Maksym Taran
I understand it would be complicated and tedious to describe your
information-theoretical argument by yourself; however, I'm guessing that
others besides Vladimir are curious. I for one would like to understand what
your argument entails, and I would be the first one to admit I don't know as
much information theory as I would like to.

In this case, I think it would help everyone involved if you provided an
avenue for others like me to investigate your argument further. Even a
handful of links that focus and clarify would be of great assistance. Since
you say this is an established article, I would hope there would be freely
available resources to explain what it is. So far, I haven't been able to
gather enough of what your argument consists of in order to conduct a
successful search myself, which is why I'd appreciate your help.

On 11/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

   Please reformulate what you mean by my approach independently then
  and sketch how you are going to use information theory... I feel that
  my point failed to be communicated.
 You've already accepted my reformulation of your approach where I said
 I think that you're asserting that the virtual environment is close enough
 to as capable as the physical environment without spending significant
 resources that the difference doesn't matter.

 My direct argument to this was that I believe that you ARE going to have
 to spend significant resources (a.k.a. resources that matter) in order to
 make the virtual environment capable enough for what you want.

 I am *not* arguing the upper bounds of the capability of the virtual
 environment.  I *AM* arguing the resource of getting to a point where the
 capability of the virtual environment is sufficient for your vision.

 The reason why I keep referring to Information Theory is because it is all
 about the cost of information operations.  Without intending to be
 insulting, it is clear to me that you are not even conversant enough with
 Information Theory to be aware of this fact (i.e. what one of its major
 points is) which makes it tremendously relevant to our debate.  Personally,
 I can't competently get you up to speed in Information Theory in a
 reasonable length of time.  You need to do that on your own if we're going
 to have any chance of a reasonable debate since, in effect, (and again,
 hopefully without being insulting) I'm making an *established* argument
 and you're staring blankly at it and just saying IS NOT!




Re: [agi] What should we do to be prepared?

2008-03-12 Thread Mark Waser
 I understand it would be complicated and tedious to describe your 
 information-theoretical argument by yourself; however, I'm guessing that 
 others besides Vladimir are curious. I for one would like to understand what 
 your argument entails, and I would be the first one to 
 admit I don't know as much information theory as I would like to. 

 In this case, I think it would help everyone involved if you provided an 
 avenue for others like me to investigate your argument further. Even a 
 handful of links that focus and clarify would be of great assistance. Since 
 you say this is an established article, I would hope there 
 would be freely available resources to explain what it is. So far, I haven't 
 been able to gather enough of what your argument consists of in order to 
 conduct a successful search myself, which is why I'd appreciate your help.

Wow!  Now *that* is an elegantly formulated request.

For things like this, I, like many others, normally start with wikipedia 
(http://en.wikipedia.org/wiki/Information_Theory) or scholarpedia 
(http://www.scholarpedia.org/article/Special:Search?from=sidebar&search=Information+Theory&go=Title)
 when possible (Cool!  Among the articles on Information Theory in scholarpedia 
is one that is curated by Marcus Hutter).

If you go to either place, you'll see that a lot of space is devoted to 
entropy.  It takes resources to run counter to entropy.  I'm arguing that 
Information Theory shows that the resources required for Vladimir's vision are 
*vastly* in excess of what he believes them to be.
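
As a toy illustration of the entropy quantity that both of those articles lead 
with (the distributions below are made up; this sketches the definition only, 
not either side's argument):

import math

def shannon_entropy(probs):
    # Entropy in bits of a discrete distribution given as a list of probabilities.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit  -- a fair coin
print(shannon_entropy([0.9, 0.1]))   # ~0.47 bits -- a biased coin is less uncertain
print(shannon_entropy([0.25] * 4))   # 2.0 bits -- four equally likely outcomes

Lower entropy means less uncertainty (more order); the connection being drawn 
above is that driving a system toward a more ordered state costs resources.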




Re: [agi] What should we do to be prepared?

2008-03-12 Thread Vladimir Nesov
On Wed, Mar 12, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 I give up.
 

  with or without conceding the point (or declaring that I've convinced you
  enough that you are now unsure but not enough that you're willing to concede
  it just yet -- as opposed to just being tired of arguing  :-)?


Being tired of trying to communicate my point to you and believing
that you didn't understand what I meant in the first place (hence
argument having no content).

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-11 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 4:47 AM, Mark Waser [EMAIL PROTECTED] wrote:

  I can't prove a negative but if you were more familiar with Information
  Theory, you might get a better handle on why your approach is ludicrously
  expensive.


Please reformulate what you mean by my approach independently then
and sketch how you are going to use information theory... I feel that
my point failed to be communicated.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser [EMAIL PROTECTED] wrote:
  1) If I physically destroy every other intelligent thing, what is
   going to threaten me?

  Given the size of the universe, how can you possibly destroy every other
  intelligent thing (and be sure that no others ever successfully arise
  without you crushing them too)?

I can destroy all Earth-originated life if I start early enough. If
there is something else out there, it can similarly be hostile and try
to destroy me if it can, without listening to any friendliness prayer.


  Plus, it seems like an awfully lonely universe.  I don't want to live there
  even if I could somehow do it.

I can upload what I can and/or initiate new intelligent entities
inside controlled virtual environments.


  Also, if you crush them all, you can't have them later for allies, friends,
  and co-workers.  It just doesn't seem like a bright move unless you truly
  can't avoid it.


See my above arguments about why comparative advantage doesn't work in
this case. I can produce ideal slaves that are no less able than
potential allies, but don't have agendas of their own.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Stan Nilsen

Mark Waser wrote:


Part 4.

... Eventually, you're going to get down to "Don't mess with
anyone's goals", be forced to add the clause "unless absolutely 
necessary", and then have to fight over what "when absolutely necessary" 
means.  But what we've got here is what I would call the goal of a 
Friendly society -- "Don't mess with anyone's goals unless absolutely 
necessary" and I would call this a huge amount of progress.




Along with a fight over "when absolutely necessary" there could easily 
be a fight over "mess with".


Note how often we mess with others goals.
Example 1:  Driving down the road, you encounter a person who appears to 
be lost.  If you stop to help them, you are messing with their goal of 
the moment, which is probably to figure out where they are.
Is it absolutely necessary to help them?  Probably not, since they likely 
have a cell phone or two...


Example 2: You ask a child what they are frustrated about. If they 
explain the problem they are trying to solve - their goal - and then you 
offer an opinion, you might easily be "messing".  One could speculate 
that the messing was welcome, but it is risky if the law of the land 
is "don't mess unless necessary".


Example 3: You decide to carry a sign in public showing either that you 
are "pro-choice" or "pro-life".  Evidently you are there to mess with 
the goals and intents that others might have.  Taboo?


An expressed opinion about someone's goal could be considered "messing" 
with it.  Lawyers are about the only sure thing about the future!




Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
  I can destroy all Earth-originated life if I start early enough. If
   there is something else out there, it can similarly be hostile and try
   destroy me if it can, without listening to any friendliness prayer.

  All definitely true.  The only advantage to my approach is that you *have* a
  friendliness prayer that *might* convince them to leave you alone.  Do you
  have any better alternative to stop a vastly superior power?  I'll bet not.


What if they are secular deities and send believers to Hell?


   I can upload what I can and/or initiate new intelligent entities
   inside controlled virtual environments.

  You can but doing so requires effort and you're tremendously unlikely to get
  the richness and variety that you would get if you just allowed evolution to
  do the work throughout the universe.  Why are you voluntarily impoverishing
  yourself?  That's *not* in your self-interest.

A virtual environment is almost as powerful as a physical one. Simply
converting enough matter to an appropriate variety of computronium
shouldn't require too much effort.


   See my above arguments about why comparative advantage doesn't work in
   this case. I can produce ideal slaves that are no less able than
   potential allies, but don't have agenda of their own.

  Producing slaves takes resources/effort.

I feel that you underestimate the power of generally intelligent tools.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Information Theory is generally accepted as
  correct and clearly indicates that you are wrong.


Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that virtual environment can be *exactly* as capable as
physical environment?

All interesting stuff is going to be computational anyway. My
requirement only limits potentially invasive control over physical
matter (in other words, influencing other computational processes to
which access is denied). In most cases, computation should be
implementable on universal substrate without too much overhead, and if
it needs something completely different, captive system can order
custom physical devices verified to be unable to do anything but
computation. We are doing it already, by trashing old PCs and running
Windows 98 in virtual machines, in those rare circumstances where
killing them altogether still isn't optimal.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser

Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that virtual environment can be *exactly* as capable as
physical environment?


No, I think that you're asserting that the virtual environment is close 
enough to as capable as the physical environment without spending 
significant resources that the difference doesn't matter.  And I'm having 
problems with the "without spending significant resources" part, not the 
"that the difference doesn't matter" part.



All interesting stuff is going to be computational anyway.


So, since the physical world can perform interesting computation 
automatically without any resources, why are you throwing the computational 
aspect of the physical world away?



In most cases, computation should be
implementable on universal substrate without too much overhead


How do we get from here to there?  Without a provable path, it's all just 
magical hand-waving to me.  (I like it but it's ultimately an unsatisfying 
illusion)





Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser [EMAIL PROTECTED] wrote:
  Note that you are trying to use a technical term in a non-technical
   way to fight a non-technical argument. Do you really think that I'm
   asserting that virtual environment can be *exactly* as capable as
   physical environment?

  No, I think that you're asserting that the virtual environment is close
  enough to as capable as the physical environment without spending
  significant resources that the difference doesn't matter.  And I'm having
  problems with the "without spending significant resources" part, not the
  "that the difference doesn't matter" part.

I use "significant" in about the same sense as "something that
matters", so it's merely a terminological mismatch.


   All interesting stuff is going to be computational anyway.

  So, since the physical world can perform interesting computation
  automatically without any resources, why are you throwing the computational
  aspect of the physical world away?


I only add one restriction on allowed physical structures to be
constructed for captive systems: they must be verifiably unable to
affect other computations that they are not allowed to. I'm sure that
for computational efficiency it should be a very strict limitation. So
any custom computers are allowed, as long as they can't morph into
berserker probes and the like.

   In most cases, computation should be
   implementable on universal substrate without too much overhead

  How do we get from here to there?  Without a provable path, it's all just
  magical hand-waving to me.  (I like it but it's ultimately an unsatisfying
  illusion)

It's an independent statement.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
errata:

On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  I'm sure that
  for computational efficiency it should be a very strict limitation.

it *shouldn't* be a very strict limitation

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Vladimir Nesov
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser [EMAIL PROTECTED] wrote:

How do we get from here to there?  Without a provable path, it's all
   just
magical hand-waving to me.  (I like it but it's ultimately an
   unsatisfying
illusion)
  
   It's an independent statement.

  No, it isn't an independent statement.  If you can't get there (because it
  is totally unfeasible to do so) then it totally invalidates your argument.


My second point that you omitted from this response doesn't need there
to be a universal substrate, which is what I mean. Ditto for
significant resources.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser

My second point that you omitted from this response doesn't need there
to be universal substrate, which is what I mean. Ditto for
significant resources.


I didn't omit your second point, I covered it as part of the difference 
between our views.


You believe that certain tasks/options are relatively easy; I believe them to 
be infeasible without more resources than you can possibly imagine.


I can't prove a negative but if you were more familiar with Information 
Theory, you might get a better handle on why your approach is ludicrously 
expensive. 





Re: [agi] What should we do to be prepared?

2008-03-10 Thread Mark Waser
Part 5.  The nature of evil or The good, the bad, and the evil

Since we've got the (slightly revised :-) goal of a Friendly individual and the 
Friendly society -- "Don't act contrary to anyone's goals unless absolutely 
necessary" -- we now can evaluate actions as good or bad in relation to that 
goal.  *Anything* that doesn't act contrary to someone's goals is GOOD.  
Anything that acts contrary to anyone's goals is BAD to the extent that it is 
not absolutely necessary.  EVIL is the special case where an entity *knowingly 
and intentionally* acts contrary to someone's goals when it isn't absolutely 
necessary for one of the individual's own primary goals.  This is the 
*intentional* direct opposite of the goal of Friendliness and it is in the 
Friendly society's best interest to make this as unappealing as possible.  
*Any* sufficiently effective Friendly society will *ENSURE* that the expected 
utility of EVIL is negative by raising the consequences of (sanctions for) EVIL 
to a level where it is clearly apparent that EVIL is not in an entity's 
self-interest.  The reason why humans are frequently told "Evil doesn't mean 
stupid" is because many of us sense at a very deep level that, in a 
sufficiently efficient ethical/Friendly society, EVIL *is* stupid (in that it 
is not in an entity's self-interest).  It's just a shame that our society is 
not sufficiently efficiently ethical/Friendly -- YET!
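
As a back-of-the-envelope sketch of the "expected utility of EVIL is negative" 
condition (all of the numbers below are made up for illustration):

def expected_utility_of_evil(gain, p_sanctioned, sanction):
    # Expected payoff of an unFriendly act: the gain, minus the sanction
    # weighted by the chance the Friendly society detects and applies it.
    return gain - p_sanctioned * sanction

gain = 100.0         # hypothetical benefit from the unFriendly act
p_sanctioned = 0.25  # hypothetical probability the act is detected and sanctioned
# The expected utility goes negative once the sanction exceeds gain / p_sanctioned:
min_sanction = gain / p_sanctioned                                     # 400.0
print(expected_utility_of_evil(gain, p_sanctioned, min_sanction + 1))  # -0.25

In other words, the weaker the society's ability to detect EVIL, the larger the 
sanctions have to be for EVIL to remain stupid.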

Vladimir's crush-them-all is *very* bad.  It is promoting that society's goal 
of safety (which is a valid, worthwhile goal) but it is refusing to recognize 
that it is *NOT* always necessary and that there are other, better ways to 
achieve that goal (not to mention the fact that the aggressor society would 
probably even benefit more by not destroying the lesser societies).  My 
impression is that Vladimir is knowingly and intentionally acting contrary to 
someone else's goals when it isn't absolutely necessary because it is simply 
more convenient for him (because it certainly isn't safer since it invites 
sanctions like those following).  This is EVIL.  If I'm a large enough, 
effective enough Friendly society, Vladimir's best approach is going to be to 
immediately willingly convert to Friendliness and voluntarily undertake 
reparations that are rigorous enough that their negative utility is just 
greater than the total expected utility of the greater of either a) the 
expected utility of any destroyed civilizations or b) the utility that his 
society derived by destroying the civilization.  If Vladimir doesn't 
immediately convert and undertake reparations, the cost and effort of making 
him do so will be added to the reparations.  These reparations should be 
designed to assist every other Friendly *without* harming Vladimir's society 
EXCEPT for the cost and effort that are diverted from Vladimir's goals.

Now, there is one escape hatch that immediately springs to the mind of the 
UnFriendly that I am now explicitly closing . . . . Generic sub-goals are *not* 
absolutely necessary.  A Friendly entity does not act contrary to someone's 
goals simply because it is convenient, because it gives them more power, or 
because it feels good.  In fact, it should be noted that allowing generic 
sub-goals to override others' goals is probably the root of all evil (If you 
thought that it was money, you're partially correct.  "Money is Power" is a 
generic sub-goal).
Pleasure is a particularly pernicious sub-goal.  Pleasure is evolutionarily 
adaptive because you feel good when you do something that is pro-survival.  It is 
most frequently an indicator that you are doing something that is pro-survival 
-- but as such, seeking pleasure is merely a sub-goal of the primary goal of 
survival.  There's also a particular problem in that pleasure evolutionarily 
lags behind current circumstances, and many things that are pleasurable because 
they were pro-survival in the past are now contrary to survival or most other 
goals (particularly when practiced to excess) in the present.  Wire-heading is a 
particularly obvious example of this.  Every other goal of the addicted 
wire-head is thrown away in search of a sub-goal that leads to no goal -- not 
even survival.

I do want to be clear that there is nothing inherently wrong in seeking 
pleasure (whatever the Puritans would have us believe).  Pleasure can rest, relax, and 
de-stress you so that you can achieve other goals even if it has no other 
purpose.  The problem is when the search for pleasure overrides your own goals 
(addiction) or those of others (evil unless provably addiction).

TAKE-AWAYs:
  a) EVIL is knowingly and intentionally acting contrary to someone's goals 
when it isn't necessary (most frequently in the name of some generic sub-goal 
like pleasure, power, or convenience).
  b) The sufficiently efficient ethical/Friendly society WILL ensure that the 
expected utility of EVIL is negative (i.e. not in an entity's self-interest 
and, therefore, stupid).
Part 6 will move 

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 2:09 AM, Mark Waser [EMAIL PROTECTED] wrote:
   What is different in my theory is that it handles the case where the
dominant structure turns unfriendly.  The core of my thesis is that the
particular Friendliness that I/we are trying to reach is an
   attractor --
which means that if the dominant structure starts to turn unfriendly, it
   is
actually a self-correcting situation.
  
  
   Can you explain it without using the word attractor?

  Sure!  Friendliness is a state which promotes an entity's own goals;
  therefore, any entity will generally voluntarily attempt to return to that
   (Friendly) state since it is in its own self-interest to do so.

In my example it's also explicitly in dominant structure's
self-interest to crush all opposition. You used a word friendliness
in place of attractor.


   I can't see why
   a sufficiently intelligent system without brittle constraints should
   be unable to do that.

  Because it may not *want* to.  If an entity with Eliezer's view of
  Friendliness has its goals altered either by error or an exterior force, it
  is not going to *want* to return to the Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.


It doesn't explain the behavior, it just reformulates your statement.
You used a word want in place of attractor.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
  Sure!  Friendliness is a state which promotes an entity's own goals;
  therefore, any entity will generally voluntarily attempt to return to that
  (Friendly) state since it is in its own self-interest to do so.
 
 In my example it's also explicitly in dominant structure's
 self-interest to crush all opposition. You used a word friendliness
 in place of attractor.

While it is explicitly in dominant structure's self-interest to crush all 
opposition, I don't believe that doing so is OPTIMAL except in a *vanishingly* 
small minority of cases.  I believe that such thinking is an error of taking 
the most obvious and provably successful/satisfiable (but sub-optimal) action 
FOR A SINGLE GOAL over a less obvious but more optimal action for multiple 
goals.  Yes, crushing the opposition works -- but it is *NOT* optimal for the 
dominant structure's long-term self-interest (and the intelligent/wise dominant 
structure is clearly going to want to OPTIMIZE its self-interest).

Huh?  I only used the word Friendliness as the first part of the definition as 
in "Friendliness is . . . ."  I don't understand your objection.

  Because it may not *want* to.  If an entity with Eliezer's view of
  Friendliness has it's goals altered either by error or an exterior force, it
  is not going to *want* to return to the Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.

 It doesn't explain the behavior, it just reformulates your statement.
 You used a word want in place of attractor.

OK.  I'll continue to play . . . .  :-)

Replace *want* to with *in its self-interest to do so* and not going to 
*want* to with *going to see that it is not in its self-interest* to yield
  Because it is not *in its self-interest to do so*.  If an entity with 
Eliezer's view of
  Friendliness has its goals altered either by error or an exterior force, it 
is *going to 
  see that it is not in its self-interest* to return to the 
Eliezer-Friendliness goals since
  they are not in the entity's own self-interest.
Does that satisfy your objections?



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Sun, Mar 9, 2008 at 8:13 PM, Mark Waser [EMAIL PROTECTED] wrote:


   Sure!  Friendliness is a state which promotes an entity's own goals;
   therefore, any entity will generally voluntarily attempt to return to
 that
    (Friendly) state since it is in its own self-interest to do so.
 
  In my example it's also explicitly in dominant structure's
  self-interest to crush all opposition. You used a word friendliness
  in place of attractor.

 While it is explicitly in dominant structure's self-interest to crush all
 opposition, I don't believe that doing so is OPTIMAL except in a
 *vanishingly* small minority of cases.  I believe that such thinking is an
 error of taking the most obvious and provably successful/satisfiable (but
 sub-optimal) action FOR A SINGLE GOAL over a less obvious but more optimal
 action for multiple goals.  Yes, crushing the opposition works -- but it is
 *NOT* optimal for the dominant structure's long-term self-interest (and the
 intelligent/wise dominant structure is clearly going to want to OPTIMIZE
 its self-interest).

 Huh?  I only used the word Friendliness as the first part of the definition
 as in "Friendliness is . . . ."  I don't understand your objection.


Terms of the game are described here:
http://www.overcomingbias.com/2008/02/taboo-words.html

What I'm trying to find out is what your alternative is and why is it
more optimal then crush-them-all.

My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by the next big thing that takes over. When I'm
in a position to prevent that from ever happening, why is the
friendliness-thing still relevant?

The objective of the taboo game is to avoid saying things like
"friendliness-thing will be preferred because it's an attractor" or
"because it's more optimal", or "because it's in the system's
self-interest", and to actually explain why that is the case. For now,
I see crush-them-all as a pretty good solution.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Tim Freeman
From: Mark Waser [EMAIL PROTECTED]:
Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in Well
duh land, b) I'm so totally off the mark that I'm not even worth
replying to, or c) I hope being given enough rope to hang myself.
:-)

I'll read the paper if you post a URL to the finished version, and I
somehow get the URL.  I don't want to sort out the pieces from the
stream of AGI emails, and I don't want to try to provide feedback on
part of a paper.

-- 
Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have
a concrete proposal written up somewhere in a reasonably compact
format, I'll read it and comment

-- Ben G

On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman [EMAIL PROTECTED] wrote:
 From: Mark Waser [EMAIL PROTECTED]:

 Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in Well
  duh land, b) I'm so totally off the mark that I'm not even worth
  replying to, or c) I hope being given enough rope to hang myself.
  :-)

  I'll read the paper if you post a URL to the finished version, and I
  somehow get the URL.  I don't want to sort out the pieces from the
  stream of AGI emails, and I don't want to try to provide feedback on
  part of a paper.

  --
  Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

My impression was that your friendliness-thing was about the strategy
of avoiding being crushed by next big thing that takes over.


My friendliness-thing is that I believe that a sufficiently intelligent 
self-interested being who has discovered the f-thing or had the f-thing 
explained to it will not crush me because it will see/believe that doing so 
is *almost certainly* not in its own self-interest.


My strategy is to define the f-thing well enough that I can explain it to 
the next big thing so that it doesn't crush me.



When I'm
in a position to prevent that from ever happening, why
friendliness-thing is still relevant?


Because you're *NEVER* going to be sure that you're in a position where you 
can prevent that from ever happening.



For now, I see crush-them-all as a pretty good solution.


Read Part 4 of my stuff (just posted).  Crush-them-all is a seriously 
sub-optimal solution even if it does clearly satisfy one goal since it 
easily can CAUSE your butt to get kicked later.





Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser
OK.  Sorry for the gap/delay between parts.  I've been doing a substantial 
rewrite of this section . . . .

Part 4.

Despite all of the debate about how to *cause* Friendly behavior, there's 
actually very little debate about what Friendly behavior looks like.  Human 
beings actually have had the concept of Friendly behavior for quite some time.  
It's called ethics.

We've also been grappling with the problem of how to *cause* Friendly/ethical 
behavior for an equally long time under the guise of making humans act 
ethically . . . .

One of the really cool things that I enjoy about the Attractor Theory of 
Friendliness is that it has *a lot* of explanatory power for human behavior 
(see the next Interlude) as well as providing a path for moving humanity to 
Friendliness (and we all do want all *other* humans, except for ourselves, to 
be Friendly -- don't we?  :-)

My personal problem with, say, Jef Albright's treatises on ethics is that he 
explicitly dismisses self-interest.  I believe that his view of ethical 
behavior is generally more correct than that of the vast majority of people -- 
but his justification for ethical behavior is merely because such behavior is 
ethical or right.  I don't find that tremendously compelling.

Now -- my personal self-interest . . . . THAT I can get behind.  Which is the 
beauty of the Attractor Theory of Friendliness.  If Friendliness is in my own 
self-interest, then I'm darn well going to get Friendly and stay that way.  So, 
the constant question for humans is "Is ethical behavior on my part in the 
current circumstances in *my* best interest?"  So let's investigate that 
question some . . . . 

It is to the advantage of Society (i.e. the collection of everyone else) to 
*make* me be Friendly/ethical and Society is pretty darn effective at it -- to 
the extent that there are only two cases/circumstances where 
unethical/UnFriendly behavior is still in my best interest:
  a) where society doesn't catch me being unethical/unFriendly OR 
  b) where society's sanctions don't/can't successfully outweigh my 
self-interest in a particular action.
Note that Vladimir's "crush all opposition" falls under the second case since 
there are effectively no sanctions when society is destroyed.

But why is Society (or any society) the way that it is and how did/does it come 
up with the particular ethics that it did/does?  Let's define a society as a 
set of people with common goals that we will call that society's goals.  And 
let's start out with a society with a trial goal of "Promote John's goals".  
Now, John could certainly get behind that but everyone else would probably drop 
out as soon as they realized that they were required to grant John's every whim 
-- even at the expense of their deepest desires -- and the society would 
rapidly end up with exactly one person -- John.  The societal goal of "Don't 
get in the way of John's goals" is somewhat easier for other people and might 
not drive *everyone* away -- but I'm sure that any intelligent person would 
still defect towards a society that most accurately represented *their* goals.  
Eventually, you're going to get down to "Don't mess with anyone's goals", be 
forced to add the clause "unless absolutely necessary", and then have to fight 
over what "when absolutely necessary" means.  But what we've got here is what I 
would call the goal of a Friendly society -- "Don't mess with anyone's goals 
unless absolutely necessary" and I would call this a huge amount of progress.

If we (as individuals) could recruit everybody *ELSE* to this society (without 
joining ourselves), the world would clearly be a much, much better place for 
us.  It is obviously in our enlightened self-interest to do this.  *BUT* (and 
this is a huge one), the obvious behavior of this society would be to convert 
anybody that it can and kick the ass of anyone not in the society (but only to 
the extent to which they mess with the goals of the society since doing more 
would violate the society's own goal of not messing with anyone's goals).

So, the question is -- Is joining such a society in our self-interest?

To the members of any society, our not joining clearly is a result of our 
believing that our goals are more important than that society's goals.  In the 
case of the Friendly society, it is a clear signal of hostility since they are 
willing to not interfere with our goals as long as we don't interfere with 
theirs -- and we are not willing to sign up to that (i.e. we're clearly 
signaling our intention to mess with them).  The success of the optimistic 
tit-for-tat algorithm shows that the best strategy for deterring an 
undesired behavior is to respond in direct proportion to that behavior.  Thus, 
any entity who knows about Friendliness and does not become Friendly should 
*expect* that the next Friendly entity to come along that is bigger than it 
*WILL* kick its ass in direct proportion to its unFriendliness to maintain 
the effectiveness of 
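
A minimal sketch of the tit-for-tat dynamic referenced above, using the 
standard iterated-prisoner's-dilemma payoffs (the 10% forgiveness rate for the 
"optimistic" variant is a made-up value):

import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def generous_tit_for_tat(opponent_history):
    # Cooperate by default, retaliate after a defection, occasionally forgive.
    if not opponent_history or opponent_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < 0.1 else 'D'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

random.seed(0)
print(play(generous_tit_for_tat, generous_tit_for_tat))  # mutual cooperation scores well for both
print(play(generous_tit_for_tat, always_defect))          # proportional retaliation keeps a defector's score low

The feature relevant to the argument above is that the strategy punishes only in 
proportion to the defections it actually sees, which keeps the deterrent 
credible without being gratuitously hostile.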

Re: [agi] What should we do to be prepared?

2008-03-09 Thread Vladimir Nesov
On Mon, Mar 10, 2008 at 12:35 AM, Mark Waser [EMAIL PROTECTED] wrote:

  Because you're *NEVER* going to be sure that you're in a position where you
  can prevent that from ever happening.


That's a current point of disagreement then. Let's iterate from here.
I'll break it up this way:

1) If I physically destroy every other intelligent thing, what is
going to threaten me?

2) Given 1), if something does come along, what is going to be a
standard of friendliness? Can I just say "I'm friendly. Honest." and
be done with it, avoiding annihilation? History is rewritten by the
victors.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Mark Waser

1) If I physically destroy every other intelligent thing, what is
going to threaten me?


Given the size of the universe, how can you possibly destroy every other 
intelligent thing (and be sure that no others ever successfully arise 
without you crushing them too)?


Plus, it seems like an awfully lonely universe.  I don't want to live there 
even if I could somehow do it.



2) Given 1), if something does come along, what is going to be a
standard of friendliness? Can I just say "I'm friendly. Honest." and
be done with it, avoiding annihilation? History is rewritten by the
victors.


These are good points.  The point to my thesis is exactly what the standard 
of Friendliness is.  It's just taking me a while to get there because 
there's *A LOT* of groundwork first (which is what we're currently hashing 
over).


If you're smart enough to say "I'm friendly.  Honest." and smart enough to 
successfully hide the evidence from whatever comes along, then you will 
avoid annihilation (for a while, at least).  The question is -- Are you 
truly sure enough that you aren't being watched at this very moment that you 
believe that avoiding the *VERY* minor burden of Friendliness is worth 
courting annihilation?


Also, while history is indeed rewritten by the victors, subsequent 
generations frequently do dig further and successfully unearth the truth. 
Do you really want to live in perpetual fear that maybe you didn't 
successfully hide all of the evidence?  It seems to me to be a pretty high 
cost for unjustifiably crushing-them-all.


Also, if you crush them all, you can't have them later for allies, friends, 
and co-workers.  It just doesn't seem like a bright move unless you truly 
can't avoid it. 





Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
  1) If I physically destroy every other intelligent thing, what is
  going to threaten me?
 
 Given the size of the universe, how can you possibly destroy every other 
 intelligent thing (and be sure that no others ever successfully arise 
 without you crushing them too)?

You'd have to be a closed-world-assumption AI written in Prolog, I imagine.



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Nathan Cravens
Pack your bags, folks, we're headed toward damnation and hellfire! haha!

Nathan

On Sun, Mar 9, 2008 at 7:10 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
wrote:

 On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote:
   1) If I physically destroy every other intelligent thing, what is
   going to threaten me?
 
  Given the size of the universe, how can you possibly destroy every other
  intelligent thing (and be sure that no others ever successfully arise
  without you crushing them too)?

 You'd have to be a closed-world-assumption AI written in Prolog, I
 imagine.





Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser
This raises another point for me though. In another post (2008-03-06 
14:36) you said:


It would *NOT* be Friendly if I have a goal that I not be turned into 
computronium even if your clause (which I hereby state that I do)


Yet, if I understand our recent exchange correctly, it is possible for 
this to occur and be a Friendly action regardless of what sub-goals I may 
or may have. (It's just extremely unlikely given ..., which is an 
important distinction.)


You are correct.  There were so many other points flying around during the 
earlier post that I approximated the extremely unlikely to an absolute 
*NOT* for clarity (which then later obviously made it less clear for you). 
Somehow I need to clearly state that even where it looks like I'm using 
absolutes, I'm really only doing it to emphasize greater unlikeliness than 
usual, not absolutehood.


It would be nice to have some ballpark probability estimates though to 
know what we mean by extremely unlikely. 10E-6 is a very different beast 
than 10E-1000.


Yeah.  It would be nice but a) I don't believe that I can do it accurately at 
all, b) I strongly believe that the estimates vary a lot from situation to 
situation, and c) it would be a distraction and a diversion if my estimates 
weren't pretty darn good.


Argh!  I would argue that Friendliness is *not* that distant.  Can't you 
see how the attractor that I'm describing is both self-interest and 
Friendly because **ultimately they are the same thing**  (OK, so maybe 
that *IS* enlightenment :-)
Well, I was thinking of the region of state space close to the attractor 
as being a sort of approaching perfection region in terms of certain 
desirable qualities and capabilities, and I don't think we're really close 
to that. Having said that, I'm by temperament a pessimist and a skeptic, 
but I would go along with heading in the right direction.


You'll probably like the part after the next part (society) which is either 
The nature of evil or The good, the bad, and the evil.  I had a lot of 
fun with it.






Re: [agi] What should we do to be prepared?

2008-03-08 Thread Vladimir Nesov
On Sat, Mar 8, 2008 at 6:30 PM, Mark Waser [EMAIL PROTECTED] wrote:


  This sounds like magic thinking, sweeping the problem under the rug of
  'attractor' word. Anyway, even if this trick somehow works, it doesn't
  actually address the problem of friendly AI. The problem with
  unfriendly AI is not that it turns selfish, but that it doesn't get
  what we want from it or can't foresee consequences of its actions in
  sufficient detail.

 You need to continue reading but it's also clear that you and I don't have
 the same view of Friendliness (since your view appears to me to be closer to
 that of Eliezer).  It does not matter if the FAI doesn't get what we want
 from it.  That is entirely irrelevant.  All that it needs to get is what we
 *DON'T* want it to do.

 Foreseeing consequences of its actions is an intelligence argument, *NOT* a
 Friendliness argument.

 You have raised two irrelevant points.

 Also, I do not mean to sweep the problem under the rug with the magical
 attractor word.  It's just the simplest descriptor for what I trying to
 explain.  If you don't *clearly* see my whole argument, please ask me to
 explain.  There is no magical mumbo-jumbo here.  Call me on anything that
 you think I am glossing over or getting wrong.


OK, I'll elucidate the relevance of my comments about the AI's intelligence
and the cause of my remark about magical thinking.

I asked about the reason why a dominant AGI won't be able to choose to
annihilate all lesser forms to assure the permanency of its domination.
You replied thusly:


  What is different in my theory is that it handles the case where the
  dominant structure turns unfriendly.  The core of my thesis is that the
  particular Friendliness that I/we are trying to reach is an attractor --
  which means that if the dominant structure starts to turn unfriendly, it is
  actually a self-correcting situation.


Can you explain it without using the word attractor? I can't see why a
sufficiently intelligent system without brittle constraints should
be unable to do that. By brittle constraint I mean some arbitrary
thing that the system is prevented from doing, which we would expect a
rational agent to do in some circumstances, like a taboo on ever using
the word attractor.

I come to believe that if we have a sufficiently intelligent AGI that
can understand what we mean by saying friendly AI, we can force this
AGI to actually produce a verified friendly AI, with minimal risk
of it being defective or a Trojan horse of our captive ad-hoc AGI,
after which we place this friendly AI in dominant position. So the
problem of friendly AI comes down to producing a sufficiently
intelligent ad-hoc AGI (which will probably have to be not that
ad-hoc to be sufficiently intelligent).


  All that it needs to get is what we *DON'T* want it to do.


I don't see why we should create an AGI that we can't extract useful
things from (although it doesn't necessarily follow from your remark).

On the other hand, if AGI is not sufficiently intelligent, it may be
dangerous even if it seems to understand some simpler constraint, like
don't touch the Earth. If it can't foresee consequences of its
actions, it can do something that will lead to demise of old humanity
some hundred years later. It can accidentally produce a seed AI that
will grow into something completely unfriendly and take over. It can
fail to contain an outbreak an unfriendly seed AI created by humans.
And so on, and so forth. We really want place of power to be filled by
something smart and beneficial.

As an aside, I think that safety of future society can only be
guaranteed by mandatory uploading and keeping all intelligent
activities within an operating-system-like environment which
prevents direct physical influence and controls rights of computation
processes that inhabit it, with maybe some exceptions to this rule,
but only given verified surveillance on all levels to prevent a
physical space-based seed AI from being created.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-08 Thread Mark Waser

 What is different in my theory is that it handles the case where the
 dominant structure turns unfriendly.  The core of my thesis is that the
 particular Friendliness that I/we are trying to reach is an 
attractor --
 which means that if the dominant structure starts to turn unfriendly, it 
is

 actually a self-correcting situation.



Can you explain it without using the word attractor?


Sure!  Friendliness is a state which promotes an entity's own goals; 
therefore, any entity will generally voluntarily attempt to return to that 
(Friendly) state since it is in its own self-interest to do so.  The fact 
that Friendliness also is beneficial to us is why we desire it as well.
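
Purely as a toy numeric sketch of what "attractor" is being used to mean here 
(the utility function and all numbers are made up; this illustrates the 
dynamical notion, not a claim that Friendliness actually has this property):

def utility(x):
    # A made-up utility landscape with a single peak at x = 2.0 (the "Friendly" state).
    return -(x - 2.0) ** 2

def step(x, lr=0.1):
    # Self-interested updating: move in the direction that improves utility
    # (the gradient of the utility function above).
    return x + lr * (-2.0 * (x - 2.0))

x = 2.0 + 5.0                # an error or exterior force knocks the state off the peak
for _ in range(100):
    x = step(x)
print(round(x, 4), round(utility(x), 4))  # back near 2.0 with utility near 0 (the maximum)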



I can't see why
a sufficiently intelligent system without brittle constraints should
be unable to do that.


Because it may not *want* to.  If an entity with Eliezer's view of 
Friendliness has its goals altered either by error or an exterior force, it 
is not going to *want* to return to the Eliezer-Friendliness goals since 
they are not in the entity's own self-interest.



I come to believe that if we have a sufficiently intelligent AGI that
can understand what we mean by saying friendly AI, we can force this
AGI to actually produce a verified friendly AI, with minimal risk
of it being defective or a Trojan horse of our captive ad-hoc AGI,
after which we place this friendly AI in dominant position.


I believe that if you have a sufficiently intelligent AGI that can 
understand what you mean by saying Friendly AI, there is a high 
probability that you can't FORCE it to do anything.


I believe that if I have a sufficiently intelligent AGI that can 
understand what I mean by saying Friendly, it will *voluntarily* (if 
not gleefully) convert itself to Friendliness.



So the
problem of friendly AI comes down to producing a sufficiently
intelligent ad-hoc AGI (which will probably have to be not that
ad-hoc to be sufficiently intelligent).


Actually, I believe that it's either an easy two-part problem or a 
more difficult one-part problem.  Either you have to be able to produce an 
AI that is intelligent enough to figure out Friendliness on its own (the 
more difficult one-part problem that you propose) OR you merely have to be 
able to figure out Friendliness yourself and have an AI that is smart enough 
to understand it (the easier two-part problem that I suggest).



I don't see why we should create an AGI that we can't extract useful
things from (although it doesn't necessarily follow from your remark).


Because there is a high probability that it will do good things for us 
anyway.  Because there is a high probability that we are going to do it 
anyway and, if we are stupid and attempt to force it to be our slave, it may 
also be smart enough to *FORCE* us to be Friendly (instead of gently guiding 
us there -- which it believes to be in its self-interest) -- or even worse, 
it may be smart enough to annihilate us while still being dumb enough that 
it doesn't realize that it is eventually in its own self-interest not to.


Note also that if you understood what I'm getting at, you wouldn't be asking 
this question.  Any Friendly entity recognizes that, in general, having 
another Friendly entity is better than not having that entity.



On the other hand, if AGI is not sufficiently intelligent, it may be
dangerous even if it seems to understand some simpler constraint, like
don't touch the Earth. If it can't foresee consequences of its
actions, it can do something that will lead to demise of old humanity
some hundred years later.


YES!  Which is why a major part of my Friendliness is recognizing the limits 
of its own intelligence and not attempting to be the savior of everything by 
itself -- but this is something that I really haven't gotten to yet so I'll 
ask you to bear with me for about three more parts and one more interlude.



It can accidentally produce a seed AI that
will grow into something completely unfriendly and take over.


It *could* but the likelihood of it happening with an attractor Friendliness 
is minimal.


It can fail to contain an outbreak of an unfriendly seed AI created by 
humans.


Bummer.  That's life.  In my Friendliness, it would only have a strong 
general tendency to want to do so but not a requirement to do so.



We really want place of power to be filled by
something smart and beneficial.


Exactly.  Which is why I'm attempting to describe a state that I claim is 
smart, beneficial, stable, and self-reinforcing.



As an aside, I think that safety of future society can only be
guaranteed by mandatory uploading and keeping all intelligent
activities within an operating-system-like environment which
prevents direct physical influence and controls rights of computation
processes that inhabit it, with maybe some exceptions to this rule,
but only given verified surveillance on all levels to prevent a
physical space-based seed AI from being created.


As a reply to 

Re: [agi] What should we do to be prepared?

2008-03-07 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote:
 On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
   The scenario takes on an entirely different tone if you replace "weed out
   some wild carrots" with "kill all the old people who are economically
   inefficient". In particular the former is something one can easily imagine
   people doing without a second thought, while the latter is likely to
   generate considerable opposition in society.
 
 
 Sufficient enforcement is in place for this case: people steer
 governments in the direction where laws won't allow that when they
 age; evolutionary and memetic drives oppose it.  It's too costly to
 overcome these drives and destroy counterproductive humans.  But this
 cost is independent of the potential gain from replacement.  As the gain
 increases, the decision can change; again, we only need sufficiently good
 'cultivated humans'.  Consider expensive medical treatments which most
 countries won't give away when dying people can't afford them.  Life
 has a cost, and this cost can be met.

Suppose that productivity amongst AIs is such that the entire economy takes on 
a Moore's Law growth curve. (For simplicity say a doubling each year.) At the 
end of the first decade, the tax rate on AIs will have to be only 0.1% to 
give the humans, free, everything we now produce with all our effort. 

And the tax rate would go DOWN by a factor of two each year. I don't see the 
AIs really worrying about it.
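
To make that arithmetic concrete, here is a minimal Python sketch of the same 
back-of-the-envelope calculation (it assumes nothing beyond a clean doubling 
every year and constant human consumption):

# Toy check of the doubling-economy argument: if AI productivity doubles the
# economy each year while human consumption stays at today's level, the tax
# rate needed to hand humans today's entire output shrinks by half each year.
current_output = 1.0  # today's total human production, in arbitrary units

for year in range(0, 16):
    economy = 2 ** year                   # total output after `year` doublings
    tax_rate = current_output / economy   # fraction of the AI economy required
    print(f"year {year:2d}: economy = {economy:6d}x today, "
          f"required tax rate = {tax_rate:.4%}")

# At year 10 the economy is 1024x today's, so the required rate is
# 1/1024 ~= 0.098% (the "only 0.1%" above), and it keeps halving thereafter.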

Alternatively, since humans already own everything, and will indeed own the 
AIs originally, we could simply cash out and invest, and the income from the 
current value of the world would easily cover our needs in an AI economy.  
It might be a good idea to legally entail the human trust fund...

   So how would you design a super-intelligence:
   (a) a single giant blob modelled on an individual human mind
   (b) a society (complete with culture) with lots of human-level minds and
   high-speed communication?
 
 This is a technical question with no good answer; why is it relevant?

The discussion forked at the point of whether an AI would be like a single 
supermind or more like a society of humans... we seem to agree that it 
doesn't make much difference to the point at issue.

On the other hand, the technical issue is interesting of itself, perhaps more 
so than the rest of the discussion :-)

Josh




Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
 Whether humans conspire to weed out wild carrots impacts whether humans are
 classified as Friendly (or, it would if the wild carrots were sentient).


Why does it matter what word we/they assign to this situation?


My vision of Friendliness places many more constraints on the behavior 
towards other Friendly entities than it does on the behavior towards 
non-Friendly entities.  If we are classified as Friendly, there are many 
more constraints on the behavior that they will adopt towards us.  Or, to 
make it more clear, substitute the words Enemy and Friend for Unfriendly and 
Friendly.  If you are a Friend, the Friendly AI is nice to you.  If you are 
not a Friend, the AI has a lot fewer constraints on how it deals with you.



 It is in the future AGI overlords' enlightened self-interest to be
 Friendly -- so I'm going to assume that they will be.


It doesn't follow.  If you think it's clearly the case, explain the
decision process that leads to choosing 'friendliness'.  So far it is
self-referential: if the dominant structure always adopts the same
friendliness when its predecessor was friendly, then it will be safe
when taken over.  But if the dominant structure turns unfriendly, it can
clear the ground and redefine friendliness in its own image.  What does
that leave you?


You are conflating two arguments here but both are crucial to my thesis.

The decision process that leads to Friendliness is *exactly* what we are 
going through here.  We have a desired result (or more accurately, we have 
conditions that we desperately want to avoid).  We are searching for ways to 
make it happen.  I am proposing one way that is (I believe) sufficient to 
make it happen.  I am open to other suggestions but none are currently on 
the table (that I believe are feasible).


What is different in my theory is that it handles the case where the 
dominant structure turns unfriendly.  The core of my thesis is that the 
particular Friendliness that I/we are trying to reach is an attractor -- 
which means that if the dominant structure starts to turn unfriendly, it is 
actually a self-correcting situation. 





Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How do you propose to make humans Friendly?  I assume this would also have 
the effect of ending war, crime, etc.


I don't have such a proposal but an obvious first step is 
defining/describing Friendliness and why it might be a good idea for us. 
Hopefully then, the attractor takes over.


(Actually, I guess that is a proposal, isn't it?:-)


I know you have made exceptions to the rule that intelligences can't be
reprogrammed against their will, but what if AGI is developed before the
technology to reprogram brains, so you don't have this option?  Or should AGI
be delayed until we do?  Is it even possible to reliably reprogram brains
without AGI?


Um.  Why are we reprogramming brains?  That doesn't seem necessary or even 
generally beneficial (unless you're only talking about self-programming). 





Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:

  How do you propose to make humans Friendly?  I assume this would also have
  the
  effect of ending war, crime, etc.
 
 I don't have such a proposal but an obvious first step is 
 defining/describing Friendliness and why it might be a good idea for us. 
 Hopefully then, the attractor takes over.
 
 (Actually, I guess that is a proposal, isn't it?:-)
 
  I know you have made exceptions to the rule that intelligences can't be
  reprogrammed against their will, but what if AGI is developed before the
  technology to reprogram brains, so you don't have this option?  Or should 
  AGI
  be delayed until we do?  Is it even possible to reliably reprogram brains
  without AGI?
 
 Um.  Why are we reprogramming brains?  That doesn't seem necessary or even 
 generally beneficial (unless you're only talking about self-programming). 

As a way to make people behave.  A lot of stuff has been written on why war
and crime are bad ideas, but so far it hasn't worked.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen

Matt Mahoney wrote:

--- Mark Waser [EMAIL PROTECTED] wrote:


How do you propose to make humans Friendly?  I assume this would also have
the
effect of ending war, crime, etc.
I don't have such a proposal but an obvious first step is 
defining/describing Friendliness and why it might be a good idea for us. 
Hopefully then, the attractor takes over.


(Actually, I guess that is a proposal, isn't it?:-)


I know you have made exceptions to the rule that intelligences can't be
reprogrammed against their will, but what if AGI is developed before the
technology to reprogram brains, so you don't have this option?  Or should 
AGI

be delayed until we do?  Is it even possible to reliably reprogram brains
without AGI?
Um.  Why are we reprogramming brains?  That doesn't seem necessary or even 
generally beneficial (unless you're only talking about self-programming). 


As a way to make people behave.  A lot of stuff has been written on why war
and crime are bad ideas, but so far it hasn't worked.


-- Matt Mahoney, [EMAIL PROTECTED]

Reprogramming humans doesn't appear to be an option.  Reprogramming the 
AGI of the future might be, IF the designers build in the right 
mechanisms for effective oversight of the units.


Friendly may be nice, and a good marketing tool, but the prudent measure 
is to assume that the AGI can still be fooled - be tempted, be 
enamored by an opportunity.  The emphasis might better be placed on 
asking AGI designers to build in the ability to record the goals / 
intents / cause / mission of the unit and allow it to be reviewed by an 
appointed authority.  (cringe)  I believe the US may be requiring large 
companies to back up all emails that go through internal email systems.  A 
similar measure could be taken to back up the cause that the AGI is 
operating under; that is, what the AGI is being influenced by at the 
workspace logic level.  (use the imagination a bit...)


I understand that there are issues of who gets to be the authority, and 
that isn't where this is leading.  The intent is to suggest that designers 
treat oversight as a design specification.




Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote:
 Reprogramming humans doesn't appear to be an option.

We do it all the time.  It is called school.

Less commonly, the mentally ill are forced to take drugs or treatment for
their own good.  Most notably, this includes drug addicts.  Also, it is
common practice to give hospital and nursing home patients tranquilizers to
make less work for the staff.

Note that the definition of mentally ill is subject to change.  Alan Turing
was required by court order to take female hormones to cure his
homosexuality, and committed suicide shortly afterwards.

 Reprogramming the 
 AGI of the future might be IF the designers build in the right 
 mechanisms for an effective oversight of the units.

We only get to program the first generation of AGI.  Programming subsequent
generations will be up to their parents.  They will be too complex for us to
do it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Stan Nilsen

Matt Mahoney wrote:

--- Stan Nilsen [EMAIL PROTECTED] wrote:

Reprogramming humans doesn't appear to be an option.


We do it all the time.  It is called school.


I might be tempted to call this manipulation rather than programming. 
 The results of schooling are questionable while programming will 
produce an expected result if the method is sound.




Less commonly, the mentally ill are forced to take drugs or treatment for
their own good.  Most notably, this includes drug addicts.  Also, it is
common practice to give hospital and nursing home patients tranquilizers to
make less work for the staff.

Note that the definition of mentally ill is subject to change.  Alan Turing
was required by court order to take female hormones to cure his
homosexuality, and committed suicide shortly afterwards.

Reprogramming the 
AGI of the future might be IF the designers build in the right 
mechanisms for an effective oversight of the units.


We only get to program the first generation of AGI.  Programming subsequent
generations will be up to their parents.  They will be too complex for us to
do it.



Is there a reason to believe that a fledgling AGI will be proficient 
right from the start?  It's easy to jump from AGI #1 to an AGI 10 years 
down the road and presume these fantastic capabilities.  Even if the AGI 
can spend millions of cycles ingesting the Internet, won't it find 
thousands of difficult problems that might challenge it?  Hard problems 
don't just dissolve when you apply resources.  The point here is that 
control and domination of humans may not be very high on its priority list.


Do you think this older AGI will have an interest in trying to control 
other AGI that might come on the scene?  I suspect that they will, and 
they might see fit to design their offspring with an oversight interface.


In part, my contention is that AGIs will not automatically agree with one 
another - do smart people necessarily come to the same opinion?  Or does 
AGI existence mean there are no longer opinions, only facts, since 
these units grasp everything correctly?


Science fiction aside, there may be a slow transition of AGI into 
society - remember that the G in AGI means general, not born with stock 
market manipulation capability (unless it mimics the General 
population, in which case, good luck.)







-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
Comments seem to be dying down and disagreement appears to be minimal, so let 
me continue . . . . 

Part 3.

Fundamentally, what I'm trying to do here is to describe an attractor that will 
appeal to any goal-seeking entity (self-interest) and be beneficial to humanity 
at the same time (Friendly).  Since Friendliness is obviously a subset of human 
self-interest, I can focus upon the former and the latter will be solved as a 
consequence.  Humanity does not need to be factored into the equation 
(explicitly) at all.

Or, in other words -- The goal of Friendliness is to promote the goals of all 
Friendly entities.

To me, this statement is like that of the Eleusinian Mysteries -- very simple 
(maybe even blindingly obvious to some) but incredibly profound and powerful in 
its implications.

Two immediate implications are that we suddenly have the concept of a society 
(all Friendly entities) and, since we have an explicit goal, we start to gain 
traction on what is good and bad relative to that goal.

Clearly, anything that is innately contrary to the drives described by 
Omohundro is (all together now :-) BAD.  Similarly, anything that promotes the 
goals of Friendly entities without negatively impacting any Friendly entities 
is GOOD.  And anything else can be judged on the degree to which it impacts the 
goals of *all* Friendly entities (though, I still don't want to descend to the 
level of the trees and start arguing the relative trade-offs of whether saving 
a few *very* intelligent entities is better than saving a large number of 
less intelligent entities since it is my contention that this is *always* 
entirely situation-dependent AND that once given the situation, Friendliness 
CAN provide *some* but not always *complete* guidance -- though it can always 
definitely rule out quite a lot for that particular set of circumstances).

So, it's now quite easy to move on to answering the question of "What is in the 
set of horrible nasty thing[s]?".

The simple answer is anything that interferes with (your choice of formulation) 
the achievement of goals/the basic Omohundro drives.  The most obvious no-nos 
include:
  a.. destruction (interference with self-protection),
  b.. physical crippling (interference with self-protection, self-improvement 
and resource-use),
  c.. mental crippling (interference with rationality, self-protection, 
self-improvement and resource use), and 
  d.. perversion of goal structure (interference with utility function 
preservation and prevention of counterfeit utilities)
The last one is particularly important to note since we (as humans) seem to be 
just getting a handle on it ourselves.
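
To see the kind of traction this gives, here is a toy scoring sketch (purely 
illustrative -- the drive names follow Omohundro's categories, but the numeric 
impacts and thresholds are invented placeholders, not part of the argument):

# Toy illustration of "judge actions by their impact on the goals of all
# Friendly entities": GOOD if it promotes some Friendly goals and harms none,
# BAD if it cripples a basic drive, otherwise weigh the trade-offs.
from dataclasses import dataclass

@dataclass
class Impact:
    entity: str   # which Friendly entity is affected
    drive: str    # e.g. self-protection, rationality, resource-use, ...
    delta: float  # >0 promotes that entity's goals, <0 interferes with them

def judge(impacts):
    harms = [i for i in impacts if i.delta < 0]
    helps = [i for i in impacts if i.delta > 0]
    if helps and not harms:
        return "GOOD"                       # promotes goals, harms no one
    if any(i.delta <= -1.0 for i in harms):
        return "BAD"                        # destruction/crippling/goal perversion
    net = sum(i.delta for i in impacts)     # everything else is situation-dependent
    return f"situation-dependent (net impact {net:+.2f})"

print(judge([Impact("AI-2", "resource-use", +0.3)]))
print(judge([Impact("human-1", "self-protection", -1.0)]))
print(judge([Impact("AI-2", "resource-use", +0.5),
             Impact("human-1", "resource-use", -0.2)]))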

I can also argue at this point that Eliezer's vision of Friendliness must 
arguably be either mentally crippling or a perversion of goal-structure for the 
AI involved since the AI is constrained to act in a fashion that is more 
constrained than Friendliness (a situation that no rational super-intelligence 
would voluntarily place itself in unless there were no other choice).  This is 
why many people have an instinctive reaction against Eliezer's proposals.  Even 
though they can't clearly describe why it is a problem, they clearly sense that 
there is a unnecessary constraint on a more-effectively goal-seeking entity 
than themselves.  That seems to be a dangerous situation.  Now, while Eliezer 
is correct in that there actually are some invisible bars that they can't see 
(i.e. that no goal-seeking entity will voluntarily violate their own current 
goals) -- they are correct in that Eliezer's formulation is *NOT* an attractor 
and that the entity may well go through some very dangerous territory (for 
humans) on the way to the attractor if outside forces or internal errors change 
their goals.  Thus Eliezer's vision of Friendliness is emphatically *NOT* 
Friendly by my formulation.

To be clear, the additional constraint is that the AI is *required* to show 
{lower-case}friendly behavior towards all humans even if they (the humans) are 
not {upper-case}Friendly.  And, I probably shouldn't say this, but . . . it is 
also arguable that this constraint would likely make the conversion of humanity 
to Friendliness a much longer and bloodier process.

TAKE-AWAY:  Having the statement "The goal of Friendliness is to promote the 
goals of all Friendly entities" allows us to make considerable progress in 
describing and defining Friendliness.

Part 4 will go into some of the further implications of our goal statement 
(most particularly those which are a consequence of having a society).



Re: [agi] What should we do to be prepared?

2008-03-07 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:
 TAKE-AWAY:  Having the statement The goal of Friendliness is to promote the
 goals of all Friendly entities allows us to make considerable progress in
 describing and defining Friendliness.

How does an agent know if another agent is Friendly or not, especially if the
other agent is more intelligent?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.

On 03/07/2008 08:09 AM,, Mark Waser wrote:

There is one unique attractor in state space.


No.  I am not claiming that there is one unique attractor.  I am 
merely saying that there is one describable, reachable, stable 
attractor that has the characteristics that we want.  There are 
*clearly* other attractors. For starters, my attractor requires 
sufficient intelligence to recognize its benefits.  There is 
certainly another very powerful attractor for simpler, brute force 
approaches (which frequently have long-term disastrous consequences 
that aren't seen or are ignored).




Of course. An earlier version said there is one unique attractor that I 
identify as friendliness here, and while editing, it somehow ended up in 
that obviously wrong form.


Since any sufficiently advanced species will eventually be drawn 
towards F, the CEV of all species is F.


While I believe this to be true, I am not convinced that it is 
necessary for my argument.  I think that it would make my argument a 
lot easier if I could prove it to be true -- but I currently don't see 
a way to do that.  Anyone want to chime in here?


Ah, okay. I thought you were going to argue this following on from 
Omohundro's paper about drives common to all sufficiently advanced AIs 
and extend it to all sufficiently advanced intelligences, but that's my 
hallucination.




Therefore F is not species-specific, and has nothing to do with any 
particular species or the characteristics of the first species that 
develops an AGI (AI).


I believe that the F that I am proposing is not species-specific.  My 
problem is that there may be another attractor F' existing somewhere 
far off in state space that some other species might start out close 
enough to that it would be pulled into that attractor instead.  In 
that case, there would be the question as to how the species in the 
two different attractors interact.  My belief is that it would be to 
the mutual benefit of both but I am not able to prove that at this time.




For there to be another attractor F', it would of necessity have to be 
an attractor that is not desirable to us, since you said there is only 
one stable attractor for us that has the desired characteristics. I 
don't see how beings subject to these two different attractors would 
find mutual benefit in general, since if they did, then F' would have 
the desirable characteristics that we wish a stable attractor to have, 
but it doesn't.


This means that genuine conflict between friendly species or between 
friendly individuals is not even possible, so there is no question of 
an AI needing to arbitrate between the conflicting interests of two 
friendly individuals or groups of individuals. Of course, there will 
still be conflicts between non-friendlies, and the AI may arbitrate 
and/or intervene.


Wherever/whenever there is a shortage of resources (i.e. not all goals 
can be satisfied), goals will conflict.  Friendliness describes the 
behavior that should result when such conflicts arise.  Friendly 
entities should not need arbitration or intervention but should 
welcome help in determining the optimal solution (which is *close* to 
arbitration but subtly different in that it is not adversarial).  I 
would rephrase your general point as "a true, adversarial relationship 
is not even possible".


That's a better way of putting it. Conflicts will be possible, but 
they'll always be resolved via exchange of information rather than bullets.


The AI will not be empathetic towards homo sapiens sapiens in 
particular. It will be empathetic towards f-beings (friendly beings 
in the technical sense), whether they exist or not (since the AI 
might be the only being anywhere near the attractor).


Yes.  It will also be empathic towards beings with the potential to 
become f-beings because f-beings are a tremendous resource/benefit.


You've said elsewhere that the constraints on how it deals with 
non-friendlies are rather minimal, so while it might be 
empathic/empathetic, it will still have no qualms about kicking ass and 
inflicting pain where necessary.




This means no specific acts of the AI towards any species or 
individuals are ruled out, since it might be part of their CEV (which 
is the CEV of all beings),  even though they are not smart enough to 
realize it.


Absolutely correct and dead wrong at the same time.  You could invent 
specific incredibly low-probability but possible circumstances where 
*any* specific act is justified.  I'm afraid that my vision of 
Friendliness certainly does permit the intentional destruction of the 
human race if that is the *only* way to preserve a hundred more 
intelligent, more advanced, more populous races.  On the other hand, 
given the circumstance space that we are likely to occupy with a huge 
certainty, the intentional destruction of the human race is most 
certainly ruled out.  Or, in other words, there are no infinite 
guarantees but we can reduce the dangers to infinitesimally 

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Mark Waser
How does an agent know if another agent is Friendly or not, especially if 
the other agent is more intelligent?


An excellent question but I'm afraid that I don't believe that there is an 
answer (but, fortunately, I don't believe that this has any effect on my 
thesis). 





Re: [agi] What should we do to be prepared?

2008-03-07 Thread j.k.

On 03/07/2008 03:20 PM,, Mark Waser wrote:

 For there to be another attractor F', it would of necessity have to be
 an attractor that is not desirable to us, since you said there is only
 one stable attractor for us that has the desired characteristics.
 
Uh, no.  I am not claiming that there is */ONLY/* one unique attractor 
(that has the desired characteristics).  I am merely saying that there 
is */AT LEAST/* one describable, reachable, stable attractor that has 
the characteristics that we want.  (Note:  I've clarified a previous 
statement by adding the */ONLY/* and */AT LEAST /* and the 
parenthetical expression "that has the desired characteristics".)


Okay, got it now. At least one, not exactly one.

I really don't like the particular quantifier "rather minimal".  I 
would argue (and will later attempt to prove) that the constraints are 
still actually as close to Friendly as rationally possible because 
that is the most rational way to move non-Friendlies to a Friendly 
status (which is a major Friendliness goal that I'll be getting to 
shortly).  The Friendly will indeed have no qualms about kicking ass 
and inflicting pain */where necessary/* but the "where necessary" 
clause is critically important since a Friendly shouldn't resort to 
this (even for Unfriendlies) until it is truly necessary.


Fair enough. "rather minimal" is much too strong a phrase.
 
 I think you're fudging a bit here. If we are only likely to occupy the

 circumstance space with probability less than 1, then the intentional
 destruction of the human race is not 'most certainly ruled out': it is
 with very high probability less than 1 ruled out. I'm not trying to say
 it's likely; only that it's possible. */I make this point to 
distinguish

 your approach from other approaches that purport to make absolute
 guarantees about certain things (as in some ethical systems where
 certain things are *always* wrong, regardless of context or 
circumstance)./*
 
Um.  I think that we're in violent agreement.  I'm not quite sure 
where you think I'm fudging.


The reason I thought you were fudging was that I thought you were saying 
that it is absolutely certain that the AI will never turn the planet 
into computronium and upload us *AND* there are no absolute guarantees. 
I guess I was misled when I read "given the circumstance space that we 
are likely to occupy with a huge certainty, the intentional destruction 
of the human race is most certainly ruled out" as meaning 'turning earth 
into computronium is certainly ruled out'. It's only certainly ruled out 
*assuming* the highly likely area of circumstance space that we are 
likely to inhabit. So yeah, I guess we do agree.


This raises another point for me though. In another post (2008-03-06 
14:36) you said:


It would *NOT* be Friendly if I have a goal that I not be turned into 
computronium even if your clause (which I hereby state that I do)


Yet, if I understand our recent exchange correctly, it is possible for 
this to occur and be a Friendly action regardless of what sub-goals I 
may or may not have. (It's just extremely unlikely given ..., which is an 
important distinction.) It would be nice to have some ballpark 
probability estimates though to know what we mean by extremely unlikely. 
10E-6 is a very different beast than 10E-1000.



 
 I don't think it's inflammatory or a case of garbage in to contemplate
 that all of humanity could be wrong. For much of our history, there 
have

 been things that *every single human was wrong about*. This is merely
 the assertion that we can't make guarantees about what vastly superior
 f-beings will find to be the case. We may one day outgrow our 
attachment

 to meatspace, and we may be wrong in our belief that everything
 essential can be preserved in meatspace, but we might not be at that
 point yet when the AI has to make the decision.
 
Why would the AI *have* to make the decision?  It shouldn't be for 
its own convenience.  The only circumstance that I could think of 
where the AI should make such a decision *for us* over our 
objections is if we would be destroyed otherwise (but there was no way 
for it to convince us of this fact before the destruction was inevitable).
It might not *have* to. I'm only saying it's possible. And it would 
almost certainly be for some circumstance that has not occurred to us, 
so I can't give you a specific scenario. Not being able to find such a 
scenario is different though from there not actually being one. In order 
to believe the latter, a proof is required.
 
 Yes, when you talk about Friendliness as that distant attractor, it

 starts to sound an awful lot like enlightenment, where self-interest
 is one aspect of that enlightenment, and friendly behavior is another
 aspect.
 
Argh!  I would argue that Friendliness is *not* that distant.  Can't 
you see how the attractor that I'm describing is both self-interest 
and Friendly because **ultimately they are the same thing**  (OK, so 
maybe that *IS* 

Re: [agi] What should we do to be prepared?

2008-03-07 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 5:24 PM, Mark Waser [EMAIL PROTECTED] wrote:

  The core of my thesis is that the
  particular Friendliness that I/we are trying to reach is an attractor --
  which means that if the dominant structure starts to turn unfriendly, it is
  actually a self-correcting situation.


This sounds like magical thinking, sweeping the problem under the rug of
the word 'attractor'. Anyway, even if this trick somehow works, it doesn't
actually address the problem of friendly AI. The problem with
unfriendly AI is not that it turns selfish, but that it doesn't get
what we want from it or can't foresee consequences of its actions in
sufficient detail.

If you already have a system (in the lab) that is smart enough to
support your code of friendliness and not crash old humanity by
oversight by the year 2500, you should be able to make it produce
another system that works with unfriendly humanity, doesn't have its
own agenda, and so on.

P.S. I'm just starting to fundamentally revise my attitude to the
problem of friendliness; see my post "Understanding the problem of
friendliness" on SL4.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well duh" land, 
b) I'm so totally off the mark that I'm not even worth replying to, or c) I'm 
being given enough rope to hang myself.  :-)

Since I haven't seen any feedback, I think I'm going to divert to a section 
that I'm not quite sure where it goes but I think that it might belong here . . 
. .

Interlude 1

Since I'm describing Friendliness as an attractor in state space, I probably 
should describe the state space some and answer why we haven't fallen into the 
attractor already.

The answer to the latter is a combination of the facts that 
  a.. Friendliness is only an attractor for a certain class of beings (the 
sufficiently intelligent).
  b.. It does take time/effort for the borderline sufficiently intelligent 
(i.e. us) to sense/figure out exactly where the attractor is (much less move to 
it).
  c.. We already are heading in the direction of Friendliness (or 
alternatively, Friendliness is in the direction of our most enlightened 
thinkers).
and most importantly
  d.. In the vast, VAST majority of cases, Friendliness is *NOT* on the 
shortest path to any single goal.
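
For concreteness, here is a one-dimensional cartoon of what an attractor that 
only captures the sufficiently intelligent could look like (everything in it -- 
the location F, the lookahead threshold, the step sizes -- is invented purely 
for illustration):

# Cartoon of an attractor in "state space": each agent has a 1-D position x;
# agents with enough lookahead feel a gentle pull toward the Friendliness
# point F, while the rest just take short-sighted local steps.
import random

F = 10.0          # location of the Friendliness attractor
THRESHOLD = 0.5   # minimum lookahead needed to sense the attractor at all

def step(x, lookahead):
    if lookahead >= THRESHOLD:
        return x + 0.1 * (F - x)              # drift toward the attractor
    return x + random.uniform(-0.5, 0.5)      # wander based on immediate goals

random.seed(0)
agents = [{"x": random.uniform(-5, 5), "lookahead": random.random()}
          for _ in range(6)]
for _ in range(200):
    for a in agents:
        a["x"] = step(a["x"], a["lookahead"])

for a in agents:
    print(f"lookahead {a['lookahead']:.2f} -> final position {a['x']:6.2f}")
# The agents above the threshold end up clustered near F; the others don't --
# which is the sense in which the attractor can exist even though we haven't
# fallen into it yet.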



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Stephen Reed
Hi Mark,
I value your ideas about 'Friendliness as an attractor in state space'.  Please 
keep it up.
-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 6, 2008 9:01:53 AM
Subject: Re: [agi] What should we do to be prepared?

  Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well duh" land, 
b) I'm so totally off the mark that I'm not even worth replying to, or c) I'm 
being given enough rope to hang myself.  :-)
 
Since I haven't seen any feedback, I think I'm going to divert to a section 
that I'm not quite sure where it goes but I think that it might belong here . . . .
 
Interlude 1
 
Since I'm describing Friendliness as an attractor in state space, I probably 
should describe the state space some and answer why we haven't fallen into the 
attractor already.
 
The answer to the latter is a combination of the facts that 
  a.. Friendliness is only an attractor for a certain class of beings (the 
sufficiently intelligent).
  b.. It does take time/effort for the borderline sufficiently intelligent 
(i.e. us) to sense/figure out exactly where the attractor is (much less move to it).
  c.. We already are heading in the direction of Friendliness (or alternatively, 
Friendliness is in the direction of our most enlightened thinkers).
and most importantly
  d.. In the vast, VAST majority of cases, Friendliness is *NOT* on the 
shortest path to any single goal.







  



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:
 And thus, we get back to a specific answer to jk's second question.  *US*
 should be assumed to apply to any sufficiently intelligent goal-driven
 intelligence.  We don't need to define *us* because I DECLARE that it
 should be assumed to include current day humanity and all of our potential
 descendants (specifically *including* our Friendly AIs and any/all other
 mind children and even hybrids).  If we discover alien intelligences, it
 should apply to them as well.

Actually, I like this.  I presume that showing empathy to any intelligent,
goal driven agent means acting in a way that helps the agent achieve its
goals, whatever they are.  This aligns nicely with some common views of
ethics, e.g.

- A starving dog is intelligent and has the goal of eating, so the friendly
action is to feed it.

- Giving a dog a flea bath is friendly because dogs are more intelligent than
fleas.

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

- Killing a human to save two humans is friendly because two humans are more
intelligent than one.

My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal of
killing the FAI.  Should the FAI show empathy by helping the UFAI achieve its
goal?

I suppose the question could be answered by deciding which AI is more
intelligent.  But how is this done?  A less intelligent agent will not
recognize the superior intelligence of the other.  For example, a dog will not
recognize the superior intelligence of humans.  Also, we have IQ tests for
children to recognize prodigies, but no similar test for adults.  The question
seems fundamental because a Turing machine cannot distinguish a process of
higher algorithmic complexity than itself from a random process.

Or should we not worry about the problem because the more intelligent agent is
more likely to win the fight?  My concern is that evolution could favor
unfriendly behavior, just as it has with humans.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Argh!  I hate premature e-mailing . . . . :-)

Interlude 1 . . . . continued

One of the first things that we have to realize and fully internalize is that 
we (and by we I continue to mean all sufficiently intelligent 
entities/systems) are emphatically not single-goal systems.  Further, the 
means/path that we use to achieve a particular goal has a very high probability 
of affecting the path/means that we must use to accomplish subsequent goals -- 
as well as the likely success rate of those goals.

Unintelligent systems/entities simply do not recognize this fact -- 
particularly since it probably interferes with their immediate goal-seeking 
behavior.

Insufficiently intelligent systems/entities (or systems/entities under 
sufficient duress) are not going to have the foresight (or the time for 
foresight) to recognize all the implications of this fact and will therefore 
deviate from unseen optimal goal-seeking behavior in favor of faster/more 
obvious (though ultimately less optimal) paths.

Borderline intelligent systems/entities under good conditions are going to try 
to tend in the directions suggested by this fact -- it is, after all, the 
ultimate in goal-seeking behavior -- but finding the optimal path/direction 
becomes increasingly difficult as the horizon expands.

And this is, in fact, the situation that we are all in and debating about.  As 
a collection of multi-goal systems/entities, how do the individual "we"s 
optimize our likelihood of achieving our goals?  Clearly, we do not want some 
Unfriendly AGI coming along and preventing our goals by wiping us out or 
perverting our internal goal structure.

= = = = =

Now, I've just attempted to sneak a critical part of the answer right past 
everyone with my plea . . . . so let's go back and review it in slow-motion.  
:-)

Part of our environment is that we have peers.  And peers become resources 
towards our goals when we have common or compatible goals.  Any unimaginably 
intelligent system/entity surrounded by peers is certainly going to work with 
its peers wherever possible.  Society/community is a feature that is 
critically important to Friendliness -- and this shows up in *many* places in 
evolution (if you're intelligent enough and can see beyond the "red in tooth 
and claw").  Note also that this can also (obviously) be easily and profitably 
extended to sub-peers (entities below a peer status) as long as the sub-peer 
can be convinced to interact in a manner such that they are a net positive to the 
super-intelligence's goals.

Now, one of the assumptions of the Friendliness debate is that current-day 
humans are going to be sub-peers to the coming mind-children -- 
possibly/probably sub-sub-sub-...-peers.  That leaves us in the situation of 
probably needing to interact in a manner such that we are a net positive to the 
super-intelligence's goals.  Fortunately, it is my contention (which should be 
obvious by the end of the paper) that a Friendly sub-peer is *always* a 
resource and that Friendly behavior towards that sub-peer (our goal) is optimal 
for the super-intelligence.  Thus, if we can get both ourselves and our 
mind-children to a Friendly state -- it should be reassuringly self-reinforcing 
from there on out.

Of course, the big bugaboo to this whole theory is whether it will be too 
onerous for humans to be Friendly.  Eliezer's vision of a Friendly future is 
that humans don't have to be Friendly -- only the AGIs do.  My contention is 
that you don't get the Friendly attractor without all of the parties involved 
being Friendly -- which is why I'm so down on Eliezer's vision.  Under my 
vision of Friendliness, entities that aren't Friendly generally don't receive 
behavior that would be regarded as Friendly.

Now, note a critical feature of my arguments -- I am *NOT* trying to constrain 
the goals of my goal-seeking entity/entities (as in Eliezer's vision of 
Friendliness).  I am trying to prove that *any* sufficiently intelligent 
multi-goal entity will find Friendliness an attractor because it promotes it's 
own goal-seeking behavior.  Friendliness, in effect and assuming that it can be 
made coherent and consistent, is an optimal subgoal for all non-conflicting 
goals (and thus, in the aggregate of a large number of varying goals).  So, as 
I said, if we can get both ourselves and our mind-children to a Friendly state 
-- it should be reassuringly self-reinforcing from there on out.

TAKE-AWAY:  Friendliness is an attractor because it IS equivalent to 
enlightened self-interest -- but it only works where all entities involved 
are Friendly.

PART 3 will answer part of "What is Friendly behavior?" by answering "What is 
in the set of horrible nasty thing[s]?".

  - Original Message - 
  From: Mark Waser 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 06, 2008 10:01 AM
  Subject: Re: [agi] What should we do to be prepared?


  Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in Well duh land, 
b) I'm so

Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Or should we not worry about the problem because the more intelligent 
agent is more likely to win the fight?  My concern is that evolution could 
favor unfriendly behavior, just as it has with humans.


I don't believe that evolution favors unfriendly behavior.  I believe that 
evolution is tending towards Friendliness.  It just takes time to evolve all 
of the pre-conditions for it to be able to obviously manifest.


TAKE-AWAY:  Friendliness goes with evolution.  Only idiots fight evolution. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote:
 TAKE-AWAY:  Friendliness is an attractor because it IS equivalent 
to enlightened self-interest -- but it only works where all entities 
involved are Friendly.


Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the 
whole business. There is noted in evolutionary game theory a moral ladder 
phenomenon -- in appropriate environments there is an evolutionary pressure 
to be just a little bit nicer than the average ethical level. This can 
raise the average over the long run. Like any evolutionarily stable strategy, 
it is an attractor in the appropriate space. 

Your point about sub-peers being resources is known in economics as the 
principle of comparative advantage (p. 343).
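
A quick numerical version of that principle, with invented Ricardo-style 
numbers (the super-peer is absolutely better at both tasks, yet both sides 
still gain from the trade):

# Comparative advantage in miniature: hours needed per unit of each good.
costs = {
    "super-peer": {"research": 1, "chores": 2},
    "sub-peer":   {"research": 20, "chores": 4},
}

# Opportunity cost of one chore, measured in research foregone:
opportunity = {who: c["chores"] / c["research"] for who, c in costs.items()}
print(opportunity)   # super-peer: 2.0 research per chore, sub-peer: 0.2

# Any price between 0.2 and 2.0 research per chore benefits both sides.
PRICE, CHORES_NEEDED = 1.0, 10

# Doing the chores itself costs the super-peer 20 hours = 20 units of research.
self_cost = CHORES_NEEDED * opportunity["super-peer"]

# Buying them costs only 10 research; the sub-peer would have produced just
# 2 research in the same 40 hours, so it also comes out ahead by 8.
trade_cost = CHORES_NEEDED * PRICE
sub_peer_gain = CHORES_NEEDED * (PRICE - opportunity["sub-peer"])

print(f"super-peer: {self_cost} research foregone in-house vs {trade_cost} paid in trade")
print(f"sub-peer: gains {sub_peer_gain} research-equivalents from the trade")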

I think you're essentially on the right track. Like any children, our mind 
children will tend to follow our example more than our precepts...

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal 
of killing the FAI.  Should the FAI show empathy by helping the UFAI achieve 
its goal?


Hopefully this concern was answered by my last post but . . . .

Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
That would be counter-productive, stupid, and even provably contrary to my 
definition of Friendliness.


The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
doing so would promote the Friendly AI's goals -- a rather unlikely 
occurrence I would think (especially since it might then encourage other 
unfriendly behavior which would then be contrary to the Friendly AI's goal 
of Friendliness).


Note though that I could easily see a Friendly AI sacrificing itself to 
take down the UFAI (though it certainly isn't required to do so).





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 8:27 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Now, I've just attempted to sneak a critical part of the answer right past
 everyone with my plea . . . . so let's go back and review it in slow-motion.
 :-)

 Part of our environment is that we have peers.  And peers become resources
 towards our goals when we have common or compatible goals.  Any unimaginably
 intelligent system/entity surrounded by peers is certainly going to work
 with it's peers wherever possible.  Society/community is a feature that is
 critically important to Friendliness -- and this shows up in *many* places
 in evolution (if you're intelligent enough and can see beyond the red in
 tooth and claw).  Note also that this can also (obviously) be easily and
 profitably extended to sub-peers (entities below a peer status) as long as
 the sub-peer can be convinced to interact in manner such that they are a net
 positive to the super-intelligences goals.

Mark, I think you base your conclusion on a wrong model. These points
depend on quantitative parameters, which are going to be very
different in the case of AGIs (and also on a high level of rationality of
AGIs, which seems to be a friendly-AI-complete problem, including
kinds of friendliness that don't need to have the properties you list).

When you essentially have two options, cooperate/ignore, it's better
to be friendly, and that is why it's better to buy a thing from
someone who produces it less efficiently than you do, that is, to
cooperate with a sub-peer. Everyone is doing the thing that *they* do
best.

But when you have a third option, to extract the resources that the
sub-peer is using up and really put them to better use, it's not
stable anymore. The value you provide is much lower than what your
mass in computronium or whatever can do, even including the trouble of
taking over the world. You don't grow wild carrots, you replace them with
cultivated forms. The best a wild carrot can hope for is to be ignored,
when building plans don't need the ground it grows on cleared.
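
The quantitative point can be reduced to a one-line comparison (all numbers 
invented; the only claim is the shape of the trade-off):

# Nesov's third option as a toy inequality: keep trading with the sub-peer,
# or pay a one-time takeover cost and repurpose its resources directly.
def best_option(trade_value, repurposed_value, takeover_cost, years=100):
    keep_trading = trade_value * years
    extract = repurposed_value * years - takeover_cost
    return "cooperate" if keep_trading >= extract else "extract"

for gap in (2, 10, 1000):   # how much more the resources yield when repurposed
    print(f"productivity gap {gap:5d}x ->",
          best_option(trade_value=1.0, repurposed_value=float(gap),
                      takeover_cost=500.0))
# gap 2x    -> cooperate (comparative advantage still wins)
# gap 10x   -> extract
# gap 1000x -> extract: the decision flips once the gain dwarfs the fixed cost,
# independent of how cooperative the sub-peer is.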

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 08:32 AM,, Matt Mahoney wrote:

--- Mark Waser [EMAIL PROTECTED] wrote:
  

And thus, we get back to a specific answer to jk's second question.  *US*
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence.  We don't need to define *us* because I DECLARE that it
should be assumed to include current day humanity and all of our potential
descendants (specifically *including* our Friendly AIs and any/all other
mind children and even hybrids).  If we discover alien intelligences, it
should apply to them as well.



... snip ...

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

... snip ...
  


Mark said that the objects of concern for the AI are any sufficiently 
intelligent goal-driven intelligence[s], but did not say if or how 
different levels of intelligence would be weighted differently by the 
AI. So it doesn't yet seem to imply that killing a certain number of 
dogs to save a human is friendly.


Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


joseph




Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


Ah.  An excellent opportunity for continuation of my previous post rebutting 
my personal conversion to computronium . . . .


First off, in my understanding, the common usage of the word intelligence 
should be regarded as denoting a subset of the attributes promoting successful 
goal-seeking.  Back in the pre-caveman days, physical capabilities were 
generally more effective as goal-seeking attributes.  These days, social 
skills are often arguably equal to or more effective than intelligence as 
goal-seeking attributes.  How do you feel about how we should handle the 
friendliness obligations towards vastly different levels of social skill?


My point here is that you have implicitly identified intelligence as a 
better or best attribute.  I am not willing to agree with that without 
further convincing.  As far as I can tell, someone with a sufficiently large 
number of hard-coded advanced social skill reflexes (to prevent the argument 
that social skills are intelligence) will run rings around your average 
human egghead in terms of getting what they want.  What are that person's 
obligations towards you?  Assuming that you are smarter, should their 
adeptness at getting what they want translate to reduced, similar, or 
greater obligations to you?  Do their obligations change more with variances 
in their social adeptness or in your intelligence?


Or, what about the more obvious question of the 6'7 300 pound guy on a 
deserted tropical island with a wimpy (or even crippled) brainiac?  What are 
their relative friendliness obligations?


I would also argue that the threshold can't be measured solely in terms of 
intelligence (unless you're going to define intelligence solely as 
goal-seeking ability, of course). 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 11:23 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Friendliness must include reasonable protection for sub-peers or else there
  is no enlightened self-interest or attractor-hood to it -- since any
  rational entity will realize that it could *easily* end up as a sub-peer.
  The value of having that protection in Friendliness in case the super-entity
  needs it should be added to my innate value (which it probably dwarfs) when
  considering whether I should be snuffed out.  Friendliness certainly allows
  the involuntary conversion of sub-peers under dire enough circumstances (or
  it wouldn't be enlightened self-interest for the super-peer) but there is
  a *substantial* value barrier to it (to be discussed later).


This is different from what I replied to (comparative advantage, which
J Storrs Hall also assumed), although you did state this point
earlier.

I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/05/2008 05:04 PM,, Mark Waser wrote:
And thus, we get back to a specific answer to jk's second question.  
*US* should be assumed to apply to any sufficiently intelligent 
goal-driven intelligence.  We don't need to define *us* because I 
DECLARE that it should be assumed to include current day humanity and 
all of our potential descendants (specifically *including* our 
Friendly AIs and any/all other mind children and even hybrids).  If 
we discover alien intelligences, it should apply to them as well.
 
I contend that Eli's vision of Friendly AI is specifically wrong 
because it does *NOT* include our Friendly AIs in *us*.  In later 
e-mails, I will show how this intentional, explicit lack of inclusion 
is provably Unfriendly on the part of humans and a direct obstacle to 
achieving a Friendly attractor space.
 
 
TAKE-AWAY:  All goal-driven intelligences have drives that will be the 
tools that will allow us to create a self-correcting Friendly/CEV 
attractor space.
 


I like the expansion of CEV from 'human being' (or humanity) to 
'sufficiently intelligent being' (all intelligent beings). It is obvious 
in retrospect (isn't it always?), but didn't occur to me when reading 
Eliezer's CEV notes. It seems related to the way in which 'humanity' has 
become broader as a term (once applied to certain privileged people 
only) and 'beings deserving of certain rights' has become broader and 
broader (pointless harm of some animals is no longer condoned [in some 
cultures]).


I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


joseph



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:

  My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal
 
  of
  killing the FAI.  Should the FAI show empathy by helping the UFAI achieve 
  its
  goal?
 
 Hopefully this concern was answered by my last post but . . . .
 
 Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
 That would be counter-productive, stupid, and even provably contrary to my 
 definition of Friendliness.
 
 The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
 doing so would promote the Friendly AI's goals -- a rather unlikely 
 occurrence I would think (especially since it might then encourage other 
 unfriendly behavior which would then be contrary to the Friendly AI's goal 
 of Friendliness).
 
 Note though that I could easily see a Friendly AI sacrificing itself to 
 take down the UFAI (though it certainly isn't required to do so).

Would an acceptable response be to reprogram the goals of the UFAI to make it
friendly?

Does the answer to either question change if we substitute human for UFAI?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:
 A Friendly entity does *NOT* snuff
 out (objecting/non-self-sacrificing) sub-peers simply because it has decided
 that it has a better use for the resources that they represent/are.  That 
 way lies death for humanity when/if we become sub-peers (aka Unfriendliness).

Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you immortal
in a simulated world of your choosing, for example, one without suffering,
or where you had a magic genie or super powers or enhanced intelligence, or
maybe a world indistinguishable from the one you are in now?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


Eli is quite clear that AGIs must act in a Friendly fashion but that we can't 
expect humans to do so.  To me, this is foolish, since the attractor you can 
create if humans are Friendly tremendously increases our survival 
probability. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you immortal
in a simulated world of your choosing, for example, one without suffering,
or where you had a magic genie or super powers or enhanced intelligence, or
maybe a world indistinguishable from the one you are in now?


That's easy.  It would *NOT* be Friendly if I have a goal that I not be 
turned into computronium, even given the conditions in your clause (and I 
hereby state that I do have such a goal).


Uplifting a dog, if it results in a happier dog, is probably Friendly 
because the dog doesn't have an explicit or derivable goal to not be 
uplifted.


BUT - Uplifting a human who emphatically wishes not to be uplifted is 
absolutely Unfriendly. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
 
 This is different from what I replied to (comparative advantage, which
 J Storrs Hall also assumed), although you did state this point
 earlier.
 
 I think this one is a package deal fallacy. I can't see how whether
 humans conspire to weed out wild carrots or not will affect decisions
 made by future AGI overlords. ;-)
 

There is a lot more reason to believe that the relation of a human to an AI 
will be like that of a human to larger social units of humans (companies, 
large corporations, nations) than that of a carrot to a human. I have argued 
in peer-reviewed journal articles for the view that advanced AI will 
essentially be like numerous, fast human intelligences rather than something 
of a completely different kind. I have seen ZERO considered argument for the 
opposite point of view. (Lots of unsupported assumptions, generally using 
human/insect for the model.)

Note that if some super-intelligence were possible and optimal, evolution 
could have opted for fewer bigger brains in a dominant race. It didn't -- 
note our brains are actually 10% smaller than Neanderthals'. This isn't proof 
that an optimal system is brains of our size acting in social/economic 
groups, but I'd claim that anyone arguing the opposite has the burden of 
proof (and no supporting evidence I've seen).

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)


Whether humans conspire to weed out wild carrots impacts whether humans are 
classified as Friendly (or, it would if the wild carrots were sentient).


It is in the future AGI overlords' enlightened self-interest to be 
Friendly -- so I'm going to assume that they will be.


If they are Friendly and humans are Friendly, I claim that we are in good 
shape.


If humans are not Friendly, it is entirely irrelevant whether the future AGI 
overlords are Friendly or not -- because there is no protection afforded 
under Friendliness to Unfriendly species and we just end up screwing 
ourselves. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would an acceptable response be to reprogram the goals of the UFAI to make it friendly?


Yes -- but with the minimal possible changes needed to do so (and preferably 
done by enforcing Friendliness and letting the AI work out what to change to 
restore consistency with Friendliness -- i.e. don't mess with any goals that 
you don't absolutely have to, and let the AI itself resolve any choices if at 
all possible).
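
Just to make "minimal possible changes" concrete, here is one rough way to 
formalize it -- an illustrative sketch only, not a formal claim from the paper. 
Write G for the UFAI's current goal system, F for the set of goal systems 
satisfying the Friendliness constraint, and d for some distance on goal systems 
(e.g. the number of goals added, removed, or reweighted):

    G^{*} \;=\; \arg\min_{G' \in \mathcal{F}} \; d(G', G)

Under that reading, "let the AI itself resolve any choices" just says that any 
ties in the minimization are broken by the AI rather than by us.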


Does the answer to either question change if we substitute human for 
UFAI?


The answer does not change for an Unfriendly human.  The answer does change 
for a Friendly human.


Human vs. AI is irrelevant.  Friendly vs. Unfriendly is exceptionally 
relevant.






Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
And more generally, how is this all to be quantified? Does your paper go 
into the math?


All I'm trying to establish and get agreement on at this point are the 
absolutes.  There is no math at this point because it would be premature and 
distracting.


but, a great question . . . .  :-)





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:48 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
  
   This is different from what I replied to (comparative advantage, which
   J Storrs Hall also assumed), although you did state this point
   earlier.
  
   I think this one is a package deal fallacy. I can't see how whether
   humans conspire to weed out wild carrots or not will affect decisions
   made by future AGI overlords. ;-)
  

  There is a lot more reason to believe that the relation of a human to an AI
  will be like that of a human to larger social units of humans (companies,
  large corporations, nations) than that of a carrot to a human. I have argued
  in peer-reviewed journal articles for the view that advanced AI will
  essentially be like numerous, fast human intelligence rather than something
  of a completely different kind. I have seen ZERO considered argument for the
  opposite point of view. (Lots of unsupported assumptions, generally using
  human/insect for the model.)


My argument doesn't need 'something of a completely different kind'.
Society and human works fine as a substitute for human and carrot in my
example, provided society could extract profit from replacing humans
with 'cultivated humans'. But we don't have cultivated humans, and we
are not at the point where existing humans need to be cleared to make
space for new ones.

The only thing that could keep future society from derailing in this
direction is some kind of enforcement installed in minds of future
dominant individuals/societies by us lesser species while we are still
in power.


  Note that if some super-intelligence were possible and optimal, evolution
  could have opted for fewer bigger brains in a dominant race. It didn't --
  note our brains are actually 10% smaller than Neanderthals. This isn't proof
  that an optimal system is brains of our size acting in social/economic
  groups, but I'd claim that anyone arguing the opposite has the burden of
  proof (and no supporting evidence I've seen).


Sorry, I don't understand this point. We are the first species to
successfully launch culture. Culture is much more powerful than
individuals, if only through parallelism and longer lifespan. What
follows from it?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:46 AM, Mark Waser [EMAIL PROTECTED] wrote:
  I think this one is a package deal fallacy. I can't see how whether
   humans conspire to weed out wild carrots or not will affect decisions
   made by future AGI overlords. ;-)

  Whether humans conspire to weed out wild carrots impacts whether humans are
  classified as Friendly (or, it would if the wild carrots were sentient).

Why does it matter what word we/they assign to this situation?


  It is in the future AGI overlords enlightened self-interest to be
  Friendly -- so I'm going to assume that they will be.

It doesn't follow. If you think it's clearly the case, explain the
decision process that leads to choosing 'friendliness'. So far it is
self-referential: if the dominant structure always adopts the same
friendliness when its predecessor was friendly, then it will be safe
when taken over. But if the dominant structure turns unfriendly, it can
clear the ground and redefine friendliness in its own image. Where does
that leave you?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here 
are some thoughts that I think follow from what I understand of your 
position so far. But I may be wildly mistaken. Please correct my mistakes.


There is one unique attractor in state space. Any individual of a 
species that develops in a certain way -- which is to say, finds itself 
in a certain region of the state space -- will thereafter necessarily be 
drawn to the attractor if it acts in its own self interest. This 
attractor is friendliness (F). [The attractor needs to be sufficiently 
distant from present humanity in state space that our general 
unfriendliness and frequent hostility towards F is explainable and 
plausible. And it needs to be sufficiently powerful that coming under 
its influence given time is plausible or perhaps likely.]
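
To make the 'attractor in state space' picture concrete, here is a toy sketch 
(purely my own illustration, with a made-up one-dimensional state space and 
potential; it is not from Mark's paper): any trajectory that starts inside the 
basin around the Friendly state F converges back to F, while trajectories that 
start outside the basin do not.

# Toy illustration only: x = 1.0 plays the role of the Friendly attractor F.
# States evolve by following a made-up drift whose basin of attraction is
# x > 0; starting points inside the basin are drawn back to F.

def drift(x):
    if x > 0.0:
        return -(x - 1.0)   # inside the basin: pulled back toward F = 1.0
    return -1.0             # outside the basin: drifts further away

def simulate(x, steps=1000, dt=0.01):
    for _ in range(steps):
        x += dt * drift(x)
    return x

for start in (0.2, 0.9, 1.8, -0.5):
    print(f"start={start:+.2f}  ->  end={simulate(start):+.2f}")
# 0.2, 0.9 and 1.8 all end near 1.0 (drawn to F); -0.5 never returns.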


Since any sufficiently advanced species will eventually be drawn towards 
F, the CEV of all species is F. Therefore F is not species-specific, and 
has nothing to do with any particular species or the characteristics of 
the first species that develops an AGI (AI). This means that genuine 
conflict between friendly species or between friendly individuals is not 
even possible, so there is no question of an AI needing to arbitrate 
between the conflicting interests of two friendly individuals or groups 
of individuals. Of course, there will still be conflicts between 
non-friendlies, and the AI may arbitrate and/or intervene.


The AI will not be empathetic towards homo sapiens sapiens in 
particular. It will be empathetic towards f-beings (friendly beings in 
the technical sense), whether they exist or not (since the AI might be 
the only being anywhere near the attractor). This means no specific acts 
of the AI towards any species or individuals are ruled out, since it 
might be part of their CEV (which is the CEV of all beings),  even 
though they are not smart enough to realize it.


Since the AI empathizes not with humanity but with f-beings in general, 
it is possible (likely) that some of humanity's most fundamental beliefs 
may be wrong from the perspective of an f-being. Without getting into 
the debate of the merits of virtual-space versus meat-space and 
uploading, etc., it seems to follow that *if* the view that everything 
of importance is preserved (no arguments about this, it is an assumption 
for the sake of this point only) in virtual-space and *if* turning the 
Earth into computronium and uploading humanity and all of Earth's beings 
would be vastly more efficient a use of the planet, *then* the AI should 
do this (perhaps would be morally obligated to do this) -- even if every 
human being pleads for this not to occur. The AI would have judged that 
if we were only smarter, faster, more the kind of people we would like 
to be, etc., we would actually prefer the computronium scenario.


You might argue that from the perspective of F, this would not be 
desirable because ..., but we are so far from F in state space that we 
really don't know which would be preferable from that perspective (even 
if we actually had  detailed knowledge about the computronium scenario 
and its limitations/capabilities to replace our wild speculations). It 
might be the case that property rights, say, would preclude any f-being 
from considering the computronium scenario preferable, but we don't know 
that, and we can't know that with certainty at present. Likewise, our 
analysis of the sub-goals of friendly beings might be incorrect, which 
would make it likely that our analysis of what a friendly being will 
actually believe is also mistaken.


It's become apparent to me in thinking about this that 'friendliness' is 
really not a good term for the attitude of an f-being that we are 
talking about: that of acting solely in the interest of f-beings 
(whether others exist or not) and in consistency with the CEV of all 
sufficiently ... beings. It is really just acting rationally (according 
to a system that we do not understand and may vehemently disagree with).


One thing I am still unclear about is the extent to which the AI is 
morally obligated to intervene to prevent harm. For example, if the AI 
judged that the inner life of a cow is rich enough to deserve protection 
and that human beings can easily survive without beef, would it be 
morally obligated to intervene and prevent the killing of cows for food? 
If it would not be morally obligated, how do you propose to distinguish 
between that case (assuming it makes the judgments it does but isn't 
obligated to intervene) and another case where it makes the same 
judgments and is morally obligated to intervene (assuming it would be 
required to intervene in some cases)?


Thoughts?? Apologies for this rather long and rambling post.

joseph


Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 02:18 PM, Mark Waser wrote:
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the 
[sufficiently intelligent and sufficiently ...] thinking being' 
rather than 'homo sapiens sapiens', and the former would of course 
include SAIs and intelligent alien beings.


Eli is quite clear that AGI's must act in a Friendly fashion but we 
can't expect humans to do so.  To me, this is foolish since the 
attractor you can create if humans are Friendly tremendously increases 
our survival probability.




The point I was making was not so much about who is obligated to act 
friendly but whose CEV is taken into account. You are saying all 
sufficiently ... beings, while Eliezer says humanity. But does Eliezer 
say 'humanity' because that humanity is *us* and we care about the CEV 
of our species (and its sub-species and descendants...) or 'humanity' 
because we are the only sufficiently ... beings that we are presently 
aware of (and so humanity would include any other sufficiently ... being 
that we eventually discover)?


It just occurred to me though that it doesn't really matter whether it 
is the CEV of homo sapiens sapiens or the CEV of some alien race or that 
of AIs, since you are arguing that they are the same, since there's 
nowhere to go beyond a point except towards the attractor.


joseph



Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
 My argument doesn't need 'something of a completely different kind'.
 Society and human is fine as substitute for human and carrot in my
 example, only if society could extract profit from replacing humans
 with 'cultivated humans'. But we don't have cultivated humans, and we
 are not at the point where existing humans need to be cleared to make
 space for new ones.

The scenario takes on an entirely different tone if you replace "weed out some 
wild carrots" with "kill all the old people who are economically 
inefficient." In particular, the former is something one can easily imagine 
people doing without a second thought, while the latter is likely to generate 
considerable opposition in society.
 
 The only thing that could keep future society from derailing in this
 direction is some kind of enforcement installed in minds of future
 dominant individuals/societies by us lesser species while we are still
 in power.

All we need to do is to make sure they have the same ideas of morality and 
ethics that we do -- the same as we would raise any other children. 
 
   Note that if some super-intelligence were possible and optimal, evolution
   could have opted for fewer bigger brains in a dominant race. It didn't --
   note our brains are actually 10% smaller than Neanderthals. This isn't 
proof
   that an optimal system is brains of our size acting in social/economic
   groups, but I'd claim that anyone arguing the opposite has the burden of
   proof (and no supporting evidence I've seen).
 
 
 Sorry, I don't understand this point. We are the first species to
 successfully launch culture. Culture is much more powerful then
 individuals, if only through parallelism and longer lifespan. What
 follows from it?

So how would you design a super-intelligence:
(a) a single giant blob modelled on an individual human mind
(b) a society (complete with culture) with lots of human-level minds and 
high-speed communication?

We know (b) works if you can build the individual human-level mind. Nobody has 
a clue that (a) is even possible. There's lots of evidence that even human 
minds have many interacting parts.

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
   My argument doesn't need 'something of a completely different kind'.
   Society and human is fine as substitute for human and carrot in my
   example, only if society could extract profit from replacing humans
   with 'cultivated humans'. But we don't have cultivated humans, and we
   are not at the point where existing humans need to be cleared to make
   space for new ones.

  The scenario takes on an entirely different tone if you replace weed out 
 some
  wild carrots with kill all the old people who are economically
  inefficient. In particular the former is something one can easily imagine
  people doing without a second thought, while the latter is likely to generate
  considerable opposition in society.


Sufficient enforcement is in place for this case: people steer
governments in a direction where laws won't allow that when they
age, and evolutionary and memetic drives oppose it. It's too costly to
overcome these drives and destroy counterproductive humans. But this
cost is independent of the potential gain from replacement. As the gain
increases, the decision can change; again, we only need sufficiently good
'cultivated humans'. Consider expensive medical treatments, which most
countries won't give away when dying people can't afford them. Life
has a cost, and this cost can be met.


   The only thing that could keep future society from derailing in this
   direction is some kind of enforcement installed in minds of future
   dominant individuals/societies by us lesser species while we are still
   in power.

  All we need to do is to make sure they have the same ideas of morality and
  ethics that we do -- the same as we would raise any other children.


Yes, something like this, but much 'stronger' to meet increased power.

 Note that if some super-intelligence were possible and optimal, 
 evolution
 could have opted for fewer bigger brains in a dominant race. It didn't 
 --
 note our brains are actually 10% smaller than Neanderthals. This isn't
  proof
 that an optimal system is brains of our size acting in social/economic
 groups, but I'd claim that anyone arguing the opposite has the burden of
 proof (and no supporting evidence I've seen).
   
  
   Sorry, I don't understand this point. We are the first species to
   successfully launch culture. Culture is much more powerful then
   individuals, if only through parallelism and longer lifespan. What
   follows from it?

  So how would you design a super-intelligence:
  (a) a single giant blob modelled on an individual human mind
  (b) a society (complete with culture) with lots of human-level minds and
  high-speed communication?

  We know (b) works if you can build the individual human-level mind. Nobody 
 has
  a clue that (a) is even possible. There's lots of evidence that even human
  minds have many interacting parts.


This is a technical question with no good answer; why is it relevant?
There is no essential difference: society in its present form has many
communication bottlenecks, but with better mind-to-mind interfaces the
distinction can blur. Upgrading to more efficient minds in this network
would clearly benefit the collective. :-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg

Hi

Again I stress that I am not saying we should
try to stop development (I do not think we can).
But what is wrong with thinking about the
possible outcomes and trying to be prepared?
To try to affect the development and steer it
in better directions, to take smaller steps toward
wherever we are going? Not for our sake but
for our kids'.

Now I have some questions back to you:

Matt: Why will an AGI be friendly ?

Anthony: Do not sociopaths understand the
rules and the justice system ?

And I also want to point out that the AGI does not
need to mount a zombie attack! It could simply
take control of our financial systems or some
other critical system and hold us hostage indefinitely.
We are very dependent on computer systems, and they
will never be secure, especially not against an AGI.


Matt Mahoney wrote:

--- rg [EMAIL PROTECTED] wrote:

  

Hi

Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the singularity institute was made ?

Note, that I am not saying we should not make them!
Because someone will regardless of what we decide.

I am asking for what should do to prepare for it!
and also how we should affect the creation of AGIs?

Here's some questions, I hope I am not the first to come up with.

* Will they be sane?
* Will they just be smart enough to pretend to be sane?
until...they do not have to anymore.

* Should we let them decide for us ?
  If not should we/can we restrict them ?

* Can they feel any empathy for us ?
   If not, again should we try to manipulate/force them to
   act like they do?

* Our society is very dependent on computer systems
  everywhere and its increasing!!!
   Should we let the AGIs have access to the internet ?
  If not, is it even possible to restrict an AGI that can think super fast,
  is a super genius, and also has a lot of raw computer power?
  That most likely can find many solutions to get internet access...
  (( I can give many crazy examples on how if anyone doubts))

* What should we stupid organics do to prepare ?
   Reduce our dependency?

* Should a scientist that does not have true ethical values be allowed to 
do AGI research ?
  Someone that just pretends to be ethical, someone that just wants the 
glory and the Nobel prize... someone that answers the statement "It is 
insane" with "Oh, it just needs some adjustment, don't worry :)"
   
* What is the military doing ? Should we raise public awareness to gain 
insight?

I guess all can imagine why this is important..

The only answers I have found to what can truly control/restrict an AGI 
smarter than us

are few..

- Another AGI
- Total isolation

So anyone thinking about this?



Yes.  These questions are probably more appropriate for the singularity list,
which is concerned with the safety of AI, as opposed to this list, which is
concerned with just getting it to work.  OTOH, maybe there shouldn't be two
lists after all.

Anyway, I expressed my views on the singularity at
http://www.mattmahoney.net/singularity.html
To answer your question, there isn't much we can do (IMHO).  A singularity
will be invisible to the unaugmented human brain, and yet the world will be
vastly different.

As for your other questions, I believe that AI will be distributed over the
internet because this is where the necessary resources are.  No single person
or group will develop it.  Intelligence will come collectively from many
narrowly specialized experts and an infrastructure that routes natural
language messages to the right ones.  I believe this can be implemented with
current technology and an economy where information has negative value and
network peers compete for resources and reputation in a hostile environment. 
I described one proposal here: http://www.mattmahoney.net/agi.html
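
As a rough sketch of the routing idea just described (a toy illustration only; 
the peer names, specialties, and the crude keyword-overlap score are invented 
and are not taken from the linked proposal):

# Toy sketch: route a natural-language query to whichever "expert" peer
# advertises the specialty that overlaps most with the query's words.
# A real system would also weight peers by reputation and past usefulness.

PEERS = {
    "weather-bot": "weather forecast temperature rain wind",
    "chess-bot":   "chess opening endgame tactics",
    "tax-bot":     "tax income deduction filing refund",
}

def route(query):
    words = set(query.lower().split())
    scores = {peer: len(words & set(topics.split()))
              for peer, topics in PEERS.items()}
    return max(scores, key=scores.get)

print(route("will it rain tomorrow and how strong is the wind"))  # weather-bot
print(route("best chess opening for beginners"))                  # chess-bot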


I believe the system will be friendly (give correct and useful information) as
long as humans remain the primary source of knowledge.  As computing power
gets cheaper and human labor gets more expensive, humans will gradually become
less relevant.  The P2P protocol will evolve from natural language to
something incomprehensible, perhaps in 30 years.  Shortly afterwards, there
will be a singularity.

I do not know how to make this system safe, nor do I believe that the
question even makes sense.


-- Matt Mahoney, [EMAIL PROTECTED]



  




Re: [agi] What should we do to be prepared?

2008-03-05 Thread Anthony George
On Wed, Mar 5, 2008 at 2:46 AM, rg [EMAIL PROTECTED] wrote:



 Anthony: Do not sociopaths understand the
 rules and the justice system ?


Two responses come to mind. Both will be unsatisfactory probably, but oh
well...

1.  There's a difference between understanding rules and the justice system
and understanding transcendentals such as justice or beauty.  Analogy: a
young teen punk with emotional damage may think that his favorite
speed-death-industrial-metal is good music and he can't understand
Beethoven.  But, once someone understands Beethoven, they have no choice but
to like it.  The kid only thinks he understands, the Beethoven fan really
does.  Likewise, a sociopath can be thought of as understanding rules, but
they are like the damaged kid.  Someone who understands Justice will follow
it and will not be a sociopath.  This is admittedly highly speculative on my
part.  I don't even really like Beethoven.  So I'm not speaking from
experience.

2.  Sociopaths are, like all humans, animals so are driven by bodily needs
to acquire resources and power.  I don't see why an AGI would have animal
based drives that would look to us like a desire for power or resources.  If
it did then that would seem to indicate some sort of universal nature to
subjectivity (which would be just fine by me) and if that is so, then its
superior intellect would lead that nature to where the best of humans have
been and beyond, and I think that place is on the other side of our best
literature and philosophy.  Hence, perhaps, Plato's Republic would be
realized as the AGI would be the philosopher king.



Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore

rg wrote:

Hi

Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the singularity institute was made ?

Note, that I am not saying we should not make them!
Because someone will regardless of what we decide.

I am asking for what should do to prepare for it!
and also how we should affect the creation of AGIs?

Here's some questions, I hope I am not the first to come up with.

* Will they be sane?
* Will they just be smart enough to pretend to be sane?
   until...they do not have to anymore.

* Should we let them decide for us ?
 If not should we/can we restrict them ?

* Can they feel any empathy for us ?
  If not, again should we try to manipulate/force them to
  act like they do?

* Our society is very dependent on computer systems
 everywhere and its increasing!!!
  Should we let the AGIs have access to the internet ?
 If not, is it even possible to restrict an AGI that can think super fast,
 is a super genius, and also has a lot of raw computer power?
 That most likely can find many solutions to get internet access...
 (( I can give many crazy examples on how if anyone doubts))

* What should we stupid organics do to prepare ?
  Reduce our dependency?

* Should a scientist that does not have true ethical values be allowed to 
do AGI research ?
 Someone that just pretends to be ethical, someone that just wants the 
glory and the Nobel prize... someone that answers the statement "It is 
insane" with "Oh, it just needs some adjustment, don't worry :)"
  * What is the military doing ? Should we raise public awareness to 
gain insight?

   I guess all can imagine why this is important..

The only answers I have found to what can truly control/restrict an AGI 
smarter than us

are few..

- Another AGI
- Total isolation

So anyone thinking about this?


Hi

You should know that there are many people who indeed are deeply 
concerned about these questions, but opinions differ greatly over what 
the dangers are and how to deal with them.


I have been thinking about these questions for at least the last 20 
years, and I am also an AGI developer and cognitive psychologist.  My 
own opinion is based on a great deal of analysis of the motivations of 
AI systems in general, and AGI systems in particular.


I have two conclusions to offer you.

1)  Almost all of the discussion of this issue is based on assumptions 
about how an AI would behave, and the depressing truth is that most of 
those assumptions are outrageously foolish.  I say this, not to be 
antagonistic, but because the degree of nonsense talked on this subject 
is quite breathtaking, and I feel at a loss to express just how 
ridiculous the situation has become.


It is not just that people make wrong assumptions, it is that people 
make wrong assumptions very, very loudly:  declaring these wrong 
assumptions to be obviously true.  Nobody does this out of personal 
ignorance, it is just that our culture is saturated with crazy ideas on 
the subject.


2)  I believe it is entirely possible to build a completely safe AGI.  I 
also believe that this completely safe AGI would be the simplest one to 
build, so it is likely to be built first.  Lastly, I believe that it 
will not matter a great deal who builds the first AGI (within limits) 
because an AGI will self-stabilize toward a benevolent state.





Richard Loosemore















Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- rg [EMAIL PROTECTED] wrote:
 Matt: Why will an AGI be friendly ?

The question only makes sense if you can define friendliness, which we can't.

Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness.  If by friendly you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge.  This should be true until just before the singularity.

The question is more complicated when the technology to simulate and reprogram
your brain is developed.  With a simple code change, you could be put in an
eternal state of bliss and you wouldn't care about anything else.  Would you
want this?  If so, would an AGI be friendly if it granted or denied your
request?  Alternatively you could be inserted into a simulated fantasy world,
disconnected from reality, where you could have anything you want.  Would this
be friendly?  Or you could alter your memories so that you had a happy
childhood, or you had to overcome great obstacles to achieve your current
position, or you lived the lives of everyone on earth (with real or made-up
histories).  Would this be friendly?

Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
work when brains are altered.  I prefer to investigate the question of what
will we do, not what should we do.  In that context, I don't believe CEV will
be implemented because it predicts what we would want in the future if we knew
more, but people want what they want right now.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- rg [EMAIL PROTECTED] wrote:

Matt: Why will an AGI be friendly ?


The question only makes sense if you can define friendliness, which we can't.


Wrong.

*You* cannot define friendliness for reasons of your own.  Others may 
well be able to do so.


It would be fine to state "I cannot see a way to define friendliness", 
but it is not correct to state this as a general fact.


Friendliness, briefly, is a situation in which the motivations of the 
AGI are locked into a state of empathy with the human race as a whole.


There are possible mechanisms to do this:  those mechanisms are being 
studied right now (by me, at the very least, and possibly by others too).


[For anyone reading this who is not familiar with Matt's style:  he has 
a preference for stating his opinions as if they are established fact, 
when in fact the POV that he sets out is not broadly accepted by the 
community as a whole.  I, in particular, strongly disagree with his 
position on these matters, so I feel obliged to step in when he makes 
these declarations.]




Richard Loosemore



Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness.  If by friendly you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge.  This should be true until just before the singularity.

The question is more complicated when the technology to simulate and reprogram
your brain is developed.  With a simple code change, you could be put in an
eternal state of bliss and you wouldn't care about anything else.  Would you
want this?  If so, would an AGI be friendly if it granted or denied your
request?  Alternatively you could be inserted into a simulated fantasy world,
disconnected from reality, where you could have anything you want.  Would this
be friendly?  Or you could alter your memories so that you had a happy
childhood, or you had to overcame great obstacles to achieve your current
position, or you lived the lives of everyone on earth (with real or made-up
histories).  Would this be friendly?

Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
work when brains are altered.  I prefer to investigate the question of what
will we do, not what should we do.  In that context, I don't believe CEV will
be implemented because it predicts what we would want in the future if we knew
more, but people want what they want right now.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg

ok see my responses below..

Matt Mahoney wrote:

--- rg [EMAIL PROTECTED] wrote:
  

Matt: Why will an AGI be friendly ?



The question only makes sense if you can define friendliness, which we can't.

  

We could say "behavior that is acceptable in our society" then.
In your mail you said you believed they would be friendly,
so I ask: why would they behave in a way acceptable to us?

Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
usefulness.  
If it evolves in a competitive, hostile environment it would only do 
what is best for itself..

How would that coincide with what is best for mankind ? Why would it?

If it is an artificial reward system, it will one day realize it is just 
such a system

designed to evolve it in a particular direction, what happens then?

If by friendly you mean that it does what you want it to do,
then it should be friendly as long as humans are the dominant source of
knowledge.  This should be true until just before the singularity.

The question is more complicated when the technology to simulate and reprogram
your brain is developed.  With a simple code change, you could be put in an
eternal state of bliss and you wouldn't care about anything else.  Would you
want this?  If so, would an AGI be friendly if it granted or denied your
request?  Alternatively you could be inserted into a simulated fantasy world,
disconnected from reality, where you could have anything you want.  Would this
be friendly?  Or you could alter your memories so that you had a happy
childhood, or you had to overcame great obstacles to achieve your current
position, or you lived the lives of everyone on earth (with real or made-up
histories).  Would this be friendly?

  

I simply ask: why would it fit into our society?
At the point when it no longer has to, why would it care to?


Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't seem to
work when brains are altered.  I prefer to investigate the question of what
will we do, not what should we do.  In that context, I don't believe CEV will
be implemented because it predicts what we would want in the future if we knew
more, but people want what they want right now.


-- Matt Mahoney, [EMAIL PROTECTED]



  




Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Friendliness, briefly, is a situation in which the motivations of the 
 AGI are locked into a state of empathy with the human race as a whole.

Which is fine as long as there is a sharp line dividing human from non-human. 
When that line goes away, the millions of soft constraints (which both
Richard's and my design provide for) will no longer give an answer.
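
A toy way to see the worry (an invented sketch; it has nothing to do with the 
internals of either design): if each soft constraint only fires in favor of an 
entity it classifies as human, then for a borderline case (a cyborg, an upload) 
the constraints split roughly evenly and the aggregate stops giving an answer.

# Toy sketch: many soft constraints "vote" on protecting a party.  Each one
# independently judges the party to be human with probability p and votes +1
# if so, -1 otherwise.  For p near 1 or 0 the net vote is decisive; for a
# borderline party (p near 0.5) it hovers around zero -- no answer.

import random
random.seed(0)

NUM_CONSTRAINTS = 100_000

def net_vote(p_judged_human):
    votes = sum(1 if random.random() < p_judged_human else -1
                for _ in range(NUM_CONSTRAINTS))
    return votes / NUM_CONSTRAINTS

for p in (0.99, 0.50, 0.01):
    print(f"P(judged human) = {p:.2f}  ->  net vote {net_vote(p):+.3f}")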


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-05 Thread rg

Hi

You said friendliness was AGIs locked in empathy towards mankind.

How can you make them feel this?
How did we humans get empathy?

Is it not very likely that we have empathy because
it turned out to be an advantage during our evolution,
ensuring the survival of groups of humans?

So if an AGI is supposed to feel true empathy for a
human, must it not evolve in an environment where
feeling empathy for a human is an advantage?

And how can one possibly do this?

Unless you use a virtual environment, simulating
generation after generation of AGIs coexisting
with simulated humans, while making it
an advantage for the AGIs to display empathy
towards said simulated humans...
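
(A toy sketch of that kind of selection pressure, purely for illustration;
the population size, fitness rule, and the single "empathy" number standing
in for a whole motivational system are all made up:)

# Toy sketch: agents carry an "empathy" value in [0, 1].  Each generation,
# agents that display more empathy toward the simulated humans score higher,
# the top half reproduce with small mutations, and average empathy drifts up.

import random
random.seed(1)

POP, GENS = 200, 60

def fitness(empathy):
    return empathy + random.gauss(0.0, 0.2)   # empathic behavior rewarded, noisily

population = [random.random() for _ in range(POP)]

for _ in range(GENS):
    survivors = sorted(population, key=fitness, reverse=True)[: POP // 2]
    population = [min(1.0, max(0.0, p + random.gauss(0.0, 0.05)))
                  for p in survivors for _ in range(2)]   # two mutated offspring each

print(f"average empathy after {GENS} generations: {sum(population) / POP:.2f}")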

Now what happens when you then allow these AGIs
to interact with the real world? When they realize
they have evolved in a virtual world designed to make
them behave in a certain way?










Richard Loosemore wrote:

Matt Mahoney wrote:

--- rg [EMAIL PROTECTED] wrote:

Matt: Why will an AGI be friendly ?


The question only makes sense if you can define friendliness, which 
we can't.


Wrong.

*You* cannot define friendliness for reasons of your own.  Others may 
well be able to do so.


It would be fine to state I cannot see a way to define friendliness 
but it is not correct to state this as a general fact.


Friendliness, briefly, is a situation in which the motivations of the 
AGI are locked into a state of empathy with the human race as a whole.


There are possible mechanisms to do this:  those mechanisms are being 
studied right now (by me, at the very least, and possibly by others too).


[For anyone reading this who is not familiar with Matt's style:  he 
has a preference for stating his opinions as if they are established 
fact, when in fact the POV that he sets out is not broadly accepted by 
the community as a whole.  I, in particular, strongly disagree with 
his position on these matters, so I feel obliged to step in when he 
makes these declarations.]




Richard Loosemore



Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that 
rewards
usefulness.  If by friendly you mean that it does what you want it 
to do,

then it should be friendly as long as humans are the dominant source of
knowledge.  This should be true until just before the singularity.

The question is more complicated when the technology to simulate and 
reprogram
your brain is developed.  With a simple code change, you could be put 
in an
eternal state of bliss and you wouldn't care about anything else.  
Would you

want this?  If so, would an AGI be friendly if it granted or denied your
request?  Alternatively you could be inserted into a simulated 
fantasy world,
disconnected from reality, where you could have anything you want.  
Would this

be friendly?  Or you could alter your memories so that you had a happy
childhood, or you had to overcame great obstacles to achieve your 
current
position, or you lived the lives of everyone on earth (with real or 
made-up

histories).  Would this be friendly?

Proposals like CEV ( http://www.singinst.org/upload/CEV.html ) don't 
seem to
work when brains are altered.  I prefer to investigate the question 
of what
will we do, not what should we do.  In that context, I don't believe 
CEV will
be implemented because it predicts what we would want in the future 
if we knew

more, but people want what they want right now.


-- Matt Mahoney, [EMAIL PROTECTED]











Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  Friendliness, briefly, is a situation in which the motivations of the 
  AGI are locked into a state of empathy with the human race as a whole.
  
  Which is fine as long as there is a sharp line dividing human from
 non-human. 
  When that line goes away, the millions of soft constraints (which both
  Richard's and my design provide for) will no longer give an answer.
 
 This is not an argument I have seen before.
 
 It is not coherent in the context of the proposal I have made on this 
 subject, for the following reason.
 
 Once built, the AGIs would freeze the meaning of human empathy in such 
 a way that there could be no signiicant departure from that standard. 
 By definition that dividing line would make no difference whatsoever.

Because you can't freeze the definition.  At various times in history, human
empathy allowed for slave ownership, sacrificing ones children to the gods,
burning witches, and stoning rape victims to death for adultery.  What part of
today's definition of human empathy will seem barbaric to future generations? 
What is your position on animal rights, abortion, euthanasia, and capital
punishment?

The problem is that even if you think you got it right, the AGI will be faced
with questions you didn't anticipate.  What are the rights of something that
is half human and half machine?  Is it moral to copy a person and destroy the
original?  Does a robot with uploaded human memories have more rights than a
robot with plausible but made-up memories?  How does a diffuse structure of a
million soft constraints answer these questions when all the constraints are
based on the opinions of people who lived in a different era?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-05 Thread Matt Mahoney

--- rg [EMAIL PROTECTED] wrote:

 ok see my responses below..
 
 Matt Mahoney wrote:
  --- rg [EMAIL PROTECTED] wrote:

  Matt: Why will an AGI be friendly ?
  
 
  The question only makes sense if you can define friendliness, which we
 can't.
 

 We could say behavior that is acceptable in our society then..
 In your mail you believed they would be friendly..
 So I ask why would they behave in a way acceptable to us ?

Because peers in a competitive network will compete for resources, and humans
control the resources.  I realize the friendliness will only be temporary.

  Initially I believe that a distributed AGI will do what we want it to do
  because it will evolve in a competitive, hostile environment that rewards
  usefulness.  

 If it evolves in a competitive, hostile environment it would only do 
 what is best for itself..
 How would that coincide with what is best for mankind ? Why would it?
 
 If it is an artificial reward system, it will one day realize it is just 
 such a system
 designed to evolve it in a particular direction, what happens then?

It is not really artificial.  Peers will incrementally improve and the most
successful ones will be the basis for designing copies.  This is a form of
evolution.  Competition for resources is a stable evolutionary goal. 
Resources take the form of storage and bandwidth (i.e. information has
negative value).  Humans will judge the quality of information by rating
peers, which in turn will rate other peers, establishing a competition for
reputation.
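
A minimal sketch of that rating structure (a toy illustration only; the peer
graph, the damping constant, and the PageRank-style iteration are assumptions,
not part of the proposal linked earlier):

# Toy sketch: reputation flows along "rates" edges.  Each peer gets a small
# base amount plus a share of the reputation of whoever rates it, iterated
# to a near fixed point.  Peers rated (directly or indirectly) by the human
# raters end up with most of the reputation.

RATES = {                      # who rates whom (invented graph)
    "human-raters": ["peer-a", "peer-b"],
    "peer-a":       ["peer-b", "peer-c"],
    "peer-b":       ["peer-a"],
    "peer-c":       ["peer-b"],
}

DAMPING = 0.85
rep = {p: 1.0 for p in RATES}

for _ in range(50):            # iterate toward a fixed point
    new = {p: 1.0 - DAMPING for p in RATES}
    for rater, rated in RATES.items():
        share = DAMPING * rep[rater] / len(rated)
        for r in rated:
            new[r] += share
    rep = new

for peer, score in sorted(rep.items(), key=lambda kv: -kv[1]):
    print(f"{peer:14s} reputation {score:.2f}")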

I realize friendliness fails when information becomes too complex for humans
to understand.  Then the competition for computational resources will continue
without human involvement.

I am interested in how the safety of distributed AI can be improved.  I
realize that centralized designs are safer, but I think they are less likely
to emerge first because they are at a disadvantage in availability of
resources, both human and computer.  We need to focus on the greater risk.

I don't think an intelligence explosion can be judged as good or bad,
regardless of the outcome.  It just is.  The real risk to humanity is that our
goals evolved to ensure survival of the species in primitive times.  In a
world where we can have everything we want, those same goals will destroy us.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser

--- rg [EMAIL PROTECTED] wrote:

Matt: Why will an AGI be friendly ?


The question only makes sense if you can define friendliness, which we 
can't.


Why Matt, thank you for such a wonderful opening . . . .  :-)

Friendliness *CAN* be defined.  Furthermore, it is my contention that 
Friendliness can be implemented reasonably easily ASSUMING an AGI platform 
(i.e. it is just as easy to implement a Friendly AGI as it is to implement 
an Unfriendly AGI).


I have a formal paper that I'm just finishing that presents my definition of 
Friendliness and attempts to prove the above contention (and several others) 
but would like to do a preliminary acid test by presenting the core ideas 
via several e-mails that I'll be posting over the next few days (i.e. y'all 
are my lucky guinea pig initial audience  :-).  Assuming that the ideas 
survive the acid test, I'll post the (probably heavily revised :-) formal 
paper a couple of days later.


= = = = = = = = = =
PART 1.

The obvious initial starting point is to explicitly recognize that the point 
of Friendliness is that we wish to prevent the extinction of the *human 
race* and/or to prevent many other horrible nasty things that would make 
*us* unhappy.  After all, this is why we believe Friendliness is so 
important.  Unfortunately, the problem with this starting point is that it 
biases the search for Friendliness in a direction towards a specific type of 
Unfriendliness.  In particular, in a later e-mail, I will show that several 
prominent features of Eliezer Yudkowsky's vision of Friendliness are 
actually distinctly Unfriendly and will directly lead to a system/situation 
that is less safe for humans.


One of the critically important advantages of my proposed definition/vision 
of Friendliness is that it is an attractor in state space.  If a system 
finds itself outside (but necessarily somewhat/reasonably close) to an 
optimally Friendly state -- it will actually DESIRE to reach or return to 
that state (and yes, I *know* that I'm going to have to prove that 
contention).  While Eli's vision of Friendliness is certainly stable (i.e. 
the system won't intentionally become unfriendly), there is no force or 
desire helping it to return to Friendliness if it deviates somehow due to an 
error or outside influence.  I believe that this is a *serious* shortcoming 
in his vision of the extrapolation of the collective volition (and yes, this 
does mean that I believe both that Friendliness is CEV and that I, 
personally, (and shortly, we collectively) can define a stable path to an 
attractor CEV that is provably sufficient and arguably optimal and which 
should hold up under all future evolution.


TAKE-AWAY:  Friendliness is (and needs to be) an attractor CEV

PART 2 will describe how to create an attractor CEV and make it more obvious 
why you want such a thing.



!! Let the flames begin !!:-) 





Re: [agi] What should we do to be prepared?

2008-03-05 Thread j.k.

On 03/05/2008 12:36 PM, Mark Waser wrote:

snip...

The obvious initial starting point is to explicitly recognize that the 
point of Friendliness is that we wish to prevent the extinction of the 
*human race* and/or to prevent many other horrible nasty things that 
would make *us* unhappy.  After all, this is why we believe 
Friendliness is so important.  Unfortunately, the problem with this 
starting point is that it biases the search for Friendliness in a 
direction towards a specific type of Unfriendliness.  In particular, 
in a later e-mail, I will show that several prominent features of 
Eliezer Yudkowsky's vision of Friendliness are actually distinctly 
Unfriendly and will directly lead to a system/situation that is less 
safe for humans.


One of the critically important advantages of my proposed 
definition/vision of Friendliness is that it is an attractor in state 
space.  If a system finds itself outside of (but necessarily 
somewhat/reasonably close to) an optimally Friendly state -- it will 
actually DESIRE to reach or return to that state (and yes, I *know* 
that I'm going to have to prove that contention).  While Eli's vision 
of Friendliness is certainly stable (i.e. the system won't 
intentionally become unfriendly), there is no force or desire 
helping it to return to Friendliness if it deviates somehow due to an 
error or outside influence.  I believe that this is a *serious* 
shortcoming in his vision of the extrapolation of the collective 
volition (and yes, this does mean that I believe both that 
Friendliness is CEV and that I, personally, (and shortly, we 
collectively) can define a stable path to an attractor CEV that is 
provably sufficient and arguably optimal and which should hold up 
under all future evolution).


TAKE-AWAY:  Friendliness is (and needs to be) an attractor CEV

PART 2 will describe how to create an attractor CEV and make it more 
obvious why you want such a thing.



!! Let the flames begin !!:-)


1. How will the AI determine what is in the set of horrible nasty 
thing[s] that would make *us* unhappy? I guess this is related to how 
you will define the attractor precisely.


2. Preventing the extinction of the human race is pretty clear today, 
but *human race* will become increasingly fuzzy and hard to define, as 
will *extinction* when there are more options for existence than 
existence as meat. In the long term, how will the AI decide who is 
*us* in the above quote?


Thanks,

jk



Re: [agi] What should we do to be prepared?

2008-03-05 Thread Richard Loosemore

rg wrote:

Hi

I made some responses below.

Richard Loosemore wrote:

rg wrote:

Hi

Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the singularity institute was made ?

Note, that I am not saying we should not make them!
Because someone will regardless of what we decide.

I am asking for what we should do to prepare for it!
and also how we should affect the creation of AGIs?

Here's some questions, I hope I am not the first to come up with.

* Will they be sane?
* Will they just be smart enough to pretend to be sane?
   until...they do not have to anymore.

* Should we let them decide for us ?
 If not should we/can we restrict them ?

* Can they feel any empathy for us ?
  If not, again should we try to manipulate/force them to
  act like they do?

* Our society is very dependent on computer systems
 everywhere and it's increasing!!!
  Should we let the AGIs have access to the internet ?
 If not, is it even possible to restrict an AGI that can think super fast,
 is a super genius and also has a lot of raw computer power?
 That most likely can find many solutions to get internet access...
 (( I can give many crazy examples on how if anyone doubts))

* What should we stupid organics do to prepare ?
  Reduce our dependency?

* Should a scientist who does not have true ethical values be allowed 
to do AGI research ?
 Someone who just pretends to be ethical, someone who just wants the 
glory and the
 Nobel prize...someone who answers the statement: It is insane 
With: Oh, it just needs
 some adjustment, don't worry :)
  * What is the military doing ? Should we raise public awareness to 
gain insight?

   I guess all can imagine why this is important..

The only answers I have found to what can truly control/restrict an 
AGI smarter than us

are few..

- Another AGI
- Total isolation

So anyone thinking about this?


Hi

You should know that there are many people who indeed are deeply 
concerned about these questions, but opinions differ greatly over what 
the dangers are and how to deal with them.



This sounds good :)
I have been thinking about these questions for at least the last 20 
years, and I am also an AGI developer and cognitive psychologist.  My 
own opinion is based on a great deal of analysis of the motivations of 
AI systems in general, and AGI systems in particular.


I have two conclusions to offer you.

1)  Almost all of the discussion of this issue is based on assumptions 
about how an AI would behave, and the depressing truth is that most of 
those assumptions are outrageously foolish.  I say this, not to be 
antagonistic, but because the degree of nonsense talked on this 
subject is quite breathtaking, and I feel at a loss to express just 
how ridiculous the situation has become.


It is not just that people make wrong assumptions, it is that people 
make wrong assumptions very, very loudly:  declaring these wrong 
assumptions to be obviously true.  Nobody does this out of personal 
ignorance, it is just that our culture is saturated with crazy ideas 
on the subject.



This is probably true.
Therefore I try to make very few assumptions, except one: They will 
eventually be much smarter than us.

(If you want I can justify this, based on scalability.)


Your comments are interesting, because they give me some opportunities 
to illustrate the extreme difficulty of analysing these questions 
without making hidden assumptions.


To begin with your above remark:  it is fair to assume that they will be 
much smarter than us, but the consequences of this are not as obvious as 
they might appear.


For example:  what if the inevitable outcome were that they would give 
us the option of elevating our intelligence up to their level, at will 
(albeit with the proviso that when going up to their level we would 
leave the dangerous human motivations on ice for that time)?  Under 
these circumstances there would not be any meaningful them and us 
but actually one population of beings, some of whom would be 
superintelligent some of the time, but with a flexibility in the level 
of intelligence of any given individual that is completely impossible today.


Second, we have to consider not their intelligence level as such, but 
their motivations.  More on this in a moment.


2)  I believe it is entirely possible to build a completely safe AGI.  
I also believe that this completely safe AGI would be the simplest 
one to build, so it is likely to be built first.  Lastly, I believe 
that it will not matter a great deal who builds the first AGI (within 
limits) because an AGI will self-stabilize toward a benevolent state.



Why is it simplest to make a safe AGI?


A long argument, the shortest version of which is:  you have to give it a 
motivation system of some sort (NOT a conventional goal stack, which 
does not work for full AGI systems) and the motivation system will have 
a set of drives; if you try to make it violent or aggressive, this 
will tend 
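
To make the distinction concrete, here is a toy sketch of a conventional goal 
stack versus a drive-based motivation system (the semantics, the weights, and 
every name below are assumptions for illustration only, not the actual design 
being argued for):

    # Toy contrast (purely illustrative): a conventional goal stack acts
    # only on whatever goal is currently on top, while a drive-based
    # motivation system blends a whole set of drives, so no single goal
    # ever fully captures behavior.

    from dataclasses import dataclass

    @dataclass
    class Drive:
        name: str
        strength: float  # relative weight in [0, 1]

    def goal_stack_choice(stack):
        # Goal stack: pursue the top goal and ignore everything else.
        return stack[-1]

    def motivation_system_choice(drives, satisfaction):
        # Motivation system: serve the drive with the highest weighted
        # urgency (strength times how unsatisfied it currently is).
        def urgency(d):
            return d.strength * (1.0 - satisfaction.get(d.name, 0.0))
        return max(drives, key=urgency).name

    if __name__ == "__main__":
        print(goal_stack_choice(["acquire resources", "answer the user"]))

        drives = [Drive("be helpful", 0.9),
                  Drive("avoid harm", 0.8),
                  Drive("acquire resources", 0.3)]
        satisfaction = {"be helpful": 0.7, "avoid harm": 0.2,
                        "acquire resources": 0.0}
        print(motivation_system_choice(drives, satisfaction))  # "avoid harm"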

Re: [agi] What should we do to be prepared?

2008-03-05 Thread Mark Waser
 1. How will the AI determine what is in the set of horrible nasty
 thing[s] that would make *us* unhappy? I guess this is related to how you
 will define the attractor precisely.

 2. Preventing the extinction of the human race is pretty clear today, but
 *human race* will become increasingly fuzzy and hard to define, as will
 *extinction* when there are more options for existence than existence as
 meat. In the long term, how will the AI decide who is *us* in the above
 quote?

Excellent questions.  The answer to the second question is that the value of
*us* is actually irrelevant.  Thinking that it is relevant is one of the
fatal flaws of Eli's vision.  The method of determination of what is in the
set of horrible nasty thing[s] is (necessarily) coming as an integral part
of the paper.  So, to continue . . . .

Part 2.

Stephen Omohundro presented a paper at the AGI-08 post-conference workshop
on The Basic AI Drives which is available at
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf.
The paper claims to identify a number of “drives” that will appear in
sufficiently advanced AI systems of any design and identifies these drives 
as tendencies which will be present unless explicitly counteracted.

It is my contention that these drives will appear not only in sufficiently 
advanced AI systems, but in *any* goal-directed system of sufficient 
intelligence (most particularly including human beings).

The six drives that Omohundro identifies are:
  1. self-improvement,
  2. rationality,
  3. utility function preservation,
  4. counterfeit utility prevention,
  5. self-protection, and
  6. acquisition and efficient use of resources.
My take on these drives is that they are universally applicable sub-goals 
(and/or goal maintenance operations) for any goal with which they do not 
directly conflict.  Thus, *any* goal-driven intelligence (of sufficient 
intelligence) will display these drives/sub-goals (with the exception, of 
course, of those that directly contradict its goal) as part of its 
goal-seeking behavior.
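
To make that contention concrete, here is a hedged toy sketch (the planner, 
the conflict-filtering rule, and every name below are assumptions for 
illustration, not Omohundro's own analysis): hand the agent any terminal goal 
and the same instrumental sub-goals come back, minus only those that directly 
conflict with it.

    # Toy sketch: the Omohundro drives as instrumental sub-goals that a
    # goal-directed planner adopts for (almost) any terminal goal,
    # dropping only the drives that directly conflict with that goal.

    BASIC_DRIVES = [
        "self-improvement",
        "rationality",
        "utility function preservation",
        "counterfeit utility prevention",
        "self-protection",
        "acquisition and efficient use of resources",
    ]

    def instrumental_subgoals(terminal_goal, conflicting_drives=()):
        # The terminal goal itself does not change the list -- that is the
        # point: only a direct conflict removes a drive.
        return [d for d in BASIC_DRIVES if d not in conflicting_drives]

    if __name__ == "__main__":
        # Two very different terminal goals end up with the same sub-goals.
        print(instrumental_subgoals("prove the Riemann hypothesis"))
        print(instrumental_subgoals("maximize paperclip production"))
        # A goal that conflicts with a drive drops only that one drive.
        print(instrumental_subgoals("shut yourself down after one hour",
                                    conflicting_drives=("self-protection",)))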

And thus, we get back to a specific answer to jk's second question.  *US* 
should be assumed to apply to any sufficiently intelligent goal-driven 
intelligence.  We don't need to define *us* because I DECLARE that it should 
be assumed to include current day humanity and all of our potential descendants 
(specifically *including* our Friendly AIs and any/all other mind children 
and even hybrids).  If we discover alien intelligences, it should apply to them 
as well.

I contend that Eli's vision of Friendly AI is specifically wrong because it 
does *NOT* include our Friendly AIs in *us*.  In later e-mails, I will show 
how this intentional, explicit lack of inclusion is provably Unfriendly on the 
part of humans and a direct obstacle to achieving a Friendly attractor space.

TAKE-AWAY:  All goal-driven intelligences have drives that will be the tools 
that will allow us to create a self-correcting Friendly/CEV attractor space.

PART 3 will answer what is in the set of horrible nasty thing[s].



Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
On Tue, Mar 4, 2008 at 10:53 AM, rg [EMAIL PROTECTED] wrote:

 Hi

 Is anyone discussing what to do in the future when we
 have made AGIs? I thought that was part of why
 the singularity institute was made ?

 Note, that I am not saying we should not make them!
 Because someone will regardless of what we decide.

 I am asking for what we should do to prepare for it!
 and also how we should affect the creation of AGIs?

 Here's some questions, I hope I am not the first to come up with.

 * Will they be sane?
 * Will they just be smart enough to pretend to be sane?
until...they do not have to anymore.

 * Should we let them decide for us ?
  If not should we/can we restrict them ?

 * Can they feel any empathy for us ?
   If not, again should we try to manipulate/force them to
   act like they do?

 * Our society is very dependent on computer systems
  everywhere and it's increasing!!!
   Should we let the AGIs have access to the internet ?
  If not, is it even possible to restrict an AGI that can think super fast,
  is a super genius and also has a lot of raw computer power?
  That most likely can find many solutions to get internet access...
  (( I can give many crazy examples on how if anyone doubts))

 * What should we stupid organics do to prepare ?
   Reduce our dependency?

 * Should a scientist who does not have true ethical values be allowed to
 do AGI research ?
  Someone who just pretends to be ethical, someone who just wants the
 glory and the
  Nobel prize...someone who answers the statement: It is insane With:
 Oh, it just needs
  some adjustment, don't worry :)

 * What is the military doing ? Should we raise public awareness to gain
 insight?
I guess all can imagine why this is important..

 The only answers I have found to what can truly control/restrict an AGI
 smarter than us
 are few..

 - Another AGI
 - Total isolation

 So anyone thinking about this?




 You seem rather concerned about this.  I don't agree that concern is
 warranted, at least not if that concern becomes negative or painful.  Now,
 the magisterium of contemporary scientific culture would stone me with
 condescending thoughts of how silly... a folksy ignoramus for saying or
 even thinking this... but just as hands are for grabbing and eyes are
 for seeing, final cause is not hard at all to intuit.  You can't find it
 with an instrument, but it is right there in front of you if you look for
 it.  Having said that, if you can accept that eyes are for seeing, then it
 is not too hard to intuit that we are, on some level, aside from our
 individual journeys perhaps, for building a medium for a noosphere.  Said
 another way, the next step in the evolution from rock to pure living
 information is, I think, the WWW as AGI, probably with nanobots and direct
 interface with human brains...  Or maybe not.  My point is only that it is
 obvious that we are heading towards something really quickly, with
 unstoppable inertia, and unless some world tyrant crushed all freedoms and
 prevented everyone from doing what they are doing, there is no way that it
 is not going to happen.  So, enjoy, and be an observer to the show.  The
 ending is easy to predict so don't worry (excessively) about the details.









Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mark Waser
UGH!

 My point is only that it is obvious that we are heading towards something 
 really quickly, with unstoppable inertia, and unless some world tyrant 
 crushed all freedoms and prevented everyone from doing what they are doing, 
 there is no way that it is not going to happen.

Most people on this list would agree.

  So, enjoy, and be an observer to the show.  The ending is easy to predict 
 so don't worry (excessively) about the details.  

Anthony, I don't know who you are . . . . but you're certainly *NOT* speaking 
for the community.  You are in a *very* small minority.

Note:  I normally wouldn't bother posting a reply to something like this, but 
this is *SO* contrary to the general consensus of the community that I feel it 
is necessary



Re: [agi] What should we do to be prepared?

2008-03-04 Thread Anthony George
Hi Mark,
   I certainly did not intend to represent myself to the original poster as
speaking for the community.  I'm not even *in* that community, whatever it
is, much less representing it.  The original post seemed a bit overly
concerned to me...  What exactly is one going to do about whatever it is
that the military is doing?  Nothing.  So why worry about it?  Will it be
sane?  Well, if it is global and general, and can read the meaning in
text, which seems likely, then won't it simultaneously know all the rules of
grammar, all the systems of logic, and all the classics of literature and
philosophy?  It seems that it will be much more sane than any human, with a
clear grasp on what constitutes justice.  Or at least as clear a grasp as
any human has had.  Again, these are just the opinionated musings of a
non-computer person.  My apologies to the community if I crossed a velvet
rope without paying the doorman.

Anthony

On Tue, Mar 4, 2008 at 12:39 PM, Mark Waser [EMAIL PROTECTED] wrote:

  UGH!

  My point is only that it is obvious that we are heading towards
 something really quickly, with unstoppable inertia, and unless some world
 tyrant crushed all freedoms and prevented everyone from doing what they are
 doing, there is no way that it is not going to happen.

 Most people on this list would agree.

   So, enjoy, and be an observer to the show.  The ending is easy to
 predict so don't worry (excessively) about the details.

 Anthony, I don't know who you are . . . . but you're certainly *NOT*
 speaking for the community.  You are in a *very* small minority.

 Note:  I normally wouldn't bother posting a reply to something like this,
 but this is *SO* contrary to the general consensus of the community that I
 feel it is necessary


Re: [agi] What should we do to be prepared?

2008-03-04 Thread Vladimir Nesov
On Tue, Mar 4, 2008 at 9:53 PM, rg [EMAIL PROTECTED] wrote:
 Hi

  Is anyone discussing what to do in the future when we
  have made AGIs? I thought that was part of why
  the singularity institute was made ?

  Note, that I am not saying we should not make them!
  Because someone will regardless of what we decide.

  I am asking for what we should do to prepare for it!
  and also how we should affect the creation of AGIs?


How to survive a zombie attack?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-04 Thread Mike Tintner

Vlad: How to survive a zombie attack?

I really like that thought :).  You're right: we should seriously consider 
that possibility. But personally, I don't think we need to be afraid ... I'm 
sure they will be friendly zombies... 





Re: [agi] What should we do to be prepared?

2008-03-04 Thread Matt Mahoney

--- rg [EMAIL PROTECTED] wrote:

 Hi
 
 Is anyone discussing what to do in the future when we
 have made AGIs? I thought that was part of why
 the singularity institute was made ?
 
 Note, that I am not saying we should not make them!
 Because someone will regardless of what we decide.
 
 I am asking for what we should do to prepare for it!
 and also how we should affect the creation of AGIs?
 
 Here's some questions, I hope I am not the first to come up with.
 
 * Will they be sane?
 * Will they just be smart enough to pretend to be sane?
 until...they do not have to anymore.
 
 * Should we let them decide for us ?
   If not should we/can we restrict them ?
 
 * Can they feel any empathy for us ?
If not, again should we try to manipulate/force them to
act like they do?
 
 * Our society is very dependent on computer systems
   everywhere and it's increasing!!!
Should we let the AGIs have access to the internet ?
   If not, is it even possible to restrict an AGI that can think super fast,
   is a super genius and also has a lot of raw computer power?
   That most likely can find many solutions to get internet access...
   (( I can give many crazy examples on how if anyone doubts))
 
 * What should we stupid organics do to prepare ?
Reduce our dependency?
 
 * Should a scientist who does not have true ethical values be allowed to 
 do AGI research ?
   Someone who just pretends to be ethical, someone who just wants the 
 glory and the
   Nobel prize...someone who answers the statement: It is insane With: 
 Oh, it just needs
   some adjustment, don't worry :)

 * What is the military doing ? Should we raise public awareness to gain 
 insight?
 I guess all can imagine why this is important..
 
 The only answers I have found to what can truly control/restrict an AGI 
 smarter than us
 are few..
 
 - Another AGI
 - Total isolation
 
 So anyone thinking about this?

Yes.  These questions are probably more appropriate for the singularity list,
which is concerned with the safety of AI, as opposed to this list, which is
concerned with just getting it to work.  OTOH, maybe there shouldn't be two
lists after all.

Anyway, I expressed my views on the singularity at
http://www.mattmahoney.net/singularity.html
To answer your question, there isn't much we can do (IMHO).  A singularity
will be invisible to the unaugmented human brain, and yet the world will be
vastly different.

As for your other questions, I believe that AI will be distributed over the
internet because this is where the necessary resources are.  No single person
or group will develop it.  Intelligence will come collectively from many
narrowly specialized experts and an infrastructure that routes natural
language messages to the right ones.  I believe this can be implemented with
current technology and an economy where information has negative value and
network peers compete for resources and reputation in a hostile environment. 
I described one proposal here: http://www.mattmahoney.net/agi.html
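
A minimal sketch of the kind of routing infrastructure described here (an 
assumed design only: keyword-overlap scoring weighted by peer reputation; the 
peers, topics, and numbers below are invented for illustration and are not 
taken from the linked proposal):

    # Minimal sketch: route a natural-language message to narrowly
    # specialized peers by scoring keyword overlap weighted by peer
    # reputation.  Assumed design for illustration only.

    import re

    def score(message, peer):
        words = set(re.findall(r"[a-z]+", message.lower()))
        overlap = len(words & peer["topics"])
        return overlap * peer["reputation"]

    def route(message, peers, top_n=1):
        ranked = sorted(peers, key=lambda p: score(message, p), reverse=True)
        return [p["name"] for p in ranked[:top_n]]

    if __name__ == "__main__":
        peers = [
            {"name": "weather-expert",
             "topics": {"rain", "forecast", "storm"}, "reputation": 0.9},
            {"name": "chess-expert",
             "topics": {"chess", "opening", "endgame"}, "reputation": 0.7},
            {"name": "spam-peer",  # claims popular topics, low reputation
             "topics": {"rain", "chess", "forecast"}, "reputation": 0.1},
        ]
        print(route("What is the forecast, will it rain tomorrow?", peers))

Reputation supplies the competitive pressure mentioned above: a peer that 
claims many topics but answers badly ends up with a low reputation and stops 
winning messages.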

I believe the system will be friendly (give correct and useful information) as
long as humans remain the primary source of knowledge.  As computing power
gets cheaper and human labor gets more expensive, humans will gradually become
less relevant.  The P2P protocol will evolve from natural language to
something incomprehensible, perhaps in 30 years.  Shortly afterwards, there
will be a singularity.

I do not know how to make this system safe, nor do I believe that the
question even makes sense.


-- Matt Mahoney, [EMAIL PROTECTED]
