[agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Pei Wang
ScienceDaily (Mar. 4, 2008) — iCub, a one metre-high baby robot which
will be used to study how a robot could quickly pick up language
skills, will be available next year.

http://www.sciencedaily.com/releases/2008/02/080229141032.htm



Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Mike Tintner

Pei: ScienceDaily (Mar. 4, 2008) — iCub, a one metre-high baby robot which
will be used to study how a robot could quickly pick up language
skills, will be available next year.

http://www.sciencedaily.com/releases/2008/02/080229141032.htm

Thanks - but it looks like here we go again:

now, within a year, we will have the first humanoid robot capable of 
developing language skills.


It looks like there is no special reason, no special idea here, why this 
project should succeed any more than Luc Steels' project, which sounds 
similar and which we also discussed here a while ago. Is there? 





Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Bob Mottram
On 06/03/2008, Pei Wang [EMAIL PROTECTED] wrote:
 ScienceDaily (Mar. 4, 2008) — iCub, a one metre-high baby robot which
  will be used to study how a robot could quickly pick up language
  skills, will be available next year.

  http://www.sciencedaily.com/releases/2008/02/080229141032.htm


Some thoughts on this.

http://streebgreebling.blogspot.com/2008/03/running-before-you-can-walk.html



Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Bob Mottram
On 06/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
  now, within a year, we will have the first humanoid robot capable of
  developing language skills.


Unless they have something up their sleeve which I don't know about, I
suspect that this is going to be no more successful than any previous
humanoid robot project.  If they just try to go straight for language
learning without having a base of pre-linguistic skills, I expect that
this either won't work or will only produce very trivial results.



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in 'Well, duh' land, 
b) I'm so totally off the mark that I'm not even worth replying to, or c) (I 
hope) being given enough rope to hang myself.  :-)

Since I haven't seen any feedback, I think I'm going to divert to a section 
that I'm not quite sure where it goes, but I think that it might belong here . . 
. .

Interlude 1

Since I'm describing Friendliness as an attractor in state space, I probably 
should describe the state space some and answer why we haven't fallen into the 
attractor already.

The answer to the latter is a combination of the following facts:
  a. Friendliness is only an attractor for a certain class of beings (the 
sufficiently intelligent).
  b. It does take time/effort for the borderline sufficiently intelligent 
(i.e. us) to sense/figure out exactly where the attractor is (much less move 
to it).
  c. We already are heading in the direction of Friendliness (or, 
alternatively, Friendliness is in the direction of our most enlightened 
thinkers).
and, most importantly,
  d. In the vast, VAST majority of cases, Friendliness is *NOT* on the 
shortest path to any single goal.
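
Purely to illustrate the dynamical-systems metaphor (a toy, not a model of 
Friendliness): an attractor is just a state that nearby trajectories converge 
to on their own. The sketch below, with an invented double-well function 
standing in for the state space, shows two attractors each capturing exactly 
the starting states in its own basin.

# Toy illustration of an attractor in a state space (nothing to do with
# Friendliness itself): gradient descent on f(x) = (x^2 - 1)^2 carries every
# starting state into one of the two attracting minima at x = -1 and x = +1.
import numpy as np

def grad_f(x):
    return 4 * x * (x ** 2 - 1)   # derivative of the double-well (x^2 - 1)^2

def flow(x0, steps=200, lr=0.05):
    x = x0
    for _ in range(steps):
        x = x - lr * grad_f(x)    # move "downhill"; the minima attract
    return x

for start in (-2.0, -0.5, 0.3, 1.7):
    print(f"start {start:+.1f} -> settles near {flow(start):+.3f}")
# Starts left of 0 settle at -1, starts right of 0 settle at +1: each minimum
# attracts the states inside its own basin.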



Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Pei Wang
I feel the same.  I posted the news to see if anyone on this list has
more info on that project.

Pei

On Thu, Mar 6, 2008 at 9:39 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 On 06/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
now, within a year, we will have the first humanoid robot capable of
developing language skills.


  Unless they have anything up their sleeve which I don't know about I
  suspect that this is going to be no more successful than any previous
  humanoid robot project.  If they just try to go straight for language
  learning without having a base of pre-linguistic skills I expect that
  this either won't work, or will only produce very trivial results.







Re: [agi] What should we do to be prepared?

2008-03-06 Thread Stephen Reed
Hi Mark,
I value your ideas about 'Friendliness as an attractor in state space'.  Please 
keep it up.
-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Mark Waser [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 6, 2008 9:01:53 AM
Subject: Re: [agi] What should we do to be prepared?




Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Mike Tintner

Bob:

   http://streebgreebling.blogspot.com/2008/03/running-before-you-can-walk.html


The 'running before you can walk' analogy is interesting. It gives rise to the 
seed of an idea. Basically, a vast amount of what is happening and has 
happened in AGI and robotics is ridiculous - a whole series of enterprises 
which are ridiculous because they are all trying to take a short-cut - to 
jump what are usually several steps, if not stages, up the 
evolutionary ladder of intelligence.  Trying to talk in words about the 
world before they can see what they are talking about. Trying to gain 
databases of verbal/symbolic knowledge about the world before they know what 
a word is and have learned to attach words to objects.


We really need something like an Evolutionary Framework of Errors in AI.

(And there is something loosely comparable in science, where we have an 
Evolutionary Psychology which *starts* with human beings (the product, to 
some extent, of science's human exceptionalism) and not a true universal 
Evo-Psych.)


An awful lot of people are going to waste an awful lot of time without one. 





RE: [agi] A HIGHLY RELEVANT AND interesting Google Tech Talk about Neural Nets

2008-03-06 Thread Ed Porter
Durk,

I am indebted to you for bringing this very interesting Hinton lecture to
the attention of this list.  

It is highly relevant to AGI since, if it is to be believed, it provides a
general architecture for learning invariant hierarchical representations
(which are currently in vogue--for good reason) from presumably any type of
data.  It can perform both unsupervised and supervised learning.  Hinton
claims this architecture scales well.  He does not mention how his system
would learn temporal patterns, but presumably it could be expanded to do so,
such as by the use of temporal buffers to store sequences of inputs over
time.  If it could learn temporal patterns, it would seem able to generate
behaviors as well as recognize and generate patterns.
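
For anyone who has not watched the talk, the core trick it describes is
learning a low-dimensional code that can reconstruct the input.  The sketch
below is only a loose illustration of that idea (a single tied-weight
autoencoder in plain NumPy on made-up data; Hinton's actual system stacks
RBMs and then fine-tunes, so treat the layer sizes, learning rate, and data
as arbitrary placeholders):

# Toy tied-weight autoencoder: learn an 8-dimensional code for 64-dimensional
# input by gradient descent on squared reconstruction error.  On real data
# with structure, the code ends up capturing the main directions of variation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))        # placeholder data: 500 vectors of dim 64
n_hidden = 8                          # size of the low-dimensional code (assumed)
W = rng.normal(scale=0.1, size=(64, n_hidden))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(64)
lr = 0.01

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W + b_enc)        # encode: data -> code
    X_hat = H @ W.T + b_dec           # decode: code -> reconstruction (tied weights)
    err = X_hat - X                   # reconstruction error
    dH = err @ W                      # backprop through the decoder
    dZ = dH * H * (1.0 - H)           # sigmoid derivative at the hidden layer
    grad_W = X.T @ dZ + err.T @ H     # tied weights collect both contributions
    W -= lr * grad_W / len(X)
    b_enc -= lr * dZ.mean(axis=0)
    b_dec -= lr * err.mean(axis=0)

print("final reconstruction error:", float((err ** 2).mean()))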

Of course, it would require considerably more to become a full AGI, such as
motivational, reinforcement-learning-like mental behavior, goal-selecting,
goal-pursuing, and novel-pattern-formation features.  But it would seem to
provide a system for automatically learning and generating a significant
percentage of the patterns and behaviors an AGI would need.

I think the AGI community should be open to adopting such a potentially
powerful idea from machine learning, if it is shown to be as powerful as
Hinton says, because, if so, it would add credence to the possibility of AGI
by making the task of building an AGI seem considerably less complex.

Ed Porter

-Original Message-
From: Kingma, D.P. [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 02, 2008 12:08 PM
To: agi@v2.listbox.com
Subject: [agi] interesting Google Tech Talk about Neural Nets

Gentlemen,

For guys interested in vision, neural nets and the like, there's a very
interesting talk by Geoffrey Hinton about unsupervised learning of
low-dimensional codes.  It's been on Youtube since December, but somehow it
escaped my attention for some months.

http://www.youtube.com/watch?v=AyzOUbkUf3M

BTW, the back of Peter Norvig's head makes a guest appearance throughout most
of the video ;)

As an academic I'm quite excited about this technique because it has the
potential of solving non-trivial parts of problems in perception in a clean,
practical, understandable way.

Greets from Utrecht, Netherlands,
Durk



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:
 And thus, we get back to a specific answer to jk's second question.  *US*
 should be assumed to apply to any sufficiently intelligent goal-driven
 intelligence.  We don't need to define *us* because I DECLARE that it
 should be assumed to include current day humanity and all of our potential
 descendants (specifically *including* our Friendly AIs and any/all other
 mind children and even hybrids).  If we discover alien intelligences, it
 should apply to them as well.

Actually, I like this.  I presume that showing empathy to any intelligent,
goal driven agent means acting in a way that helps the agent achieve its
goals, whatever they are.  This aligns nicely with some common views of
ethics, e.g.

- A starving dog is intelligent and has the goal of eating, so the friendly
action is to feed it.

- Giving a dog a flea bath is friendly because dogs are more intelligent than
fleas.

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

- Killing a human to save two humans is friendly because two humans are more
intelligent than one.

My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal of
killing the FAI.  Should the FAI show empathy by helping the UFAI achieve its
goal?

I suppose the question could be answered by deciding which AI is more
intelligent.  But how is this done?  A less intelligent agent will not
recognize the superior intelligence of the other.  For example, a dog will not
recognize the superior intelligence of humans.  Also, we have IQ tests for
children to recognize prodigies, but no similar test for adults.  The question
seems fundamental because a Turing machine cannot distinguish a process of
higher algorithmic complexity than itself from a random process.
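
One standard way to make that last sentence precise (my gloss, using the
usual definitions rather than Matt's wording): the Kolmogorov complexity of a
string x relative to a universal machine U is

    K_U(x) = \min \{\, |p| : U(p) = x \,\}

i.e. the length of the shortest program that makes U print x.  K_U is not
computable, and a reasoner with descriptive resources of size about k cannot,
in general, certify that any particular string has complexity much greater
than k, so such strings are indistinguishable to it from random ones.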

Or should we not worry about the problem because the more intelligent agent is
more likely to win the fight?  My concern is that evolution could favor
unfriendly behavior, just as it has with humans.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] First Humanoid Robot That Will Develop Language May Be Coming Soon

2008-03-06 Thread Bob Mottram
http://eris.liralab.it/iCub/dox/html/index.html

At least this robot is open source (GPL).  A quick survey of the code
doesn't turn up anything out of the ordinary compared to many similar
robots built within the previous decade.





On 06/03/2008, Pei Wang [EMAIL PROTECTED] wrote:
 I feel the same. I post the news to see if anyone in this list has
  more info on that project.

  Pei


  On Thu, Mar 6, 2008 at 9:39 AM, Bob Mottram [EMAIL PROTECTED] wrote:
   On 06/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:
  now, within a year, we will have the first humanoid robot capable of
  developing language skills.
  
  
Unless they have anything up their sleeve which I don't know about I
suspect that this is going to be no more successful than any previous
humanoid robot project.  If they just try to go straight for language
learning without having a base of pre-linguistic skills I expect that
this either won't work, or will only produce very trivial results.
  
  
  





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Argh!  I hate premature e-mailing . . . . :-)

Interlude 1 . . . . continued

One of the first things that we have to realize and fully internalize is that 
we (and by 'we' I continue to mean all sufficiently intelligent 
entities/systems) are emphatically not single-goal systems.  Further, the 
means/path that we use to achieve a particular goal has a very high probability 
of affecting the path/means that we must use to accomplish subsequent goals -- 
as well as the likely success rate of those goals.

Unintelligent systems/entities simply do not recognize this fact -- 
particularly since it probably interferes with their immediate goal-seeking 
behavior.

Insufficiently intelligent systems/entities (or systems/entities under 
sufficient duress) are not going to have the foresight (or the time for 
foresight) to recognize all the implications of this fact and will therefore 
deviate from unseen optimal goal-seeking behavior in favor of faster/more 
obvious (though ultimately less optimal) paths.

Borderline intelligent systems/entities under good conditions are going to try 
to tend in the directions suggested by this fact -- it is, after all, the 
ultimate in goal-seeking behavior -- but finding the optimal path/direction 
becomes increasingly difficult as the horizon expands.

And this is, in fact, the situation that we are all in and debating about.  As 
a collection of multi-goal systems/entities, how do the individual 'we's 
optimize our likelihood of achieving our goals?  Clearly, we do not want some 
Unfriendly AGI coming along and preventing us from achieving our goals by 
wiping us out or perverting our internal goal structure.

= = = = =

Now, I've just attempted to sneak a critical part of the answer right past 
everyone with my plea . . . . so let's go back and review it in slow-motion.  
:-)

Part of our environment is that we have peers.  And peers become resources 
towards our goals when we have common or compatible goals.  Any unimaginably 
intelligent system/entity surrounded by peers is certainly going to work with 
its peers wherever possible.  Society/community is a feature that is 
critically important to Friendliness -- and this shows up in *many* places in 
evolution (if you're intelligent enough and can see beyond the 'red in tooth 
and claw').  Note also that this can (obviously) be easily and profitably 
extended to sub-peers (entities below peer status) as long as the sub-peer 
can be convinced to interact in a manner such that they are a net positive to 
the super-intelligence's goals.

Now, one of the assumptions of the Friendliness debate is that current-day 
humans are going to be sub-peers to the coming mind-children -- 
possibly/probably sub-sub-sub-...-peers.  That leaves us in the situation of 
probably needing to interact in a manner such that we are a net positive to the 
super-intelligence's goals.  Fortunately, it is my contention (which should be 
obvious by the end of the paper) that a Friendly sub-peer is *always* a 
resource and that Friendly behavior towards that sub-peer (our goal) is optimal 
for the super-intelligence.  Thus, if we can get both ourselves and our 
mind-children to a Friendly state -- it should be reassuringly self-reinforcing 
from there on out.

Of course, the big bugaboo to this whole theory is whether it will be too 
onerous for humans to be Friendly.  Eliezer's vision of a Friendly future is 
that humans don't have to be Friendly -- only the AGIs do.  My contention is 
that you don't get the Friendly attractor without all of the parties involved 
being Friendly -- which is why I'm so down on Eliezer's vision.  Under my 
vision of Friendliness, entities that aren't Friendly generally don't receive 
behavior that would be regarded as Friendly.

Now, note a critical feature of my arguments -- I am *NOT* trying to constrain 
the goals of my goal-seeking entity/entities (as in Eliezer's vision of 
Friendliness).  I am trying to prove that *any* sufficiently intelligent 
multi-goal entity will find Friendliness an attractor because it promotes its 
own goal-seeking behavior.  Friendliness, in effect and assuming that it can be 
made coherent and consistent, is an optimal subgoal for all non-conflicting 
goals (and thus, in the aggregate of a large number of varying goals).  So, as 
I said, if we can get both ourselves and our mind-children to a Friendly state 
-- it should be reassuringly self-reinforcing from there on out.

TAKE-AWAY:  Friendliness is an attractor because it IS equivalent to 
enlightened self-interest -- but it only works where all entities involved 
are Friendly.

PART 3 will answer part of "What is Friendly behavior?" by answering "What is 
in the set of horrible nasty thing[s]?".

  - Original Message - 
  From: Mark Waser 
  To: agi@v2.listbox.com 
  Sent: Thursday, March 06, 2008 10:01 AM
  Subject: Re: [agi] What should we do to be prepared?



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Or should we not worry about the problem because the more intelligent agent is
more likely to win the fight?  My concern is that evolution could favor
unfriendly behavior, just as it has with humans.


I don't believe that evolution favors unfriendly behavior.  I believe that 
evolution is tending towards Friendliness.  It just takes time to evolve all 
of the pre-conditions for it to be able to obviously manifest.


TAKE-AWAY:  Friendliness goes with evolution.  Only idiots fight evolution. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote:
 TAKE-AWAY:  Friendliness is an attractor because it IS equivalent 
to enlightened self-interest -- but it only works where all entities 
involved are Friendly.


Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the 
whole business. There is noted in evolutionary game theory a moral ladder 
phenomenon -- in appropriate environments there is an evolutionary pressure 
to be just a little bit nicer than the average ethical level. This can 
raise the average over the long run. Like any evolutionarily stable strategy, 
it is an attractor in the appropriate space. 
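
To make the "moral ladder" image concrete, here is a toy replicator sketch
(my own invented toy with made-up numbers, not the model in Beyond AI): when
interaction is assortative, so that nicer agents tend to be paired with nicer
partners, being a bit nicer than the current population average pays off, and
average niceness ratchets upward over the generations.

# Toy replicator sketch of a "moral ladder": with assortative interaction,
# a bit of extra niceness relative to the current average is rewarded, so the
# population's average niceness drifts upward.  All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
pop = rng.uniform(0.0, 0.2, size=200)      # initial niceness levels (low)
B, C = 3.0, 1.0                            # benefit from partner's niceness, cost of your own

for generation in range(60):
    order = np.argsort(pop)                # assortative pairing: similar meets similar
    partners = pop[order].reshape(-1, 2)
    payoff = np.empty_like(pop)
    payoff[order[0::2]] = B * partners[:, 1] - C * partners[:, 0]
    payoff[order[1::2]] = B * partners[:, 0] - C * partners[:, 1]
    # Reproduce in proportion to payoff (shifted to be positive), plus mutation.
    weights = payoff - payoff.min() + 1e-6
    children = rng.choice(pop, size=pop.size, p=weights / weights.sum())
    pop = np.clip(children + rng.normal(0, 0.02, size=pop.size), 0.0, 1.0)

print("average niceness after 60 generations:", round(float(pop.mean()), 3))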

Your point about sub-peers being resources is known in economics as the 
principle of comparative advantage (p. 343).

I think you're essentially on the right track. Like any children, our mind 
children will tend to follow our example more than our precepts...

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal of
killing the FAI.  Should the FAI show empathy by helping the UFAI achieve its
goal?


Hopefully this concern was answered by my last post but . . . .

Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
That would be counter-productive, stupid, and even provably contrary to my 
definition of Friendliness.


The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
doing so would promote the Friendly AI's goals -- a rather unlikely 
occurrence I would think (especially since it might then encourage other 
unfriendly behavior which would then be contrary to the Friendly AI's goal 
of Friendliness).


Note though that I could easily see a Friendly AI sacrificing itself to 
take down the UFAI (though it certainly isn't required to do so).





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 8:27 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Now, I've just attempted to sneak a critical part of the answer right past
 everyone with my plea . . . . so let's go back and review it in slow-motion.
 :-)

 Part of our environment is that we have peers.  And peers become resources
 towards our goals when we have common or compatible goals.  Any unimaginably
 intelligent system/entity surrounded by peers is certainly going to work
 with it's peers wherever possible.  Society/community is a feature that is
 critically important to Friendliness -- and this shows up in *many* places
 in evolution (if you're intelligent enough and can see beyond the red in
 tooth and claw).  Note also that this can also (obviously) be easily and
 profitably extended to sub-peers (entities below a peer status) as long as
 the sub-peer can be convinced to interact in manner such that they are a net
 positive to the super-intelligences goals.

Mark, I think you base your conclusion on a wrong model. These points
depend on quantitative parameters, which are going to be very
different in the case of AGIs (and also on a high level of rationality of
AGIs, which seems to be a friendly-AI-complete problem, including
kinds of friendliness that don't need to have the properties you list).

When you essentially have two options, cooperate/ignore, it's better
to be friendly, and that is why it's better to buy a thing from
someone who produces it less efficiently than you do, that is, to
cooperate with the sub-peer. Everyone does the thing that *they* do
best.
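
The arithmetic behind that cooperate-with-the-sub-peer claim is ordinary
comparative advantage. The numbers below are made up purely for illustration,
but the pattern is general: even a party that is absolutely better at
everything ends up with more of both goods when the weaker party specializes
in whatever it gives up least to produce.

# Made-up numbers illustrating comparative advantage: the "super" producer is
# better at both goods, yet if the weaker party specializes in the good it
# gives up least to make (gadgets here), and the super producer shifts some
# hours the other way, total output of *both* goods rises.
hours = 100                                    # labour hours each party has

super_rate = {"widgets": 10, "gadgets": 8}     # output per hour: better at both
human_rate = {"widgets": 1,  "gadgets": 4}     # worse at both, relatively best at gadgets

def output(super_widget_hours, human_widget_hours):
    return {
        "widgets": super_widget_hours * super_rate["widgets"]
                   + human_widget_hours * human_rate["widgets"],
        "gadgets": (hours - super_widget_hours) * super_rate["gadgets"]
                   + (hours - human_widget_hours) * human_rate["gadgets"],
    }

print("both split 50/50   :", output(50, 50))   # {'widgets': 550, 'gadgets': 600}
print("human does gadgets :", output(60, 0))    # {'widgets': 600, 'gadgets': 720}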

But when you have a third option, to extract the resources that the
sub-peer is using up and really put them to better use, it's not
stable anymore. The value you provide is much lower than what your
mass in computronium or whatever can do, even allowing for the trouble of
taking over the world. You don't grow wild carrots; you replace them with
cultivated forms. The best a wild carrot can hope for is to be ignored,
when building plans don't need the ground it grows on cleared.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 08:32 AM, Matt Mahoney wrote:

--- Mark Waser [EMAIL PROTECTED] wrote:
  

And thus, we get back to a specific answer to jk's second question.  *US*
should be assumed to apply to any sufficiently intelligent goal-driven
intelligence.  We don't need to define *us* because I DECLARE that it
should be assumed to include current day humanity and all of our potential
descendants (specifically *including* our Friendly AIs and any/all other
mind children and even hybrids).  If we discover alien intelligences, it
should apply to them as well.



... snip ...

- Killing a dog to save a human life is friendly because a human is more
intelligent than a dog.

... snip ...
  


Mark said that the objects of concern for the AI are any sufficiently 
intelligent goal-driven intelligence[s], but did not say if or how 
different levels of intelligence would be weighted differently by the 
AI. So it doesn't yet seem to imply that killing a certain number of 
dogs to save a human is friendly.


Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


joseph




Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Mark, how do you intend to handle the friendliness obligations of the AI 
towards vastly different levels of intelligence (above the threshold, of 
course)?


Ah.  An excellent opportunity for continuation of my previous post rebutting 
my personal conversion to computronium . . . .


First off, my understanding is that intelligence, as the word is commonly 
used, should be regarded as a subset of the attributes promoting successful 
goal-seeking.  Back in the pre-caveman days, physical capabilities were 
generally more effective as goal-seeking attributes.  These days, social 
skills are often arguably as effective as or more effective than intelligence 
as goal-seeking attributes.  How do you feel about how we should handle the 
friendliness obligations towards vastly different levels of social skill?


My point here is that you have implicitly identified intelligence as a 
'better' or 'best' attribute.  I am not willing to agree with that without 
further convincing.  As far as I can tell, someone with a sufficiently large 
number of hard-coded advanced social-skill reflexes (to prevent the argument 
that social skills are intelligence) will run rings around your average 
human egghead in terms of getting what they want.  What are that person's 
obligations towards you?  Assuming that you are smarter, should their 
adeptness at getting what they want translate to reduced, similar, or 
greater obligations to you?  Do their obligations change more with variances 
in their social adeptness or in your intelligence?


Or, what about the more obvious question of the 6'7", 300-pound guy on a 
deserted tropical island with a wimpy (or even crippled) brainiac?  What are 
their relative friendliness obligations?


I would also argue that the threshold can't be measured solely in terms of 
intelligence (unless you're going to define intelligence solely as 
goal-seeking ability, of course). 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Thu, Mar 6, 2008 at 11:23 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Friendliness must include reasonable protection for sub-peers or else there
  is no enlightened self-interest or attractor-hood to it -- since any
  rational entity will realize that it could *easily* end up as a sub-peer.
  The value of having that protection in Friendliness in case the super-entity
  needs it should be added to my innate value (which it probably dwarfs) when
  considering whether I should be snuffed out.  Friendliness certainly allows
  the involuntary conversion of sub-peers under dire enough circumstances (or
  it wouldn't be enlightened self-interest for the super-peer) but there is
  a *substantial* value barrier to it (to be discussed later).


This is different from what I replied to (comparative advantage, which
J Storrs Hall also assumed), although you did state this point
earlier.

I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/05/2008 05:04 PM, Mark Waser wrote:
And thus, we get back to a specific answer to jk's second question.  
*US* should be assumed to apply to any sufficiently intelligent 
goal-driven intelligence.  We don't need to define *us* because I 
DECLARE that it should be assumed to include current day humanity and 
all of our potential descendants (specifically *including* our 
Friendly AIs and any/all other mind children and even hybrids).  If 
we discover alien intelligences, it should apply to them as well.
 
I contend that Eli's vision of Friendly AI is specifically wrong 
because it does *NOT* include our Friendly AIs in *us*.  In later 
e-mails, I will show how this intentional, explicit lack of inclusion 
is provably Unfriendly on the part of humans and a direct obstacle to 
achieving a Friendly attractor space.
 
 
TAKE-AWAY:  All goal-driven intelligences have drives that will be the 
tools that will allow us to create a self-correcting Friendly/CEV 
attractor space.
 


I like the expansion of CEV from 'human being' (or humanity) to 
'sufficiently intelligent being' (all intelligent beings). It is obvious 
in retrospect (isn't it always?), but didn't occur to me when reading 
Eliezer's CEV notes. It seems related to the way in which 'humanity' has 
become broader as a term (once applied to certain privileged people 
only) and 'beings deserving of certain rights' has become broader and 
broader (pointless harm of some animals is no longer condoned [in some 
cultures]).


I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


joseph



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:

  My concern is what happens if a UFAI attacks a FAI.  The UFAI has the goal of
  killing the FAI.  Should the FAI show empathy by helping the UFAI achieve its
  goal?
 
 Hopefully this concern was answered by my last post but . . . .
 
 Being Friendly *certainly* doesn't mean fatally overriding your own goals. 
 That would be counter-productive, stupid, and even provably contrary to my 
 definition of Friendliness.
 
 The *only* reason why a Friendly AI would let/help a UFAI kill it is if 
 doing so would promote the Friendly AI's goals -- a rather unlikely 
 occurrence I would think (especially since it might then encourage other 
 unfriendly behavior which would then be contrary to the Friendly AI's goal 
 of Friendliness).
 
 Note though that I could easily see a Friendly AI sacrificing itself to 
 take down the UFAI (though it certainly isn't required to do so).

Would an acceptable response be to reprogram the goals of the UFAI to make it
friendly?

Does the answer to either question change if we substitute "human" for "UFAI"?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote:
 A Friendly entity does *NOT* snuff
 out (objecting/non-self-sacrificing) sub-peers simply because it has decided
 that it has a better use for the resources that they represent/are.  That 
 way lies death for humanity when/if we become sub-peers (aka Unfriendliness).

Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you immortal
in a simulated world of your choosing, for example, one without suffering,
or where you had a magic genie or super powers or enhanced intelligence, or
maybe a world indistinguishable from the one you are in now?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the [sufficiently 
intelligent and sufficiently ...] thinking being' rather than 'homo 
sapiens sapiens', and the former would of course include SAIs and 
intelligent alien beings.


Eli is quite clear that AGIs must act in a Friendly fashion but we can't 
expect humans to do so.  To me, this is foolish since the attractor you can 
create if humans are Friendly tremendously increases our survival 
probability. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

Would it be Friendly to turn you into computronium if your memories were
preserved and the newfound computational power was used to make you immortal
in a simulated world of your choosing, for example, one without suffering,
or where you had a magic genie or super powers or enhanced intelligence, or
maybe a world indistinguishable from the one you are in now?


That's easy.  It would *NOT* be Friendly if I have a goal that I not be 
turned into computronium, even given your clauses (and I hereby state that I 
do have such a goal).


Uplifting a dog, if it results in a happier dog, is probably Friendly 
because the dog doesn't have an explicit or derivable goal to not be 
uplifted.


BUT - Uplifting a human who emphatically wishes not to be uplifted is 
absolutely Unfriendly. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
 
 This is different from what I replied to (comparative advantage, which
 J Storrs Hall also assumed), although you did state this point
 earlier.
 
 I think this one is a package deal fallacy. I can't see how whether
 humans conspire to weed out wild carrots or not will affect decisions
 made by future AGI overlords. ;-)
 

There is a lot more reason to believe that the relation of a human to an AI 
will be like that of a human to larger social units of humans (companies, 
large corporations, nations) than that of a carrot to a human. I have argued 
in peer-reviewed journal articles for the view that advanced AI will 
essentially be like numerous, fast human intelligences rather than something 
of a completely different kind. I have seen ZERO considered argument for the 
opposite point of view. (Lots of unsupported assumptions, generally using 
human/insect for the model.)

Note that if some super-intelligence were possible and optimal, evolution 
could have opted for fewer, bigger brains in a dominant race. It didn't -- 
note our brains are actually 10% smaller than Neanderthals'. This isn't proof 
that an optimal system is brains of our size acting in social/economic 
groups, but I'd claim that anyone arguing the opposite has the burden of 
proof (and no supporting evidence I've seen).

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser

I think this one is a package deal fallacy. I can't see how whether
humans conspire to weed out wild carrots or not will affect decisions
made by future AGI overlords. ;-)


Whether humans conspire to weed out wild carrots impacts whether humans are 
classified as Friendly (or, it would if the wild carrots were sentient).


It is in the future AGI overlords' enlightened self-interest to be 
Friendly -- so I'm going to assume that they will be.


If they are Friendly and humans are Friendly, I claim that we are in good 
shape.


If humans are not Friendly, it is entirely irrelevant whether the future AGI 
overlords are Friendly or not -- because there is no protection afforded 
under Friendliness to Unfriendly species and we just end up screwing 
ourselves. 





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
Would an acceptable response be to reprogram the goals of the UFAI to make it
friendly?


Yes -- but with the minimal possible changes to do so (and preferably done 
by enforcing Friendliness and allowing the AI to resolve what to change to 
resolve integrity with Friendliness -- i.e. don't mess with any goals that 
you don't absolutely have to and let the AI itself resolve any choices if at 
all possible).


Does the answer to either question change if we substitute "human" for 
"UFAI"?


The answer does not change for an Unfriendly human.  The answer does change 
for a Friendly human.


Human vs. AI is irrelevant.  Friendly vs. Unfriendly is exceptionally 
relevant.






Re: [agi] What should we do to be prepared?

2008-03-06 Thread Mark Waser
And more generally, how is this all to be quantified? Does your paper go 
into the math?


All I'm trying to establish and get agreement on at this point are the 
absolutes.  There is no math at this point because it would be premature and 
distracting.


but, a great question . . . .  :-)





Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:48 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote:
  
   This is different from what I replied to (comparative advantage, which
   J Storrs Hall also assumed), although you did state this point
   earlier.
  
   I think this one is a package deal fallacy. I can't see how whether
   humans conspire to weed out wild carrots or not will affect decisions
   made by future AGI overlords. ;-)
  

  There is a lot more reason to believe that the relation of a human to an AI
  will be like that of a human to larger social units of humans (companies,
  large corporations, nations) than that of a carrot to a human. I have argued
  in peer-reviewed journal articles for the view that advanced AI will
  essentially be like numerous, fast human intelligence rather than something
  of a completely different kind. I have seen ZERO considered argument for the
  opposite point of view. (Lots of unsupported assumptions, generally using
  human/insect for the model.)


My argument doesn't need 'something of a completely different kind'.
'Society and human' is fine as a substitute for 'human and carrot' in my
example, but only if society could extract a profit from replacing humans
with 'cultivated humans'. But we don't have cultivated humans, and we
are not at the point where existing humans need to be cleared to make
space for new ones.

The only thing that could keep future society from derailing in this
direction is some kind of enforcement installed in minds of future
dominant individuals/societies by us lesser species while we are still
in power.


  Note that if some super-intelligence were possible and optimal, evolution
  could have opted for fewer bigger brains in a dominant race. It didn't --
  note our brains are actually 10% smaller than Neanderthals. This isn't proof
  that an optimal system is brains of our size acting in social/economic
  groups, but I'd claim that anyone arguing the opposite has the burden of
  proof (and no supporting evidence I've seen).


Sorry, I don't understand this point. We are the first species to
successfully launch culture. Culture is much more powerful then
individuals, if only through parallelism and longer lifespan. What
follows from it?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 1:46 AM, Mark Waser [EMAIL PROTECTED] wrote:
  I think this one is a package deal fallacy. I can't see how whether
   humans conspire to weed out wild carrots or not will affect decisions
   made by future AGI overlords. ;-)

  Whether humans conspire to weed out wild carrots impacts whether humans are
  classified as Friendly (or, it would if the wild carrots were sentient).

Why does it matter what word we/they assign to this situation?


  It is in the future AGI overlords enlightened self-interest to be
  Friendly -- so I'm going to assume that they will be.

It doesn't follow. If you think it's clearly the case, explain the
decision process that leads to choosing 'friendliness'. So far it is
self-referential: if a dominant structure always adopts the same
friendliness when its predecessor was friendly, then it will be safe
when taken over. But if a dominant structure turns unfriendly, it can
clear the ground and redefine friendliness in its own image. What does
that leave you?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.
At the risk of oversimplifying or misinterpreting your position, here 
are some thoughts that I think follow from what I understand of your 
position so far. But I may be wildly mistaken. Please correct my mistakes.


There is one unique attractor in state space. Any individual of a 
species that develops in a certain way -- which is to say, finds itself 
in a certain region of the state space -- will thereafter necessarily be 
drawn to the attractor if it acts in its own self interest. This 
attractor is friendliness (F). [The attractor needs to be sufficiently 
distant from present humanity in state space that our general 
unfriendliness and frequent hostility towards F is explainable and 
plausible. And it needs to be sufficiently powerful that coming under 
its influence given time is plausible or perhaps likely.]


Since any sufficiently advanced species will eventually be drawn towards 
F, the CEV of all species is F. Therefore F is not species-specific, and 
has nothing to do with any particular species or the characteristics of 
the first species that develops an AGI (AI). This means that genuine 
conflict between friendly species or between friendly individuals is not 
even possible, so there is no question of an AI needing to arbitrate 
between the conflicting interests of two friendly individuals or groups 
of individuals. Of course, there will still be conflicts between 
non-friendlies, and the AI may arbitrate and/or intervene.


The AI will not be empathetic towards homo sapiens sapiens in 
particular. It will be empathetic towards f-beings (friendly beings in 
the technical sense), whether they exist or not (since the AI might be 
the only being anywhere near the attractor). This means no specific acts 
of the AI towards any species or individuals are ruled out, since it 
might be part of their CEV (which is the CEV of all beings),  even 
though they are not smart enough to realize it.


Since the AI empathizes not with humanity but with f-beings in general, 
it is possible (likely) that some of humanity's most fundamental beliefs 
may be wrong from the perspective of an f-being. Without getting into 
the debate of the merits of virtual-space versus meat-space and 
uploading, etc., it seems to follow that *if* the view that everything 
of importance is preserved (no arguments about this, it is an assumption 
for the sake of this point only) in virtual-space and *if* turning the 
Earth into computronium and uploading humanity and all of Earth's beings 
would be vastly more efficient a use of the planet, *then* the AI should 
do this (perhaps would be morally obligated to do this) -- even if every 
human being pleads for this not to occur. The AI would have judged that 
if we were only smarter, faster, more the kind of people we would like 
to be, etc., we would actually prefer the computronium scenario.


You might argue that from the perspective of F, this would not be 
desirable because ..., but we are so far from F in state space that we 
really don't know which would be preferable from that perspective (even 
if we actually had  detailed knowledge about the computronium scenario 
and its limitations/capabilities to replace our wild speculations). It 
might be the case that property rights, say, would preclude any f-being 
from considering the computronium scenario preferable, but we don't know 
that, and we can't know that with certainty at present. Likewise, our 
analysis of the sub-goals of friendly beings might be incorrect, which 
would make it likely that our analysis of what a friendly being will 
actually believe is mistaken.


It's become apparent to me in thinking about this that 'friendliness' is 
really not a good term for the attitude of an f-being that we are 
talking about: that of acting solely in the interest of f-beings 
(whether others exist or not) and in consistency with the CEV of all 
sufficiently ... beings. It is really just acting rationally (according 
to a system that we do not understand and may vehemently disagree with).


One thing I am still unclear about is the extent to which the AI is 
morally obligated to intervene to prevent harm. For example, if the AI 
judged that the inner life of a cow is rich enough to deserve protection 
and that human beings can easily survive without beef, would it be 
morally obligated to intervene and prevent the killing of cows for food? 
If it would not be morally obligated, how do you propose to distinguish 
between that case (assuming it makes the judgments it does but isn't 
obligated to intervene) and another case where it makes the same 
judgments and is morally obligated to intervene (assuming it would be 
required to intervene in some cases).


Thoughts?? Apologies for this rather long and rambling post.

joseph


Re: [agi] What should we do to be prepared?

2008-03-06 Thread j.k.

On 03/06/2008 02:18 PM, Mark Waser wrote:
I wonder if this is a substantive difference with Eliezer's position 
though, since one might argue that 'humanity' means 'the 
[sufficiently intelligent and sufficiently ...] thinking being' 
rather than 'homo sapiens sapiens', and the former would of course 
include SAIs and intelligent alien beings.


Eli is quite clear that AGI's must act in a Friendly fashion but we 
can't expect humans to do so.  To me, this is foolish since the 
attractor you can create if humans are Friendly tremendously increases 
our survival probability.




The point I was making was not so much about who is obligated to act 
friendly but whose CEV is taken into account. You are saying all 
sufficiently ... beings, while Eliezer says humanity. But does Eliezer 
say 'humanity' because that humanity is *us* and we care about the CEV 
of our species (and its sub-species and descendants...) or 'humanity' 
because we are the only sufficiently ... beings that we are presently 
aware of (and so humanity would include any other sufficiently ... being 
that we eventually discover)?


It just occurred to me though that it doesn't really matter whether it 
is the CEV of homo sapiens sapiens or the CEV of some alien race or that 
of AIs, since you are arguing that they are the same, since there's 
nowhere to go beyond a point except towards the attractor.


joseph



Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
 My argument doesn't need 'something of a completely different kind'.
 Society and human is fine as substitute for human and carrot in my
 example, only if society could extract profit from replacing humans
 with 'cultivated humans'. But we don't have cultivated humans, and we
 are not at the point where existing humans need to be cleared to make
 space for new ones.

The scenario takes on an entirely different tone if you replace 'weed out some 
wild carrots' with 'kill all the old people who are economically 
inefficient'. In particular, the former is something one can easily imagine 
people doing without a second thought, while the latter is likely to generate 
considerable opposition in society.
 
 The only thing that could keep future society from derailing in this
 direction is some kind of enforcement installed in minds of future
 dominant individuals/societies by us lesser species while we are still
 in power.

All we need to do is to make sure they have the same ideas of morality and 
ethics that we do -- the same as we would raise any other children. 
 
   Note that if some super-intelligence were possible and optimal, evolution
   could have opted for fewer bigger brains in a dominant race. It didn't --
   note our brains are actually 10% smaller than Neanderthals. This isn't 
proof
   that an optimal system is brains of our size acting in social/economic
   groups, but I'd claim that anyone arguing the opposite has the burden of
   proof (and no supporting evidence I've seen).
 
 
 Sorry, I don't understand this point. We are the first species to
 successfully launch culture. Culture is much more powerful then
 individuals, if only through parallelism and longer lifespan. What
 follows from it?

So how would you design a super-intelligence:
(a) a single giant blob modelled on an individual human mind, or
(b) a society (complete with culture) with lots of human-level minds and 
high-speed communication?

We know (b) works if you can build the individual human-level mind. Nobody has 
a clue that (a) is even possible. There's lots of evidence that even human 
minds have many interacting parts.

Josh



Re: [agi] What should we do to be prepared?

2008-03-06 Thread Vladimir Nesov
On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote:
   My argument doesn't need 'something of a completely different kind'.
   Society and human is fine as substitute for human and carrot in my
   example, only if society could extract profit from replacing humans
   with 'cultivated humans'. But we don't have cultivated humans, and we
   are not at the point where existing humans need to be cleared to make
   space for new ones.

  The scenario takes on an entirely different tone if you replace weed out 
 some
  wild carrots with kill all the old people who are economically
  inefficient. In particular the former is something one can easily imagine
  people doing without a second thought, while the latter is likely to generate
  considerable opposition in society.


Sufficient enforcement is in place for this case: people steer
governments in the direction where laws won't allow that when they
age, and evolutionary and memetic drives oppose it. It's too costly to
overcome these drives and destroy counterproductive humans. But this
cost is independent of the potential gain from replacement. As the gain
increases, the decision can change; again, we only need sufficiently good
'cultivated humans'. Consider expensive medical treatments, which most
countries won't give away when dying people can't afford them. Life
has a cost, and this cost can be met.


   The only thing that could keep future society from derailing in this
   direction is some kind of enforcement installed in minds of future
   dominant individuals/societies by us lesser species while we are still
   in power.

  All we need to do is to make sure they have the same ideas of morality and
  ethics that we do -- the same as we would raise any other children.


Yes, something like this, but much 'stronger' to meet increased power.

 Note that if some super-intelligence were possible and optimal, 
 evolution
 could have opted for fewer bigger brains in a dominant race. It didn't 
 --
 note our brains are actually 10% smaller than Neanderthals. This isn't
  proof
 that an optimal system is brains of our size acting in social/economic
 groups, but I'd claim that anyone arguing the opposite has the burden of
 proof (and no supporting evidence I've seen).
   
  
   Sorry, I don't understand this point. We are the first species to
   successfully launch culture. Culture is much more powerful then
   individuals, if only through parallelism and longer lifespan. What
   follows from it?

  So how would you design a super-intelligence:
  (a) a single giant blob modelled on an individual human mind
  (b) a society (complete with culture) with lots of human-level minds and
  high-speed communication?

  We know (b) works if you can build the individual human-level mind. Nobody 
 has
  a clue that (a) is even possible. There's lots of evidence that even human
  minds have many interacting parts.


This is a technical question with no good answer; why is it relevant?
There is no essential difference: society in its present form has many
communication bottlenecks, but with better mind-mind interfaces the
distinction can blur. Upgrading to more efficient minds in this network
would clearly benefit the collective. :-)

-- 
Vladimir Nesov
[EMAIL PROTECTED]
