[agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Philip Sutton



Ben,

> Would you rather have one person with an IQ of 200, or 4 people with IQs of 50? Ten computers of intelligence N, or one computer with intelligence 10*N? Sure, the intelligence of the ten computers of intelligence N, all together, will be a little smarter than N, because of cooperative effects. But how much more? You can say that true intelligence can only develop thru socialization with peers -- but why? How do you know that will be true for AI's as well as humans? I'm not so sure.

I don't think we are faced with an either/or situation in the case of AGIs. I think AGIs will be able to create pooled intelligence with an efficiency that far exceeds what humans can accomplish by group-work.


I can see no reason why a community of AGIs wouldn't be able to link brains and pool some of the computing power of the platforms that each one manages - so by agreement with a group of AGIs, one AGI might be given the right to use some of the computer hardware that is normally used by the other AGIs. This, of course, is the idea behind United Devices' grid computing.
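A rough sketch of that pooling arrangement, in the spirit of grid computing: agents grant each other time-limited leases on spare compute by mutual agreement. All class and agent names below are hypothetical illustrations, not any actual AGI or United Devices API:

```python
class ComputePool:
    """Toy ledger of compute-sharing agreements among agents."""

    def __init__(self):
        self.capacity = {}   # agent -> compute units it owns outright
        self.leases = []     # (lender, borrower, units) agreements

    def register(self, agent: str, units: int) -> None:
        self.capacity[agent] = units

    def available(self, agent: str) -> int:
        """Own capacity, minus what is lent out, plus what is borrowed."""
        owned = self.capacity.get(agent, 0)
        lent = sum(u for lender, _, u in self.leases if lender == agent)
        borrowed = sum(u for _, borrower, u in self.leases if borrower == agent)
        return owned - lent + borrowed

    def lease(self, lender: str, borrower: str, units: int) -> bool:
        """Lender grants borrower the right to use some of its hardware."""
        if self.available(lender) < units:
            return False     # no spare capacity to give away
        self.leases.append((lender, borrower, units))
        return True


pool = ComputePool()
pool.register("agi_a", 100)
pool.register("agi_b", 100)
pool.lease("agi_b", "agi_a", 40)   # by agreement, B lends A 40 units
print(pool.available("agi_a"))     # 140
print(pool.available("agi_b"))     # 60
```

The point of the sketch is only that the accounting is trivial: the hard parts (trust, enforcement, scheduling) are exactly the social questions being debated in this thread.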


Plus the efficiency and potency of what can be passed between AGI minds is likely to be significantly greater than what can be passed between human minds.


And as with humans, pooling brains with several different perspectives and specialisations is likely to yield significant gains in intelligence over the simple sum of the parts.
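The gain from pooling diverse perspectives has a simple quantitative analogue in majority voting: independent judges that are each only modestly reliable are much more reliable in aggregate. This is an analogy only, assuming independent errors, not a claim about AGI architectures:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent judges,
    each correct with probability p, is correct (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Three independent judges, each right 70% of the time, vote together:
print(round(majority_vote_accuracy(0.7, 3), 3))  # 0.784
```

So three 70%-accurate judges voting together reach about 78% accuracy, and the gain grows with more (independent) judges; correlated errors, of course, erode it.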


So my guess is that the pursuit of the "safety in numbers" strategy is not likely to result in a very large penalty in lost intelligence.


And even if there was a large intelligence loss due to dividing up the available computing power between multiple AGIs, I'd rather have less AGI intelligence that was much safer, than more intelligence that was much less safe.

Cheers, Philip





RE: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Ben Goertzel




Hi,

I don't see that you've made a convincing argument that a society of AI's is safer than an individual AI. Certainly among human societies, the only analogue we have, society-level violence and madness seem even MORE common than individual-level violence and madness. Often societies can make intrinsically peaceful humans turn violent through social pressure, it seems. Yeah, you can try to influence an AGI society not to go that way, but you can also influence an individual AGI mind not to go that way...

And: A society of AGI's that frequently engages in mind-merges with each other is neither a society nor an individual; it's something in between, a new kind of mind. This is an exciting prospect which has been discussed before... but it doesn't seem to me to solve the Friendliness problem.

-- Ben G



Re: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Kevin



Hello all.

I was wondering what people thought the relative risks were between a super-smart AGI that cannot yet self-modify (change its own source code), and an AGI that can self-modify?

Do we see inherently less risk in case 1? Perhaps some "hard-wired" ethics in case 1 would be much more doable than when an AGI can self-modify.

It seems that creating version 1 is going to be a lot easier than version 2. Maybe we can learn a lot in the creation and maintenance of version 1 that will guide how we go about doing version 2...

Just some random thoughts...

Kevin





Re: [agi] One super-smart AGI vs more, dumber AGIs???

2003-03-03 Thread Kevin



It seems to me that communication and "thought sharing" between various AGI's would be so intertwined that each one would become indistinguishable from the other. So in essence you still have "one" AGI.

Kevin
