Re: [agi] Questions for an AGI

2010-06-28 Thread Ian Parker
On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

 I don't like the idea of enhancing human intelligence before the
 singularity.


What do you class as enhancement? Suppose I am in the Middle East and I am
wearing glasses which can give a 3D data screen. Somebody speaks to me. Up
on my glasses are the possible translations. Neither I nor the computer
system understands Arabic, yet together we can achieve comprehension. (PS I
in fact did just that with
http://docs.google.com/Doc?docid=0AQIg8QuzTONQZGZxenF2NnNfNzY4ZDRxcnJ0aHIhl=en_GB
)

 I think crime has to be made impossible even for enhanced humans first.


If our enhancement were Internet-based it could be turned off if we were
about to commit a crime. You really should have said unenhanced humans. If
my conversation (see above) had been about jihad and terrorism, AI would
provide a route for the security services. I think you are muddled here.


 I think life is too adept at abusing opportunities if possible. I would
 like to see the singularity-enabling AI be as little like a reproduction
 machine as possible. Does it really need to be a general AI to cause a
 singularity?


The idea of the Singularity is that AGI enhances itself. Hence a singularity
*without* AGI is a contradiction in terms. I did not quite get your syntax on
reproduction, but it is perfectly true that you do not need a singularity
for a von Neumann machine. The singularity is a long way off, yet Obama is
going to leave Afghanistan in 2014, leaving robots behind.


 Can it not just stick to scientific data and quantify human uncertainty?
  It seems like it would be less likely to ever care about killing all humans
 so it can rule the galaxy or that it's an omnipotent servant.


AGI will not have evolved. It will have been created. It will not, in any case,
have the desires we might ascribe to it. Scientific data would be a high
priority, but you could *never* be exclusively scientific. If human
uncertainty were quantified, that would give it, or whoever wielded it,
immense power.

There is one other eventuality to consider - a virus. If an AGI system were
truly thinking and introspective, at least to the extent that it understood
what it was doing, a virus would be impossible. Software would in fact be
self-repairing.

GT (Google Translate) makes a lot of very silly translations. Could I say that
no one in Mossad, nor any dictator, ever told me how to do my French homework.
A trivial and naive remark, yet GT is open to all kinds of hacking. True AGI
would not be, by definition. This does in fact serve to indicate how far off we are.


  - Ian Parker



 On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 Agreed on this dangerous thought!

 On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about
 instead of hoping that AGI won't destroy the world, you study the problem
 and come up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the
 problem of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to the
 artificial entity as a way of being alive and having an experience of the
 world.




 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com

Re: [agi] Questions for an AGI

2010-06-28 Thread Steve Richfield
Ian, Travis, etc.

On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:


 On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

 I think crime has to be made impossible even for enhanced humans first.


 If our enhancement were Internet-based it could be turned off if we were
 about to commit a crime. You really should have said unenhanced humans. If
 my conversation (see above) had been about jihad and terrorism, AI would
 provide a route for the security services. I think you are muddled here.


Anyone who could suggest making crime impossible, or anyone who could respond
to such nonsense other than by pointing out that it is nonsense, is SO far
removed from reality that it is hard to imagine that they function in
society. Here are some points for those who don't see this as obvious:
1.  Much/most crime is committed by people who see little/no other
rational choice.
2.  Crime is a state of mind. Almost any act would be reasonable under SOME
bizarre circumstances perceived by the perpetrator. It isn't the actions,
but rather the THOUGHT that makes it a crime.
3.  Courts are there to decide complex issues like necessity (e.g. self
defense or defense of others), understanding (e.g. mental competence), and
the myriad other issues needed to establish a particular act as a crime.
4.  Crimes are defined through a legislative process, by the best
government that money can buy. This would simply consign everything (and
everyone) to the wealthy people who have bought the government. Prepare for
slavery.
5.  Our world is already so over-constrained that it is IMPOSSIBLE to live
without violating any laws.

Is the proposal to make impossible anything that could conceivably be
construed as a crime, or to make impossible anything that couldn't be
construed as anything but a crime? Even these two extremes would have
significant implementation problems.

Anyway, I am sending you two back to kindergarten.

Steve





Re: [agi] Questions for an AGI

2010-06-28 Thread Erdal Bektaş
What is the equation, and the solution method, that provides a solution to every
physical problem?

or

Give me the equation of god, and its solution. (lol)

On Mon, Jun 28, 2010 at 6:02 PM, David Jones davidher...@gmail.com wrote:

 Crime has its purpose just like many other unpleasant behaviors. When
 government is reasonably good, crime causes problems. But when government
 is bad, crime is good. Given the chance, I might have tried to assassinate
 Hitler. Yet assassination is a crime.

 On Mon, Jun 28, 2010 at 10:51 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ian, Travis, etc.

 On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:


 On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

 I think crime has to be made impossible even for enhanced humans
 first.


 If our enhancement were Internet-based it could be turned off if we were
 about to commit a crime. You really should have said unenhanced humans. If
 my conversation (see above) had been about jihad and terrorism, AI would
 provide a route for the security services. I think you are muddled here.


 Anyone who could suggest making crime impossible, or anyone who could respond
 to such nonsense other than by pointing out that it is nonsense, is SO far
 removed from reality that it is hard to imagine that they function in
 society. Here are some points for those who don't see this as obvious:
 1.  Much/most crime is committed by people who see little/no other
 rational choice.
 2.  Crime is a state of mind. Almost any act would be reasonable under
 SOME bizarre circumstances perceived by the perpetrator. It isn't the
 actions, but rather the THOUGHT that makes it a crime.
 3.  Courts are there to decide complex issues like necessity (e.g. self
 defense or defense of others), understanding (e.g. mental competence), and
 the myriad other issues needed to establish a particular act as a crime.
 4.  Crimes are defined through a legislative process, by the best
 government that money can buy. This would simply consign everything (and
 everyone) to the wealthy people who have bought the government. Prepare for
 slavery.
 5.  Our world is already so over-constrained that it is IMPOSSIBLE to live
 without violating any laws.

 Is the proposal to make impossible anything that could conceivably be
 construed as a crime, or to make impossible anything that couldn't be
 construed as anything but a crime? Even these two extremes would have
 significant implementation problems.

 Anyway, I am sending you two back to kindergarten.

 Steve





-- 
Physics, protect yourself from metaphysics!





Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
What do you class as enhancement?

I'm not talking about shoes making us run faster; I'm talking about direct
brain interfacing that significantly increases a person's intelligence and
would allow them to outsmart us all for their own good.

The idea of the Singularity is that AGI enhances itself. Hence a
singularity *without* AGI is a contradiction in terms.

Does it really need to be able to figure out anything though? Can it not
just be more narrow in focus? Could it just understand itself without being
able to, say, figure out how to navigate an RC through an obstacle course?
Could this AI still start a self-improvement cycle?

I did not quite get your syntax on reproduction

I don't trust reproduction machines (lifeforms) because even if they are
social animals, it's only because it's best for themselves. So "don't model it
after a brain" is basically all I'm saying.

On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:



 On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

 I don't like the idea of enhancing human intelligence before the
 singularity.


 What do you class as enhancement? Suppose I am in the Middle East and I am
 wearing glasses which can give a 3D data screen. Somebody speaks to me. Up
 on my glasses are the possible translations. Neither I nor the computer
 system understands Arabic, yet together we can achieve comprehension. (PS I
 in fact did just that with
 http://docs.google.com/Doc?docid=0AQIg8QuzTONQZGZxenF2NnNfNzY4ZDRxcnJ0aHIhl=en_GB
 )

 I think crime has to be made impossible even for enhanced humans first.


 If our enhancement were Internet-based it could be turned off if we were
 about to commit a crime. You really should have said unenhanced humans. If
 my conversation (see above) had been about jihad and terrorism, AI would
 provide a route for the security services. I think you are muddled here.


 I think life is too adept at abusing opportunities if possible. I would
 like to see the singularity-enabling AI be as little like a reproduction
 machine as possible. Does it really need to be a general AI to cause a
 singularity?


 The idea of the Singularity is that AGI enhances itself. Hence a
 singularity *without* AGI is a contradiction in terms. I did not quite get
 your syntax on reproduction, but it is perfectly true that you do not need a
 singularity for a von Neumann machine. The singularity is a long way off, yet
 Obama is going to leave Afghanistan in 2014, leaving robots behind.


 Can it not just stick to scientific data and quantify human uncertainty?
  It seems like it would be less likely to ever care about killing all humans
 so it can rule the galaxy or that it's an omnipotent servant.


 AGI will not have evolved. It will have been created. It will not, in any case,
 have the desires we might ascribe to it. Scientific data would be a high
 priority, but you could *never* be exclusively scientific. If human
 uncertainty were quantified, that would give it, or whoever wielded it,
 immense power.

 There is one other eventuality to consider - a virus. If an AGI system were
 truly thinking and introspective, at least to the extent that it understood
 what it was doing, a virus would be impossible. Software would in fact be
 self-repairing.

 GT (Google Translate) makes a lot of very silly translations. Could I say that
 no one in Mossad, nor any dictator, ever told me how to do my French homework.
 A trivial and naive remark, yet GT is open to all kinds of hacking. True AGI
 would not be, by definition. This does in fact serve to indicate how far off we are.


   - Ian Parker



 On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about
 instead of hoping that AGI won't destroy the world, you study the problem
 and come up with a safe design.


 Agreed on this dangerous thought!

 On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

  This is wishful thinking. Wishful thinking is dangerous. How about
 instead of hoping that AGI won't destroy the world, you study the problem
 and come up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be 
 much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the
 problem of autonomous general intelligence, but the singularity

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
Anyone who could suggest making crime impossible is SO far removed from
reality that it is hard to imagine that they function in society.

I cleared this obviously confusing statement up with Matt. What I meant to
say was "impossible to get away with in public" (in America, I guess) because
of mass surveillance. Perhaps not feasible in rural areas, but in populated
zones I think it could happen if we decided to invest our defense budget
into domestic surveillance programs.

On Mon, Jun 28, 2010 at 7:51 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ian, Travis, etc.

 On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:


 On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:

 I think crime has to be made impossible even for enhanced humans
 first.


 If our enhancement were Internet-based it could be turned off if we were
 about to commit a crime. You really should have said unenhanced humans. If
 my conversation (see above) had been about jihad and terrorism, AI would
 provide a route for the security services. I think you are muddled here.


 Anyone who could suggest making crime impossible, or anyone who could respond
 to such nonsense other than by pointing out that it is nonsense, is SO far
 removed from reality that it is hard to imagine that they function in
 society. Here are some points for those who don't see this as obvious:
 1.  Much/most crime is committed by people who see little/no other
 rational choice.
 2.  Crime is a state of mind. Almost any act would be reasonable under SOME
 bizarre circumstances perceived by the perpetrator. It isn't the actions,
 but rather the THOUGHT that makes it a crime.
 3.  Courts are there to decide complex issues like necessity (e.g. self
 defense or defense of others), understanding (e.g. mental competence), and
 the myriad other issues needed to establish a particular act as a crime.
 4.  Crimes are defined through a legislative process, by the best
 government that money can buy. This would simply consign everything (and
 everyone) to the wealthy people who have bought the government. Prepare for
 slavery.
 5.  Our world is already so over-constrained that it is IMPOSSIBLE to live
 without violating any laws.

 Is the proposal to make impossible anything that could conceivably be
 construed as a crime, or to make impossible anything that couldn't be
 construed as anything but a crime? Even these two extremes would have
 significant implementation problems.

 Anyway, I am sending you two back to kindergarten.

 Steve







Re: [agi] Questions for an AGI

2010-06-28 Thread The Wizard


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a
 unique way of experiencing reality and there is no reason to not take
 advantage of that precious opportunity to create astonishment or bliss. If
 anything is important in the universe, it's ensuring positive experiences 
 for
 all areas in which it is conscious, I think it will realize that. And with
 the resources available in the solar system alone, I don't think we will 
 be
 much of a burden.


 I like that idea.  Another reason might be that we won't crack the
 problem of autonomous general intelligence, but the singularity will 
 proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to 
 the
 artificial entity as a way of being alive and having an experience of the
 world.




 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi] Questions for an AGI

why should AGIs give a damn about us?


I like to think that they will give a damn because humans have a unique way of 
experiencing reality and there is no reason to not take advantage of that 
precious opportunity to create astonishment or bliss. If anything is important 
in the universe, it's ensuring positive experiences for all areas in which it is 
conscious, I think it will realize that. And with the resources available in 
the solar system alone, I don't think we will be much of a burden. 

I like that idea.  Another reason might be that we won't crack the problem of 
autonomous general intelligence, but the singularity will proceed regardless as 
a symbiotic relationship between life and AI.  That would be beneficial to us 
as a form of intelligence expansion, and beneficial to the artificial entity as a 
way of being alive and having an experience of the world.  




Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
I definitely agree; however, we lack a convincing model or plan of any sort
for the construction of systems demonstrating subjectivity, and it seems
plausible that subjectivity is functionally necessary for general
intelligence. Therefore it is reasonable to consider symbiosis as both a
safe design and potentially the only possible design (at least at first),
depending on how creative and resourceful we get in cog sci/ AGI in coming
years.

On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the problem
 of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to the
 artificial entity as a way of being alive and having an experience of the
 world.






Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
This is wishful thinking. Wishful thinking is dangerous. How about instead
of hoping that AGI won't destroy the world, you study the problem and come
up with a safe design.


Agreed on this dangerous thought!
On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the problem
 of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to the
 artificial entity as a way of being alive and having an experience of the
 world.




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
rob levy wrote:
 This is wishful thinking.
 I definitely agree; however, we lack a convincing model or plan of any sort 
 for the construction of systems demonstrating subjectivity, 

Define subjectivity. An objective decision might appear subjective to you only 
because you aren't intelligent enough to understand the decision process.

 Therefore it is reasonable to consider symbiosis

How does that follow?

 as both a safe design 

How do you know that a self-replicating organism that we create won't evolve to 
kill us instead? Do we control evolution?

 and potentially the only possible design 

It is not the only possible design. It is possible to create systems that are
more intelligent than a single human but less intelligent than all of humanity,
without the capability to modify themselves or reproduce without the collective
permission of the billions of humans that own and maintain control over them. An
example would be the internet.

 -- Matt Mahoney, matmaho...@yahoo.com





From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 2:37:15 PM
Subject: Re: [agi] Questions for an AGI

I definitely agree; however, we lack a convincing model or plan of any sort for 
the construction of systems demonstrating subjectivity, and it seems plausible 
that subjectivity is functionally necessary for general intelligence. Therefore 
it is reasonable to consider symbiosis as both a safe design and potentially 
the only possible design (at least at first), depending on how creative and 
resourceful we get in cog sci/ AGI in coming years.


On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.

 -- Matt Mahoney, matmaho...@yahoo.com






From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi]
 Questions for an AGI


why should AGIs give a damn about us?


I like to think that they will give a damn because humans have a unique way of 
experiencing reality and there is no reason to not take advantage of that 
precious opportunity to create astonishment or bliss. If anything is important 
in the universe, it's ensuring positive experiences for all areas in which it 
is conscious, I think it will realize that. And with the resources available 
in the solar system alone, I don't think we will be much of a burden. 


I like that idea.  Another reason might be that we won't crack the problem of 
autonomous general intelligence, but the singularity will proceed regardless 
as a symbiotic relationship between life and AI.  That would be beneficial to 
us as a form of intelligence expansion, and beneficial to the artificial 
entity as a way of being alive and having an experience of the world. 





Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
I don't like the idea of enhancing human intelligence before the
singularity. I think crime has to be made impossible even for enhanced
humans first. I think life is too adept at abusing opportunities if
possible. I would like to see the singularity-enabling AI be as little
like a reproduction machine as possible. Does it really need to be a general
AI to cause a singularity? Can it not just stick to scientific data and
quantify human uncertainty?  It seems like it would be less likely to ever
care about killing all humans so it can rule the galaxy or that it's
an omnipotent servant.

On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 Agreed on this dangerous thought!

 On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the problem
 of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to the
 artificial entity as a way of being alive and having an experience of the
 world.




 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote:
 I don't like the idea of enhancing human intelligence before the singularity.

The singularity is a point of infinite collective knowledge, and therefore 
infinite unpredictability. Everything has to happen before the singularity 
because there is no after.

 I think crime has to be made impossible even for enhanced humans first. 

That is easy. Eliminate all laws.

 I would like to see the singularity-enabling AI be as little like a 
 reproduction machine as possible.

Is there a difference between enhancing our intelligence by uploading and 
creating killer robots? Think about it.

 Does it really need to be a general AI to cause a singularity? Can it not 
 just stick to scientific data and quantify human uncertainty?  It seems like 
 it would be less likely to ever care about killing all humans so it can rule 
 the galaxy or that it's an omnipotent servant.   

Assume we succeed. People want to be happy. Depending on how our minds are 
implemented, it's either a matter of rewiring our neurons or rewriting our 
software. Is that better than a gray goo accident?

 -- Matt Mahoney, matmaho...@yahoo.com





From: Travis Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 5:21:24 PM
Subject: Re: [agi] Questions for an AGI

I don't like the idea of enhancing human intelligence before the singularity. I 
think crime has to be made impossible even for enhanced humans first. I 
think life is too adept at abusing opportunities if possible. I would like to 
see the singularity-enabling AI be as little like a reproduction machine as 
possible. Does it really need to be a general AI to cause a singularity? Can it 
not just stick to scientific data and quantify human uncertainty?  It seems 
like it would be less likely to ever care about killing all humans so it can 
rule the galaxy or that it's an omnipotent servant.


On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote:

This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up with 
a safe design.



Agreed on this dangerous thought! 


On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:


This is wishful thinking. Wishful thinking is dangerous. How about instead of 
hoping that AGI won't destroy the world, you study the problem and come up 
with a safe design.

 -- Matt Mahoney, matmaho...@yahoo.com






 From: rob levy r.p.l...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sat, June 26, 2010 1:14:22 PM
Subject: Re: [agi]
 Questions for an AGI


why should AGIs give a damn about us?



I like to think that they will give a damn because humans have a unique way 
of experiencing reality and there is no reason to not take advantage of that 
precious opportunity to create astonishment or bliss. If anything is 
important in the universe, it's ensuring positive experiences for all areas in 
which it is conscious, I think it will realize that. And with the resources 
available in the solar system alone, I don't think we will be much of a 
burden. 


I like that idea.  Another reason might be that we won't crack the problem of 
autonomous general intelligence, but the singularity will proceed regardless 
as a symbiotic relationship between life and AI.  That would be beneficial to 
us as a form of intelligence expansion, and beneficial to the artificial 
entity as a way of being alive and having an experience of the world. 




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com  






Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
Everything has to happen before the singularity because there is no after.

I meant when machines take over technological evolution.

That is easy. Eliminate all laws.

I would prefer a surveillance state. I should say impossible to get away
with if conducted in public.

Is there a difference between enhancing our intelligence by uploading and
creating killer robots? Think about it.

Well yes, we're not all bad but I think you read me wrong because
that's basically my worry.

Assume we succeed. People want to be happy. Depending on how our minds are
implemented, it's either a matter of rewiring our neurons or rewriting our
software. Is that better than a gray goo accident?

Are you asking if changing your hardware or software ends your
true existence like a grey goo accident would? Assuming the goo
is unconscious, it would be worse because there is the potential for a
peaceful experience free from the power struggle for limited resources,
whether humans truly exist or not. Does anyone else worry about how we're
going to keep this machine's unprecedented resourcefulness from being abused
by an elite few to further protect and advance their social superiority? To
me it seems like if we can't create a democratic society where people have
real choices concerning the issues that affect them most and it  just ends
up being a continuation of the class war we have today, then maybe grey goo
would be the better option before we start promoting democracy throughout
the universe.

On Sun, Jun 27, 2010 at 2:43 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Travis Lenting wrote:
  I don't like the idea of enhancing human intelligence before the
 singularity.

 The singularity is a point of infinite collective knowledge, and therefore
 infinite unpredictability. Everything has to happen before the singularity
 because there is no after.

 I think crime has to be made impossible even for enhanced humans
 first.

 That is easy. Eliminate all laws.

 I would like to see the singularity-enabling AI be as little like a
 reproduction machine as possible.

 Is there a difference between enhancing our intelligence by uploading and
 creating killer robots? Think about it.

  Does it really need to be a general AI to cause a singularity? Can it not
 just stick to scientific data and quantify human uncertainty?  It seems like
 it would be less likely to ever care about killing all humans so it can rule
 the galaxy or that it's an omnipotent servant.

 Assume we succeed. People want to be happy. Depending on how our minds are
 implemented, it's either a matter of rewiring our neurons or rewriting our
 software. Is that better than a gray goo accident?


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Travis Lenting travlent...@gmail.com

 *To:* agi agi@v2.listbox.com
 *Sent:* Sun, June 27, 2010 5:21:24 PM

 *Subject:* Re: [agi] Questions for an AGI

 I don't like the idea of enhancing human intelligence before the
 singularity. I think crime has to be made impossible even for enhanced
 humans first. I think life is too adept at abusing opportunities if
 possible. I would like to see the singularity-enabling AI be as little
 like a reproduction machine as possible. Does it really need to be a general
 AI to cause a singularity? Can it not just stick to scientific data and
 quantify human uncertainty?  It seems like it would be less likely to ever
 care about killing all humans so it can rule the galaxy or that it's
 an omnipotent servant.

 On Sun, Jun 27, 2010 at 11:39 AM, The Wizard key.unive...@gmail.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 Agreed on this dangerous thought!

 On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about
 instead of hoping that AGI won't destroy the world, you study the problem
 and come up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the
 problem of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting wrote:
 Is there a difference between enhancing our intelligence by uploading and 
 creating killer robots? Think about it.

 Well yes, we're not all bad but I think you read me wrong because that's 
 basically my worry.

What I mean is that one way to look at uploading is to create a robot that
behaves like you, and then to die. The question is whether you become the
robot. But it is a nonsense question. Nothing changes whichever way you answer
it.

 Assume we succeed. People want to be happy. Depending on how our minds are 
 implemented, it's either a matter of rewiring our neurons or rewriting our 
 software. Is that better than a gray goo accident?

 Are you asking if changing your hardware or software ends your true existence 
 like a grey goo accident would?

A state of maximum happiness or maximum utility is a degenerate mental state 
where any thought or perception would be unpleasant because it would result in 
a different mental state. In a competition with machines that can't have 
everything they want (for example, they fear death and later die), the other 
machines would win because you would have no interest in self-preservation and 
they would.

 Assuming the goo is unconscious, 

What do you mean by unconscious?

 it would be worse because there is the potential for a peaceful experience
 free from the power struggle for limited resources, whether humans truly
 exist or not.

That result could be reached by a dead planet, which, BTW, is the only stable 
attractor in the chaotic process of evolution.

 Does anyone else worry about how we're going to keep this machine's 
 unprecedented resourcefulness from being abused by an elite few to further 
 protect and advance their social superiority?

If the elite few kill off all their competition, then theirs is the only 
ethical model that matters. From their point of view, it would be a good thing. 
How do you feel about humans currently being at the top of the food chain?

 To me it seems like if we can't create a democratic society where people have 
 real choices concerning the issues that affect them most and it  just ends up 
 being a continuation of the class war we have today, then maybe grey goo 
 would be the better option before we start promoting democracy throughout 
 the universe.

Freedom and fairness are important to us because they were programmed into our 
ethical models, not because they are actually important. As a counterexample, 
they are irrelevant to evolution. Gray goo might be collectively vastly more 
intelligent than humanity, if that makes you feel any better.
 -- Matt Mahoney, matmaho...@yahoo.com





From: Travis Lenting travlent...@gmail.com
To: agi agi@v2.listbox.com
Sent: Sun, June 27, 2010 6:53:14 PM
Subject: Re: [agi] Questions for an AGI

Everything has to happen before the singularity because there is no after.

I meant when machines take over technological evolution. 

That is easy. Eliminate all laws.

I would prefer a surveillance state. I should say impossible to get away with 
if conducted in public. 

Is there a difference between enhancing our intelligence by uploading and 
creating killer robots? Think about it.

Well yes, we're not all bad but I think you read me wrong because that's 
basically my worry.

Assume we succeed. People want to be happy. Depending on how our minds are 
implemented, it's either a matter of rewiring our neurons or rewriting our 
software. Is that better than a gray goo accident?

Are you asking if changing your hardware or software ends your true existence 
like a grey goo accident would? Assuming the goo is unconscious, it would be 
worse because there is the potential for a peaceful experience free from the 
power struggle for limited resources, whether humans truly exist or not. 
Does anyone else worry about how we're going to keep this machine's 
unprecedented resourcefulness from being abused by an elite few to further 
protect and advance their social superiority? To me it seems like if we can't 
create a democratic society where people have real choices concerning the 
issues that affect them most and it  just ends up being a continuation of the 
class war we have today, then maybe grey goo would be the better option before 
we start promoting democracy throughout the universe.


On Sun, Jun 27, 2010 at 2:43 PM, Matt Mahoney matmaho...@yahoo.com wrote:

Travis Lenting wrote:
 I don't like the idea of enhancing human intelligence before the singularity.


The singularity is a point of infinite collective knowledge, and therefore 
infinite unpredictability. Everything has to happen before the singularity 
because there is no after.


 I think crime has to be made impossible even for enhanced humans first. 


That is easy. Eliminate all laws.


 I would like to see the singularity-enabling AI be as little like a 
 reproduction machine as possible.


Is there a difference between enhancing our intelligence

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Travis,

The AGI world seems to be cleanly divided into two groups:

1.  People (like Ben) who feel as you do, and aren't at all interested or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
prisoner's dilemma, that if everyone just decides to respect everyone else,
the world will be a really nice place. The problem is, it doesn't work,
and it can't work for some sound logical reasons that were unknown thousands
of years ago when those beliefs were first advanced, and are STILL unknown
to most of the present-day population, and...

2.  People (like me) who see that this is a really insane, dangerous, and
delusional belief system, as it encourages activities that are every bit as
dangerous as DIY thermonuclear weapons. Sure, you aren't likely to build a
successful H-bomb in your basement using heavy water that you separated
using old automobile batteries, but should we encourage you to even try?

Unfortunately, there is ~zero useful communication between these two groups.
For example, Ben explains that he has heard all of the horror scenarios for
AGIs, and I believe that he has, yet he continues in this direction for
reasons that he is too busy to explain in detail. I have viewed some of
his presentations, e.g. at the 2009 Singularity conference. There, he
provides no glimmer of any reason why his approach isn't predictably
suicidal if/when an AGI ever comes into existence, beyond what you outlined,
e.g. imperfect protective mechanisms that would only serve to become their
own points of contention between future AGIs. What if some accident disables
an AGI's protective mechanisms? Would there be some major contention between
Ben's AGI and Osama bin Laden's AGI? How about those nasty little areas
where our present social rules enforce species-destroying dysgenic activity?
Ultimately and eventually, why should AGIs give a damn about us?

Steve
=
On Fri, Jun 25, 2010 at 1:25 PM, Travis Lenting travlent...@gmail.com wrote:

 I hope I don't misrepresent him but I agree with Ben (at
 least my interpretation) when he said, "We can ask it questions like, 'how
 can we make a better A(G)I that can serve us in more different ways without
 becoming dangerous'...It can help guide us along the path to a
 positive singularity." I'm pretty sure he was also saying at first it
 should just be a question-answering machine with a reliable goal system and
 stop the development if it has an unstable one before it gets too smart. I
 like the idea that we should create an automated
 cross-disciplinary scientist and engineer (if you even separate the two) and
 that NLP not modeled after the human brain is the best proposal for
 a benevolent and resourceful super intelligence that enables a positive
 singularity and all its unforeseen perks.
 On Wed, Jun 23, 2010 at 11:04 PM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Fellow Cylons,

I sure hope SOMEONE is assembling a list from these responses, because this
is exactly the sort of stuff that I (or someone) would need to run a Reverse
Turing Test (RTT) competition.

Steve





Re: [agi] Questions for an AGI

2010-06-26 Thread Travis Lenting
Well, the existence of different contingencies is one reason I don't want
the first one modeled after a brain. I would like it to be a bit simpler in
the sense that it only tries to answer questions from as scientific a
perspective as possible. To me it seems like there isn't someone stable
enough to model the first AGI after; so perhaps understanding a brain
completely shouldn't be at the top of the priority list. I think the focus should
be on NLP so it can utilize all human knowledge that exists in text and audio
form. I have no strategy on how to do this but it seems like the safest
path. The brain is a tangled mess that could be understood post-singularity,
but for now I would think NLP is what really, really matters when it comes to
developing an AGI.

Would there be some major contention between Ben's AGI and Osama bin Laden's
AGI?

I think our contentions will regard radically different topics in a
post-singularity civilization, so I can't say. But I predict they will both
agree with the main AGI's take on
trans-national corporate imperialism, amongst other currently
disputed issues, because the AGI will be as objective as possible and as the
two individuals augment their intelligence they will naturally flow towards
a more objective perception of reality.

What if some accident disables an AGI's protective mechanisms?

I don't know. You guys should do your best to create stable goal systems and
I'll go sweep this floor.

contention between future AGIs

I don't feel qualified to mediate between two future AGIs so I don't know.

why should AGIs give a damn about us?

I like to think that they will give a damn because humans have a unique way
of experiencing reality and there is no reason to not take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, it's ensuring positive experiences for all areas
in which it is conscious, I think it will realize that. And with the
resources available in the solar system alone, I don't think we will be much
of a burden. Obviously this can't be screwed up and it ends up using us for
its own reproduction or whatever because that's all it was programmed to care
about. But I don't think trying is inherently suicidal by any means if
that's ultimately what you're getting at.
On Sat, Jun 26, 2010 at 1:37 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Travis,

 The AGI world seems to be cleanly divided into two groups:

 1.  People (like Ben) who feel as you do, and aren't at all interested or
 willing to look at the really serious lapses in logic that underlie this
 approach. Note that there is a similar belief in Buddhism, akin to the
 prisoner's dilemma, that if everyone just decides to respect everyone else,
 the world will be a really nice place. The problem is, it doesn't work,
 and it can't work for some sound logical reasons that were unknown thousands
 of years ago when those beliefs were first advanced, and are STILL unknown
 to most of the present-day population, and...

 2.  People (like me) who see that this is a really insane, dangerous, and
 delusional belief system, as it encourages activities that are every bit as
 dangerous as DIY thermonuclear weapons. Sure, you aren't likely to build a
 successful H-bomb in your basement using heavy water that you separated
 using old automobile batteries, but should we encourage you to even try?

 Unfortunately, there is ~zero useful communication between these two
 groups. For example, Ben explains that he has heard all of the horror
 scenarios for AGIs, and I believe that he has, yet he continues in this
 direction for reasons that he is too busy to explain in detail. I have
 viewed some of his presentations, e.g. at the 2009 Singularity conference.
 There, he provides no glimmer of any reason why his approach isn't
 predictably suicidal if/when an AGI ever comes into existence, beyond what
 you outlined, e.g. imperfect protective mechanisms that would only serve to
 become their own points of contention between future AGIs. What if some
 accident disables an AGI's protective mechanisms? Would there be some major
 contention between Ben's AGI and Osama bin Laden's AGI? How about those
 nasty little areas where our present social rules enforce species-destroying
 dysgenic activity? Ultimately and eventually, why should AGIs give a damn
 about us?

 Steve
 =
 On Fri, Jun 25, 2010 at 1:25 PM, Travis Lenting travlent...@gmail.comwrote:

 I hope I don't misrepresent him, but I agree with Ben (at
 least my interpretation) when he said, "We can ask it questions like, 'how
 can we make a better A(G)I that can serve us in more different ways without
 becoming dangerous'... It can help guide us along the path to a
 positive singularity." I'm pretty sure he was also saying that at first it
 should just be a question-answering machine with a reliable goal system, and
 that development should be stopped if it has an unstable one, before it gets
 too smart. I like the idea that 

Re: [agi] Questions for an AGI

2010-06-26 Thread rob levy

 why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique way
of experiencing reality, and there is no reason not to take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, it's ensuring positive experiences wherever it is
conscious, and I think it will realize that. And with the resources available
in the solar system alone, I don't think we will be much of a burden.


I like that idea.  Another reason might be that we won't crack the problem
of autonomous general intelligence, but the singularity will proceed
regardless as a symbiotic relationship between life and AI.  That would be
beneficial to us as a form of intelligence expansion, and beneficial to the
artificial entity as a way of being alive and having an experience of the
world.





Re: [agi] Questions for an AGI

2010-06-25 Thread Travis Lenting
I hope I don't misrepresent him, but I agree with Ben (at
least my interpretation) when he said, "We can ask it questions like, 'how
can we make a better A(G)I that can serve us in more different ways without
becoming dangerous'... It can help guide us along the path to a
positive singularity." I'm pretty sure he was also saying that at first it
should just be a question-answering machine with a reliable goal system, and
that development should be stopped if it has an unstable one, before it gets
too smart. I like the idea that we should create an automated
cross-disciplinary scientist and engineer (if you even separate the two), and
that NLP not modeled after the human brain is the best proposal for
a benevolent and resourceful superintelligence that enables a positive
singularity and all its unforeseen perks.
On Wed, Jun 23, 2010 at 11:04 PM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
One of the first things in AGI is to produce software which is
self-monitoring and which will correct itself when it is not working. For
over a day now I have been unable to access Google Groups. The Internet
access simply loops and does not get anywhere. If Google had any true AGI it
would:

a) Spot that it was looping.
b) Failing that, provide the user with an interface which would
enable the fault to be corrected online.
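
A minimal sketch of what point a) might look like in practice: a retry loop
that fingerprints each failure and gives up (and says why) once it keeps
seeing the same state. The fetch function, thresholds, and the simulated
redirect loop are hypothetical placeholders, not anything Google actually
exposes:

import hashlib
import time

def fetch(url):
    # Hypothetical stand-in for the real request. Here it simulates the
    # stuck behaviour described above: every attempt lands in the same
    # redirect loop and never reaches the page.
    return False, "redirect loop: /ServiceLogin -> /groups -> /ServiceLogin"

def self_monitoring_fetch(url, max_attempts=5, delay=0.1):
    """Retry a request, but notice when the retries are just looping.

    If the same failure state shows up repeatedly, stop retrying and
    report the fault (with a reason) instead of spinning forever.
    """
    seen = {}
    for attempt in range(1, max_attempts + 1):
        ok, state = fetch(url)
        if ok:
            return state
        fingerprint = hashlib.sha256(state.encode()).hexdigest()
        seen[fingerprint] = seen.get(fingerprint, 0) + 1
        if seen[fingerprint] >= 3:  # same failure three times -> we are looping
            raise RuntimeError(
                f"{url}: looping on the same failure ({state!r}); "
                "escalating to a diagnosis/repair step instead of retrying."
            )
        time.sleep(delay)
    raise RuntimeError(f"{url}: still failing after {max_attempts} attempts.")

if __name__ == "__main__":
    try:
        self_monitoring_fetch("https://groups.google.com/")
    except RuntimeError as err:
        print(err)  # the system can now say *why* it failed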

This may seem an absolutely trivial point, but I feel it is absolutely
fundamental. First of all, you do not pass the Turing test by being
absolutely dumb. I suppose you might say that conversing with Google was
rather like Tony Hayward answering questions in Congress: sorry, we cannot
process your request at this time (or any other time, for that matter). Nor
do you pass it (this is Google Translate for you) by saying that US forces
have committed atrocities in Burma when they have been out of SE Asia since
the end of the Vietnam war.

Another instance: Google denied access to my site, saying that I had breached
the terms and conditions. I hadn't, and they said they did not know why. You
do not pass the TT either by walking up to someone and telling them they are
running a paedophile website when they aren't.

I would say that the first task of AGI (this is actually a definition) would
be to provide software that is fault-tolerant and self-correcting. After all,
if we have two copies of an AGI we will have (by definition) a fault-tolerant
system. If a request cannot be processed, an AGI system should know why not
and, hopefully, be able to do something about it.
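
A minimal sketch of that two-copies idea, assuming two independently
implemented instances reachable through hypothetical answer_a and answer_b
functions; disagreement or failure is surfaced as a detected fault, with a
reason, rather than being returned silently:

def answer_a(question):
    # Hypothetical interface to the first AGI instance.
    return "42"

def answer_b(question):
    # Hypothetical interface to a second, independently built instance.
    return "42"

def fault_tolerant_answer(question):
    """Run the same request on two copies and cross-check the results.

    Agreement -> return the answer. A failure or a disagreement -> the
    system knows something is wrong and can report why, rather than
    silently returning a possibly corrupted result.
    """
    results, errors = {}, {}
    for name, fn in (("A", answer_a), ("B", answer_b)):
        try:
            results[name] = fn(question)
        except Exception as err:
            errors[name] = str(err)  # record *why* this copy failed
    if errors:
        raise RuntimeError(f"Copy failure on {question!r}: {errors}")
    if results["A"] != results["B"]:
        raise RuntimeError(f"Copies disagree on {question!r}: {results}")
    return results["A"]

if __name__ == "__main__":
    print(fault_tolerant_answer("What is six times seven?"))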

The lack of any real fault tolerance in our systems to me underlines just
how far off we really are.


  - Ian Parker

On 24 June 2010 07:10, Dana Ream dmr...@sonic.net wrote:

  How do you work?

  --
 *From:* The Wizard [mailto:key.unive...@gmail.com]
 *Sent:* Wednesday, June 23, 2010 11:05 PM
 *To:* agi
 *Subject:* [agi] Questions for an AGI


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






[agi] Questions for an AGI

2010-06-24 Thread The Wizard
If you could ask an AGI anything, what would you ask it?
-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





RE: [agi] Questions for an AGI

2010-06-24 Thread Dana Ream
How do you work?


From: The Wizard [mailto:key.unive...@gmail.com] 
Sent: Wednesday, June 23, 2010 11:05 PM
To: agi
Subject: [agi] Questions for an AGI



If you could ask an AGI anything, what would you ask it? 
-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com  






Re: [agi] Questions for an AGI

2010-06-24 Thread deepakjnath
I would ask: "What should I ask, if I could ask an AGI anything?"


On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com




-- 
cheers,
Deepak





Re: [agi] Questions for an AGI

2010-06-24 Thread Florent Berthet
Tell me what I need to know, by order of importance.





Re: [agi] Questions for an AGI

2010-06-24 Thread The Wizard
I would ask the AGI: "What should I ask an AGI?"

On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet
florent.bert...@gmail.comwrote:

 Tell me what I need to know, by order of importance.




-- 
Carlos A Mejia

Taking life one singularity at a time.
www.Transalchemy.com





Re: [agi] Questions for an AGI

2010-06-24 Thread A. T. Murray
Carlos A Mejia invited questions for an AGI!

 If you could ask an AGI anything, what would you ask it?

Who killed Donald Young, a gay sex partner 
of U.S. President Barack Obama, on December 
24, 2007, in Obama's home town of Chicago, 
when it began to look like Obama could 
actually be elected president? 

Who had the most to gain from killing 
not only Donald Young but also Larry Bland 
on November 17, 2007, another gay member of 
Obama's Trinity United Church of Christ (TUCC) 
on Chicago's south side? 

It is not a question of Obama's privacy as 
a married gay man; it is a question of 
Murder Most Foul. 

Most likely, Obama did not arrange, orchestrate 
and order the suspicious cluster of homosexual 
deaths and murders in Chicago at the end of 2007, 
just before the year 2008 in which Obama became 
the first black president and acquired the power 
that he employed for the wanton murder of innocent 
(and also guilty) Arabs in Iraq and innocent 
citizens of Afghanistan and Pakistan. 

Most likely, somebody else did the Chicago killings 
for Obama, just like somebody else does the Iraq 
and Afghanistan killings for Obama. 

American soldiers, killing on behalf of Obama, 
recently killed a group of innocent women in 
Afghanistan. In order to hide their crime, 
Obama's soldiers approached the dead women's 
bodies, took out knives, and dug the bullets 
out of the dead women's bodies so as to obscure 
the fact that the killers of the women were in 
Obama's chain of command. It was your tax dollars 
at work, and your elected president carrying on 
the murders first initiated by George W. Bush. 

Bush Two, is what they are beginning to call Obama. 
Obama, who told the voters he would close down the 
Guantanamo concentration camp -- America's Auschwitz. 
Obama, who promised to bring home the troops but 
who instead, suckers, has enlarged the War 
To Make the World Safe for Opium and Heroin. 

Meanwhile, the mainstream media (MSM) think 
that they have a stranglehold on the dissemination 
and broadcasting of what is to be the news in 
America. If the MSM do not report something, then 
it never happened, right? What happens here in 
gangland gayland Chicago, stays here in gangland 
gayland Czechago, right? WRONG!!! 
Vengeance is mine; I will repay, sayeth the Lord. 

-- 
Mentifex shouting STOP THE WARS, Mr. President! 
and We will persuade you to resign in disgrace. 
http://www.scn.org/~mentifex/20100522.html 
http://www.globemagazine.com/story/512




Re: [agi] Questions for an AGI

2010-06-24 Thread David Jones
I get the impression from this question that you think an AGI is some sort
of all-knowing, idealistic invention. It is rather like asking, "if you could
ask the internet anything, what would you ask it?" Uhhh, lots of stuff, like
how do I get wine stains out of white carpet :). AGIs will not be all-knowing
for quite a long time. They won't be any more all-knowing than you and I are.
Eventually they will know a lot, just as the internet contains more
information than any human brain can store. But an AGI will certainly not
know everything, at least not for quite a long time. It has to learn things
just as we do, and we'll have to manage that knowledge until it gets to the
point where it is basically all-knowing. Hopefully it will reach that point
some far-off day in the future.

Dave

On Thu, Jun 24, 2010 at 2:04 AM, The Wizard key.unive...@gmail.com wrote:


 If you could ask an AGI anything, what would you ask it?
 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com






Re: [agi] Questions for an AGI

2010-06-24 Thread Matt Mahoney
Am I a human or am I an AGI?

Dana Ream wrote:
 How do you work?
 
Just like you designed me to.

deepakjnath wrote: 
 What should I ask if I could ask AGI anything?
The Wizard wrote:
 What should I ask an agi


You don't need to ask me anything. I will do all of your thinking for you.

Florent Berthet wrote:
 Tell me what I need to know, by order of importance.

Nothing. I will do all of your thinking for you.

A. T. Murray wrote:
 Who killed Donald Young, a gay sex partner of U.S. President Barack Obama

It must have been that other AGI, Mentifex. I never did trust it ;-)


-- Matt Mahoney, matmaho...@yahoo.com

