Re: [agi] Questions for an AGI

2010-06-28 Thread Ian Parker
26, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious opportunity to create

Re: [agi] Questions for an AGI

2010-06-28 Thread Steve Richfield
Ian, Travis, etc. On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote: On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote: I think crime has to be made impossible even for enhanced humans first. If our enhancement was Internet based it could be turned

Re: [agi] Questions for an AGI

2010-06-28 Thread Erdal Bektaş
What equation and solution method would provide the solution to every physical problem? Or: give me the equation of god, and its solution. (lol) On Mon, Jun 28, 2010 at 6:02 PM, David Jones davidher...@gmail.com wrote: Crime has its purpose just like many other unpleasant behaviors. When

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
-- From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Sat, June 26, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
Anyone who could suggest making crime impossible is SO far removed from reality that it is hard to imagine that they function in society. I cleared this obviously confusing statement up with Matt. What I meant to say was impossible to get away with in public (in America, I guess) because of mass

Re: [agi] Questions for an AGI

2010-06-28 Thread The Wizard
, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious opportunity to create astonishment

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
@v2.listbox.com Sent: Sat, June 26, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious

Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
of hoping that AGI won't destroy the world, you study the problem and come up with a safe design. -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Sat, June 26, 2010 1:14:22 PM Subject: Re: [agi

Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
.listbox.com Sent: Sat, June 26, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
and maintain control over it. An example would be the internet. -- Matt Mahoney, matmaho...@yahoo.com From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 2:37:15 PM Subject: Re: [agi] Questions for an AGI I definitely agree, however we

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
-- From: rob levy r.p.l...@gmail.com To: agi agi@v2.listbox.com Sent: Sat, June 26, 2010 1:14:22 PM Subject: Re: [agi] Questions for an AGI why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 5:21:24 PM Subject: Re: [agi] Questions for an AGI I don't like the idea of enhancing human intelligence before the singularity. I think crime has to be made impossible even for enhanced humans first. I think life

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
Mahoney, matmaho...@yahoo.com From: Travis Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 5:21:24 PM Subject: Re: [agi] Questions for an AGI I don't like the idea of enhancing human intelligence before the singularity

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Travis Lenting travlent...@gmail.com To: agi agi@v2.listbox.com Sent: Sun, June 27, 2010 6:53:14 PM Subject: Re: [agi] Questions for an AGI Everything has to happen before the singularity because there is no after. I meant when machines take over technological evolution. That is easy. Eliminate

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Travis, The AGI world seems to be cleanly divided into two groups: 1. People (like Ben) who feel as you do, and aren't at all interested or willing to look at the really serious lapses in logic that underlie this approach. Note that there is a similar belief in Buddhism, akin to the prisoners

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Fellow Cylons, I sure hope SOMEONE is assembling a list from these responses, because this is exactly the sort of stuff that I (or someone) would need to run a Reverse Turing Test (RTT) competition. Steve

Re: [agi] Questions for an AGI

2010-06-26 Thread Travis Lenting
Well, the existence of different contingencies is one reason I don't want the first one modeled after a brain. I would like it to be a bit simpler, in the sense that it only tries to answer questions from the most scientific perspective possible. To me it seems like there isn't someone stable

Re: [agi] Questions for an AGI

2010-06-26 Thread rob levy
why should AGIs give a damn about us? I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious opportunity to create astonishment or bliss. If anything is important in the universe, its

Re: [agi] Questions for an AGI

2010-06-25 Thread Travis Lenting
I hope I don't misrepresent him, but I agree with Ben (at least my interpretation) when he said, We can ask it questions like, 'how can we make a better A(G)I that can serve us in more different ways without becoming dangerous'...It can help guide us along the path to a positive singularity. I'm

Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
? -- From: The Wizard [mailto:key.unive...@gmail.com] Sent: Wednesday, June 23, 2010 11:05 PM To: agi Subject: [agi] Questions for an AGI If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

[agi] Questions for an AGI

2010-06-24 Thread The Wizard
If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed:

RE: [agi] Questions for an AGI

2010-06-24 Thread Dana Ream
How do you work? From: The Wizard [mailto:key.unive...@gmail.com] Sent: Wednesday, June 23, 2010 11:05 PM To: agi Subject: [agi] Questions for an AGI If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time

Re: [agi] Questions for an AGI

2010-06-24 Thread deepakjnath
I would ask What should I ask if I could ask AGI anything? On Thu, Jun 24, 2010 at 11:34 AM, The Wizard key.unive...@gmail.com wrote: If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia Taking life one singularity at a time. www.Transalchemy.com

Re: [agi] Questions for an AGI

2010-06-24 Thread Florent Berthet
Tell me what I need to know, by order of importance.

Re: [agi] Questions for an AGI

2010-06-24 Thread The Wizard
I would ask the agi What should I ask an agi On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet florent.bert...@gmail.com wrote: Tell me what I need to know, by order of importance.

Re: [agi] Questions for an AGI

2010-06-24 Thread A. T. Murray
Carlos A Mejia invited questions for an AGI! If you could ask an AGI anything, what would you ask it? Who killed Donald Young, a gay sex partner of U.S. President Barack Obama, on December 24, 2007, in Obama's home town of Chicago, when it began to look like Obama could actually be elected

Re: [agi] Questions for an AGI

2010-06-24 Thread David Jones
I get the impression from this question that you think an AGI is some sort of all-knowing, idealistic invention. It is sort of like asking if you could ask the internet anything, what would you ask it? Uhhh, lots of stuff, like how do I get wine stains out of white carpet :). AGIs will not be

Re: [agi] Questions for an AGI

2010-06-24 Thread Matt Mahoney
Am I a human or am I an AGI? Dana Ream wrote: How do you work? Just like you designed me to. deepakjnath wrote: What should I ask if I could ask AGI anything? The Wizard wrote: What should I ask an agi You don't need to ask me anything. I will do all of your thinking for you. Florent

Re: [agi] Questions

2007-11-07 Thread Monika Krishan
.listbox.com Subject: Re: [agi] Questions On 11/6/07, Monika Krishan [EMAIL PROTECTED] wrote: So when speaking of augmentation, a clarification would have to be made as to whether the enhancement refers to human competence or human performance. ... and hence the related issue of discovering

RE: [agi] Questions

2007-11-07 Thread Edward W. Porter
Monika Krishan [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 07, 2007 10:20 PM To: agi@v2.listbox.com Subject: Re: [agi] Questions On Nov 7, 2007 8:46 AM, Edward W. Porter [EMAIL PROTECTED] wrote: It is much easier to think how superhuman intelligences will outshine us in the performance

Re: [agi] Questions

2007-11-06 Thread Monika Krishan
On 11/5/07, Matt Mahoney [EMAIL PROTECTED] wrote: --- Monika Krishan [EMAIL PROTECTED] wrote: Hi All, I'm new to the list, so I'm not sure if these issues have already been raised. 1. Do you think AGIs will eventually reach a point in their evolution when self improvement

Re: [agi] Questions

2007-11-06 Thread Linas Vepstas
On Tue, Nov 06, 2007 at 01:55:43PM -0500, Monika Krishan wrote: questions was the possibility that AGI might come full circle and attempt to emulate human intelligence (HI) in the process of continually improving itself. Google "The Simulation Argument" by Nick Bostrom. There is a 1/3 chance that

Re: [agi] Questions

2007-11-06 Thread Russell Wallace
On 11/6/07, Monika Krishan [EMAIL PROTECTED] wrote: There has been discussion re. the use of AGI to augment human intelligence (HI). Can this augmentation be achieved without determining what HI is capable of? For instance, one wouldn't consider a basic square root calculator something that

Re: [agi] Questions

2007-11-05 Thread Eliezer S. Yudkowsky
Monika Krishan wrote: 2. Would it be a worthwhile exercise to explore what Human General Intelligence, in its present state, is capable of? Nah. -- Eliezer S. Yudkowsky http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence -

Re: [agi] Questions

2007-11-05 Thread Matt Mahoney
--- Monika Krishan [EMAIL PROTECTED] wrote: Hi All, I'm new to the list, so I'm not sure if these issues have already been raised. 1. Do you think AGIs will eventually reach a point in their evolution when self improvement might come to mean attempting to solve previously solved