Re: [agi] Questions for an AGI

2010-06-28 Thread The Wizard
> This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world, you study the problem and come up with a safe design.  Agreed on this…

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
Anyone who could suggest making crime impossible is SO far removed from reality that it is hard to imagine that they function in society. I cleared this obviously confusing statement up with Matt. What I meant to say was "impossible to get away with in public (in America I guess) because of mass s

Re: [agi] Questions for an AGI

2010-06-28 Thread Travis Lenting
> Agreed on this dangerous thought!  On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney wrote: > This is wishful thinking. Wishful thinking is dangerous. How about instead of hoping that AGI won't destroy the world…

Re: [agi] Questions for an AGI

2010-06-28 Thread Erdal Bektaş
What is the equation, and the solution method, that provides the solution to every physical problem? Or: give me the equation of God, and its solution. (lol)  On Mon, Jun 28, 2010 at 6:02 PM, David Jones wrote: > Crime has its purpose just like many other unpleasant behaviors. When government is reasonably…

Re: [agi] Questions for an AGI

2010-06-28 Thread David Jones
Crime has its purpose just like many other unpleasant behaviors. When government is reasonably good, crime causes problems. But, when government is bad, crime is good. Given the chance, I might have tried to assassinate Hitler. Yet, assassination is a crime. On Mon, Jun 28, 2010 at 10:51 AM, Steve

Re: [agi] Questions for an AGI

2010-06-28 Thread Steve Richfield
Ian, Travis, etc.  On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker wrote: > On 27 June 2010 22:21, Travis Lenting wrote: >> I think crime has to be made impossible even for enhanced humans first. > If our enhancement were Internet-based, it could be turned off if we were about to commit…

Re: [agi] Questions for an AGI

2010-06-28 Thread Ian Parker

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
…gray goo might be collectively vastly more intelligent than humanity, if that makes you feel any better. -- Matt Mahoney, matmaho...@yahoo.com  Quoting Travis Lenting (Sun, June 27, 2010 6:53:14 PM): Everything has to happen…

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
…are implemented, it's either a matter of rewiring our neurons or rewriting our software. Is that better than a gray goo accident? -- Matt Mahoney, matmaho...@yahoo.com  Quoting Travis Lenting…

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
Quoting Travis Lenting (Sun, June 27, 2010 5:21:24 PM): I don't like the idea of enhancing human intelligence before the singularity. I think crime has to be made impossible even for enhanced humans first…

Re: [agi] Questions for an AGI

2010-06-27 Thread Travis Lenting
…safe design. -- Matt Mahoney, matmaho...@yahoo.com  Quoting rob levy (Sat, June 26, 2010 1:14:22 PM): why should AGIs give a damn about us?

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
…billions of humans that own and maintain control over it. An example would be the internet. -- Matt Mahoney, matmaho...@yahoo.com  Quoting rob levy (Sun, June 27, 2010 2:37:15 PM): I definitely agree, however…

Re: [agi] Questions for an AGI

2010-06-27 Thread The Wizard
…Quoting rob levy (Sat, June 26, 2010 1:14:22 PM): why should AGIs give a damn about us?  I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take…

Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
…that AGI won't destroy the world, you study the problem and come up with a safe design. -- Matt Mahoney, matmaho...@yahoo.com  Quoting rob levy (Sat, June 26, 2010 1:14:22 PM)…

Re: [agi] Questions for an AGI

2010-06-27 Thread Matt Mahoney
…June 26, 2010 1:14:22 PM, Subject: Re: [agi] Questions for an AGI: > why should AGIs give a damn about us?  I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious opportunity to create astonishment…

Re: [agi] Questions for an AGI

2010-06-26 Thread rob levy
> why should AGIs give a damn about us?  I like to think that they will give a damn because humans have a unique way of experiencing reality and there is no reason to not take advantage of that precious opportunity to create astonishment or bliss. If anything is important in the universe, it's…

Re: [agi] Questions for an AGI

2010-06-26 Thread Travis Lenting
Well, the existence of different contingencies is one reason I don't want the first one modeled after a brain. I would like it to be a bit simpler, in the sense that it only tries to answer questions from as scientific a perspective as possible. To me it seems like there isn't someone stable eno…

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Fellow Cylons, I sure hope SOMEONE is assembling a list from these responses, because this is exactly the sort of stuff that I (or someone) would need to run a Reverse Turing Test (RTT) competition. Steve

Re: [agi] Questions for an AGI

2010-06-26 Thread Steve Richfield
Travis, The AGI world seems to be cleanly divided into two groups: 1. People (like Ben) who feel as you do, and aren't at all interested or willing to look at the really serious lapses in logic that underlie this approach. Note that there is a similar belief in Buddhism, akin to the "prisoners d

Re: [agi] Questions for an AGI

2010-06-25 Thread Ian Parker
One of the first things in AGI is to produce software which is self-monitoring and which will correct itself when it is not working. For over a day now I have been unable to access Google Groups. The Internet access simply loops and does not get anywhere. If Google had any true AGI it would :- a)…
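As a rough illustration of the self-monitoring, self-correcting behavior Ian describes, here is a minimal sketch (not from the thread) of a watchdog that probes a service, retries with backoff, and falls back when access keeps looping. The URL, retry budget, and fallback action are assumptions for illustration only.

import time
import urllib.request

SERVICE_URL = "https://groups.google.com/"  # hypothetical endpoint to monitor
MAX_ATTEMPTS = 5                            # assumed retry budget

def service_is_up(url, timeout=10.0):
    """Probe the service; treat any network error as 'down'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers URLError, timeouts, DNS failures
        return False

def monitor_and_recover():
    """Self-monitoring loop: retry with exponential backoff, then fall back."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if service_is_up(SERVICE_URL):
            print("service reachable; nothing to correct")
            return
        wait = 2 ** attempt  # back off between probes
        print(f"attempt {attempt} failed; retrying in {wait}s")
        time.sleep(wait)
    # "Correct itself when it is not working": report and switch to a fallback.
    print("service still unreachable; switching to a (hypothetical) fallback mirror")

if __name__ == "__main__":
    monitor_and_recover()

Obviously a toy next to what Ian has in mind, but it shows the basic detect-retry-recover loop that "self-monitoring" implies.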

Re: [agi] Questions for an AGI

2010-06-25 Thread Travis Lenting
I hope I don't misrepresent him, but I agree with Ben (at least my interpretation) when he said, "We can ask it questions like, 'how can we make a better A(G)I that can serve us in more different ways without becoming dangerous'...It can help guide us along the path to a positive singularity." I'm…

Re: [agi] Questions for an AGI

2010-06-24 Thread Matt Mahoney
Am I a human or am I an AGI? Dana Ream wrote: > How do you work? Just like you designed me to. deepakjnath wrote: > "What should I ask if I could ask AGI anything?" The Wizard wrote: > "What should I ask an agi" You don't need to ask me anything. I will do all of your thinking for you. Flor

Re: [agi] Questions for an AGI

2010-06-24 Thread David Jones
I get the impression from this question that you think an AGI is some sort of all-knowing, idealistic invention. It is sort of like asking "if you could ask the internet anything, what would you ask it?". Uhhh, lots of stuff, like how do I get wine stains out of white carpet :). AGIs will not be a…

Re: [agi] Questions for an AGI

2010-06-24 Thread A. T. Murray
Carlos A Mejia invited questions for an AGI! > If you could ask an AGI anything, what would you ask it?  Who killed Donald Young, a gay sex partner of U.S. President Barack Obama, on December 24, 2007, in Obama's hometown of Chicago, when it began to look like Obama could actually be elected p…

Re: [agi] Questions for an AGI

2010-06-24 Thread The Wizard
I would ask the agi "What should I ask an agi" On Thu, Jun 24, 2010 at 4:56 AM, Florent Berthet wrote: > "Tell me what I need to know, by order of importance." >*agi* | Archives > | > Modify

Re: [agi] Questions for an AGI

2010-06-24 Thread Florent Berthet
"Tell me what I need to know, by order of importance." --- agi Archives: https://www.listbox.com/member/archive/303/=now RSS Feed: https://www.listbox.com/member/archive/rss/303/ Modify Your Subscription: https://www.listbox.com/member/?member_id=8660244&

Re: [agi] Questions for an AGI

2010-06-23 Thread deepakjnath
I would ask "What should I ask if I could ask AGI anything?" On Thu, Jun 24, 2010 at 11:34 AM, The Wizard wrote: > > If you could ask an AGI anything, what would you ask it? > -- > Carlos A Mejia > > Taking life one singularity at a time. > www.Transalchemy.com >*agi* | Archives

RE: [agi] Questions for an AGI

2010-06-23 Thread Dana Ream
How do you work?  From: The Wizard [mailto:key.unive...@gmail.com], Sent: Wednesday, June 23, 2010 11:05 PM, To: agi, Subject: [agi] Questions for an AGI: If you could ask an AGI anything, what would you ask it? -- Carlos A Mejia, Taking life one singularity at a time. www.Transalchemy…

RE: [agi] Questions

2007-11-07 Thread Edward W. Porter
From: Monika Krishan [mailto:[EMAIL PROTECTED]], Sent: Wednesday, November 07, 2007 10:20 PM, To: agi@v2.listbox.com, Subject: Re: [agi] Questions. On Nov 7, 2007 8:46 AM, Edward W. Porter <[EMAIL PROTECTED]> wrote: It is much easier to think how superhuman intelligences will outshine us in the perfo…

Re: [agi] Questions

2007-11-07 Thread Monika Krishan
-----Original Message----- From: Russell Wallace [mailto:[EMAIL PROTECTED]], Sent: Tuesday, November 06, 2007 6:22 PM, To: agi@v2.listbox.com, Subject: Re: [agi] Questions. On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote: > So when speaking of augmenta…

Re: [agi] Questions

2007-11-07 Thread Linas Vepstas
On Tue, Nov 06, 2007 at 11:22:09PM +0000, Russell Wallace wrote: > the world's first programmer never did get her hands on a working computer.  But her algos did have the world's first programming bug :-) --linas

RE: [agi] Questions

2007-11-07 Thread Edward W. Porter
…se many of them involve abilities that, because they are outside our competence, we have not yet thought of. Ed Porter  -----Original Message----- From: Russell Wallace [mailto:[EMAIL PROTECTED]], Sent: Tuesday, November 06, 2007 6:22 PM, To: agi@v2.listbox.com, Subject: Re: [agi] Questions. On 11/6/…

Re: [agi] Questions

2007-11-06 Thread Russell Wallace
On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote: > So when speaking of augmentation, a clarification would have to be made as to whether the enhancement refers to human competence or human performance... and hence the related issue of "discovering human competencies".  Ah. *nods* Well, l…

Re: [agi] Questions

2007-11-06 Thread Monika Krishan
On 11/6/07, Russell Wallace <[EMAIL PROTECTED]> wrote: > On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote: >> There has been discussion re. the use of AGI to augment human intelligence (HI). Can this augmentation be achieved without determining what HI is capable of? For instance,…

Re: [agi] Questions

2007-11-06 Thread Russell Wallace
On 11/6/07, Monika Krishan <[EMAIL PROTECTED]> wrote: > There has been discussion re. the use of AGI to augment human intelligence (HI). Can this augmentation be achieved without determining what HI is capable of? For instance, one wouldn't consider a basic square root calculator something th…

Re: [agi] Questions

2007-11-06 Thread Linas Vepstas
On Tue, Nov 06, 2007 at 01:55:43PM -0500, Monika Krishan wrote: > questions was the possibility that AGI might come full circle and attempt to emulate human intelligence (HI) in the process of continually improving itself.  Google "The simulation argument", Nick Bostrom. There is a 1/3 chance t…

Re: [agi] Questions

2007-11-06 Thread Monika Krishan
On 11/5/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- Monika Krishan <[EMAIL PROTECTED]> wrote: >> Hi All, I'm new to the list, so I'm not sure if these issues have already been raised. 1. Do you think AGIs will eventually reach a point in their evolution when…

Re: [agi] Questions

2007-11-06 Thread YKY (Yan King Yin)
If it's all so predictable, why don't you keep that to yourselves?  On 11/6/07, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote: > Monika Krishan wrote: >> 2. Would it be a worthwhile exercise to explore what Human General Intelligence, in its present state, is capable of? > Nah.

Re: [agi] Questions

2007-11-05 Thread Matt Mahoney
--- Monika Krishan <[EMAIL PROTECTED]> wrote: > Hi All, I'm new to the list, so I'm not sure if these issues have already been raised. 1. Do you think AGIs will eventually reach a point in their evolution when "self improvement" might come to mean attempting to "solve previously…

Re: [agi] Questions

2007-11-05 Thread Eliezer S. Yudkowsky
Monika Krishan wrote: > 2. Would it be a worthwhile exercise to explore what Human General Intelligence, in its present state, is capable of?  Nah. -- Eliezer S. Yudkowsky, http://singinst.org/ Research Fellow, Singularity Institute for Artificial Intelligence