Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-05 Thread Richard Loosemore
Benjamin Goertzel wrote: I think that if it were dumb enough that it could be treated as a tool, then it would have to not be able to understand that it was being used as a tool. And if it could not understand that, it would just not have any hope of being generally intelligent. You seem to be

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Richard Loosemore
Edward W. Porter wrote: In response to Richard Loosemore’s post of Sun 11/4/2007 12:15 PM responding to my prior message of Sat 11/3/2007 3:28 PM ED’s prior msg For example, humans might for short-sighted personal gain (such as when using them in weapon systems) RL Whoaa! You assume that

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson
Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three stories collectively

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-05 Thread Linas Vepstas
On Sat, Nov 03, 2007 at 01:17:03PM -0400, Richard Loosemore wrote: Isn't there a fundamental contradiction in the idea of something that can be a tool and also be intelligent? What I mean is, is the word tool usable in this context? In the 1960's, there was an expression you're just a

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-05 Thread Richard Loosemore
Jiri Jelinek wrote: Richard, Question: do you believe it will really be possible to build something that is completely intelligent -- smart enough to understand humans in such a way as to have conversations on the subtlest of subjects, and being able to understand the functions of things in

Re: [agi] Can humans keep superintelligences under control

2007-11-05 Thread Charles D Hixson
Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Charles D Hixson wrote: Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... In parents, sure, those motives exist. But in an AGI there is no earthly

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-04 Thread Richard Loosemore
Jiri Jelinek wrote: On Nov 3, 2007 1:17 PM, Richard Loosemore [EMAIL PROTECTED] wrote: Isn't there a fundamental contradiction in the idea of something that can be a tool and also be intelligent? No. It could be just a sophisticated search engine. What I mean is, is the word tool usable in

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-04 Thread Benjamin Goertzel
I think that if it were dumb enough that it could be treated as a tool, then it would have to not be able to understand that it was being used as a tool. And if it could not understand that, it would just not have any hope of being generally intelligent. You seem to be assuming this

Re: [agi] Can humans keep superintelligences under control

2007-11-04 Thread Charles D Hixson
Richard Loosemore wrote: Edward W. Porter wrote: Richard in your November 02, 2007 11:15 AM post you stated: ... I think you should read some stories from the 1930's by John W. Campbell, Jr. Specifically the three stories collectively called The Story of the Machine. You can find them in

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-03 Thread Richard Loosemore
Jiri Jelinek wrote: People will want to enjoy life: yes. And they should, of course. But so, of course, will the AGIs. Giving AGI the ability to enjoy = potentially asking for serious trouble. Why shouldn't AGI just work for us like other tools we currently have (no joy involved)? Isn't

RE: [agi] Can humans keep superintelligences under control

2007-11-03 Thread Edward W. Porter
the enlightenment and increased hope that would come with being shown how I am wrong. Ed Porter -Original Message- From: Richard Loosemore [mailto:[EMAIL PROTECTED] Sent: Saturday, November 03, 2007 1:17 PM To: agi@v2.listbox.com Subject: Re: [agi] Can humans keep superintelligences under control

[agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-02 Thread Edward W. Porter
Can humans keep superintelligences under control -- can superintelligence-augmented humans compete Richard Loosemore (RL) wrote the following on Fri 11/2/2007 11:15 AM, in response to a post by Matt Mahoney. My comments are preceded by ED RL This is the worst possible summary of the situation,