Benjamin Goertzel wrote:
I think that if it were dumb enough that it could be treated as a tool,
then it would not be able to understand that it was being used as
a tool.
And if it could not understand that, it would just not have any hope of
being generally intelligent.
You seem to be
Edward W. Porter wrote:
In response to Richard Loosemore’s Post of Sun 11/4/2007 12:15 PM
responding to my prior message of Sat 11/3/2007 3:28 PM
ED’s prior msg For example, humans might for short-sighted personal
gain (such as when using them in weapon systems)
RL Whoaa! You assume that
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930's by John W.
Campbell, Jr. Specifically the three stories collectively
Charles D Hixson wrote:
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930's by John W.
Campbell, Jr. Specifically the three
On Sat, Nov 03, 2007 at 01:17:03PM -0400, Richard Loosemore wrote:
Isn't there a fundamental contradiction in the idea of something that
can be a tool and also be intelligent? What I mean is, is the word
tool usable in this context?
In the 1960's, there was an expression you're just a
Jiri Jelinek wrote:
Richard,
Question: do you believe it will really be possible to build something
that is completely intelligent -- smart enough to understand humans in
such a way as to have conversations on the subtlest of subjects, and
being able to understand the functions of things in
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Charles D Hixson wrote:
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
In parents, sure, those motives exist.
But in an AGI there is no earthly
Jiri Jelinek wrote:
On Nov 3, 2007 1:17 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Isn't there a fundamental contradiction in the idea of something that
can be a tool and also be intelligent?
No. It could be just a sophisticated search engine.
What I mean is, is the word tool usable in
I think that if it were dumb enough that it could be treated as a tool,
then it would not be able to understand that it was being used as
a tool.
And if it could not understand that, it would just not have any hope of
being generally intelligent.
You seem to be assuming this
Richard Loosemore wrote:
Edward W. Porter wrote:
Richard in your November 02, 2007 11:15 AM post you stated:
...
I think you should read some stories from the 1930's by John W.
Campbell, Jr. Specifically the three stories collectively called The
Story of the Machine. You can find them in
Jiri Jelinek wrote:
People will want to enjoy life: yes. And they should, of course.
But so, of course, will the AGIs.
Giving AGI the ability to enjoy = potentially asking for serious
trouble. Why shouldn't AGI just work for us like other tools we
currently have (no joy involved)?
Isn't
the enlightenment and increased hope that
would come with being shown how I am wrong.
Ed Porter
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 1:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Can humans keep superintelligences under control
Can humans keep superintelligences under control -- can
superintelligence-augmented humans compete
Richard Loosemore (RL) wrote the following on Fri 11/2/2007 11:15 AM,
in response to a post by Matt Mahoney.
My comments are preceded by ED
RL This is the worst possible summary of the situation,