for years.
I would bet that these identities already exist. What happens when there are
many, many of them? Would we even know?
John
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
Sent: Saturday, August 07, 2010 8:17 PM
To: agi
Subject: Re: [agi] Epiphany - Statements of Stupidity
John,
You brought up some interesting points...
On Fri, Aug 6, 2010 at 10:54 PM, John G. Rose johnr...@polyplexic.com wrote:
-----Original Message-----
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com
wrote:
I wanted to see what other people's views were. My own view of the risks is
as follows. If the Turing Machine is built to be as isomorphic with humans
as possible, it would be incredibly dangerous. Indeed I feel that the
biological model is far more dangerous than the mathematical.
If on the other
Ian,
I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites,
STEVE: I have posted plenty about statements of ignorance, our probable
inability to comprehend what an advanced intelligence might be thinking,
What will be the SIMPLEST thing that will mark the first sign of AGI, given
that there are zero, but zero, examples of AGI?
Don't you think it would
Mike,
Your reply flies in the face of two obvious facts:
1. I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons, as it is directly applicable to Dr. Eliza, and because it casts a
shadow on future
Sent: Fri, August 6, 2010 5:57:33 AM
Subject: Re: [agi] Epiphany - Statements of Stupidity
STEVE: I have posted plenty about statements of ignorance, our probable
inability to comprehend what an advanced intelligence might be thinking,
What will be the SIMPLEST thing that will mark the first
statements of stupidity - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text. Language is both
intelligence enhancing and limiting. Human language is a protocol between
agents. So there is minimalist data transfer; "I had no choice but to ..."
is a
intelligence has to be developed from very simple origins, step by
step, in order to actually understand these activities.
From: Steve Richfield
Sent: Friday, August 06, 2010 4:52 PM
To: agi
Subject: Re: [agi] Epiphany - Statements of Stupidity
Mike,
Your reply flies in the face of two obvious facts:
1
I think that some quite important philosophical questions are raised by
Steve's posting. I don't know, BTW, how you got it; I monitor all
correspondence to the group, and I did not see it.
The Turing test is not in fact a test of intelligence; it is a test of
similarity with the human. Hence for a
John,
Congratulations, as your response was the only one that was on topic!!!
On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com wrote:
statements of stupidity - some of these are examples of cramming
sophisticated thoughts into simplistic compressed text.
Definitely, as
-----Original Message-----
From: Ian Parker [mailto:ianpark...@gmail.com]
The Turing test is not in fact a test of intelligence; it is a test of
similarity with
the human. Hence for a machine to be truly Turing it would have to make
mistakes. Now any useful system will be made as