A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should also have n ways of generating
those methods, or n sets of ways of generating ways of generating those
methods, and so on, I don't know.
On 8/28/08, j.k. [EMAIL PROTECTED] wrote:
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
--Abram
On Thu, Aug 28, 2008 at 9:04 PM, j.k.
On 08/29/2008 10:09 AM, Abram Demski wrote:
I like that argument.
Also, it is clear that humans can invent better algorithms to do
specialized things. Even if an AGI couldn't think up better versions
of itself, it would be able to do the equivalent of equipping itself
with fancy calculators.
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human
intelligence, then so can they. What I am questioning is whether agents at
any intelligence level can do this. I don't believe that agents at any
It seems that the debate over recursive self improvement depends on what you
mean by improvement. If you define improvement as intelligence as defined by
the Turing test, then RSI is not possible because the Turing test does not test
for superhuman intelligence. If you mean improvement as more
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k. [EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99th-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable intelligence. That same AGI at some later
2008/8/29 j.k. [EMAIL PROTECTED]:
On 08/29/2008 01:29 PM, William Pearson wrote:
2008/8/29 j.k. [EMAIL PROTECTED]:
An AGI with an intelligence the equivalent of a 99th-percentile human
might be creatable, recognizable and testable by a human (or group of
humans) of comparable
On 08/29/2008 03:14 PM, William Pearson wrote:
2008/8/29 j.k. [EMAIL PROTECTED]:
... The human-level AGI running a million
times faster could simultaneously interact with tens of thousands of
scientists at their pace, so there is no reason to believe it need be
starved for interaction to the
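[The quoted speedup claim admits a quick back-of-the-envelope check; the
10,000-conversation count below is an assumed reading of "tens of
thousands", not a figure from the email:]

```python
# Sketch of the quoted scenario: an AGI running a million times faster
# than real time, with its attention split across 10,000 conversations.
speedup = 1_000_000      # subjective speed multiplier from the email
scientists = 10_000      # assumed reading of "tens of thousands"

# Subjective seconds available per real second, per conversation partner.
subjective_per_partner = speedup / scientists
print(subjective_per_partner)  # → 100.0
```

[Even spread that thin, each scientist's real-time second leaves the AGI a
hundred subjective seconds to think, which is the point being made above.]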
Here is Vernor Vinge's original essay on the singularity.
http://mindstalk.net/vinge/vinge-sing.html
The premise is that if humans can create agents with above human intelligence,
then so can they. What I am questioning is whether agents at any intelligence
level can do this. I don't believe
Thanks. But like I said, airy generalities.
That machines can become faster and faster at computations and accumulating
knowledge is certain. But that's narrow AI.
For general intelligence, you have to be able first to integrate as well as
accumulate knowledge. We have learned vast amounts
On 08/28/2008 04:47 PM, Matt Mahoney wrote:
The premise is that if humans can create agents with above human intelligence,
then so can they. What I am questioning is whether agents at any intelligence
level can do this. I don't believe that agents at any level can recognize
higher