A successful AGI should have n methods of data-mining its experience
for knowledge, I think. Whether it should also have n ways of
generating those methods, or n sets of ways of generating ways of
generating those methods, and so on, I don't know.

On 8/28/08, j.k. <[EMAIL PROTECTED]> wrote:
> On 08/28/2008 04:47 PM, Matt Mahoney wrote:
>> The premise is that if humans can create agents with above human
>> intelligence, then so can they. What I am questioning is whether agents at
>> any intelligence level can do this. I don't believe that agents at any
>> level can recognize higher intelligence, and therefore cannot test their
>> creations.
>
> The premise is not necessary to arrive at greater than human
> intelligence. If a human can create an agent of equal intelligence, it
> will rapidly become more intelligent (in practical terms) if advances in
> computing technologies continue to occur.
>
> An AGI with an intelligence the equivalent of a 99.9999-percentile human
> might be creatable, recognizable and testable by a human (or group of
> humans) of comparable intelligence. That same AGI at some later point in
> time, doing nothing differently except running 31 million times faster,
> will accomplish one genius-year of work every second. I would argue that
> by any sensible definition of intelligence, we would have a
> greater-than-human intelligence that was not created by a being of
> lesser intelligence.
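For what it's worth, the "31 million times faster = one genius-year of
work per second" figure roughly checks out, since a year is about 31.6
million seconds. A quick sketch (the speedup number is taken from the
message above; everything else is just calendar arithmetic):

```python
# Sanity check on the speedup arithmetic quoted above.
# Assumption: "31 million times faster" is meant to roughly match the
# number of seconds in a year, so ~1 subjective genius-year of work
# elapses per real-time second.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # ~31.56 million seconds
speedup = 31_000_000                      # figure from the quoted message

# Subjective years of work done per real second at that speedup:
years_per_second = speedup / SECONDS_PER_YEAR

print(f"{SECONDS_PER_YEAR:,.0f} seconds per year")
print(f"{years_per_second:.2f} genius-years per real second")
```

So the claim is off by only a couple of percent; the qualitative point
stands either way.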


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com
