2008/8/29 j.k. <[EMAIL PROTECTED]>:
> On 08/28/2008 04:47 PM, Matt Mahoney wrote:
>>
>> The premise is that if humans can create agents with above-human
>> intelligence, then those agents can do the same. What I am questioning is
>> whether agents at any intelligence level can do this. I don't believe that
>> agents at any level can recognize higher intelligence, and therefore they
>> cannot test their creations.
>
> The premise is not necessary to arrive at greater than human intelligence.
> If a human can create an agent of equal intelligence, it will rapidly become
> more intelligent (in practical terms) if advances in computing technologies
> continue to occur.
>
> An AGI with an intelligence the equivalent of a 99.9999-percentile human
> might be creatable, recognizable and testable by a human (or group of
> humans) of comparable intelligence. That same AGI at some later point in
> time, doing nothing differently except running 31 million times faster, will
> accomplish one genius-year of work every second.

Will it? It might be starved of interaction with the world and with
other intelligences, and so be far less productive than something
working at normal speed.

For most learning systems (AIXI excepted), the time it takes to learn
something is constrained not by a lack of processing power but by the
speed at which experiments can be run.

  Will Pearson


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/