On Thu, Sep 18, 2008 at 9:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Thu, 9/18/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >I believe there is a qualitative difference between AGI and narrow AI, so that
> no tractably small collection of computationally feasible narrow-AI systems (like
> Google etc.) is going to achieve general intelligence at the human level or
> anywhere near it.  I think you need an AGI architecture & approach that is
> fundamentally different from narrow-AI approaches...
>
> Well, yes, and that difference is a distributed index, which has yet to be
> built.


I very strongly disagree with that sentence ... I do not think
that a distributed index is a sufficient architecture for powerful AGI at
the human level, beyond it, or anywhere near it...


> Also, what do you mean by "human level intelligence"? What test do you use?
> My calculator already surpasses human level intelligence depending on the
> tests I give it.


Yes, and my dog surpasses human level intelligence at finding poop in a
grassy field ... so what?? ;-)

If I need to specify a test right now, I'll just use standard IQ tests as
a reference, or else the Turing Test.

But I don't think these tests are ideal by any means...

One of the items on my list for this fall is articulating a clear set
of metrics for evaluating developing, learning AGI systems as they move
toward human-level AI ...

-- Ben G



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/