Then why does he give Watson as his only example of progress?
There is not the slightest element of AGI progress – there is no program that 
can be generative, that can “learn and generalize beyond its original domain,” 
as you put it.
Any discussions or predictions of AGI have to *start* from an explanation of 
that fact. None do.  
From: Ben Goertzel 
Sent: Friday, May 03, 2013 2:07 PM
To: AGI 
Subject: Re: [agi] Kurzweil irrelevant


Actually Ray does understand AGI pretty well, and he understands that Watson, 
internally, is architected differently from an AGI ... and he understands that 
Watson, unlike an AGI, cannot learn and generalize beyond its original domain.

However, he believes that the technological infrastructure needed to create a 
Watson has a lot of overlap with that needed to create an AGI.  And he is 
right about that...

-- Ben G


On Fri, May 3, 2013 at 8:18 PM, Mike Tintner <[email protected]> wrote:

  Q: How do you gauge if strong A.I. is a few years away?

  K: Developments such as Watson should give us confidence that we are on track.

  So we know for sure that he doesn’t understand AGI – and his Singularity is 
equally baseless.
        AGI | Archives  | Modify Your Subscription  





-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche




