Actually, Ray does understand AGI pretty well. He understands that Watson, internally, is architected differently from an AGI ... and he understands that Watson, unlike an AGI, cannot learn and generalize beyond its original domain.
However, he believes that the technological infrastructure needed to create a Watson has a lot of overlap with that needed to create an AGI. And he is right about that...

-- Ben G

On Fri, May 3, 2013 at 8:18 PM, Mike Tintner <[email protected]> wrote:

> Q: How do you gauge if strong A.I. is a few years away?
>
> K: Developments such as Watson should give us confidence that we are on track.
>
> So we know for sure that he doesn’t understand AGI – and his Singularity is equally baseless.

-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche

-------------------------------------------
AGI Archives: https://www.listbox.com/member/archive/303/=now
