>
> To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
> - narrow AI. Looking for the one right prediction/explanation is narrow
> AI. Being able to generate more and more possible explanations, which could
> all be valid, is AGI. The former is rational, uniform thinking. The latter
> is creative, polyform thinking. Or, if you prefer, it's convergent vs
> divergent thinking, the difference between which still seems to escape
> Dave & Ben & most AGI-ers.
>

You are misrepresenting my approach, which is not based on looking for "the
one right prediction/explanation."

OpenCog relies heavily on evolutionary learning and probabilistic inference,
both of which naturally generate a massive number of alternative possible
explanations in nearly every instance...
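The point about evolutionary learning maintaining many alternative explanations can be illustrated with a toy sketch. This is not OpenCog code; the function names, the numeric "hypotheses," and the fitness function are all hypothetical, chosen only to show that an evolutionary search naturally returns a ranked *population* of candidates rather than a single answer.

```python
import random

def evolve_hypotheses(fitness, mutate, population, generations=50, keep=10):
    """Evolutionary search over candidate hypotheses.

    Returns the surviving population ranked by fitness -- a set of
    alternative explanations, not one "right" answer.
    """
    for _ in range(generations):
        # Generate a mutated variant of each current candidate.
        offspring = [mutate(h) for h in population]
        # Retain the `keep` fittest candidates from parents + offspring,
        # deliberately keeping multiple alternatives alive.
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:keep]
    return population

# Toy example: hypotheses are integers; fitness rewards closeness to 42.
random.seed(0)
result = evolve_hypotheses(
    fitness=lambda h: -abs(h - 42),
    mutate=lambda h: h + random.choice([-3, -1, 1, 3]),
    population=[0, 10, 100],
)
```

After the run, `result` holds ten ranked candidates, any of which remain available for further inference, rather than a single collapsed prediction.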

-- Ben G



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
