On 8/13/08, rick the ponderer <[EMAIL PROTECTED]> wrote:
>
> Reading this, I get the view of AI as basically neural networks, where
> each individual perceptron could be any of a number of algorithms
> (decision tree, random forest, SVM, etc.).
> I also get the view that academics such as Hinton are trying to find
> ways of automatically learning the network, whereas there could also be
> a parallel track of "engineering" the network, manually creating it
> perceptron by perceptron, in the way Rodney Brooks advocates a "bottom
> up" subsumption architecture.
>
> How does OpenCog relate to the above viewpoint? Is there something
> fundamentally flawed in the above as an approach to achieving AGI?

NNs *may* be inadequate for AGI, because logic-based learning seems to be,
at least for some datasets, more efficient than NN learning (including
variants such as SVMs).  This has been my intuition for some time, and
recently I found a book that explores the issue in more detail.  See Chris
Thornton, "Truth from Trash: How Learning Makes Sense", MIT Press, 2000,
or some of his papers on his web site.

To use Thornton's example: he demonstrated that a "checkerboard" pattern
can be learned easily using logic, but it will drive an NN learner crazy.
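For concreteness, here is a toy sketch of the contrast (my own
illustration in Python with scikit-learn, not Thornton's actual
experiment; the board size and sample counts are arbitrary).  The
checkerboard concept is a one-line parity rule, exact by construction,
while a kernel machine has to approximate it from samples:

  # Toy checkerboard: a point's label is the parity of the cell it lands in.
  import numpy as np
  from sklearn.svm import SVC

  rng = np.random.default_rng(0)
  X = rng.uniform(0, 8, size=(400, 2))                 # points on an 8x8 board
  y = (X[:, 0].astype(int) + X[:, 1].astype(int)) % 2  # checkerboard labels

  def rule(x):  # the "logical" learner: the concept itself, stated directly
      return (int(x[0]) + int(x[1])) % 2

  # The statistical learner: an RBF SVM fit to half the data.
  svm = SVC(kernel="rbf").fit(X[:200], y[:200])
  print("SVM accuracy :", (svm.predict(X[200:]) == y[200:]).mean())
  print("rule accuracy:", (np.array([rule(x) for x in X[200:]]) == y[200:]).mean())

The rule scores 100% by construction; how well the SVM does depends on how
many samples it sees per cell, and it degrades as the board gets finer,
which is (as I understand it) Thornton's point.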

This doesn't mean the NN approach is hopeless, but it does face some
challenges.  Or maybe this intuition is wrong (i.e., do such heavily
"logical" datasets occur in real life?).

YKY


