Hello, I'm a singularity/AGI enthusiast and have been reading about the subject
for a couple of years.

I hope it's OK to ask general questions on this forum (i.e. that it isn't
specifically for OpenCog/Novamente).

What do you think of Geoff Hinton's wake-sleep neural networks, as described
in his paper "To recognize shapes, first learn to generate images" and his
talk on it at Google?
http://video.google.ca/videoplay?docid=228784531481853811&ei=8nGiSM_dO4eW-QHQwYEU&q=geoff+hinton&vt=lf
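In case it helps make the question concrete, here's my understanding of the wake-sleep idea as a toy sketch (my own illustration, not Hinton's code; all the sizes, names, and the single-layer setup are just assumptions for the example): in the wake phase the recognition weights infer a hidden state from data and the generative weights are trained to reconstruct the data from it; in the sleep phase the model generates a "fantasy" and the recognition weights are trained to recover its hidden cause.

```python
import random, math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    return 1 if random.random() < p else 0

# Toy one-hidden-layer model (illustrative only): recognition weights R
# (visible -> hidden) and generative weights G (hidden -> visible).
n_v, n_h, lr = 4, 3, 0.1
R = [[0.0] * n_h for _ in range(n_v)]
G = [[0.0] * n_v for _ in range(n_h)]

def wake_sleep_step(v):
    # Wake phase: use recognition weights to infer a hidden state from the
    # data, then nudge the generative weights so that hidden state
    # reconstructs the input better.
    h = [sample(sigmoid(sum(v[i] * R[i][j] for i in range(n_v))))
         for j in range(n_h)]
    v_prob = [sigmoid(sum(h[j] * G[j][i] for j in range(n_h)))
              for i in range(n_v)]
    for j in range(n_h):
        for i in range(n_v):
            G[j][i] += lr * h[j] * (v[i] - v_prob[i])

    # Sleep phase: dream up a fantasy from the generative model, then nudge
    # the recognition weights to recover the hidden cause of the fantasy.
    h_f = [sample(0.5) for _ in range(n_h)]
    v_f = [sample(sigmoid(sum(h_f[j] * G[j][i] for j in range(n_h))))
           for i in range(n_v)]
    h_prob = [sigmoid(sum(v_f[i] * R[i][j] for i in range(n_v)))
              for j in range(n_h)]
    for i in range(n_v):
        for j in range(n_h):
            R[i][j] += lr * v_f[i] * (h_f[j] - h_prob[j])

for _ in range(100):
    wake_sleep_step([1, 1, 0, 0])
```

The point I take from the paper is that neither pass needs a global error signal: each set of weights is trained locally from what the other set produces.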

Reading this, I get the view of AI as basically neural networks, where each
individual perceptron could be any of a number of algorithms (decision tree,
random forest, SVM, etc.).
I also get the view that academics such as Hinton are trying to find ways of
automatically learning the network, whereas there could also be a parallel
track of "engineering" the network, manually creating it perceptron by
perceptron, in the way Rodney Brooks advocates a "bottom-up" subsumption
architecture.
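To show what I mean by "engineering" such a network, here's a toy sketch (entirely my own, hypothetical example; the node functions and wiring are made up): each node is a different kind of decision rule, and they're wired together by hand, with a higher node able to inhibit a lower one in the subsumption style.

```python
# Hand-wired "network" whose nodes are heterogeneous rules rather than
# uniform learned perceptrons. All names and weights are illustrative.

def stump(x):
    # Decision-stump-style node: threshold on one feature.
    return 1 if x[0] > 0.5 else 0

def linear(x):
    # Perceptron-style node with fixed, hand-chosen weights.
    return 1 if 0.6 * x[1] - 0.2 * x[2] > 0 else 0

def veto(a, b):
    # Hand-engineered combining node: the second input can suppress the
    # first, loosely like a subsumption layer inhibiting a lower behaviour.
    return int(a and not b)

def network(x):
    return veto(stump(x), linear(x))

print(network([0.9, 0.1, 0.8]))  # -> 1
```

Here nothing is learned at all; the question is whether a network built this way, node by node, could scale, or whether automatic learning of the whole structure is essential.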

How does OpenCog relate to the above viewpoint? Is there something
fundamentally flawed in the above as an approach to achieving AGI?



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/