Costi, I think your reasoning about the importance of multi-modal 
input for generalisation is right. As for ANNs (in their popular form), they 
were explicitly mentioned only in a few slides of the lecture about Narrow AI 
and why it failed, with some hopes pinned on Schmidhuber's LSTM. Slides 34-36:  
http://research.twenkid.com/agi/2010/Narrow_AI_Review_Why_Failed_MTR.pdf

In a new course they would deserve more attention. :)

Perhaps current ANNs could be used for AGI if glued appropriately with other 
methods (not just a single ANN configuration on TensorFlow, for example), or 
maybe, as Goodfellow suggested on MIT's AI Podcast, if a big enough multi-modal 
dataset is fed in with an appropriate representation (your proposal points in 
that direction as well). However, with current NNs I assume it would be 
massively less efficient than with a better, more clever, or "selective", 
method.
If you lack access to the computing power of the big organizations, working 
their way is impossible, i.e. adding more and more resources while using the 
same, or only slightly adjusted, simple algorithm. You have to either invent 
something smarter that works on the less powerful hardware available to you, 
or join them.

 "Deep Learning" as hierarchical processing at different  resolutions (time 
including, all dimensions), starting from sensory input, interactive adjustment 
(but more so), RL ideas (but hierarchical), multi-modal data etc. are right 
directions regarding the "generality", but the way the mainstream ANN do it yet 
is not what I considered "AGI" either back then and now. 

