Yes, cortical columns: general representations for objects in the real world. Just a representation space for a feature. A cat's rear view, side view, front view, bottom view, and overhead view all light up the 'cat node'. Just a representation space for activation: a general, robust feature node that will recognize a cat no matter the angle, lighting, species, size, rotation, etc. Hence the representation space. Of course, the brain may simply cope with any lighting because it uses shape outlines instead! So it uses a certain representation space that can be activated. Like Word2Vec / GloVe, where the text word 'cat' lives in an e.g. 1000-dimensional space and is related to every other word by some amount, so to activate it you can present other, closely related words instead; many different words will light it up, e.g. cat overhead view / rear view / front view = cat, tiger, dog, kitten, animal, lurks at night, eats, runs. The point is that certain things light up that node, and word2vec can accomplish just that. Even though it has never been told cat = kitten, it can recognize that they occupy almost the same region of the embedding space!
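The "light up the node" idea above can be sketched with cosine similarity over embedding vectors. A minimal sketch, assuming toy hand-made 4-dimensional vectors (real Word2Vec/GloVe vectors are learned from co-occurrence statistics and have hundreds of dimensions; the words and values here are illustrative, not from any trained model):

```python
import math

# Toy embedding table (made-up values for illustration only).
embeddings = {
    "cat":    [0.90, 0.80, 0.10, 0.00],
    "kitten": [0.85, 0.75, 0.20, 0.05],
    "tiger":  [0.70, 0.90, 0.30, 0.10],
    "car":    [0.00, 0.10, 0.90, 0.80],
}

def cosine(u, v):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def activation(node, cue):
    """How strongly a cue word 'lights up' the node's representation."""
    return cosine(embeddings[node], embeddings[cue])

# 'kitten' activates the 'cat' node far more than 'car' does,
# even though no explicit cat = kitten link was ever stored.
print(activation("cat", "kitten") > activation("cat", "car"))  # True
```

Nothing in the table says cat = kitten; the similarity falls out of the two vectors pointing in nearly the same direction, which is exactly the "almost same dimensional space" point above.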
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1a1cab406375d46f-Mda115231c7495a8385b6128a