Have a question...

These days more and more people are interested in deep learning, where
distributed representations may be relevant.

In a distributed representation, each concept is represented by a pattern
of activity over many units, and each unit takes part in representing many
concepts.  In my mind I visualize the concepts as a partition of the
vector space:

There is also the "principle of superposition," which says that two
concepts can share the same vector-space representation via vector
addition.

But here comes a problem: if we have 3 propositions, say
   P1 = it rained yesterday
   P2 = Obama is president of the US
   P3 = the moon is made of cheese
and if there exists a linear dependence among them, say:
   a3 P3 = a1 P1 + a2 P2
where a1, a2, a3 are nonzero scalars, that would create a relation between
apparently unrelated sentences, and would lead to errors.

So it seems that the principle of superposition cannot hold with
distributed representations unless the dimension of the vector space is
at least as large as the number of concepts / propositions / objects that
need to be represented.
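Conversely, when the dimension is at least the number of propositions, the concept vectors can be chosen mutually orthogonal (or, in practice, as random high-dimensional vectors that are nearly orthogonal), and the spurious match vanishes. A minimal sketch with hypothetical 3-D vectors:

```python
# With dimension >= number of propositions, the vectors can be chosen
# mutually orthogonal, so an unstored proposition does not match.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

q1 = (1.0, 0.0, 0.0)   # P1
q2 = (0.0, 1.0, 0.0)   # P2
q3 = (0.0, 0.0, 1.0)   # P3, orthogonal to q1 and q2

memory3 = tuple(a + b for a, b in zip(q1, q2))   # superpose P1 and P2

print(dot(memory3, q1))   # prints 1.0 -- stored proposition recovered
print(dot(memory3, q3))   # prints 0.0 -- no spurious relation to P3
```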

Maybe my understanding above is incorrect?  What's the problem here...?

-- 
*YKY*
*"The ultimate goal of mathematics is to eliminate any need for intelligent
thought"* -- Alfred North Whitehead



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
