I appreciated the links to the transformers. I found a slightly more readable 
one and see that the first step of transformer use in NLP is to turn words 
into embedding vectors and positional vectors that indicate more than just 
co-occurrence. I appreciate that. But then the phraseology becomes confused 
when the neural networks are said to create vectors. Are these traditional 
vectors or neural-net vectors? I have no way of telling what the author is 
getting at. I would have to examine a number of sources to decode what the 
authors are saying, because each author outlines the process in their own way.

I am not excited about word vectors (even in the traditional sense of the 
term). I think we human beings learn using component-based conceptualization: 
'this situation' is like 'that situation', and I can apply 'that operation' 
to 'that situation', so does that mean I can apply 'that operation' to 'this 
situation'? This is a kind of substitution: a component o that works in 
situation a is carried over to some other situation b that has some 
similarity to situation a. But I think transformers and attention are a step 
in the right direction.
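For what it's worth, that first step can be sketched concretely. The sketch below uses the sinusoidal positional encoding from the original Transformer paper; the tiny vocabulary and the random embedding table are made-up stand-ins (in a real model the embedding weights are learned during training):

```python
import numpy as np

d_model = 8  # embedding dimension (tiny, for illustration)
vocab = {"the": 0, "cat": 1, "sat": 2}  # hypothetical toy vocabulary
rng = np.random.default_rng(0)
# Stand-in for a learned embedding table: one row per vocabulary word.
embedding_table = rng.normal(size=(len(vocab), d_model))

def positional_encoding(seq_len, d_model):
    # PE[pos, 2i]   = sin(pos / 10000**(2i/d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i/d_model))
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

tokens = ["the", "cat", "sat"]
word_vecs = embedding_table[[vocab[t] for t in tokens]]  # per-word embeddings
# The sum of word vector and position vector is what the first
# attention layer actually sees, so the input carries both meaning
# and word order, not just co-occurrence statistics.
inputs = word_vecs + positional_encoding(len(tokens), d_model)
```

So the "vectors the neural network creates" are the learned embedding rows plus these fixed position signals, which may be part of the confusion in the write-ups.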
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tefaeb8e790a54cec-Ma85d909f54dd6456f7b3fe1d