Also,

A good example of needing context-shared evidence for related words is the following. You see, for the first time, a familiar word used differently: it isn't "dog" the animal, it means "turn/close/screw-on". The context immediately relates it to the word "turn" (or "close", etc.), so at the end you predict "turn" etc. However, you then translate that back to "dog", and lo and behold, "dog" has less weight on its own, but the priming makes it very active, and rightfully so: "dog" belongs at the end. Take a look below; "dog" is used in a new context, and the end word should be "dog". You can only really get it by the translate-then-prime process just explained:

"can you dog that cap for me, and I would like if you could ___"

"I love my new aVcG9, I told my friend he should buy a ___"

While priming would make "aVcG9" reappear there, it needs at least one prior example it can follow; you wouldn't want to predict "you should aVcG9" on priming alone, because then it could go anywhere.
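The translate-then-prime idea above can be sketched as a toy scorer. Everything here is hypothetical: the candidate words, base scores, and the priming boost value are made up for illustration, not taken from any real model.

```python
# Toy sketch of translate-then-prime next-word prediction:
# 1) the novel use of "dog" has been translated to the known verb "close",
# 2) prediction proceeds from contexts of "close", giving base scores,
# 3) recency priming boosts "dog" so it wins the final slot anyway.

# Hypothetical base scores for the blank after translating "dog" -> "close".
base_scores = {"close": 0.30, "open": 0.25, "dog": 0.05}

# Words seen recently in the prompt get a priming boost.
recent_words = ["dog", "cap"]
PRIMING_BOOST = 0.40  # assumed strength of recency priming

def predict(scores, recent):
    """Return the highest-scoring candidate after applying priming."""
    boosted = {
        w: s + (PRIMING_BOOST if w in recent else 0.0)
        for w, s in scores.items()
    }
    return max(boosted, key=boosted.get)

print(predict(base_scores, recent_words))  # prints "dog": primed despite low base score
print(predict(base_scores, []))            # prints "close": no priming, base score wins
```

Note how "dog" only wins because it appeared earlier in the sentence; without the priming term, the translated word "close" would be predicted, which matches the argument above.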

And while the ghost mirroring I explained in another recent thread is useful, it would not work here unless a similar brace for the two similars/exacts placed in that sentence had been seen before.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T192296c5c5a27230-M09ac398cfe3999377e279fbe