On 7/10/21, [email protected] <[email protected]> wrote:
> Isn't self-attention about helping translate the prompt? E.g. "the dog,
> it was sent to them, food was high quality": we see that yes, "dog" and
> "food" can fit where "it" and "them" are. Another way to know what "it"
> and "them" mean is to look at all 30 previous rare words; they might all
> match "dog", so "it" and "them" are likely in the "dog" category as well.
> Also, is self-attention the mirrored prediction method for the end of the
> prompt too? E.g. "cat cat cat cat cat _?_": what is the next word here?
> Cat! So it adjusts the output predictions to favor "cat". Does
> self-attention in GPT do all of this? If not, which of these does it
> contribute to?
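For concreteness, here is a minimal sketch of what a single causal self-attention layer computes: NumPy, random weights, a toy 4-token sequence, so purely illustrative and not GPT's actual code or parameters. The softmax weights are the mechanism that lets a position like "it" borrow information from "dog", and that lets a run of repeated "cat" tokens dominate the mixture feeding the next-word logits.

    # Minimal single-head causal self-attention sketch (NumPy).
    # Illustrative only: random weights and a toy 4-token "sentence".
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_head, seq_len = 16, 16, 4

    X = rng.normal(size=(seq_len, d_model))   # token embeddings (+ positions)
    W_q = rng.normal(size=(d_model, d_head))  # query projection
    W_k = rng.normal(size=(d_model, d_head))  # key projection
    W_v = rng.normal(size=(d_model, d_head))  # value projection

    Q, K, V = X @ W_q, X @ W_k, X @ W_v

    # Scaled dot-product scores: how well each position's query matches
    # every position's key (e.g. "it" matching "dog").
    scores = Q @ K.T / np.sqrt(d_head)

    # Causal mask: a position may only attend to itself and earlier tokens,
    # so the last position's output is what feeds next-word prediction.
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[mask] = -np.inf

    # Softmax over the attended positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)

    # Each output is a weighted mix of earlier tokens' values; repeated or
    # related tokens (five "cat"s, thirty "dog"-like words) get similar
    # keys, so they pull the mix, and hence the logits, toward themselves.
    out = weights @ V
    print(weights.round(2))
    print(out.shape)   # (4, 16): one contextualised vector per position

A real GPT stacks dozens of such layers, each with many heads plus MLP blocks, so one layer like this is only a single mixing step rather than a whole chain of reasoning.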
Final version of my paper (corrected a lot of inaccuracies):
https://drive.google.com/file/d/1P0D9814ivR0MScowcmWh9ISpBETlUnq-/view?usp=sharing

You seem to mix together 1) GPT's prediction of the next word in a text and 2) prediction of the next item in an IQ test, but these two are not exactly the same...? You may argue that humans can do both, and indeed an AGI should be able to do both too. Typically, that would require multiple steps of reasoning; you were looking at just one layer of the Transformer's attention mechanism, which corresponds to a single step of inference.

YKY
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tb5526c8a9151713b-M4d57775eefc661f786f005ac
Delivery options: https://agi.topicbox.com/groups/agi/subscription
