I believe it is related to circuits for the following reasons:
1) I show in my attached file how my "hierarchy" (shown in the images)
builds larger features as it reads text.
2) I then show how, if it reads "my cat ran" and a day later reads "my dog
ran", the second sentence will activate the 'cat' node in the hierarchy
through indirect paths.
3) I present evidence and tests with GPT-2 (which I walk through) that
nodes retain energy for a while, which leads to much better prediction accuracy.
4) I propose using reward to drive prediction.
5) I show how nodes can become less pure, e.g. a single node storing both
'dog' and 'cat' as 'cotg', and how the same mechanism works for words too.
And more...
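To make points 2 and 3 concrete, here is a minimal toy sketch of the idea
(my own illustration, not the author's actual implementation): word nodes
hold "energy" that decays over time, and co-occurrence links pass a little
activation along indirect paths, so reading "my dog ran" later re-primes the
'cat' node via the shared 'my' and 'ran' context. All names and parameters
here (`spread`, the decay rate) are assumptions for the sketch.

```python
from collections import defaultdict

energy = defaultdict(float)   # node -> current "energy" (activation)
links = defaultdict(set)      # node -> nodes it has co-occurred with

def decay(rate=0.9):
    # energy fades over time instead of vanishing instantly
    for node in energy:
        energy[node] *= rate

def read(sentence, spread=0.3):
    words = sentence.split()
    decay()
    for w in words:
        energy[w] += 1.0                 # direct activation
        for neighbor in links[w]:        # indirect activation via links
            energy[neighbor] += spread
    for w in words:                      # learn co-occurrence links
        links[w].update(x for x in words if x != w)

read("my cat ran")            # builds links my<->cat<->ran
for _ in range(5):
    decay()                   # "a day later": energy fades but persists
read("my dog ran")            # 'my' and 'ran' pass energy to 'cat'
# 'cat' now holds residual + indirectly received energy, even though
# it does not appear in the second sentence.
```

Under these toy settings, the retained-plus-spread energy leaves 'cat'
noticeably primed relative to a never-seen word, which is the effect the
GPT-2 tests in point 3 are meant to probe.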
------------------------------------------
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/T43ab26814eaa1bdd-Med62b339620701fa1603d329