I already told you this over a year ago, sigh. The text window scans over the last 17 letters as it runs along the whole text file, updating a trie. So thanks is made of thank, made of than, made of tha... Obviously, as it passes the lower layers more often it strengthens them more, so the weight concentrates at the bottom. You feed in the text, bro. It is simultaneously storing the Times Seen for every 17, 16, ... 2, 1 letter phrase in the whole text file, so it can tell you how many times t and th and the occur in the text file, and all of it is stored in a single tree, i.e. one hierarchy, as depicted.
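Here's a minimal sketch of that counter in Python, just to pin the idea down. The dict-based trie, the helper names build_trie and times_seen, and the exact update rule (add every phrase of up to 17 letters starting at each position) are my own illustration of it, not anyone's actual system:

WINDOW = 17

def build_trie(text):
    # one trie holds the counts for every 1..17 letter phrase in the text
    root = {"count": 0, "children": {}}
    for i in range(len(text)):
        node = root
        # walk/extend the path letter by letter: t -> th -> tha -> than -> thank ...
        for ch in text[i:i + WINDOW]:
            node = node["children"].setdefault(ch, {"count": 0, "children": {}})
            node["count"] += 1      # "Times Seen" for the phrase on this path
    return root

def times_seen(root, phrase):
    # how many times a phrase like "the" occurs in the text
    node = root
    for ch in phrase:
        if ch not in node["children"]:
            return 0
        node = node["children"][ch]
    return node["count"]

trie = build_trie("thank you so much, thank you so much")
print(times_seen(trie, "t"), times_seen(trie, "th"), times_seen(trie, "thank"))   # 2 2 2

The short phrases near the root get hit on nearly every step, which is exactly why the bottom layers end up the most weighted.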
There's no other way to make a hierarchy than this; no AI neural net can make a hierarchy or network that isn't this. The only other network is a heterarchy for similar words, and that's it. A network merely connects words or features that occur in context with each other, hence linking thank+you, or linking cat=dog, with x amount of weight on the connection.
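Something like this, say. The 3-word co-occurrence window and the count-as-weight rule are just my assumptions for the sketch:

from collections import defaultdict

def build_links(words, window=3):
    # link words that appear near each other; the link weight is simply
    # how many times they co-occurred inside the window
    weight = defaultdict(int)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            pair = tuple(sorted((w, words[j])))
            weight[pair] += 1      # "x amount of weight on the connection"
    return weight

text = "thank you so much thank you my friend".split()
links = build_links(text)
print(links[("thank", "you")])   # 3: they always come together
print(links[("so", "you")])      # 2: a weaker link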
There's no such thing as a NN network of transformative functions learnt by backprop. To do what? Translate text? Predict the next feature of the prompt fed in? Segment images? We know how that's done. My hierarchy allows getting the next expected following letters.
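For instance, reusing build_trie from the sketch above, next-letter prediction can just walk the longest suffix of the prompt that the trie has seen and read off the children's counts. The back-off-to-a-shorter-suffix rule is my assumption:

def predict_next(root, prompt, window=17):
    # use at most the last window-1 letters as context, leaving room to predict one more
    context = prompt[-(window - 1):] if window > 1 else ""
    while True:
        node = root
        for ch in context:
            if ch not in node["children"]:
                node = None
                break
            node = node["children"][ch]
        if node and node["children"]:
            total = sum(c["count"] for c in node["children"].values())
            return {ch: c["count"] / total for ch, c in node["children"].items()}
        if not context:
            return {}
        context = context[1:]   # back off to a shorter suffix of the prompt

trie = build_trie("the cat sat on the mat. the cat ran.")
print(predict_next(trie, "the c"))   # {'a': 1.0}: it expects 'a' next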
Or the input is bonjour and it activates the node hello, based on shared surrounding contexts: "my friend" appears around both nodes.
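One way to score that sharing, where treating the context as the two following words is my own simplification:

from collections import defaultdict

def context_sets(sentences):
    contexts = defaultdict(set)
    for s in sentences:
        words = s.split()
        for i in range(len(words) - 2):
            # the two following words, e.g. ("my", "friend"), serve as the context
            contexts[words[i]].add((words[i + 1], words[i + 2]))
    return contexts

def similarity(contexts, a, b):
    # more shared surrounding contexts means more similar nodes
    return len(contexts[a] & contexts[b])

ctx = context_sets([
    "hello my friend how are you",
    "bonjour my friend how are you",
    "the zoo had a large rhino",
])
print(similarity(ctx, "hello", "bonjour"))   # 1: they share the "my friend" context
print(similarity(ctx, "hello", "zoo"))       # 0: nothing shared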
And the hierarchy learns the segmentations of text, etc.: thank you is a node and so much is a node, then it builds thank you so much. The hierarchy does all this. Fight me BRO, fight me!!!
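One plausible reading of that, as a sketch: repeatedly merge the most frequent adjacent pair of nodes into one bigger node, so frequent phrases like thank you and eventually thank you so much become single nodes. The merge rule and the cutoff of needing at least 2 repeats are my guesses at the mechanism:

from collections import Counter

def segment(tokens, merges=3):
    for _ in range(merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:
            break                      # nothing repeats, nothing to merge
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + " " + b)   # the pair becomes one node
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

text = "thank you so much thank you so much thank you"
print(segment(text.split()))
# ['thank you so much', 'thank you so much', 'thank you']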
All patterns in data only occur if the dataset isn't random, ex. abcdefghi... has NO recurrence of the same letter or word, so there are no weights to make, no patterns to store, no translation to find. It is Frequency that runs the brain / makes all patterns.
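You can check that point in a couple of lines, just counting repeated letter pairs:

from collections import Counter

def pair_counts(text):
    return Counter(zip(text, text[1:]))

# in "abcdefghi" no pair ever repeats, so every count is 1: nothing to weight, nothing to predict
print(max(pair_counts("abcdefghi").values()))                      # 1
# in real text the counts pile up, and those counts are the patterns
print(max(pair_counts("thank you so much thank you").values()))   # > 1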
You can't seriously think a NN of transformative functions learnt by backprop exists or can do things mine can't. All it can do is take your text or image (or both) fed in as a prompt and, based on that context and the parts in it, pop out the prediction. I mean, how much magic can you expect!? What if your prompt fed in was only the single letter "z", what will it predict? Probably o, then another o, to make zoo. It's that embarrassing, and I'm trying to tell you all: you can only do so much with a prompt fed in. With "the zoo had a large", obviously large > ? will be predicted, and zoo will steer it to dinosaur and rhino etc. That's basically all you can do with the input.
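The steering part can be sketched like this: the word right before the blank proposes candidates by frequency, and a far-away word like zoo boosts the candidates it co-occurs with. The training sentences, the 5-word co-occurrence window, and the boost of 2.0 are all made up for the illustration:

from collections import Counter, defaultdict

def train(sentences, window=5):
    follows = defaultdict(Counter)       # word -> Counter of next words
    cooccur = defaultdict(Counter)       # word -> Counter of nearby words
    for s in sentences:
        w = s.split()
        for i in range(len(w) - 1):
            follows[w[i]][w[i + 1]] += 1
        for i in range(len(w)):
            for j in range(i + 1, min(i + 1 + window, len(w))):
                cooccur[w[i]][w[j]] += 1
                cooccur[w[j]][w[i]] += 1
    return follows, cooccur

def steered_prediction(follows, cooccur, prev_word, steer_word, boost=2.0):
    scores = Counter()
    for cand, n in follows[prev_word].items():
        # base frequency after prev_word, plus a boost for co-occurring with steer_word
        scores[cand] = n + boost * cooccur[steer_word][cand]
    return scores.most_common(3)

follows, cooccur = train([
    "the zoo had a large rhino",
    "the factory had a large machine",
    "a large machine hummed",
])
print(steered_prediction(follows, cooccur, "large", "zoo"))
# rhino beats machine once zoo steers the scores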
Besides, just these few things will make my predictor close to the state of the art, so there you have it. The number of functions triggered by that prompt fed in can only be so many, because it has to match, or match similarly by some amount, so there are only so many paths and views you can take of it, ex. "i saw a" / "we seen some" / "we showed the". You see it translated, you predict the next word from each, and you tally up all the predictions to get a combined total and predict way better, then go back to the prompt and now have better probabilities. See?
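That tallying could look like this sketch; scoring similarity by shared words and weighting each vote by the overlap are my assumptions:

from collections import Counter

def tally(training_text, prompt, context_len=3):
    # gather every training context similar to the prompt's ending,
    # take the word that followed each one, and add up the votes
    words = training_text.split()
    tail = prompt.split()[-context_len:]
    votes = Counter()
    for i in range(len(words) - context_len):
        ctx = words[i:i + context_len]
        overlap = len(set(ctx) & set(tail))
        if overlap:                       # similar enough to cast a vote
            votes[words[i + context_len]] += overlap
    return votes.most_common(3)

training = "i saw a dog . we seen some dogs . we showed the dog a toy"
print(tally(training, "yesterday i saw a"))
# 'dog' gets the biggest combined tally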
I just figure that since only 15.6% of humans are atheists, most people in the AI field still love the black boxes, because most people love mystery instead of simple answers.