It's been 5.5 years since I started working on AGI. I still hate backpropagation 
with a passion. It makes no sense. Everyone uses it. Google. OpenAI. Ben. Hinton. 
It is said to be the meat of the whole thing, the learning algorithm. But it's the 
last thing that is AI. Backprop is nothing without us telling the architecture to 
be n layers deep and how wide, or to be a hierarchy, or to use attention, or 
similar word embeddings, or residuals, or dropout, or normalization, or to drive 
the model using persona dialog rewards, or to pool energy into clusters, etc. 
Backprop does nothing, 0. I have made a network learn with no backprop, by making 
links stronger to represent how frequent letters/phrases are; the lower layers 
hold the most sightings, and the bottom nodes build up all the bigger memories. I 
have figured out how all of AGI works too. There's no backprop. They say the 
hierarchy/network builds bigger transformational functions that specialize in 
deeper layers, but this is nothing but context matching longer prompt memories in 
deeper layers! Once you match the memory, you get many predictions at the end of 
the branch. See in this image: once you match the long memory "cats are anim", 
you get the letters that follow, deep in the net: https://imgbb.com/p22LNrN
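
Here's a minimal sketch of that no-backprop idea (toy code, my own illustration, 
not my actual implementation): links are just counts that get stronger every time 
a letter/phrase is sighted, short memories sit in the shallow layers, longer 
memories sit deeper, and prediction is just matching the longest memory that ends 
the prompt and reading off what waits at the end of that branch.

from collections import defaultdict

class FrequencyTrie:
    """Toy no-backprop learner: memories as branches, counts as link strengths.
    Shallow layers hold short, frequent memories; deeper layers hold longer ones."""

    def __init__(self, max_depth=8):
        self.max_depth = max_depth
        # counts[context][next_letter] = how many times next_letter followed context
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, text):
        # Strengthen the link from every context (1..max_depth letters) to the letter after it.
        for i in range(len(text)):
            for depth in range(1, self.max_depth + 1):
                if i - depth < 0:
                    break
                self.counts[text[i - depth:i]][text[i]] += 1

    def predict(self, prompt):
        # Match the longest memory that ends the prompt, then read off the
        # predictions waiting at the end of that branch.
        for depth in range(min(self.max_depth, len(prompt)), 0, -1):
            branch = self.counts.get(prompt[-depth:])
            if branch:
                total = sum(branch.values())
                return {letter: n / total for letter, n in branch.items()}
        return {}

net = FrequencyTrie()
net.learn("cats are animals. cats are adorable. my cats are animals too.")
print(net.predict("cats are anim"))   # {'a': 1.0} -- the letter that follows the matched memory

That's the whole learner: no gradients, just counting sightings and matching the 
longest branch.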

The oddest prediction, which I haven't 100% grasped, is morphing a volcano into 
an iceberg; it's still prediction, just a bit different. You may think a net with 
backprop is needed to *transform* the input image, but I reckon it's not. Let's 
look at some examples. A Dutch face (white) becomes black, a wolf becomes half 
frog, a chair in the shape of a watermelon, a girl sitting on the tail of a 
giant frog, an illustration-style image of 3 broken Pikachu teapots thrown in a 
dumpster. So how are these generated? Well, there's lots of matching first of 
all: the girl sitting on a tail is recognized as sitting on a bench, the thrown 
teapots as thrown trash, the color as color, the giant frog as a giant object, 
etc. DALL-E and others using CLIP seem to re-generate to refine the artwork being 
generated. While the methods to make this efficient may be odd, we shouldn't 
explain it that way; we should explain it the way that makes sense, by showing 
the patterns in the data. Let's take the watermelon chair: it could have the 
texture of a watermelon, or the shape, or both, or just the color. When we prime 
it with the prompt, this sets it up to predict the next pixels on both contexts 
(sketched below). I'll stop here for now, but so far it has all made sense to me, 
so this as well is just waiting to be verified more concretely in my AGI 
blueprint. Everything stems from exact matches, every problem, every pattern. You 
can't get similarity without exact roots. Saying a telescope is to the left of a 
bed and is wet and has 9 legs while Godzilla uses it is just the net not only 
predicting each of those into the image, but making sure each one matches a known 
memory like "man looks into scope", only with Godzilla instead; it still 
matches. It looks so new, but it is SO based on things already known to make it.
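
And to make the watermelon chair concrete, here's a toy sketch of what I mean by 
predicting on both contexts at once (made-up feature tables, my own illustration, 
nothing to do with DALL-E's actual code): each exactly-matched concept predicts 
the features that usually follow it, and priming the prompt with both concepts 
just blends the two prediction sets so whatever comes next has to fit both.

def blend(preds_a, preds_b, weight=0.5):
    """Mix two next-feature predictions; features pushed by both contexts come out strongest."""
    features = set(preds_a) | set(preds_b)
    mixed = {f: weight * preds_a.get(f, 0.0) + (1 - weight) * preds_b.get(f, 0.0)
             for f in features}
    total = sum(mixed.values())
    return {f: round(p / total, 3) for f, p in mixed.items()}

# What each matched memory predicts should show up next in the image (made-up numbers).
chair = {"four legs": 0.4, "flat seat": 0.4, "backrest": 0.2}
watermelon = {"green striped rind": 0.5, "round shape": 0.3, "red flesh": 0.2}

# Priming with "a chair in the shape of a watermelon" sets up prediction on both contexts.
print(blend(chair, watermelon))

The chair frame keeps getting predicted, but every surface and curve keeps 
getting pulled toward watermelon texture and shape; no transformation magic 
needed, just predictions from two matched memories at once.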

P.S. the reason my AGI Guide is taking nearly a year is because I went over all 
my notes making sure everything is in one place, and had to make tons of hard 
discoveries on demand.