New Discovery I made:

Take a theoretical memory device with finite binary storage (e.g. 01100) and no external mechanism dictating how to view the bits other than its program file (its laws), which says: every 5 bits is a pixel's brightness; after 100 pixels, start a new line; after 100 lines, start a new image at the top-right pixel; stop after 1M images. Suppose it literally stores 1,000,000 images perfectly, pixel by pixel (one very long 2D collage [][][][]...). In this setup, each possible collage is its own unique bitstream, and no scheme can compress all of them.

Now consider storing these images in a lossless compressed file that retains X amount of data perfectly - by changing the program-file laws to look for patterns and to specify the bitstream length. If the file is a billion Ts you can compress it enormously, while truly random data admits no further compression.
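To make the Ts-versus-random contrast concrete, here is a minimal sketch using Python's standard `zlib` (a million Ts instead of a billion, which shows the same effect; the variable names are mine, not from the post):

```python
import os
import zlib

ts = b"T" * 1_000_000          # highly patterned data: one symbol repeated
rand = os.urandom(1_000_000)   # random data: incompressible with overwhelming probability

ts_small = len(zlib.compress(ts, 9))    # collapses to roughly a kilobyte
rand_big = len(zlib.compress(rand, 9))  # slightly LARGER than 1,000,000 (stream overhead)

print(ts_small, rand_big)
```

The random input actually grows a little after "compression", because a lossless coder that shrinks some inputs must expand others.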
Each possible collage still maps to its own unique bitstream (0000, 0001, ...), but very few map to short ones like 0, 1, or 01, just as Matt Mahoney said. Most data seems compressible to us only because our data (and our bodies) is organized around patterns.
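The "very few map to short bitstreams" claim is just the pigeonhole counting argument, which can be checked directly (a small sketch; `shorter_strings` is my own helper name):

```python
def shorter_strings(n: int) -> int:
    """Number of binary strings strictly shorter than n bits: 2^0 + ... + 2^(n-1) = 2^n - 1."""
    return 2 ** n - 1

n = 20
print(2 ** n)              # 1,048,576 distinct n-bit inputs
print(shorter_strings(n))  # 1,048,575 shorter codewords: one too few to shrink everything

# Fraction of inputs compressible by at least k bits (codes of length <= n-k):
k = 10
print(shorter_strings(n - k + 1) / 2 ** n)  # under 2^(1-k), i.e. under 0.2% here
```

So any lossless scheme leaves at least one input uncompressed, and only an exponentially small fraction of inputs compress well.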
These compressed files, like 01, are themselves effectively random. Now consider storing the million images as representations that activate when the correct input signals are seen. If you store a hierarchy (or heterarchy) of representations, where each representation is built by sharing its lower sub-representations, you save storage space while still holding X amount of data - data you could regenerate losslessly (e.g. paint an image by hand from memory). But as we saw above, the highest-layer node for each image would at best be compressed - by exploiting hierarchical/heterarchical patterns - which means all the visual memories in our neural network are literally compressed pieces of the images we've recorded.

The brain doesn't store every high-level feature; most larger images are not stored at all. That frees higher-level nodes to act as representations - a cat node activates for many different images of cats. So we don't store all the images we see. And since our access time is pretty fast, I don't think the brain decompresses the visual features, as that would be slower and more work - simple is better. This means that, especially for lower-level nodes, we actually store images (approximately what went into the eyes) in the brain, in a 3D hierarchical network. These lower-layer features still cover some representation space.

Therefore, images are stored in our brain using a shared hierarchy for compression with no decompression lag required, Matt Mahoney: low-level parts of images are stored, and the higher nodes only assemble bigger parts - representations that actually look (sort of) like what was seen. So we store image parts while using two compressions that need no decompressing: hierarchy and representation space. You can store the image parts however you want, but what is activated from the eyeballs has a finite set of possible classifications, and only a few - cat face and cat side view - light up that node space.

P.S.
https://ibb.co/hYcVVpB - surrounding context votes on a feature's classification, rotation, location, scale, brightness, color, motion, and good/bad-ness. So if you see a fox in snow, it may activate the wolf node more strongly for the fox: fox activates both fox and wolf, but snow also activates wolf, and wolf wins the energy score. This doesn't change the representations, only which ones are activated. The leaning tower looks extra rotated because the one on the left is rotated and that adds to the other - just as an attractive model makes a car look good in a commercial, a blob looks dark because its surroundings are bright, and the distant monster is classified as far away and therefore scaled up.
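The shared-hierarchy storage idea above can be sketched as a toy: many "images" built from a common pool of low-level parts, where each unique part is stored once and each image is just a list of references. Reconstruction is pure lookup - no decompression pass. All names, tile sizes, and the byte-string "images" here are illustrative assumptions, not anything from the post:

```python
from typing import Dict, List, Tuple

TILE = 4  # bytes per low-level feature in this toy

def build_hierarchy(images: List[bytes]) -> Tuple[Dict[int, bytes], List[List[int]]]:
    """Store each unique tile once; each image becomes a list of tile ids."""
    pool: Dict[bytes, int] = {}   # unique tile -> id
    refs: List[List[int]] = []    # per-image tile-id lists
    for img in images:
        ids = []
        for i in range(0, len(img), TILE):
            tile = img[i:i + TILE]
            ids.append(pool.setdefault(tile, len(pool)))
        refs.append(ids)
    return {v: k for k, v in pool.items()}, refs

def reconstruct(tiles: Dict[int, bytes], ids: List[int]) -> bytes:
    """Lossless readback: pointer lookups only, no decompression step."""
    return b"".join(tiles[i] for i in ids)

# Three cat-like "images" that reuse the same parts:
images = [b"EARSEYESNOSETAIL", b"EARSEYESNOSEPAWS", b"EARSEYESFURRTAIL"]
tiles, refs = build_hierarchy(images)
assert all(reconstruct(tiles, r) == img for r, img in zip(refs, images))

naive = sum(len(img) for img in images)                                  # store every image whole
shared = sum(len(t) for t in tiles.values()) + sum(len(r) for r in refs) # pool + references
print(naive, shared, len(tiles))
```

The more the images overlap in their parts, the more the shared pool wins over naive storage - which is the claimed benefit of hierarchical sharing, without any decode lag on access.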
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf119e1ddbea44373-Mb3c76e3c1fef3b0fe46d547d