Hello, guys.

I have some questions about SDRs of visual patterns (images and the like).

Recently, I added a sliding window to my image sensor (for MNIST). It
scans the current image step by step, say from top-left to
bottom-right. The window is 4x4 and is connected to an SP. Since the
SP's input is 4x4 binary, there are 2^16 possible input patterns. After
scanning an image without overlap (for the sliding window), we have a
sequence of SDRs. If we concatenate these SDRs into one big one, does
that make sense? I mean, is the result still an SDR, i.e. sparse and
distributed?

I dumped a sample pattern and its SDRs to images. It's from the MNIST
dataset; see attachment.
SamplePattern is the input image, 28x28 pixels.
Sample_Sub_SDRs are the outputs of each step, 8x8 each (the SP's
column dimensions are 8x8). Since I used a 4x4 window, there are 7x7
SDRs in total per image, but some of them are exactly the same.
SampleSDR is the merged SDR, 7x7 tiles of 8x8 pixels each.
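To make the concatenation concrete, here is a minimal sketch of the scan-and-merge step. Note that `fake_sp` is a hypothetical stand-in encoder, not the real SP: it just maps each 4x4 window deterministically to a fixed number of active columns. The point it illustrates is that if every sub-SDR has the same fixed sparsity, the concatenated vector keeps that sparsity, so the merged result is still sparse; whether it stays well *distributed* depends on the actual SP.

```python
import random

IMG, WIN, COLS = 28, 4, 64  # 28x28 image, 4x4 window, 8x8 = 64 SP columns
ACTIVE = 4                  # active columns per sub-SDR (~6% sparsity)

def fake_sp(window):
    """Stand-in for a real Spatial Pooler: deterministically maps a
    4x4 binary window to ACTIVE active columns out of COLS."""
    rng = random.Random(hash(window))   # same input -> same SDR
    sdr = [0] * COLS
    for b in rng.sample(range(COLS), ACTIVE):
        sdr[b] = 1
    return sdr

def merged_sdr(image):
    """Scan non-overlapping 4x4 windows (7x7 steps over 28x28) and
    concatenate the per-window SDRs into one big vector."""
    big = []
    for r in range(0, IMG, WIN):
        for c in range(0, IMG, WIN):
            window = tuple(image[(r + i) * IMG + (c + j)]
                           for i in range(WIN) for j in range(WIN))
            big.extend(fake_sp(window))
    return big

image = [random.randint(0, 1) for _ in range(IMG * IMG)]  # dummy binary image
big = merged_sdr(image)
# 49 windows x 64 columns = 3136 bits, exactly 49 x 4 = 196 of them active,
# so the merged sparsity equals the per-window sparsity, 4/64 = 6.25%.
print(len(big), sum(big) / len(big))
```

Because identical windows (e.g. all-zero background patches) produce identical sub-SDRs, repeated patches contribute the same bits in different positions of the big vector, which matches what I see in the dumped Sample_Sub_SDRs.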

Another question is about vision and the brain. If we see exactly the
same tiny thing (ignoring most details such as color, thickness,
brightness, and viewing angle), say a line drawn on a piece of paper,
will the same neurons in our cortex be activated? I mean at the moment
we realize, "ah, that's a line". And what about motion, e.g. when we
perform the same movement twice?

An Qi
Tokyo University of Agriculture and Technology - Nakagawa Laboratory
2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
[email protected]


Attachment: SamplePatternAndItsSDRs.tar.gz
Description: application/gzip
