Hello, comrades

I ran MNIST experiments using the SP and an SVM. I think PR (pattern
recognition) is a very important part of AI. Even now we still know
very little about the brain and the neocortex, and we still don't know
exactly how we recognize things, so I'd like to share the results I
got.

I am using the MNIST dataset for the experiments: the original one,
without any preprocessing. In order to feed the data into the CLA
model, I binarize the image data with a fixed threshold (currently 128).
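The binarization step can be sketched like this (a minimal sketch with
numpy; the threshold of 128 matches the text, but whether 128 itself
maps to 0 or 1 is my assumption, and the function name is illustrative):

```python
import numpy as np

THRESHOLD = 128  # fixed threshold from the text

def binarize(images):
    """Binarize 8-bit grayscale pixels: >= THRESHOLD -> 1, else 0.
    (The inclusive comparison at 128 is an assumption.)"""
    return (images >= THRESHOLD).astype(np.uint8)

# Toy example: one 28x28 "image" of random 8-bit pixels.
rng = np.random.RandomState(0)
image = rng.randint(0, 256, size=(28, 28))
binary = binarize(image)
print(binary.shape)  # (28, 28)
```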

MNIST Images(Image sensor) => SP => SVM

About the SP: input size 28x28, output size 28x28 (I decreased it from
64x64), global inhibition, active rate 10%, potentialPct 0.9. SP
learning is set to false, because I haven't figured out a good
training method yet; it has to be unsupervised learning.
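For reference, the SP configuration described above would look roughly
like this as NuPIC SpatialPooler keyword arguments (a sketch: the
parameter names follow the NuPIC API, mapping the 10% active rate to
localAreaDensity is my assumption, and anything not stated in the text
is left at the library defaults):

```python
# Sketch of the SP parameters described in the text, in NuPIC
# SpatialPooler keyword-argument form. Only values mentioned in the
# text are set; everything else would stay at the library defaults.
sp_params = {
    "inputDimensions": (28, 28),   # raw binarized MNIST image
    "columnDimensions": (28, 28),  # output, reduced from 64x64
    "globalInhibition": True,
    "localAreaDensity": 0.1,       # ~10% of columns active (assumption:
                                   # this is how the active rate is set)
    "potentialPct": 0.9,
}

# Learning disabled at compute time, i.e. the learn flag passed to
# sp.compute() would be False.
LEARN = False
```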

About the SVM: I used the SVM from scikit-learn with a linear kernel
and default parameters.
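The classifier side then reduces to something like this (a sketch
assuming scikit-learn; the random binary vectors here are only
stand-ins for the actual flattened SP output):

```python
import numpy as np
from sklearn import svm

# Stand-in for SP output: flattened binary SDRs, one row per image.
# In the real pipeline these would be the active-column vectors.
rng = np.random.RandomState(42)
X_train = rng.randint(0, 2, size=(20, 784))
y_train = rng.randint(0, 10, size=20)

clf = svm.SVC(kernel="linear")  # default parameters otherwise
clf.fit(X_train, y_train)

# "Training accuracy" as in the text: score on the same training data.
train_acc = clf.score(X_train, y_train)
```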

The results of SP_SVM:
for the small dataset (training/testing: 600/100), the testing accuracy
is 82.0% and the training accuracy is 100% (evaluated on the same
training data);
for the full dataset (training/testing: 60000/10000), the testing
accuracy is 90.49% and the training accuracy is 92.83% (same training
data).

Now, compared with the NuPIC KNN:

The results of SP_KNN (same parameters, just with the NuPIC KNN
classifier):
for the small dataset (training/testing: 600/100), the testing accuracy
is 78.0% and the training accuracy is 100% (same training data);
for the full dataset (training/testing: 60000/10000), the testing
accuracy is 93.12% and the training accuracy is 100% (same training
data).

What I don't understand is why the training accuracy of SP_SVM on the
full dataset is only 92.83%, even without using cross-validation.

Another interesting thing: without a fixed seed, the results vary over
a range. For example, on the small dataset with the SVM, the accuracy
can be anywhere from 75% to 85%.
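That variation presumably comes from the random initialization (the
SP's potential synapses, plus any shuffling of the data). Fixing the
seeds makes runs repeatable, along these lines (a sketch; `init_run`
is a hypothetical stand-in for one experiment run, and in NuPIC the
SpatialPooler also takes its own `seed` constructor parameter):

```python
import numpy as np

def init_run(seed):
    """Hypothetical stand-in for one run: draw the 'random' initial
    state from a seeded generator instead of the global one."""
    rng = np.random.RandomState(seed)
    return rng.randint(0, 2, size=10)

# Same seed -> identical initialization, so results become repeatable.
a = init_run(1)
b = init_run(1)
print((a == b).all())  # True
```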

An Qi
Tokyo University of Agriculture and Technology - Nakagawa Laboratory
2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
[email protected]
