Marek's second point is of utmost importance for anyone doing image
classification. It would be awesome if someone could make 2D topology
easily available: convolutional neural networks outperform regular
neural networks on image classification largely because they exploit
that topology.



On Wed, Jan 22, 2014 at 3:18 PM, Marek Otahal <[email protected]> wrote:

> Hi Allan,
>
> that was maybe me, it's great someone is working on the MNIST here!
>
> 1/ I'm not 100% clear about the Classifier, but I think it's just a helper
> utility, unrelated to the HTM/CLA, so you've been testing the performance of
> whatever algorithm the Classifier implements (not the CLA, imho). You'd want
> to create a CLA (with SP only) and place the Classifier on top of it. The
> pipeline would look like: {MNIST-data[ith-example]} >>> CLA (without TP) >>>
> (you get an SDR) >>> Classifier (add MNIST-label[ith-example])
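
A minimal numpy sketch of that pipeline shape (this is not the NuPIC API: the
random-projection "pooler" and the overlap-voting classifier are hypothetical
stand-ins for the SP and the Classifier, just to show the data flow):

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_COLUMNS, N_ACTIVE = 784, 1024, 40  # illustrative SDR sizes

# Stand-in "spatial pooler": fixed random projection, top-k winning columns.
proj = rng.random((N_COLUMNS, N_INPUT))

def pool(pixels):
    """Map a binary input vector to a sparse set of active column indices."""
    scores = proj @ pixels
    return set(np.argsort(scores)[-N_ACTIVE:])

# Stand-in classifier: per-label histogram of active columns.
votes = {}

def train(sdr, label):
    votes.setdefault(label, np.zeros(N_COLUMNS))
    votes[label][list(sdr)] += 1

def predict(sdr):
    # Pick the label whose histogram overlaps the SDR most.
    return max(votes, key=lambda lab: votes[lab][list(sdr)].sum())

# Tiny demo: two distinguishable random binary "images".
img_a = (rng.random(N_INPUT) > 0.5).astype(float)
img_b = (rng.random(N_INPUT) > 0.5).astype(float)
for _ in range(3):
    train(pool(img_a), 0)  # pipeline: pixels >>> pooler >>> SDR >>> classifier + label
    train(pool(img_b), 1)
```

The key point of the sketch is the ordering: the classifier only ever sees the
SDR coming out of the pooler, never the raw pixels.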
>
> 2/ I assume the MNIST dataset is created from 2D images of handwritten
> digits that are then simply flattened into a 1D array (??).
> If so, you lose a lot of topological information by passing it to the CLA
> as is. I think this will require resurrecting the Image Encoders, which
> take into account the distance between neighboring pixels (each interior
> pixel has 8 neighbors); this is used in inhibition etc.
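
To illustrate the topology loss Marek describes: after row-major flattening of
a 28×28 image, horizontal neighbors stay adjacent, but a pixel's vertical
neighbor lands 28 positions away, so an encoder unaware of the 2D layout
cannot tell they are adjacent. A quick index-arithmetic check:

```python
WIDTH = 28  # MNIST images are 28x28

def flat_index(row, col):
    """Index of pixel (row, col) after row-major flattening to 1D."""
    return row * WIDTH + col

# Horizontally adjacent pixels stay adjacent in 1D...
print(flat_index(10, 5), flat_index(10, 6))   # 285 286
# ...but vertically adjacent pixels land 28 apart.
print(flat_index(10, 5), flat_index(11, 5))   # 285 313
```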
>
> 3/ You're probably overfitting; try an 80%/20% train/test split instead.
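
A quick way to get the split Marek suggests (plain numpy; the variable names
are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

n_examples = 1000                      # stand-in for the dataset size
order = rng.permutation(n_examples)    # shuffle before splitting
cut = int(0.8 * n_examples)
train_idx, test_idx = order[:cut], order[cut:]
# Train (SP + classifier) on train_idx only; report accuracy on test_idx,
# which the model never saw during training.
```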
>
> Cheers, Mark
>
>
> On Wed, Jan 22, 2014 at 5:57 PM, Allan Inocêncio de Souza Costa <
> [email protected]> wrote:
>
>>
>> Hi,
>>
>> I read a question that someone else asked here, but I couldn't find the
>> question or the answers (if any), so I will ask again, as I'm now
>> experimenting with the classifier.
>>
>> I tried to apply the classifier to handwritten digit recognition using the
>> MNIST dataset. After playing a little with the encoders, the best result I
>> got was an overall accuracy of about 42% (by that I mean that, after
>> training on the entire dataset, the proportion of correct predictions from
>> the first to the last training example was 42%). Of course this is better
>> than the expected 10% accuracy of a random guesser, but it falls short of
>> what other (linear) algorithms accomplish. For those interested, I attached
>> a plot of the accuracy.
>>
>> So here comes the question: what are the inner workings of the
>> classifier? I'm puzzled, as it doesn't have an SP. Can someone help or
>> point me to some reading?
>>
>> Best regards,
>> Allan
>>
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>
>>
>
>
> --
> Marek Otahal :o)
>
>
>


-- 
Pedro Tabacof,
Unicamp - Eng. de Computação 08.
