Hi, Andy
I didn't run a grid search; I was just assuming that grid search returns
the optimal parameters for fitting. I may easily be wrong.
So I just used your parameters.
But I stopped the fitting process. I'll try with a smaller subset of MNIST
this weekend, as I got some assignments to do, while this
Hi Klo.
Actually, back then I was using libSVM directly, not scikit-learn, which
I hadn't discovered yet ;)
Also, if you actually want to do that grid search, it might take forever.
Why do you want to do the grid search on MNIST again?
Andy
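For reference, a minimal sketch of what such a grid search looks like in scikit-learn, run on a small slice of sklearn's bundled digits dataset rather than full MNIST to keep it quick; the parameter ranges here are purely illustrative, not tuned values:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

digits = load_digits()
X, y = digits.data[:500], digits.target[:500]  # small subset to keep it fast

# Illustrative grid; these are not claimed to be good ranges for MNIST
param_grid = {"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1]}

search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

On the full 70k-sample MNIST this same loop multiplies one SVM fit by (grid size x folds), which is why it can indeed take forever.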
On 05/25/2014 07:23 AM, klo uo wrote:
Hi Caleb,
thanks for the pointers. I'm curious what results the MNIST dataset will
give, and I'm hoping for better ones.
I browsed the paper you linked, and I assume you're suggesting I build a
transductive transfer-learning model - but I have no idea how to relate
that to sklearn and use it.
Currently I'm trying to
Do note that the digit dataset (MNIST) you used to train the classifier
consists of hand-written digits, while the dataset you used in testing
consists of machine-generated digits. It is like learning to read English
by learning German: it might work to some extent, but not much. You might be inte
Sure, here is example with 50 sample digits:
http://nbviewer.ipython.org/gist/klonuo/b67789dc9fbea47633ac
As mentioned I'll try later today to learn something from Andy's blog post
and report back if I have anything.
Cheers
On Sat, May 24, 2014 at 9:22 AM, Ronnie Ghose wrote:
> is it possible
@andy
unless you train it, of course :P. Tesseract is from the 1990s haha
On Sat, May 24, 2014 at 4:01 AM, Andy wrote:
> On 05/23/2014 06:36 PM, klo uo wrote:
> > Replying to myself...
> >
> > The cause for reported "problem" is that classifier samples have empty
> > strips on both sides, so
On 05/23/2014 06:36 PM, klo uo wrote:
> Replying to myself...
>
> The cause for reported "problem" is that classifier samples have empty
> strips on both sides, so if I shrink my_array to 6 columns and add
> empty columns on both sides, I get expected value - zero.
>
> But still, results from th
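The column-padding fix described in the quote above can be sketched as follows (a minimal NumPy example; the 8x6 shape is illustrative, matching the "shrink to 6 columns, pad both sides" description):

```python
import numpy as np

# Suppose the extracted digit was shrunk to an 8x6 array (6 columns),
# while the classifier's training samples are 8x8 with empty side strips.
my_array = np.ones((8, 6))

# Add one empty (zero) column on each side to restore the 8x8 layout.
padded = np.pad(my_array, ((0, 0), (1, 1)), mode="constant")
print(padded.shape)  # (8, 8)
```

With the side strips restored, the sample's layout matches what the classifier saw during training, which is why the expected prediction comes back.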
is it possible for you to upload a subset of your digits?
On Sat, May 24, 2014 at 2:52 AM, klo uo wrote:
> Hi,
>
> thanks for your reply.
>
> 1. I tested about 100 samples with sklearn. In my example there was only
> one sample, for readability and simplicity.
>
> In short: I read image w
Hi,
thanks for your reply.
1. I tested about 100 samples with sklearn. In my example there was only
one sample, for readability and simplicity.
In short: I read the image with OpenCV, then detect a region of interest and
extract the digits through contouring. These are machine-written digits, but
Hi,
I am curious about a few things:
1. What samples do you use for testing your classifier? A single sample can
hardly do justice to its accuracy.
2. Did you try to fine-tune the hyperparameters of your SVM?
3. You might be interested in this blog post; the author gets a very impressi
Replying to myself...
The cause for reported "problem" is that classifier samples have empty
strips on both sides, so if I shrink my_array to 6 columns and add empty
columns on both sides, I get expected value - zero.
But still, results from this approach unfortunately can't beat Tesseract
for m
Hi,
I followed the documentation for digit recognition, as I was hoping for
something better than OCR with minimal involvement from my side.
Here is example:
http://nbviewer.ipython.org/gist/klonuo/8738685d0e5a8aa0
So I'm feeding the classifier my own data, compliant with the format it
expects, and