It seems you are right about the sign of the intercept. Maybe it should
also be flipped in the kernel case? The intent was that the decision
function can be implemented directly from the dual_coef_ and the
intercept_. Feel free to open a pull request for better documentation.
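
Something like this minimal sketch (hypothetical toy data and an RBF kernel,
chosen only to illustrate the intended relation; whether the sign matches is
exactly the question under discussion):

import numpy as np
from sklearn import datasets, svm
from sklearn.metrics.pairwise import rbf_kernel

# Hypothetical toy problem, just to illustrate the relation.
X, y = datasets.make_classification(n_samples=100, random_state=0)
clf = svm.SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X, y)

# Intended relation: f(x) = sum_i dual_coef_[0, i] * K(sv_i, x) + intercept_
K = rbf_kernel(clf.support_vectors_, X, gamma=clf.gamma)
manual = clf.dual_coef_.dot(K).ravel() + clf.intercept_

# True if the stored coefficients follow the documented convention;
# a sign flip here is the inconsistency discussed below.
print(np.allclose(manual, clf.decision_function(X)))
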
On Wed, Jan 28, 2015 at 2:27 AM, kjs <b...@riseup.net> wrote:
> Andy:
> > Hi Kevin.
> > I am fairly sure there was a test computing that, but I can't find it
> > anymore; I'm pretty sure I wrote one at some point.
> > By the way, when I used a precomputed kernel with your implementation, I
> > got different results. Not sure why that is.
> >
> > Cheers,
> > Andy
> >
>
> My mistake, I was not accounting for gamma in the polynomial kernel. I
> have some new test code that calculates rho with high accuracy for the
> linear SVC, and with somewhat lower accuracy for the polynomial and RBF
> kernels [0]. Two thoughts (a rough sketch of the calculation follows
> below):
>
> 1) My calculation of rho is not exactly the same as the libsvm
> calculation, and I do not believe the difference can be attributed to
> rounding error. Why/how does libsvm calculate rho as the average of
> G[i]*y[i] over all free support vectors? (I am not sure exactly what the
> G and y arrays contain; G is described as the gradient in some comments,
> and I assume y holds the class labels.)
>
> 2) sklearn appears to me to report a positive rho in the linear SVC case
> and a negative rho in the kernel method case. If others agree, could we
> document this more clearly?
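>
> To make 1) and 2) concrete, here is a rough sketch of the kind of check I
> have in mind (hypothetical toy data and parameters, not the exact code
> from the paste bin; rho is estimated by averaging over the free support
> vectors, i.e. those whose dual coefficient has an absolute value strictly
> less than C):
>
> import numpy as np
> from sklearn import datasets, svm
> from sklearn.metrics.pairwise import polynomial_kernel
>
> X, y = datasets.make_classification(n_samples=100, random_state=0)
> C, gamma, degree, coef0 = 1.0, 0.1, 3, 1.0
> clf = svm.SVC(kernel="poly", C=C, gamma=gamma, degree=degree,
>               coef0=coef0).fit(X, y)
>
> # Kernel between the support vectors; gamma (and coef0) must be included:
> # K(u, v) = (gamma * <u, v> + coef0) ** degree
> K = polynomial_kernel(clf.support_vectors_, clf.support_vectors_,
>                       degree=degree, gamma=gamma, coef0=coef0)
>
> # Decision values without the constant term:
> # f_w(x_i) = sum_j dual_coef_[0, j] * K(sv_j, x_i)
> f_w = clf.dual_coef_.dot(K).ravel()
>
> # Free support vectors satisfy y_i * (f_w(x_i) - rho) = 1, hence
> # rho = f_w(x_i) - y_i for each of them; average over all free SVs.
> coef = clf.dual_coef_.ravel()
> y_sv = np.sign(coef)        # SV labels, assuming dual_coef_ stores y_i * alpha_i
> free = np.abs(coef) < C     # alpha_i strictly below C
> rho = np.mean(f_w[free] - y_sv[free])
>
> print("estimated rho:", rho)
> print("intercept_:   ", clf.intercept_)  # the sign relation is question 2) above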
>
> -Kevin
>
> [0] http://pastebin.com/BXhVvH7y
>
> >
> > On 01/27/2015 11:55 AM, kjs wrote:
> >> Hi all,
> >>
> >> To gain a better understanding of the SVC methods, I am trying to train an
> >> SVC and then, from the dual coefficients (in the kernel case) or the
> >> weights (in the linear case), calculate rho and make predictions on new
> >> feature vectors. So far, I have only been successful in the linear case. I
> >> have posted some sample code to a paste bin for further clarity [0].
> >>
> >> Please help me understand where I am going wrong. My understanding is
> >> that rho, the constant term, should be the same for every free support
> >> vector. In the code, however, I use the average over all free (margin)
> >> support vectors, i.e. those whose dual coefficient has an absolute value
> >> less than C, to calculate rho.
> >>
> >> I have compared the sklearn SVC results with the libsvm SVC results. As
> >> per the documentation, sklearn reports -rho from the libsvm-trained SVC.
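> >>
> >> For instance, a minimal sketch of the linear case (on hypothetical toy
> >> data, not the exact code from the paste bin):
> >>
> >> import numpy as np
> >> from sklearn import datasets, svm
> >>
> >> X, y = datasets.make_classification(n_samples=100, random_state=0)
> >> C = 1.0
> >> clf = svm.SVC(kernel="linear", C=C).fit(X, y)
> >>
> >> w = clf.coef_.ravel()
> >> coef = clf.dual_coef_.ravel()
> >> y_sv = np.sign(coef)       # SV labels, assuming dual_coef_ stores y_i * alpha_i
> >> free = np.abs(coef) < C    # free support vectors (alpha_i strictly below C)
> >>
> >> # For each free SV, y_i * (w . x_i - rho) = 1, so rho = w . x_i - y_i;
> >> # average the per-vector estimates.
> >> rho = np.mean(clf.support_vectors_[free].dot(w) - y_sv[free])
> >>
> >> # Per the documentation, intercept_ should be -rho from libsvm.
> >> print("estimated rho:", rho)
> >> print("intercept_:   ", clf.intercept_)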
> >>
> >> Thanks much,
> >> Kevin
> >>
> >> [0] http://pastebin.com/5fqdh0CV
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general