On Fri, Oct 21, 2011 at 10:36 PM, Alexandre Passos
<[email protected]> wrote:

> If I recall correctly in their own code the projection step is almost
> always commented out. The really important part of the algorithm is
> the learning rate scaled by the strong convexity constant.
>
> When I implemented Pegasos I found that the projection step made
> no difference at all, and hence also commented it out.

The projection step was compulsory in the conference paper but they
made it optional in the journal version :)
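For context, here is a minimal sketch of the primal Pegasos update being discussed, with the learning rate 1/(lambda*t) scaled by the strong convexity constant and the projection step made optional. This is an illustration, not the authors' code; the function name and arguments are my own.

```python
import numpy as np

def pegasos_epoch(X, y, w, lam, t0=1, project=False):
    """One pass of primal Pegasos with hinge loss (illustrative sketch)."""
    t = t0
    for i in np.random.permutation(len(y)):
        eta = 1.0 / (lam * t)            # learning rate scaled by strong convexity
        margin = y[i] * X[i].dot(w)
        w *= (1.0 - eta * lam)           # regularization shrinkage
        if margin < 1:
            w += eta * y[i] * X[i]       # hinge-loss subgradient step
        if project:
            # optional projection onto the ball of radius 1/sqrt(lam);
            # this is the step that is often commented out in practice
            radius = 1.0 / np.sqrt(lam)
            norm = np.linalg.norm(w)
            if norm > radius:
                w *= radius / norm
        t += 1
    return w, t
```

Toggling `project` on or off makes it easy to check empirically whether the projection changes anything on a given dataset.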

> I've implemented this in the past, and kernelized Pegasos was always
> far too slow to be usable, as predicting on a new data point involves
> computing the kernel between this data point and every single other
> point on which an update has ever happened. LaSVM is much faster
> because it is very clever about keeping its support set small, and it
> might be worth implementing. I should have inefficient pure-Python
> code for it lying around somewhere.
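To make the cost concrete: a kernel expansion predicts with a sum over the support set, so each prediction costs one kernel evaluation per retained point. A sketch (the helper names are mine, not from any of the implementations mentioned):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian RBF kernel between two vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def kernel_predict(x, support_points, coefs, kernel=rbf):
    # Cost is O(|support set|) kernel evaluations per prediction.
    # If every update ever made adds a point (as in naive kernel
    # Pegasos), this sum keeps growing; LaSVM stays fast by pruning
    # the support set instead.
    return sum(c * kernel(x, s) for c, s in zip(coefs, support_points))
```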

LaSVM is really good. Implementing it in Cython was actually on my
todo-list. Having it in Cython would allow us to handle NumPy arrays and
SciPy sparse matrices without conversion (the liblinear binding is
slow because of that conversion). Note that the original LaSVM code is
GPL anyway. LaSVM would naturally support a partial_fit method too.
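There is no LaSVM estimator in scikit-learn, but the partial_fit pattern it would follow already exists on SGDClassifier, so the intended usage would look something like this (using SGDClassifier purely as a stand-in):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Online learning via partial_fit: feed the estimator mini-batches
# from a stream instead of the full dataset at once.
clf = SGDClassifier(loss="hinge", random_state=0)
classes = np.array([0, 1])  # must be declared up front for partial_fit

rng = np.random.RandomState(0)
for _ in range(5):                      # simulate a data stream
    X = rng.randn(32, 4)
    y = (X[:, 0] > 0).astype(int)       # toy rule: sign of first feature
    clf.partial_fit(X, y, classes=classes)
```

A hypothetical LaSVM estimator would expose the same interface, accumulating and pruning its support set across calls.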

Mathieu

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
