Hey Immanuel,

Thanks for the summary of your reading of the paper. This is useful.

On Thu, Jul 19, 2012 at 05:37:31PM +0200, iBayer wrote:
>    What's the main point of implementing regularized log loss in
>    scikit-learn?

I think that you and I have the same needs here: we need some code that
is fast in the dense-data, small-n/large-p situation, and we think that
we can do something better than liblinear.

>    How about a straightforward implementation with strong rules that
>    is well integrated in scikit-learn?

Do you think that you can make it outperform liblinear? I do think that
the strong rules would probably give it an edge.
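For reference, the sequential strong rule from the Tibshirani et al. paper can be sketched in a few lines of NumPy for the L1-regularized log-loss case. The function name and signature below are illustrative, not scikit-learn API, and this is only a screening heuristic, not a safe rule:

```python
import numpy as np

def strong_rule_active_set(X, y, coef_prev, lam_prev, lam):
    """Sequential strong rule screening for L1-regularized logistic
    regression (sketch after Tibshirani et al., 2012); hypothetical
    helper, not scikit-learn API.

    Keep feature j only if |grad_j| >= 2*lam - lam_prev, where grad is
    the gradient of the unpenalized log loss at the solution for the
    previous lambda on the regularization path.
    """
    # Predicted probabilities at the previous solution (y assumed in {0, 1}).
    p = 1.0 / (1.0 + np.exp(-X @ coef_prev))
    # Per-feature gradient magnitude of the log loss.
    grad = np.abs(X.T @ (y - p))
    # Features failing the test are very likely (not guaranteed) zero at lam.
    return np.flatnonzero(grad >= 2 * lam - lam_prev)
```

After fitting on the reduced feature set, the KKT conditions still have to be checked on the discarded features, with any violators added back in; that check is what makes the heuristic usable despite not being safe.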

>    It appears to me that lots of tricks are needed to stand a chance
>    to be competitive with the fastest implementations (lots of Cython
>    pointer kung fu etc.).

Yeah :$

Gael

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general