I've observed that SVMs fit with sklearn consistently come out around 5
percentage points lower in accuracy than equivalent SVMs fit with Adam
Coates' minFunc-based SVM implementation. Am I overlooking some
basic usage issue (e.g., too loose a default convergence criterion),
or is this likely to be a defect in the underlying libsvm
implementation?
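
In case it helps, here is a minimal sketch of the kind of usage check
I mean (this is not the actual svm_comparison.py; the data below is a
synthetic placeholder): tightening libsvm's stopping tolerance through
SVC's tol parameter, to see whether early termination accounts for the
gap.

    import numpy as np
    from sklearn.svm import SVC

    # Synthetic placeholder data, standing in for the real experiment.
    rng = np.random.RandomState(0)
    X = rng.randn(500, 20)
    y = (X[:, 0] + 0.5 * rng.randn(500) > 0).astype(int)

    # Compare libsvm's default stopping tolerance (tol=1e-3) against a
    # much tighter one; if the accuracies differ, loose convergence is
    # at least part of the story.
    for tol in (1e-3, 1e-8):
        clf = SVC(kernel='linear', C=100.0, tol=tol)
        clf.fit(X[:400], y[:400])
        print('tol=%g: test accuracy %.3f' % (tol, clf.score(X[400:], y[400:])))

If the two runs give essentially the same accuracy, the default
tolerance isn't the culprit and the difference has to come from
somewhere else.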

To demonstrate, run svm_comparison.m in MATLAB and then svm_comparison.py
in Python. You'll need Adam Coates' code from
http://www.stanford.edu/~acoates/papers/sc_vq_demo.tgz for train_svm
to work.

To be clear, SVC and Adam Coates' code are supposedly minimizing the
exact same convex loss function, so this really shouldn't happen.
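
For reference (my formulation, not taken from either code base), the
objective both solvers are supposed to be minimizing is, as far as I
understand it, the standard soft-margin primal

    \min_{w,b} \frac{1}{2} \|w\|^2 + C \sum_i \max(0, 1 - y_i (w^T x_i + b))

so for the same C they should agree up to optimization tolerance.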

Attachment: svm_comparison.m
Attachment: svm_comparison.py
