If you want to use the exponential loss (the loss used by AdaBoost), you
can train a (single) linear model that minimizes it directly. The main
point I want to make is that a LinearSVC is not a good choice of weak
learner.
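
For concreteness, here is a minimal sketch (not an existing scikit-learn
estimator; fit_exp_loss is a made-up name) that fits an L2-regularized
linear model by minimizing the exponential loss directly with scipy:

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.datasets import make_classification

    def fit_exp_loss(X, y, alpha=1e-3):
        # Minimize sum_i exp(-y_i * (x_i . w + b)) + alpha * ||w||^2,
        # with labels y in {-1, +1}.
        def objective(params):
            w, b = params[:-1], params[-1]
            margins = y * (X.dot(w) + b)
            return np.exp(-margins).sum() + alpha * w.dot(w)
        res = minimize(objective, np.zeros(X.shape[1] + 1), method="L-BFGS-B")
        return res.x[:-1], res.x[-1]

    X, y = make_classification(n_samples=200, random_state=0)
    w, b = fit_exp_loss(X, 2 * y - 1)  # map {0, 1} labels to {-1, +1}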

M.

On Fri, Oct 3, 2014 at 6:10 PM, Olivier Grisel <olivier.gri...@ensta.org>
wrote:

> 2014-09-27 4:51 GMT+02:00 Mathieu Blondel <math...@mblondel.org>:
> > This is because LinearSVC doesn't support sample_weight.
> >
> > I added a new issue for raising a more explicit error message:
> > https://github.com/scikit-learn/scikit-learn/issues/3711
> >
> > BTW, a linear combination of linear models is a linear model itself. So
> > you can't learn a better model than a LinearSVC() with
> > AdaBoostClassifier(svm.LinearSVC())
>
> While adaboosted linear SVM and vanilla linear SVM are both linear
> models, they don't optimize the same loss: the loss of the boosted
> model automatically puts more weight on samples that are harder to
> classify (closer to the decision hyperplane, or on the wrong side of
> the optimal hyperplane).
>
> Therefore, adaboosted linear models might or might not be better than
> non-boosted linear models. I think it depends on the amount of label
> noise, which might cause the boosted model to overfit some noisy
> outlier samples.
>
> --
> Olivier
> http://twitter.com/ogrisel - http://github.com/ogrisel
>
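
For what it's worth, AdaBoostClassifier works out of the box with any base
estimator whose fit() accepts sample_weight (e.g. the default decision
stumps, or SGDClassifier). A quick sketch, using algorithm="SAMME" since
SGDClassifier with the hinge loss has no predict_proba:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.linear_model import SGDClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # SGDClassifier.fit supports sample_weight, so boosting works;
    # SAMME only needs predict(), not predict_proba().
    clf = AdaBoostClassifier(
        base_estimator=SGDClassifier(loss="hinge", random_state=0),
        algorithm="SAMME", n_estimators=10)
    clf.fit(X, y)

    # By contrast, AdaBoostClassifier(base_estimator=LinearSVC()).fit(X, y)
    # fails, because LinearSVC.fit() has no sample_weight parameter
    # (hence the issue linked above).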