[
https://issues.apache.org/jira/browse/SPARK-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yanbo Liang updated SPARK-13448:
--------------------------------
Description:
This JIRA tracks MLlib behavior changes in Spark 2.0 so we remember to add
them to the migration guide.
* SPARK-13429: the default convergenceTol in LogisticRegressionWithLBFGS
changed from 1e-4 to 1e-6.
* SPARK-7780: the intercept in LogisticRegressionWithLBFGS is no longer
regularized when users train a binary classification model with an L1/L2
Updater, because it now calls the ML LogisticRegression implementation. In
addition, if users train without regularization, training with or without
feature scaling returns the same solution at the same convergence rate
(because both follow the same code path); this behavior differs from the
old API.
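The two changes above can be illustrated with a short, hedged sketch against
the spark.mllib API. The input path and app name below are placeholders, not
taken from the JIRA; the optimizer setters (setConvergenceTol, setRegParam,
setUpdater) are the existing knobs on the LBFGS optimizer.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.optimization.SquaredL2Updater
import org.apache.spark.mllib.util.MLUtils

object ConvergenceTolExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("convergenceTol-example"))
    // Placeholder path: any LIBSVM-format dataset with 0/1 labels.
    val training = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt")

    val lr = new LogisticRegressionWithLBFGS().setNumClasses(2)
    lr.optimizer
      // Spark 2.0 tightens the default tolerance from 1e-4 to 1e-6
      // (SPARK-13429); set it explicitly to reproduce the old behavior.
      .setConvergenceTol(1e-4)
      // With regParam = 0.0 (no regularization), training with or without
      // feature scaling returns the same solution (SPARK-7780).
      .setRegParam(0.0)
      .setUpdater(new SquaredL2Updater)

    val model = lr.run(training)
    sc.stop()
  }
}
```

Note that with a nonzero regParam and an L1/L2 Updater, the intercept is no
longer regularized in 2.0, so models trained with an intercept may differ
from those produced by the pre-2.0 code path.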
was:
This JIRA keeps a list of MLlib behavior changes in Spark 2.0. So we can
remember to add them to the migration guide.
* SPARK-13429: change convergenceTol in LogisticRegressionWithLBFGS from 1e-4
to 1e-6.
* SPARK-7780: the intercept in LogisticRegressionWithLBFGS will not be
regularized. Meanwhile, if users train a binary classification model with an
L1/L2 Updater via LogisticRegressionWithLBFGS, it calls the ML
LogisticRegression implementation. Without regularization, training with or
without feature scaling returns the same solution at the same convergence
rate (because both follow the same code path); this behavior differs from
the old API.
> Document MLlib behavior changes in Spark 2.0
> --------------------------------------------
>
> Key: SPARK-13448
> URL: https://issues.apache.org/jira/browse/SPARK-13448
> Project: Spark
> Issue Type: Documentation
> Components: ML, MLlib
> Reporter: Xiangrui Meng
> Assignee: Xiangrui Meng
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]