[
https://issues.apache.org/jira/browse/SPARK-2505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069354#comment-14069354
]
Apache Spark commented on SPARK-2505:
-------------------------------------
User 'dbtsai' has created a pull request for this issue:
https://github.com/apache/spark/pull/1518
> Weighted Regularizer
> --------------------
>
> Key: SPARK-2505
> URL: https://issues.apache.org/jira/browse/SPARK-2505
> Project: Spark
> Issue Type: New Feature
> Components: MLlib
> Reporter: DB Tsai
> Fix For: 1.1.0
>
>
> The current implementation of regularization in the linear models uses
> `Updater`, and this design has a couple of issues:
> 1) It penalizes all the weights, including the intercept. In a typical
> machine-learning training process, the intercept is not penalized.
> 2) The `Updater` also contains the adaptive step-size logic for gradient
> descent. We would like to clean this up by separating the regularization
> logic out of the updater into a regularizer, so that the LBFGS optimizer
> no longer needs the trick for obtaining the loss and gradient of the
> objective function.
> In this work, a weighted regularizer will be implemented, and users can
> exclude the intercept or any other weight from regularization by setting
> that term's penalty weight to zero. Since the regularizer will return a
> tuple of loss and gradient, the adaptive step-size logic and the L1
> soft-thresholding in `Updater` will be moved to the SGD optimizer.
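The design described in the issue can be sketched roughly as follows. This is an illustrative NumPy sketch, not the actual MLlib implementation (which is in Scala); the names `weighted_l2_regularizer` and `penalty_weights` are hypothetical. The key ideas from the description are that the regularizer returns a (loss, gradient) tuple the optimizer can add to the data loss directly, and that a zero penalty weight excludes a term (such as the intercept) from regularization:

```python
import numpy as np

def weighted_l2_regularizer(weights, penalty_weights, reg_param):
    """Hypothetical weighted L2 regularizer.

    Returns a (loss, gradient) tuple so the optimizer can combine it
    with the data loss and gradient directly, without the step-size
    trick mentioned in the issue. Setting penalty_weights[i] = 0
    excludes weights[i] (e.g. the intercept) from regularization.
    """
    loss = 0.5 * reg_param * np.sum(penalty_weights * weights ** 2)
    gradient = reg_param * penalty_weights * weights
    return loss, gradient

# Model where the last weight is the intercept; its penalty weight is zero.
w = np.array([1.0, -2.0, 3.0])   # [w0, w1, intercept]
pw = np.array([1.0, 1.0, 0.0])   # intercept excluded from the penalty
loss, grad = weighted_l2_regularizer(w, pw, reg_param=0.1)
# loss = 0.5 * 0.1 * (1 + 4 + 0) = 0.25
# grad = [0.1, -0.2, 0.0]  (no gradient contribution for the intercept)
```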
--
This message was sent by Atlassian JIRA
(v6.2#6252)