[ https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286577#comment-15286577 ]
ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user thvasilo commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1985#discussion_r63517587

    --- Diff: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
    @@ -272,7 +272,7 @@ abstract class GradientDescent extends IterativeSolver {
       * The regularization function is `1/2 ||w||_2^2` with `w` being the weight vector.
       */
     class GradientDescentL2 extends GradientDescent {
    -
    +  //TODO(skavulya): Pass regularization penalty as a parameter
    --- End diff --

    I've mentioned this in the previous PR, but I'm adding it here for completeness: I'm in favor of adding the regularization penalty as a parameter of the optimizer. However, that would involve changes that are perhaps beyond the scope of this PR. Currently, with only SGD available, we don't have to worry about the applicability of L1/L2 regularization, but we should add a note for when L-BFGS gets implemented. Depending on how much work it would be for @skavulya to make the change here, we can either have a separate PR for that or include it in this one.

> Implement Loss Functions
> ------------------------
>
>                 Key: FLINK-1979
>                 URL: https://issues.apache.org/jira/browse/FLINK-1979
>             Project: Flink
>          Issue Type: Improvement
>      Components: Machine Learning Library
>        Reporter: Johannes Günther
>        Assignee: Johannes Günther
>        Priority: Minor
>          Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a
> pluggable implementation of a loss function and its first derivative.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
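For context, the design thvasilo favors (the regularization penalty as a pluggable parameter of the optimizer, rather than one optimizer subclass per penalty) could be sketched roughly as below. All names here (`RegularizationPenalty`, `L2Penalty`, `NoPenalty`, `Sgd`) are hypothetical illustrations, not the actual Flink ML API; weights are plain `Array[Double]` for simplicity.

```scala
// Sketch only: a pluggable regularization penalty, passed to the optimizer
// as a parameter. Names are illustrative, not the real Flink ML classes.
trait RegularizationPenalty {
  /** Value of the penalty term at weight vector w, scaled by regParam. */
  def regValue(w: Array[Double], regParam: Double): Double
  /** Gradient contribution of the penalty term at w. */
  def regGradient(w: Array[Double], regParam: Double): Array[Double]
}

/** L2 penalty: 1/2 * lambda * ||w||_2^2, matching the docstring in the diff. */
object L2Penalty extends RegularizationPenalty {
  def regValue(w: Array[Double], regParam: Double): Double =
    0.5 * regParam * w.map(x => x * x).sum
  // Gradient of 1/2 * lambda * ||w||^2 is lambda * w.
  def regGradient(w: Array[Double], regParam: Double): Array[Double] =
    w.map(_ * regParam)
}

/** No regularization: both the value and the gradient contribution are zero. */
object NoPenalty extends RegularizationPenalty {
  def regValue(w: Array[Double], regParam: Double): Double = 0.0
  def regGradient(w: Array[Double], regParam: Double): Array[Double] =
    Array.fill(w.length)(0.0)
}

/** A toy SGD configured with a penalty instead of hard-coding L1/L2 in subclasses. */
class Sgd(penalty: RegularizationPenalty, regParam: Double, stepSize: Double) {
  def step(w: Array[Double], lossGradient: Array[Double]): Array[Double] = {
    val rg = penalty.regGradient(w, regParam)
    w.indices.map(i => w(i) - stepSize * (lossGradient(i) + rg(i))).toArray
  }
}
```

With this shape, L1 applicability for a future L-BFGS could be handled by restricting which `RegularizationPenalty` instances that solver accepts, instead of duplicating solver subclasses.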