[ https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286464#comment-15286464 ]

ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

    https://github.com/apache/flink/pull/1985#discussion_r63509514
  
    --- Diff: flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
    @@ -321,19 +318,8 @@ class GradientDescentL1 extends GradientDescent {
           regularizationConstant: Double,
           learningRate: Double)
         : Vector = {
    -    // Update weight vector with gradient. L1 regularization has no gradient, the proximal operator
    -    // does the job.
    -    BLAS.axpy(-learningRate, gradient, weightVector)
    -
    -    // Apply proximal operator (soft thresholding)
    -    val shrinkageVal = regularizationConstant * learningRate
    -    var i = 0
    -    while (i < weightVector.size) {
    -      val wi = weightVector(i)
    -      weightVector(i) = scala.math.signum(wi) *
    -        scala.math.max(0.0, scala.math.abs(wi) - shrinkageVal)
    -      i += 1
    -    }
    +
    +    L1Regularization.takeStep(weightVector, gradient, regularizationConstant,learningRate)
    --- End diff ---
    
    whitespace missing
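
For context on the refactoring above: the deleted lines apply an in-place gradient step followed by the L1 proximal operator (soft thresholding), which the pull request moves into L1Regularization.takeStep. Below is a minimal sketch of that step, written against plain arrays instead of Flink's Vector and BLAS types; the object and parameter names are illustrative only and not part of flink-ml.

    // Minimal, self-contained sketch of the soft-thresholding step; not the
    // flink-ml implementation, just the same arithmetic on Array[Double].
    object SoftThresholdingSketch {
      def takeStep(
          weights: Array[Double],
          gradient: Array[Double],
          regularizationConstant: Double,
          learningRate: Double): Unit = {
        // Plain gradient step; the L1 term contributes no gradient of its own.
        var i = 0
        while (i < weights.length) {
          weights(i) -= learningRate * gradient(i)
          i += 1
        }
        // Proximal operator of the L1 norm: shrink each weight towards zero by
        // regularizationConstant * learningRate, clipping at zero.
        val shrinkageVal = regularizationConstant * learningRate
        i = 0
        while (i < weights.length) {
          val wi = weights(i)
          weights(i) = math.signum(wi) * math.max(0.0, math.abs(wi) - shrinkageVal)
          i += 1
        }
      }
    }

Calling takeStep(Array(0.5, -0.2), Array(0.1, 0.1), 0.1, 0.5), for instance, first moves the weights to (0.45, -0.25) and then shrinks each entry by 0.05 towards zero, giving (0.40, -0.20).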


> Implement Loss Functions
> ------------------------
>
>                 Key: FLINK-1979
>                 URL: https://issues.apache.org/jira/browse/FLINK-1979
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Johannes Günther
>            Assignee: Johannes Günther
>            Priority: Minor
>              Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a 
> pluggable implementation of a loss function and its first derivative.
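
A rough sketch of the kind of pluggable loss-function interface the issue asks for is given below; the trait and method names are assumptions made for illustration, not the API that was ultimately added to flink-ml.

    // Hedged sketch of a pluggable loss function with its first derivative;
    // the names below are illustrative assumptions, not flink-ml's API.
    trait LossFunctionSketch {
      // Value of the loss for a single prediction/label pair.
      def loss(prediction: Double, label: Double): Double

      // First derivative of the loss with respect to the prediction.
      def derivative(prediction: Double, label: Double): Double
    }

    // Squared loss: L(p, y) = 0.5 * (p - y)^2, whose derivative is (p - y).
    object SquaredLossSketch extends LossFunctionSketch {
      override def loss(prediction: Double, label: Double): Double = {
        val diff = prediction - label
        0.5 * diff * diff
      }

      override def derivative(prediction: Double, label: Double): Double =
        prediction - label
    }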



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
