[ https://issues.apache.org/jira/browse/FLINK-1807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519585#comment-14519585 ]
ASF GitHub Bot commented on FLINK-1807:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/613#discussion_r29348422
--- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/RegularizationType.scala ---
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.optimization
+
+import breeze.numerics._
+import org.apache.flink.ml.math.{BLAS, Vector}
+import org.apache.flink.ml.math.Breeze._
+import breeze.linalg.{norm => BreezeNorm, Vector => BreezeVector, max}
+
+// TODO(tvas): Change name to RegularizationPenalty?
+abstract class RegularizationType extends Serializable {
+  /** Calculates and applies the regularization amount and the regularization parameter
+    *
+    * @param weights The old weights
+    * @param effectiveStepSize The effective step size for this iteration
+    * @param regParameter The current regularization parameter
+    * @return A tuple whose first element is the updated weight vector and the second is the
+    *         regularization value
+    */
+  def applyRegularization(weights: Vector, effectiveStepSize: Double, regParameter: Double):
+    (Vector, Double)
+  // TODO(tvas): We are not currently using the regularization value anywhere, but it could be
+  // useful to keep a history of it.
+
+}
+
+class NoRegularization extends RegularizationType {
+  /** Returns the original weights without any regularization applied
+    *
+    * @param weights The old weights
+    * @param effectiveStepSize The effective step size for this iteration
+    * @param regParameter The current regularization parameter
+    * @return A tuple whose first element is the updated weight vector and the second is the
+    *         regularization value
+    */
+  override def applyRegularization(weights: Vector,
+                                   effectiveStepSize: Double,
+                                   regParameter: Double):
+    (Vector, Double) = { (weights, 0.0) }
+}
+
+class L2Regularization extends RegularizationType {
+  /** Calculates and applies the regularization amount and the regularization parameter
+    *
+    * Implementation was taken from the Apache Spark MLlib library:
+    * http://git.io/vfZIT
+    *
+    * @param weights The old weights
+    * @param effectiveStepSize The effective step size for this iteration
+    * @param regParameter The current regularization parameter
+    * @return A tuple whose first element is the updated weight vector and the second is the
+    *         regularization value
+    */
+  override def applyRegularization(weights: Vector,
+                                   effectiveStepSize: Double,
+                                   regParameter: Double):
+    (Vector, Double) = {
+
+    val brzWeights: BreezeVector[Double] = weights.asBreeze
+    brzWeights :*= (1.0 - effectiveStepSize * regParameter)
--- End diff ---
I think you have to apply the regularization gradient to the old weight vector, not to the new weight vector, to make this mathematically sound.
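For illustration, a minimal sketch of the update order being suggested (plain Scala with hypothetical names, not the PR's code): with L2 regularization the SGD step is w_{t+1} = w_t - eta * (gradLoss(w_t) + lambda * w_t) = (1 - eta * lambda) * w_t - eta * gradLoss(w_t), so the shrinkage factor (1 - eta * lambda) must multiply the old weights before the loss-gradient step is added. Applying it to weights that already include the gradient step would scale the gradient contribution as well.

object L2UpdateSketch {
  // Mathematically sound order: shrink the OLD weights first, then take the
  // loss-gradient step: w_{t+1} = (1 - eta * lambda) * w_t - eta * gradLoss(w_t)
  def update(oldWeights: Array[Double],
             lossGradient: Array[Double],
             eta: Double,
             lambda: Double): Array[Double] = {
    val shrink = 1.0 - eta * lambda
    // The shrinkage applies only to the old weights, not to the gradient term.
    oldWeights.zip(lossGradient).map { case (w, g) => shrink * w - eta * g }
  }
}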
> Stochastic gradient descent optimizer for ML library
> ----------------------------------------------------
>
> Key: FLINK-1807
> URL: https://issues.apache.org/jira/browse/FLINK-1807
> Project: Flink
> Issue Type: Improvement
> Components: Machine Learning Library
> Reporter: Till Rohrmann
> Assignee: Theodore Vasiloudis
> Labels: ML
>
> Stochastic gradient descent (SGD) is a widely used optimization technique in
> different ML algorithms. Thus, it would be helpful to provide a generalized
> SGD implementation which can be instantiated with the respective gradient
> computation. Such a building block would make the development of future
> algorithms easier.
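As a rough sketch of such a building block (hypothetical names and plain Scala collections, not the actual flink-ml API), an SGD loop parameterized by the gradient computation could look like this:

// Sketch only: all names here are illustrative assumptions.
trait LossGradient extends Serializable {
  // Gradient of the loss at the given weights for one labeled example (features, label).
  def gradient(weights: Array[Double], example: (Array[Double], Double)): Array[Double]
}

class SimpleSGD(lossGradient: LossGradient, stepSize: Double, iterations: Int) {
  // Runs the given number of passes over the data, taking one gradient step per example.
  def optimize(data: Seq[(Array[Double], Double)], initial: Array[Double]): Array[Double] = {
    var weights = initial
    for (_ <- 0 until iterations; example <- data) {
      val grad = lossGradient.gradient(weights, example)
      weights = weights.zip(grad).map { case (w, g) => w - stepSize * g }
    }
    weights
  }
}

A different loss (least squares, logistic, hinge) would then only require a different LossGradient implementation, which is the kind of reuse the issue is after.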
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)