Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/166#discussion_r10696072
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescentWithLocalUpdate.scala ---
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.mllib.optimization
+
+import org.apache.spark.Logging
+import org.apache.spark.rdd.RDD
+
+import org.jblas.DoubleMatrix
+
+import scala.collection.mutable.ArrayBuffer
+import scala.util.Random
+
+/**
+ * Class used to solve an optimization problem using Gradient Descent.
+ * @param gradient Gradient function to be used.
+ * @param updater Updater to be used to update weights after every iteration.
+ */
+class GradientDescentWithLocalUpdate(gradient: Gradient, updater: Updater)
+ extends GradientDescent(gradient, updater) with Logging
+{
+ private var numLocalIterations: Int = 1
+
+ /**
+ * Set the number of local iterations. Default 1.
+ */
+ def setNumLocalIterations(numLocalIter: Int): this.type = {
+ this.numLocalIterations = numLocalIter
+ this
+ }
+
+ override def optimize(data: RDD[(Double, Array[Double])], initialWeights: Array[Double])
+ : Array[Double] = {
+
+ val (weights, stochasticLossHistory) = GradientDescentWithLocalUpdate.runMiniBatchSGD(
+ data,
+ gradient,
+ updater,
+ stepSize,
+ numIterations,
+ numLocalIterations,
+ regParam,
+ miniBatchFraction,
+ initialWeights)
+ weights
+ }
+
+}
+
+// Top-level method to run gradient descent.
+object GradientDescentWithLocalUpdate extends Logging {
+ /**
+ * Run BSP+ gradient descent in parallel using mini batches.
--- End diff --
Hmm... since BSP+ is not a well-known concept and there are no related
references (yet), maybe we should not use this term? Any suggestions @mengxr?
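
For reference, a minimal usage sketch of the optimizer in this diff might look like the following. The constructor and the `setNumLocalIterations`/`optimize` calls follow the diff above; the `SparkContext` setup, the `LogisticGradient`/`SimpleUpdater` pairing, and the toy data are assumptions for illustration only, not part of the proposed change.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.optimization.{GradientDescentWithLocalUpdate, LogisticGradient, SimpleUpdater}

object LocalUpdateSketch {
  def main(args: Array[String]) {
    // Assumed local context, for illustration only.
    val sc = new SparkContext("local", "GradientDescentWithLocalUpdate sketch")

    // Toy (label, features) pairs matching the RDD[(Double, Array[Double])]
    // signature of optimize() in the diff.
    val data = sc.parallelize(Seq(
      (1.0, Array(1.0, 0.5)),
      (0.0, Array(-1.0, -0.5))
    ))

    // Logistic gradient + simple updater are just one plausible pairing here.
    val optimizer = new GradientDescentWithLocalUpdate(new LogisticGradient, new SimpleUpdater)
    // Number of local update passes, as exposed by setNumLocalIterations in the diff.
    optimizer.setNumLocalIterations(5)

    val weights = optimizer.optimize(data, Array(0.0, 0.0))
    println("learned weights: " + weights.mkString(", "))

    sc.stop()
  }
}
```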