GitHub user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/692#discussion_r30779835
--- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
@@ -36,19 +36,20 @@ import org.apache.flink.ml.optimization.Solver._
  * At the moment, the whole partition is used for SGD, making it effectively a batch gradient
  * descent. Once a sampling operator has been introduced, the algorithm can be optimized
  *
- * @param runParameters The parameters to tune the algorithm. Currently these include:
- *                      [[Solver.LossFunction]] for the loss function to be used,
- *                      [[Solver.RegularizationType]] for the type of regularization,
- *                      [[Solver.RegularizationParameter]] for the regularization parameter,
+ * The parameters to tune the algorithm are:
+ *                      [[Solver.LossFunctionParameter]] for the loss function to be used,
+ *                      [[Solver.RegularizationTypeParameter]] for the type of regularization,
+ *                      [[Solver.RegularizationValueParameter]] for the regularization parameter,
  *                      [[IterativeSolver.Iterations]] for the maximum number of iterations,
  *                      [[IterativeSolver.Stepsize]] for the learning rate used.
+ *                      [[IterativeSolver.ConvergenceThreshold]] when provided, the algorithm will
+ *                      stop the iterations if the change in the value of the objective
+ *                      function between successive iterations is smaller than this value.
  */
-class GradientDescent(runParameters: ParameterMap) extends IterativeSolver {
+class GradientDescent() extends IterativeSolver() {
--- End diff --
Do we need the parentheses?
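In Scala, a class with no constructor arguments can omit the empty parameter list, and the empty parentheses on the superclass constructor call can be dropped as well. A minimal standalone sketch using hypothetical class names (not the actual Flink ML types) to illustrate the point:

```scala
// Hypothetical stand-ins for IterativeSolver/GradientDescent, used only
// to illustrate the syntax question; not the Flink ML classes themselves.
class BaseSolver // class with a zero-argument primary constructor

// Both forms compile and are equivalent; the parentheses are redundant.
class SolverWithParens() extends BaseSolver()
class SolverWithoutParens extends BaseSolver

object ParensDemo extends App {
  // Instantiation also works with or without the empty argument list.
  val a = new SolverWithParens
  val b = new SolverWithoutParens()
  println(s"${a.getClass.getSimpleName}, ${b.getClass.getSimpleName}")
}
```

Omitting them (`class GradientDescent extends IterativeSolver`) is the more common style for argument-less classes.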