[
https://issues.apache.org/jira/browse/FLINK-1992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553808#comment-14553808
]
ASF GitHub Bot commented on FLINK-1992:
---------------------------------------
Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/692#discussion_r30779810
--- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
@@ -36,19 +36,20 @@ import org.apache.flink.ml.optimization.Solver._
* At the moment, the whole partition is used for SGD, making it effectively a batch gradient
* descent. Once a sampling operator has been introduced, the algorithm can be optimized
*
- * @param runParameters The parameters to tune the algorithm. Currently these include:
- *                      [[Solver.LossFunction]] for the loss function to be used,
- *                      [[Solver.RegularizationType]] for the type of regularization,
- *                      [[Solver.RegularizationParameter]] for the regularization parameter,
+ * The parameters to tune the algorithm are:
+ *                      [[Solver.LossFunctionParameter]] for the loss function to be used,
+ *                      [[Solver.RegularizationTypeParameter]] for the type of regularization,
+ *                      [[Solver.RegularizationValueParameter]] for the regularization parameter,
--- End diff --
Do we want to append the "Parameter" suffix to the parameter names? So far
we haven't done that.
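For context, the two naming conventions under discussion can be sketched as follows. This is illustrative only: the `Parameter[T]` trait mirrors Flink ML's parameter pattern, but the concrete objects and default values here are hypothetical, not actual library source.

```scala
// Illustrative sketch of the naming question, not actual Flink ML source.
trait Parameter[T] {
  // Default value used when the caller does not set the parameter.
  val defaultValue: Option[T]
}

// Convention used so far in the library: no "Parameter" suffix.
case object Stepsize extends Parameter[Double] {
  val defaultValue: Option[Double] = Some(0.1)
}

// Convention introduced by the diff: every parameter object gets a
// "Parameter" suffix.
case object StepsizeParameter extends Parameter[Double] {
  val defaultValue: Option[Double] = Some(0.1)
}
```

The suffix makes the object's role explicit at the use site, at the cost of diverging from the names already established elsewhere in the library.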
> Add convergence criterion to SGD optimizer
> ------------------------------------------
>
> Key: FLINK-1992
> URL: https://issues.apache.org/jira/browse/FLINK-1992
> Project: Flink
> Issue Type: Improvement
> Components: Machine Learning Library
> Reporter: Till Rohrmann
> Assignee: Theodore Vasiloudis
> Priority: Minor
> Labels: ML
> Fix For: 0.9
>
>
> Currently, Flink's SGD optimizer runs for a fixed number of iterations. It
> would be good to support a dynamic convergence criterion, too.
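One common form such a dynamic criterion takes is stopping when the relative improvement of the objective between iterations drops below a tolerance. The following is a minimal sketch of that check; the names and the tolerance handling are assumptions for illustration, not the Flink implementation.

```scala
// Hypothetical sketch of a relative-improvement convergence check for an
// iterative optimizer such as SGD. Not Flink API.
object ConvergenceSketch {
  // Returns true when the relative change of the loss between two
  // consecutive iterations falls below the tolerance `tol`.
  // The max(..., 1e-12) guard avoids division by zero when the
  // previous loss is (near) zero.
  def hasConverged(previousLoss: Double, currentLoss: Double, tol: Double): Boolean =
    math.abs(previousLoss - currentLoss) /
      math.max(math.abs(previousLoss), 1e-12) < tol
}
```

In an iteration loop this check would run after each pass, terminating early instead of always executing the fixed number of iterations.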
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)