Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/692#discussion_r30779810
--- Diff:
flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala
---
@@ -36,19 +36,20 @@ import org.apache.flink.ml.optimization.Solver._
* At the moment, the whole partition is used for SGD, making it effectively a batch gradient
* descent. Once a sampling operator has been introduced, the algorithm can be optimized
*
- * @param runParameters The parameters to tune the algorithm. Currently these include:
- *                      [[Solver.LossFunction]] for the loss function to be used,
- *                      [[Solver.RegularizationType]] for the type of regularization,
- *                      [[Solver.RegularizationParameter]] for the regularization parameter,
+ * The parameters to tune the algorithm are:
+ *                      [[Solver.LossFunctionParameter]] for the loss function to be used,
+ *                      [[Solver.RegularizationTypeParameter]] for the type of regularization,
+ *                      [[Solver.RegularizationValueParameter]] for the regularization parameter,
--- End diff --
Do we want to append the "Parameter" suffix to the parameter names? So far
we haven't done that.
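
For context, flink-ml exposes tunable settings as case objects extending a `Parameter[T]` trait. The sketch below is illustrative only, not the actual flink-ml source: the simplified `Parameter` trait, the `LossFunction` trait, and the object bodies are all assumptions. It shows one plausible motivation for the suffix under discussion: without it, the parameter key shares its name with the type it configures.

```scala
// Illustrative sketch only -- a simplified stand-in for flink-ml's
// Parameter trait and the Solver parameter objects; not the actual code.
trait Parameter[T] {
  val defaultValue: Option[T]
}

// The concept being configured: a loss function type.
trait LossFunction {
  def loss(prediction: Double, label: Double): Double
}

object Solver {
  // Unsuffixed convention (used so far in flink-ml): the parameter key
  // would share its name with the LossFunction trait above, so a bare
  // reference to `LossFunction` inside Solver becomes ambiguous.
  // case object LossFunction extends Parameter[LossFunction] { ... }

  // Suffixed convention proposed in this diff: the key no longer shadows
  // the concept it configures, at the cost of a longer name.
  case object LossFunctionParameter extends Parameter[LossFunction] {
    val defaultValue: Option[LossFunction] = None
  }
}
```

Either convention works mechanically; the trade-off is name-collision avoidance versus consistency with the existing unsuffixed parameter names elsewhere in flink-ml.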