Github user tillrohrmann commented on a diff in the pull request:
https://github.com/apache/flink/pull/692#discussion_r30786949
--- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/optimization/GradientDescent.scala ---
@@ -36,19 +36,20 @@ import org.apache.flink.ml.optimization.Solver._
* At the moment, the whole partition is used for SGD, making it effectively a batch gradient
* descent. Once a sampling operator has been introduced, the algorithm can be optimized
*
- * @param runParameters The parameters to tune the algorithm. Currently these include:
- * [[Solver.LossFunction]] for the loss function to be used,
- * [[Solver.RegularizationType]] for the type of regularization,
- * [[Solver.RegularizationParameter]] for the regularization parameter,
+ * The parameters to tune the algorithm are:
+ * [[Solver.LossFunctionParameter]] for the loss function to be used,
+ * [[Solver.RegularizationTypeParameter]] for the type of regularization,
+ * [[Solver.RegularizationValueParameter]] for the regularization parameter,
--- End diff --
Does IntelliJ suggest the wrong `LossFunction` type when you use it to set
the parameter value in the `ParameterMap`?
I see the point about the parameter key and the value type sharing the same name
from a programmer's perspective. As a user, though, it makes sense that the
parameter is simply called `LossFunction`, for example. I think I'm slightly in
favour of the shorter version.
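To make the trade-off concrete, here is a minimal, self-contained sketch. None of these definitions are the actual flink-ml API — the `Parameter` trait, `ParameterMap`, `SquaredLoss`, and `Demo` are simplified stand-ins — but they show how a parameter key that shares the short name `LossFunction` with its value type reads at the call site, and where the ambiguity appears for the programmer:

```scala
// Hypothetical, simplified stand-ins for flink-ml's parameter machinery,
// used only to illustrate the naming discussion above.

// A type-safe parameter key carrying a default value.
trait Parameter[T] { val defaultValue: Option[T] }

// A minimal key -> value store keyed by Parameter objects.
class ParameterMap {
  private var map = Map.empty[Parameter[_], Any]

  def add[T](parameter: Parameter[T], value: T): ParameterMap = {
    map += (parameter -> value)
    this
  }

  def apply[T](parameter: Parameter[T]): T =
    map.getOrElse(parameter, parameter.defaultValue.get).asInstanceOf[T]
}

// The value type: which loss function the solver should use.
sealed trait LossFunction
case object SquaredLoss extends LossFunction

object Solver {
  // The "short name" variant: the parameter key deliberately reuses the
  // name of the value type. Scala keeps term and type namespaces apart,
  // so this compiles, but IDE completion and imports now see two things
  // called LossFunction -- the clash the longer name would avoid.
  case object LossFunction extends Parameter[LossFunction] {
    val defaultValue: Option[LossFunction] = Some(SquaredLoss)
  }
}

object Demo extends App {
  // From the user's perspective the short name reads naturally:
  val params = new ParameterMap().add(Solver.LossFunction, SquaredLoss)
  println(params(Solver.LossFunction)) // prints "SquaredLoss"
}
```

The sketch illustrates why both sides of the discussion are defensible: the call site `add(Solver.LossFunction, SquaredLoss)` is cleanest with the short name, while anyone importing both the key and the type in one file has to disambiguate them.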
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---