Github user avulanov commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-61757505
I think that these 3 parameters should somehow be bound together; otherwise one
can plug in a gradient whose weight-vector length does not correspond to the ANN
size. We could provide a factory of correct gradients or, better, create a
`trait ANNGradient` that any ANN gradient must extend. It should have a few
methods, for example for setting the error function. However, some ML algorithms
with specific optimization are separate classes in MLlib, such as `SVMWithSGD`.
If we follow that route, we can create an abstract `trait ANN` with vals for the
optimizer, gradient and updater that have to be initialized in the descendants.
We would have one descendant, `ANNWithLBFGS`, which is the current
implementation.
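A minimal sketch of what I have in mind. Everything here is hypothetical: the
`ANNGradient`, `ANN` and `ANNWithLBFGS` names follow the proposal above, and the
`Gradient`/`Updater`/`Optimizer` traits are simplified stand-ins for MLlib's
actual optimization interfaces, not the real API:

```scala
// Simplified stand-ins for MLlib's optimization interfaces (assumption,
// not the real org.apache.spark.mllib.optimization types).
trait Gradient
trait Updater
trait Optimizer

// A gradient bound to an ANN: it carries the layer topology, so the
// weight-vector length it expects matches the network size, and the
// error function is configurable.
trait ANNGradient extends Gradient {
  def topology: Array[Int]                  // layer sizes of the network
  def setErrorFunction(name: String): Unit  // e.g. "squared", "cross-entropy"
}

// Abstract base: descendants must supply a matching gradient, updater
// and optimizer, mirroring how SVMWithSGD wires up its own optimizer.
trait ANN {
  val gradient: ANNGradient
  val updater: Updater
  val optimizer: Optimizer
}

// The single envisioned descendant: the current L-BFGS implementation.
class ANNWithLBFGS(layerSizes: Array[Int]) extends ANN {
  val gradient: ANNGradient = new ANNGradient {
    val topology: Array[Int] = layerSizes
    private var error = "squared"
    def setErrorFunction(name: String): Unit = { error = name }
  }
  val updater: Updater = new Updater {}
  val optimizer: Optimizer = new Optimizer {}
}
```

This way a mismatched gradient cannot be plugged in: any `ANN` descendant is
forced by the compiler to provide an `ANNGradient` tied to a topology.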