GitHub user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-67883971
@avulanov I think we should support multiple optimizers too, but it should
be done properly, in a way that does not change APIs unless absolutely
necessary. Otherwise, users will suddenly find that their code breaks when
they update Spark versions, and some users will refuse to use modules which
do not have stable APIs. It would be great if we could split the optimizer
issue into multiple PRs, adding support for more optimizers only later,
after the optimizer API has stabilized.
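To make that concrete, here is a minimal sketch of what a pluggable-optimizer hook could look like without API breakage; the class and method names are illustrative assumptions, not the PR's actual API:
```scala
import org.apache.spark.mllib.optimization.Optimizer

// Hypothetical trainer: the optimizer is a private, pluggable field with a
// chainable setter, so adding more optimizers later never changes any
// existing public method signatures.
class ANNTrainer(private var optimizer: Optimizer) {
  /** Swaps in a different optimizer without touching the public API. */
  def setOptimizer(optimizer: Optimizer): this.type = {
    this.optimizer = optimizer
    this
  }
}
```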
Also, with respect to the trainWithX methods, it is really hard to come up
with good ways to specify parameters right now because of the problems with
the Optimizer APIs. My feeling is that the public API should be declared
Experimental and kept as minimal as possible for now, even if that means
limiting options. Once optimization has been cleaned up, the ANN API can be
updated and made non-Experimental. Even if you don't allow all of the
options you want at first, it would be valuable to get other users testing
the implementation.
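As a rough illustration of "Experimental and minimal", the sketch below shows one possible shape of such a facade; the names and signatures are assumptions for illustration, not the PR's actual code:
```scala
import org.apache.spark.annotation.Experimental
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

/** Placeholder model type for this sketch. */
class ArtificialNeuralNetworkModel(val weights: Vector)

// A deliberately small Experimental surface: one train() entry point with
// defaults, internals kept private, so signatures can still change before
// the API is declared stable.
@Experimental
object ArtificialNeuralNetwork {
  def train(
      data: RDD[(Vector, Vector)],
      hiddenLayersTopology: Array[Int],
      maxIterations: Int = 100): ArtificialNeuralNetworkModel = {
    ??? // training elided; it stays behind this minimal facade
  }
}
```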
@bgreeven +1 for only including hidden layer nodes in the
`randomWeights()` argument `hiddenLayersTopology`.
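For illustration, this is roughly what that convention could look like; the signature and weight layout here are assumptions, not the PR's exact code:
```scala
import scala.util.Random

import org.apache.spark.mllib.linalg.{Vector, Vectors}

object WeightInit {
  // hiddenLayersTopology lists only the hidden-layer sizes; the input and
  // output layer sizes are supplied separately (in the PR they would be
  // inferred from the training data), so callers never repeat them.
  def randomWeights(
      inputLayerSize: Int,
      outputLayerSize: Int,
      hiddenLayersTopology: Array[Int],
      seed: Long = Random.nextLong()): Vector = {
    val topology = inputLayerSize +: hiddenLayersTopology :+ outputLayerSize
    // One weight matrix (plus a bias column) per consecutive layer pair.
    val numWeights =
      topology.sliding(2).map { case Array(in, out) => (in + 1) * out }.sum
    val rand = new Random(seed)
    Vectors.dense(Array.fill(numWeights)(rand.nextDouble() * 2 - 1))
  }
}
```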