GitHub user avulanov commented on the pull request:

    https://github.com/apache/spark/pull/1290#issuecomment-82528305
  
    @debasish83 The current implementation is intended for the more typical
use of artificial neural networks, although it would be an interesting task
to implement a distributed model in general, not only for the sparse case.
Are you going to build a recommender system with artificial neural networks?
Could you please elaborate on this or point me to the relevant papers?
    
    Even if the model fits into memory, there is still a need for distributed
gradient computation if your data is big: you can process each part of the
data on a separate worker simultaneously and then do an update, as in the
sketch below. This will not give a linear speed-up, but it is still worth
doing. That is how it is implemented in MLlib's optimizers, as well as in
both this branch and GuoQiang's.
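
    For illustration, here is a minimal Scala sketch of that data-parallel
pattern, assuming toy data and a made-up squared-loss gradient (the object
and variable names are hypothetical); MLlib's optimizers follow the same
aggregate-then-update structure with pluggable gradient and update steps.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import breeze.linalg.{DenseVector => BDV}

// Minimal sketch of data-parallel gradient descent. Each iteration
// computes partial gradients on the workers and applies one update
// on the driver; names and data here are illustrative only.
object DistributedGradientSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("gradient-sketch").setMaster("local[*]"))

    // Toy (label, features) pairs standing in for a big dataset.
    val data = sc.parallelize(Seq(
      (1.0, BDV(0.5, 1.0)),
      (0.0, BDV(1.5, -0.5)),
      (1.0, BDV(0.2, 0.8))
    )).cache()

    var weights = BDV.zeros[Double](2)
    val stepSize = 0.1

    for (_ <- 1 to 10) {
      // Ship the current weights to the workers once per iteration.
      val bcWeights = sc.broadcast(weights)
      // Each worker sums gradients over its partition in parallel;
      // treeAggregate combines the partial sums back on the driver.
      val (gradSum, count) = data.treeAggregate((BDV.zeros[Double](2), 0L))(
        seqOp = { case ((grad, n), (label, features)) =>
          val error = bcWeights.value.dot(features) - label  // squared loss
          (grad + features * error, n + 1)
        },
        combOp = { case ((g1, n1), (g2, n2)) => (g1 + g2, n1 + n2) }
      )
      // Single sequential update per pass: this aggregation cost is
      // why the speed-up is less than linear in the number of workers.
      weights -= (gradSum / count.toDouble) * stepSize
    }

    println(s"weights = $weights")
    sc.stop()
  }
}
```

    Broadcasting the weights avoids re-serializing them into every task
closure, and treeAggregate reduces the partial gradients in a tree so the
driver does not become a bottleneck when there are many partitions.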

