Github user debasish83 commented on the pull request:

    https://github.com/apache/spark/pull/1290#issuecomment-82510517
  
    @avulanov If we want an auto-encoder that works at the scale of matrix 
factorization and LDA, we cannot assume the model fits in worker memory. 
Even for a reasonably sized 10M x 3M auto-encoder with 10K hidden units we 
need 10M x 10K + 3M x 10K entries of model memory (most likely sparse); see 
the rough arithmetic sketched below. If the model fits in worker memory, 
then the solver fits in master memory, and there is no need for either 
distributed gradient calculation or a coordinate-descent-based solver. Does 
@witgo's branch also aggregate gradients in the same way?
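
    As a rough sketch of that memory estimate (hypothetical Scala, assuming 
dense 8-byte doubles; the sparse case would be smaller, but the order of 
magnitude is the point):

        object ModelMemoryEstimate {
          def main(args: Array[String]): Unit = {
            val rows   = 10000000L  // 10M rows
            val cols   = 3000000L   // 3M columns
            val hidden = 10000L     // 10K hidden units
            // Entries in a 10M x 10K factor plus a 3M x 10K factor.
            val entries = rows * hidden + cols * hidden
            val bytes   = entries * 8L  // 8 bytes per Double, dense storage
            println(f"entries = $entries%,d, dense memory ~ ${bytes / 1e12}%.2f TB")
            // Prints roughly 1.04 TB -- far beyond a single worker's memory.
          }
        }

    Even heavy sparsity would have to cut several orders of magnitude off 
that ~1 TB before the model fits comfortably on a single worker.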

