GitHub user etrain commented on the pull request:
https://github.com/apache/spark/pull/476#issuecomment-41753574
Before I get too deep into this review, I want to step back and consider
whether we expect the model in this case to be on the order of the size of
the data. I think it is, and if so, we may want to represent the model as an
RDD[DocumentTopicFeatures] and an RDD[TopicWordFeatures], similar to what we
do with ALS. This may change the algorithm substantially.
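For illustration, a minimal sketch of what that distributed representation
might look like, reusing the DocumentTopicFeatures / TopicWordFeatures names
above; the DistributedLDAModel wrapper and its fields are hypothetical, not
anything in this PR:

```scala
import org.apache.spark.rdd.RDD

// Hypothetical per-row factor types, analogous to how ALS keeps
// user and product factors as RDDs of (id, factors) rows.
case class DocumentTopicFeatures(docId: Long, topicDist: Array[Double])
case class TopicWordFeatures(topicId: Int, wordDist: Array[Double])

// The model becomes a pair of RDDs instead of a driver-local matrix,
// so its size can grow with the cluster rather than with one machine.
class DistributedLDAModel(
    val docTopics: RDD[DocumentTopicFeatures],
    val topicWords: RDD[TopicWordFeatures])
```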
Separately, it may make sense to have a concrete use case to work with (the
Reuters dataset or something similar) so that we can measure how much memory
actually gets used on a reasonably sized corpus.
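As a back-of-envelope estimate (the corpus sizes below are illustrative
assumptions, not measurements from any real dataset), a dense model is one
D x K doc-topic matrix plus one K x V topic-word matrix of doubles:

```scala
// Illustrative sizes only: D documents, K topics, V-word vocabulary.
val D = 800000L  // documents (assumed)
val K = 100L     // topics (assumed)
val V = 50000L   // vocabulary size (assumed)

// One 8-byte double per entry of the doc-topic and topic-word matrices.
val modelBytes = 8L * (D * K + K * V)
println(f"~${modelBytes / 1e9}%.2f GB")  // ~0.68 GB at these settings
```

Even at these modest settings the doc-topic side dominates, and that is the
part that grows with the corpus, which is what motivates the RDD-based
representation above.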
Perhaps @mengxr or @jegonzal has a strong opinion on this.