Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4047#issuecomment-70146798
@EntilZha Here's a sketch of my plan.
Datasets:
* UCI ML Repository data (also used by Asuncion et al., 2009):
  * KOS
  * NIPS
  * NYTimes
  * PubMed (full)
* Wikipedia?
Data preparation:
* Converting to bags of words:
  * The UCI datasets are already given as word counts.
  * The Wikipedia dump is raw text. I use the SimpleTokenizer from LDAExample, which sets term = word and accepts only alphabetic characters.
* Remove stopwords, using the list from @dlwh at [https://github.com/dlwh/spark/feature/lda]
* No stemming.
* Choosing the vocabulary: for each vocabSize setting, take the vocabSize most common terms (see the sketch below).
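
To make the prep steps concrete, here is a minimal sketch of the pipeline as I understand it (tokenize, drop stopwords, keep the vocabSize most common terms, emit term-count vectors). The function name and the exact tokenization regex are my own placeholders, not the actual LDAExample code:

```scala
import org.apache.spark.SparkContext._
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

// Hypothetical sketch of the prep pipeline described above.
// `stopwords` is assumed to be loaded from the linked branch's word list.
def prepare(docs: RDD[String], stopwords: Set[String], vocabSize: Int): RDD[(Long, Vector)] = {
  // Tokenize: term = word, alphabetic characters only (approximating SimpleTokenizer).
  val tokenized: RDD[Seq[String]] = docs.map { text =>
    text.toLowerCase.split("""\W+""")
      .filter(t => t.nonEmpty && t.forall(_.isLetter) && !stopwords.contains(t))
      .toSeq
  }
  // Choose the vocabulary: the vocabSize most common terms across the corpus.
  val vocab: Map[String, Int] = tokenized
    .flatMap(_.map(_ -> 1L))
    .reduceByKey(_ + _)
    .sortBy(-_._2)
    .take(vocabSize)
    .map(_._1)
    .zipWithIndex
    .toMap
  // Convert each document to a sparse term-count vector, dropping out-of-vocab terms.
  tokenized.zipWithIndex.map { case (tokens, docId) =>
    val counts = scala.collection.mutable.Map.empty[Int, Double]
    tokens.foreach { t =>
      vocab.get(t).foreach { idx => counts(idx) = counts.getOrElse(idx, 0.0) + 1.0 }
    }
    (docId, Vectors.sparse(vocab.size, counts.toSeq))
  }
}
```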
Scaling tests: *(doing these first)*, varying each of the following (a timing sketch follows the list):
* corpus size
* vocabSize
* k
* numIterations
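
A minimal timing loop for the scaling runs, assuming the LDA API in this PR; the grid values below are placeholders, not the actual test settings:

```scala
import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Time one LDA run per (k, numIterations) setting on a fixed corpus.
// `corpus` is the RDD[(Long, Vector)] produced by the prep sketch above.
def timeScaling(corpus: RDD[(Long, Vector)]): Unit = {
  for (k <- Seq(10, 20, 50); numIterations <- Seq(10, 50, 100)) {
    val start = System.nanoTime()
    new LDA().setK(k).setMaxIterations(numIterations).run(corpus)
    val seconds = (System.nanoTime() - start) / 1e9
    println(s"k=$k numIterations=$numIterations time=${seconds}s")
  }
}
```

Corpus size and vocabSize would be varied the same way, by re-running the prep step with different inputs.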
Accuracy tests: *(doing these second)*
* Train on the full datasets.
* Tune hyperparameters via grid search, following Asuncion et al. (2009), Section 4.1 (sketched after this list).
* We can hopefully compare with their results in Fig. 5.
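
A sketch of the grid search over the Dirichlet priors; the grid values are placeholders, and it assumes the trained DistributedLDAModel exposes a training log-likelihood:

```scala
import org.apache.spark.mllib.clustering.{DistributedLDAModel, LDA}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Train one model per (alpha, beta) pair and keep the best-scoring setting.
def gridSearch(corpus: RDD[(Long, Vector)], k: Int): ((Double, Double), Double) = {
  val grid = for (alpha <- Seq(1.01, 1.1, 1.5); beta <- Seq(1.01, 1.1, 1.5)) yield (alpha, beta)
  val scored = grid.map { case (alpha, beta) =>
    val model = new LDA()
      .setK(k)
      .setDocConcentration(alpha)   // alpha: document-topic prior
      .setTopicConcentration(beta)  // beta: topic-word prior
      .run(corpus)
      .asInstanceOf[DistributedLDAModel]
    ((alpha, beta), model.logLikelihood)
  }
  scored.maxBy(_._2)  // highest training log-likelihood wins
}
```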
These tests will run on a 16-node EC2 cluster of r3.2xlarge instances.