> t; x[0]]).cache()#.collect()
>
> corpus = grouped.zipWithIndex().map(lambda (term_counts, doc_id): [doc_id, term_counts]).cache()
>
> #corpus.cache()
>
> model = LDA.train(corpus, k=10, maxIterations=10, optimizer="online")
>
> #ldaModel = LDA.train(corpus, k=3)
I'm not exactly sure how you would like to set up your LDA model, but I
noticed there was no Python example for LDA in Spark. I created this issue
to add one: https://issues.apache.org/jira/browse/SPARK-13500. Keep an eye
on it if it could be of help.
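
In case it helps in the meantime, below is a minimal, self-contained sketch of
what such a Python example could look like. The three toy term-count vectors
and the 4-word vocabulary are made up purely for illustration; the corpus
layout ([doc_id, term_count_vector]) and the MLlib calls match the ones in the
snippet quoted above.

from pyspark import SparkContext
from pyspark.mllib.clustering import LDA
from pyspark.mllib.linalg import Vectors

sc = SparkContext(appName="lda_sketch")

# Toy corpus: each document is a term-count vector over a 4-word vocabulary.
# In a real job these counts would come from tokenizing and counting your text.
term_counts = sc.parallelize([
    Vectors.dense([1.0, 2.0, 0.0, 6.0]),
    Vectors.dense([1.0, 3.0, 0.0, 2.0]),
    Vectors.dense([0.0, 1.0, 5.0, 1.0]),
])

# LDA.train expects an RDD of [doc_id, term_count_vector]
corpus = term_counts.zipWithIndex().map(lambda x: [x[1], x[0]]).cache()

# Same call as in the quoted snippet, with a smaller k for the toy data
model = LDA.train(corpus, k=2, maxIterations=10, optimizer="online")

# topicsMatrix() is vocabSize x k: column j holds topic j's weight for each term
topics = model.topicsMatrix()
for topic in range(2):
    print("Topic " + str(topic) + ":")
    for term in range(model.vocabSize()):
        print("  " + str(topics[term][topic]))

sc.stop()

The one thing to watch is that every document vector has the same length (your
vocabulary size); if your data starts out as per-document word counts, you
would map each document to a fixed-size count vector before calling LDA.train.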
bryan
On Wed, Feb 24, 2016 at 8:34 PM, Mishra, Abhishek wrote:
Hello All,
If someone has any leads on this, please help me.
Sincerely,
Abhishek
From: Mishra, Abhishek
Sent: Wednesday, February 24, 2016 5:11 PM
To: user@spark.apache.org
Subject: LDA topic Modeling spark + python
Hello All,
I am building an LDA model; please guide me with this.
I ha