Github user thunterdb commented on a diff in the pull request:
https://github.com/apache/spark/pull/10610#discussion_r49028171
--- Diff: python/pyspark/mllib/clustering.py ---
@@ -774,17 +843,32 @@ def train(cls, rdd, k=10, maxIterations=20, docConcentration=-1.0, topicConcentration=-1.0, seed=None, checkpointInterval=10, optimizer="em"):
"""Train a LDA model.
- :param rdd: RDD of data points
- :param k: Number of clusters you want
- :param maxIterations: Number of iterations. Default to 20
- :param docConcentration: Concentration parameter (commonly named "alpha")
- for the prior placed on documents' distributions over topics ("theta").
- :param topicConcentration: Concentration parameter (commonly named "beta" or "eta")
- for the prior placed on topics' distributions over terms.
- :param seed: Random Seed
- :param checkpointInterval: Period (in iterations) between checkpoints.
- :param optimizer: LDAOptimizer used to perform the actual calculation.
- Currently "em", "online" are supported. Default to "em".
+ :param rdd:
+ Train with a RDD of data points.
+ :param k:
+ Number of topics to infer. I.e., the number of soft cluster centers.
--- End diff ---
nit: According to Strunk and White, you cannot start a sentence with "I.e." Prefer:
`to infer, i.e., the...`