Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/8254#discussion_r37837766
--- Diff: docs/mllib-clustering.md ---
@@ -443,23 +443,106 @@ LDA can be thought of as a clustering algorithm as follows:
* Rather than estimating a clustering using a traditional distance, LDA uses a function based
on a statistical model of how text documents are generated.
-LDA takes in a collection of documents as vectors of word counts.
-It supports different inference algorithms via `setOptimizer` function. EMLDAOptimizer learns clustering using [expectation-maximization](http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm)
-on the likelihood function and yields comprehensive results, while OnlineLDAOptimizer uses iterative mini-batch sampling for [online variational inference](https://www.cs.princeton.edu/~blei/papers/HoffmanBleiBach2010b.pdf) and is generally memory friendly. After fitting on the documents, LDA provides:
-
-* Topics: Inferred topics, each of which is a probability distribution over terms (words).
-* Topic distributions for documents: For each non empty document in the training set, LDA gives a probability distribution over topics. (EM only). Note that for empty documents, we don't create the topic distributions. (EM only)
+LDA supports different inference algorithms via the `setOptimizer`
+function. `EMLDAOptimizer` learns clustering using
+[expectation-maximization](http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm)
+on the likelihood function and yields comprehensive results, while
+`OnlineLDAOptimizer` uses iterative mini-batch sampling for [online
+variational inference](https://www.cs.princeton.edu/~blei/papers/HoffmanBleiBach2010b.pdf)
+and is generally memory friendly.
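+
+As a brief, hypothetical sketch, the optimizer can be chosen by name or
+supplied as a configured instance (the mini-batch fraction below is
+illustrative only):
+
+{% highlight scala %}
+import org.apache.spark.mllib.clustering.{LDA, OnlineLDAOptimizer}
+
+// Select the EM optimizer by name ...
+val emLDA = new LDA().setOptimizer("em")
+
+// ... or pass an optimizer instance, which exposes algorithm-specific
+// settings such as the mini-batch fraction for online inference.
+val onlineLDA = new LDA()
+  .setOptimizer(new OnlineLDAOptimizer().setMiniBatchFraction(0.05))
+{% endhighlight %}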
-LDA takes the following parameters:
+LDA takes in a collection of documents as vectors of word counts and the
+following parameters:
* `k`: Number of topics (i.e., cluster centers)
-* `maxIterations`: Limit on the number of iterations of EM used for learning
-* `docConcentration`: Hyperparameter for prior over documents' distributions over topics. Currently must be > 1, where larger values encourage smoother inferred distributions.
-* `topicConcentration`: Hyperparameter for prior over topics' distributions over terms (words). Currently must be > 1, where larger values encourage smoother inferred distributions.
-* `checkpointInterval`: If using checkpointing (set in the Spark configuration), this parameter specifies the frequency with which checkpoints will be created. If `maxIterations` is large, using checkpointing can help reduce shuffle file sizes on disk and help with failure recovery.
-
-*Note*: LDA is a new feature with some missing functionality. In particular, it does not yet
-support prediction on new documents, and it does not have a Python API. These will be added in the future.
+* `LDAOptimizer`: Optimizer to use for learning the LDA model, either
+`EMLDAOptimizer` or `OnlineLDAOptimizer`
+* `docConcentration`: Dirichlet parameter for prior over documents'
+distributions over topics. Larger values encourage smoother inferred
+distributions.
+* `topicConcentration`: Dirichlet parameter for prior over topics'
+distributions over terms (words). Larger values encourage smoother
+inferred distributions.
+* `maxIterations`: Limit on the number of iterations.
+* `checkpointInterval`: If using checkpointing (set in the Spark
+configuration), this parameter specifies the frequency with which
+checkpoints will be created. If `maxIterations` is large, using
+checkpointing can help reduce shuffle file sizes on disk and help with
+failure recovery.
+
+
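+A minimal sketch of setting these parameters and fitting a model (this
+assumes a `SparkContext` named `sc`; the corpus and parameter values
+are toy illustrations):
+
+{% highlight scala %}
+import org.apache.spark.mllib.clustering.LDA
+import org.apache.spark.mllib.linalg.{Vector, Vectors}
+import org.apache.spark.rdd.RDD
+
+// Toy corpus: an RDD of (document ID, term-count vector) pairs.
+val corpus: RDD[(Long, Vector)] = sc.parallelize(Seq(
+  (0L, Vectors.dense(1.0, 2.0, 0.0, 5.0)),
+  (1L, Vectors.dense(3.0, 0.0, 1.0, 2.0))))
+
+val lda = new LDA()
+  .setK(3)                    // number of topics
+  .setMaxIterations(50)       // iteration limit
+  .setDocConcentration(-1)    // -1 selects the optimizer's default
+  .setTopicConcentration(-1)  // -1 selects the optimizer's default
+  .setCheckpointInterval(10)  // checkpoint every 10 iterations
+val ldaModel = lda.run(corpus)
+{% endhighlight %}
+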
+All of MLlib's LDA models support:
+
+* `describeTopics(n: Int)`: Returns the inferred topics, each of which
+is a probability distribution over terms (words), described by their
+`n` top-weighted terms, as illustrated below.
+* `topicsMatrix`: Returns the inferred topics as a matrix in which
+each column is one topic's distribution over terms (words).
+
+Note that per-document topic distributions are available only from the
+EM-trained model: for each non-empty document in the training set, it
+gives a probability distribution over topics, and no distribution is
+created for empty documents.
+
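+For example, continuing the sketch above (`ldaModel` is the fitted
+model from the previous snippet):
+
+{% highlight scala %}
+// Top 5 weighted terms for each inferred topic, as (term index, weight).
+val topics = ldaModel.describeTopics(5)
+topics.zipWithIndex.foreach { case ((terms, weights), i) =>
+  println(s"Topic $i: " + terms.zip(weights).mkString(", "))
+}
+
+// vocabSize x k matrix; column j is topic j's distribution over terms.
+val topicsMat = ldaModel.topicsMatrix
+{% endhighlight %}
+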
+*Note*: LDA is still an experimental feature under active development.
+As a result, certain features are available in only one of the two
+optimizers or in the model it produces. The following discussion
+describes each optimizer/model pair separately.
+
+**EMLDAOptimizer and DistributedLDAModel**
+
+For the parameters provided to `LDA`:
+
+* `docConcentration`: Only symmetric priors are supported, so all values
+in the provided `k`-dimensional vector must be identical. All values
+must also be $> 1.0$. Providing `Vector(-1)` results in default behavior
+(a uniform `k`-dimensional vector with value $(50 / k) + 1$).
+* `topicConcentration`: Only symmetric priors are supported. Values
+must be $> 1.0$. Providing `-1` results in defaulting to a value of
+$0.1 + 1$.
+* `maxIterations`: Interpreted as maximum number of EM iterations.
--- End diff --
Remove "Interpreted as"