Github user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14229#discussion_r74675732
  
    --- Diff: R/pkg/R/mllib.R ---
    @@ -605,6 +701,69 @@ setMethod("spark.survreg", signature(data = "SparkDataFrame", formula = "formula
                 return(new("AFTSurvivalRegressionModel", jobj = jobj))
               })
     
    +#' Latent Dirichlet Allocation
    +#'
    +#' \code{spark.lda} fits a Latent Dirichlet Allocation model on a SparkDataFrame. Users can call
    +#' \code{summary} to get a summary of the fitted LDA model, \code{spark.posterior} to compute
    +#' posterior probabilities on new data, \code{spark.perplexity} to compute log perplexity on new
    +#' data, and \code{write.ml}/\code{read.ml} to save/load fitted models.
    +#'
    +#' @param data A SparkDataFrame for training
    +#' @param features Features column name, default "features". Either a vector-format column or a
    +#'        string-format column is accepted.
    +#' @param k Number of topics, default 10
    +#' @param maxIter Maximum iterations, default 20
    +#' @param optimizer Optimizer to train an LDA model, "online" or "em", default "online"
    +#' @param subsamplingRate (For online optimizer) Fraction of the corpus to be sampled and used in
    +#'        each iteration of mini-batch gradient descent, in range (0, 1], default 0.05
    +#' @param topicConcentration Concentration parameter (commonly named \code{beta} or \code{eta}) for
    +#'        the prior placed on topic distributions over terms, default -1 to set automatically on the
    +#'        Spark side. Use \code{summary} to retrieve the effective topicConcentration.
    +#' @param docConcentration Concentration parameter (commonly named \code{alpha}) for the
    +#'        prior placed on document distributions over topics (\code{theta}), default -1 to set
    +#'        automatically on the Spark side. Use \code{summary} to retrieve the effective
    +#'        docConcentration.
    +#' @param customizedStopWords Stopwords to be removed from the given corpus. Only effective when
    +#'        the training data has a string-format column.
    +#' @param maxVocabSize Maximum vocabulary size, default 1 << 18
    +#' @return \code{spark.lda} returns a fitted Latent Dirichlet Allocation model
    +#' @rdname spark.lda
    +#' @aliases spark.lda,SparkDataFrame
    --- End diff --
    
    add `-method`
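
The suggestion follows roxygen's convention for S4 method aliases (`generic,signature-method`); with the suffix added, the tag would read:

```r
#' @aliases spark.lda,SparkDataFrame-method
```

For context, the workflow described in the roxygen text above would look roughly like this. This is a hedged sketch, not code from the patch: it assumes a running SparkR session and a SparkDataFrame `df` with a string column named "text".

```r
# Fit an LDA model on a string-format column (sketch; names assumed as above)
model <- spark.lda(df, features = "text", k = 10, optimizer = "online")
summary(model)                        # includes the effective doc/topicConcentration
post <- spark.posterior(model, df)    # posterior topic distributions per document
lp <- spark.perplexity(model, df)     # log perplexity on (new) data
write.ml(model, "/tmp/ldaModel")      # persist; reload later with read.ml()
```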


