Github user junyangq commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14229#discussion_r75026541
  
    --- Diff: R/pkg/R/mllib.R ---
    @@ -299,6 +306,94 @@ setMethod("summary", signature(object = 
"NaiveBayesModel"),
                 return(list(apriori = apriori, tables = tables))
               })
     
    +# Returns posterior probabilities from a Latent Dirichlet Allocation model produced by spark.lda()
    +
    +#' @param newData A SparkDataFrame for testing
    +#' @return \code{spark.posterior} returns a SparkDataFrame containing posterior probability
    +#'         vectors named "topicDistribution"
    +#' @rdname spark.lda
    +#' @aliases spark.posterior,LDAModel,SparkDataFrame-method
    +#' @export
    +#' @note spark.posterior(LDAModel) since 2.1.0
    +setMethod("spark.posterior", signature(object = "LDAModel", newData = "SparkDataFrame"),
    +          function(object, newData) {
    +            return(dataFrame(callJMethod(object@jobj, "transform", newData@sdf)))
    +          })
    +
    +# Returns the summary of a Latent Dirichlet Allocation model produced by \code{spark.lda}
    +
    +#' @param object A Latent Dirichlet Allocation model fitted by \code{spark.lda}.
    +#' @param maxTermsPerTopic Maximum number of terms to collect for each topic. Default value of 10.
    +#' @return \code{summary} returns a list containing
    +#'         \item{\code{docConcentration}}{concentration parameter commonly named \code{alpha} for
    +#'               the prior placed on documents' distributions over topics \code{theta}}
    +#'         \item{\code{topicConcentration}}{concentration parameter commonly named \code{beta} or
    +#'               \code{eta} for the prior placed on topic distributions over terms}
    +#'         \item{\code{logLikelihood}}{log likelihood of the entire corpus}
    +#'         \item{\code{logPerplexity}}{log perplexity}
    +#'         \item{\code{isDistributed}}{TRUE for a distributed model, FALSE for a local model}
    +#'         \item{\code{vocabSize}}{number of terms in the corpus}
    +#'         \item{\code{topics}}{top 10 terms and their weights for all topics}
    +#'         \item{\code{vocabulary}}{all terms of the training corpus; NULL if a libsvm-format file
    +#'               was used as the training set}
    +#' @rdname spark.lda
    +#' @aliases summary,LDAModel-method
    +#' @export
    +#' @note summary(LDAModel) since 2.1.0
    +setMethod("summary", signature(object = "LDAModel"),
    +          function(object, maxTermsPerTopic, ...) {
    --- End diff --
    
    Is `...` useful here?
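    For context, a minimal base-R sketch of why `...` often stays in S4 `summary` methods even when the body never uses it: the base generic is `summary(object, ...)`, so methods conventionally keep `...` for signature compatibility. The class and values below are hypothetical stand-ins, not SparkR code.

    ```r
    library(methods)

    # Hypothetical class standing in for LDAModel
    setClass("ToyModel", representation(k = "numeric"))

    # summary()'s base generic is function(object, ...), so the method
    # keeps `...` in its signature; extra named arguments are absorbed
    # and silently ignored unless the body explicitly forwards them.
    setMethod("summary", signature(object = "ToyModel"),
              function(object, maxTermsPerTopic = 10, ...) {
                paste("maxTermsPerTopic =", maxTermsPerTopic)
              })

    m <- new("ToyModel", k = 3)
    summary(m, maxTermsPerTopic = 5, unused = TRUE)  # "unused" is swallowed by `...`
    ```

    So `...` does nothing in the body here, but dropping it from the method while the generic declares it would make the signatures inconsistent.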

