Repository: spark
Updated Branches:
  refs/heads/master 08913ce00 -> 6a0fda2c0


[SPARKR][MINOR] Fix LDA doc

## What changes were proposed in this pull request?

This PR fixes the name of the `SparkDataFrame` used in the example. It also 
adds a reference URL for an example data file so that users can play with it.

## How was this patch tested?

Manual test.
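
As a usage sketch, the corrected example could be run as follows (this assumes 
a local Spark installation with `SPARK_HOME` set; the data file path and API 
calls are taken from the doc example in the diff below):

```r
library(SparkR)
sparkR.session()

# Example LIBSVM data file shipped with Spark (assumes SPARK_HOME is set)
path <- paste0(Sys.getenv("SPARK_HOME"), "/data/mllib/sample_lda_libsvm_data.txt")

text <- read.df(path, source = "libsvm")
model <- spark.lda(data = text, optimizer = "em")

# Summarize the fitted LDA model
summary(model)

# Posterior probabilities and perplexity on the training data
posterior <- spark.posterior(model, text)
showDF(posterior)
perplexity <- spark.perplexity(model, text)

sparkR.session.stop()
```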

Author: Junyang Qian <junya...@databricks.com>

Closes #14853 from junyangq/SPARKR-FixLDADoc.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6a0fda2c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6a0fda2c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6a0fda2c

Branch: refs/heads/master
Commit: 6a0fda2c0590b455e8713da79cd5f2413e5d0f28
Parents: 08913ce
Author: Junyang Qian <junya...@databricks.com>
Authored: Mon Aug 29 10:23:10 2016 -0700
Committer: Xiangrui Meng <m...@databricks.com>
Committed: Mon Aug 29 10:23:10 2016 -0700

----------------------------------------------------------------------
 R/pkg/R/mllib.R | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/6a0fda2c/R/pkg/R/mllib.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/mllib.R b/R/pkg/R/mllib.R
index 6808aae..64d19fa 100644
--- a/R/pkg/R/mllib.R
+++ b/R/pkg/R/mllib.R
@@ -994,18 +994,22 @@ setMethod("spark.survreg", signature(data = "SparkDataFrame", formula = "formula
 #' @export
 #' @examples
 #' \dontrun{
-#' text <- read.df("path/to/data", source = "libsvm")
+#' # nolint start
+#' # An example "path/to/file" can be
+#' # paste0(Sys.getenv("SPARK_HOME"), "/data/mllib/sample_lda_libsvm_data.txt")
+#' # nolint end
+#' text <- read.df("path/to/file", source = "libsvm")
 #' model <- spark.lda(data = text, optimizer = "em")
 #'
 #' # get a summary of the model
 #' summary(model)
 #'
 #' # compute posterior probabilities
-#' posterior <- spark.posterior(model, df)
+#' posterior <- spark.posterior(model, text)
 #' showDF(posterior)
 #'
 #' # compute perplexity
-#' perplexity <- spark.perplexity(model, df)
+#' perplexity <- spark.perplexity(model, text)
 #'
 #' # save and load the model
 #' path <- "path/to/model"

