Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/19643#discussion_r148983980
--- Diff: R/pkg/R/context.R ---
@@ -319,6 +319,27 @@ spark.addFile <- function(path, recursive = FALSE) {
invisible(callJMethod(sc, "addFile",
suppressWarnings(normalizePath(path)), recursive))
}
+#' Adds a JAR dependency for Spark tasks to be executed in the future.
+#'
+#' The \code{path} passed can be either a local file, a file in HDFS (or other Hadoop-supported
+#' filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
+#' If \code{addToCurrentClassLoader} is true, add the jar to the current driver.
--- End diff --
I think it is roughly right .. I wanted to avoid words like
"classloader" or "thread" .. Not sure what the best wording is to describe this
within R / Python contexts.
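
For context, a minimal usage sketch of the API being documented (the jar path here is hypothetical, and this assumes the `spark.addJar` signature proposed in this PR):

```r
library(SparkR)

# Start (or connect to) a Spark session
sparkR.session()

# Add a JAR dependency for tasks executed later in this session.
# With addToCurrentClassLoader = TRUE, classes in the jar also become
# visible to the current driver process, not just the executors.
spark.addJar("/path/to/my-udfs.jar", addToCurrentClassLoader = TRUE)
```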
---