GitHub user shivaram commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12426#discussion_r61343618
  
    --- Diff: R/pkg/R/context.R ---
    @@ -226,6 +226,49 @@ setCheckpointDir <- function(sc, dirName) {
       invisible(callJMethod(sc, "setCheckpointDir", suppressWarnings(normalizePath(dirName))))
     }
     
    +#' @title Run a function over a list of elements, distributing the computations with Spark.
    +#'
    +#' @description
    +#' Applies a function, in a manner similar to doParallel or lapply, to elements of a list.
    +#' The computations are distributed using Spark. It is conceptually the same as the following code:
    +#'   lapply(list, func)
    +#'
    +#' Known limitations:
    +#'  - variable scoping and capture: compared to R's rich support for variable
    +#'    resolution, the distributed nature of SparkR limits how variables are
    +#'    resolved at runtime. All the variables that are available through lexical
    +#'    scoping are embedded in the closure of the function and available as
    +#'    read-only variables within the function. Environment variables should be
    +#'    stored in temporary variables outside the function, and not accessed
    +#'    directly within it.
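    +#'    For instance, a minimal sketch (dataDir and fileNames are hypothetical
    +#'    names, not part of the API):
    +#'
    +#'\dontrun{
    +#' # Read the environment variable once, outside the closure ...
    +#' dataDir <- Sys.getenv("DATA_DIR")
    +#' readOne <- function(name) {
    +#'   # ... so that dataDir is captured read-only through lexical scoping
    +#'   read.csv(file.path(dataDir, name))
    +#' }
    +#' contents <- spark.lapply(fileNames, readOne)
    +#'}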
    +#'
    +#'  - loading external packages: in order to use a package, you need to load it
    +#'    inside the closure. For example, if you rely on the MASS package, here is
    +#'    how you would use it:
    +#'
    +#'\dontrun{
    +#' train <- function(hyperparam) {
    +#'   library(MASS)
    +#'   model <- lm.ridge(y ~ x + z, data, lambda = hyperparam)
    +#'   model
    +#' }
    +#'}
    +#'
    +#' @rdname spark.lapply
    +#' @param list the list of elements
    +#' @param func a function that takes one argument
    +#' @return a list of results (the exact type being determined by the function)
    +#' @export
    +#' @examples
    +#'\dontrun{
    +#' doubled <- spark.lapply(1:10, function(x) { 2 * x })
    +#'}
    +spark.lapply <- function(list, func) {
    +  sc <- get(".sparkRjsc", envir = .sparkREnv)
    --- End diff --
    
    One minor thing: all the existing functions like `parallelize` take in a Spark
    context as the first argument. We've discussed removing this in the past (see
    https://github.com/apache/spark/pull/9192), but we didn't reach a resolution
    on it.
    
    So, to be consistent, it'd be better to take in `sc` as the first argument
    here?
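    
    As a rough sketch (assuming the internal RDD helpers `parallelize`, `map`, and
    `collect` already used elsewhere in SparkR), the consistent signature could
    look like:
    
        # Hypothetical variant with the Spark context passed explicitly
        spark.lapply <- function(sc, list, func) {
          rdd <- parallelize(sc, list, length(list))
          results <- map(rdd, func)
          collect(results)
        }
    
        sc <- sparkR.init()
        doubled <- spark.lapply(sc, 1:10, function(x) { 2 * x })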

