Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68541684
--- Diff: R/pkg/R/group.R ---
@@ -198,62 +198,61 @@ createMethods()
#'
#' Applies a R function to each group in the input GroupedData
#'
-#' @param x a GroupedData
-#' @param func A function to be applied to each group partition specified by GroupedData.
-#'             The function `func` takes as argument a key - grouping columns and
-#'             a data frame - a local R data.frame.
-#'             The output of `func` is a local R data.frame.
-#' @param schema The schema of the resulting SparkDataFrame after the function is applied.
-#'               The schema must match to output of `func`. It has to be defined for each
-#'               output column with preferred output column name and corresponding data type.
-#' @return a SparkDataFrame
+#' @param x A GroupedData
#' @rdname gapply
#' @name gapply
#' @export
-#' @examples
-#' \dontrun{
-#' Computes the arithmetic mean of the second column by grouping
-#' on the first and third columns. Output the grouping values and the average.
-#'
-#' df <- createDataFrame(
-#'   list(list(1L, 1, "1", 0.1), list(1L, 2, "1", 0.2), list(3L, 3, "3", 0.3)),
-#'   c("a", "b", "c", "d"))
-#'
-#' Here our output contains three columns, the key which is a combination of two
-#' columns with data types integer and string and the mean which is a double.
-#' schema <- structType(structField("a", "integer"), structField("c", "string"),
-#'                      structField("avg", "double"))
-#' df1 <- gapply(
-#'   df,
-#'   list("a", "c"),
-#'   function(key, x) {
-#'     y <- data.frame(key, mean(x$b), stringsAsFactors = FALSE)
-#'   },
-#'   schema)
-#' collect(df1)
-#'
-#' Result
-#' ------
-#' a c avg
-#' 3 3 3.0
-#' 1 1 1.5
-#' }
+#' @seealso \link{gapplyCollect}
#' @note gapply(GroupedData) since 2.0.0
setMethod("gapply",
signature(x = "GroupedData"),
function(x, func, schema) {
-            try(if (is.null(schema)) stop("schema cannot be NULL"))
-            packageNamesArr <- serialize(.sparkREnv[[".packages"]],
-                                         connection = NULL)
-            broadcastArr <- lapply(ls(.broadcastNames),
-                                   function(name) { get(name, .broadcastNames) })
-            sdf <- callJStatic(
-              "org.apache.spark.sql.api.r.SQLUtils",
-              "gapply",
-              x@sgd,
-              serialize(cleanClosure(func), connection = NULL),
-              packageNamesArr,
-              broadcastArr,
-              schema$jobj)
-            dataFrame(sdf)
+            if (is.null(schema)) stop("schema cannot be NULL")
+            gapplyInternal(x, func, schema)
          })
+
+#' gapplyCollect
+#'
+#' Applies a R function to each group in the input GroupedData and collects the result
--- End diff --
I think having this would somewhat duplicate gapplyCollect(SparkDataFrame), since they go to the same Rd file. Could you check?
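For context, here is a runnable sketch of the example this diff removes from the `gapply` docs. It is reconstructed from the deleted roxygen block above and assumes an active SparkR session (`sparkR.session()`); column names, the schema, and the `df1` result are taken from that example, not verified output.

```r
library(SparkR)
sparkR.session()  # assumes a local Spark installation is available

# Source data: four columns, grouped below on "a" and "c"
df <- createDataFrame(
  list(list(1L, 1, "1", 0.1), list(1L, 2, "1", 0.2), list(3L, 3, "3", 0.3)),
  c("a", "b", "c", "d"))

# Output schema: the two grouping columns plus the computed mean (a double)
schema <- structType(structField("a", "integer"), structField("c", "string"),
                     structField("avg", "double"))

# Compute the mean of column "b" per (a, c) group
df1 <- gapply(
  df,
  list("a", "c"),
  function(key, x) {
    data.frame(key, mean(x$b), stringsAsFactors = FALSE)
  },
  schema)
collect(df1)
```

With `gapplyCollect` (the variant this diff adds), the schema argument would be unnecessary because the result is collected directly as a local R data.frame, which is exactly why the Rd-file overlap flagged above matters.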