HyukjinKwon commented on a change in pull request #23760: [SPARK-26762][SQL][R] Arrow optimization for conversion from Spark DataFrame to R DataFrame
URL: https://github.com/apache/spark/pull/23760#discussion_r256350010
##########
File path: R/pkg/R/DataFrame.R
##########
@@ -1177,11 +1177,65 @@ setMethod("dim",
setMethod("collect",
signature(x = "SparkDataFrame"),
function(x, stringsAsFactors = FALSE) {
+ useArrow <- FALSE
+ arrowEnabled <-
sparkR.conf("spark.sql.execution.arrow.enabled")[[1]] == "true"
+ if (arrowEnabled) {
+ useArrow <- tryCatch({
+ requireNamespace1 <- requireNamespace
+ if (!requireNamespace1("arrow", quietly = TRUE)) {
+ stop("'arrow' package should be installed.")
+ }
+ # Currenty Arrow optimization does not support raw for now.
+ # Also, it does not support explicit float type set by users.
+ if (inherits(schema(x), "structType")) {
+ if (any(sapply(schema(x)$fields(),
+ function(x) x$dataType.toString() ==
"FloatType"))) {
+ stop(paste0("Arrow optimization in the conversion from
Spark DataFrame to R ",
+ "DataFrame does not support FloatType yet."))
+ }
+ if (any(sapply(schema(x)$fields(),
+ function(x) x$dataType.toString() ==
"BinaryType"))) {
+ stop(paste0("Arrow optimization in the conversion from
Spark DataFrame to R ",
+ "DataFrame does not support BinaryType yet."))
+ }
+ }
+ TRUE
+ }, error = function(e) {
+ warning(paste0("The conversion from Spark DataFrame to R
DataFrame was attempted ",
+ "with Arrow optimization because ",
+ "'spark.sql.execution.arrow.enabled' is set to
true; however, ",
+ "failed, attempting non-optimization. Reason: ",
+ e))
Review comment:
In other places so far, yes; those show the whole Java stacktrace. Here it would show R error messages instead. For example, if I don't install 'arrow', it shows something like:
```
Warning message:
In value[[3L]](cond) :
The conversion from Spark DataFrame to R DataFrame was attempted with
Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true;
however, failed, attempting non-optimization. Reason: Error in
doTryCatch(return(expr), name, parentenv, handler): 'arrow' package should be
installed.
```
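
For reference, a minimal way to reproduce that warning (a sketch assuming the 'arrow' R package is not installed and the conf is enabled at session start):
```r
library(SparkR)
# Enable the Arrow optimization for this session.
sparkR.session(sparkConfig = list(spark.sql.execution.arrow.enabled = "true"))

df <- createDataFrame(mtcars)
# With 'arrow' missing, collect() emits the warning above and then falls
# back to the plain (non-Arrow) collection path.
rdf <- collect(df)
```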
The reason it's different (it's the same as in Python's toPandas with Arrow) is to avoid a fallback in the middle of a distributed computation. Arguably, the R DataFrame -> Spark DataFrame conversion is fallback-able since the R DataFrame lives on the driver alone and is usually not that big. In the case of Spark DataFrame -> R DataFrame, collect usually triggers computation as well, so if it falls back, everything has to be re-computed in a distributed manner, which is potentially a huge job. That's why it only falls back during schema validation, which is done on the R side.
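
To make that concrete, the pattern is roughly the following (a simplified sketch, not the actual implementation; `validateSchemaForArrow`, `collectWithArrow`, and `collectPlain` are hypothetical names):
```r
collectWithFallback <- function(df) {
  useArrow <- tryCatch({
    # All checks run in R on the driver, before any Spark job is launched.
    if (!requireNamespace("arrow", quietly = TRUE)) {
      stop("'arrow' package should be installed.")
    }
    validateSchemaForArrow(schema(df))  # hypothetical R-side schema check
    TRUE
  }, error = function(e) {
    warning("Falling back to non-Arrow conversion. Reason: ", conditionMessage(e))
    FALSE
  })
  if (useArrow) {
    collectWithArrow(df)  # hypothetical Arrow path: one job, no mid-computation fallback
  } else {
    collectPlain(df)      # hypothetical existing non-Arrow path
  }
}
```
The point is that the only code that can fail and trigger the fallback runs before the distributed job starts, so falling back never throws away completed cluster work.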