[ https://issues.apache.org/jira/browse/SPARK-33113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17213709#comment-17213709 ]
Jacek Pliszka commented on SPARK-33113:
---------------------------------------
{code}
sessionInfo()
R version 3.6.3 (2020-02-29)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.5 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas/libblas.so.3
LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.2.20.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] SparkR_3.0.0 arrow_1.0.1

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.5         digest_0.6.25      assertthat_0.2.1   R6_2.4.1
 [5] magrittr_1.5       rlang_0.4.8        TeachingDemos_2.10 hwriter_1.3.2
 [9] hwriterPlus_1.0-3  vctrs_0.2.4        tools_3.6.3        bit64_0.9-7
[13] glue_1.4.2         purrr_0.3.4        bit_1.1-15.2       compiler_3.6.3
[17] Rserve_1.8-6       htmltools_0.4.0    tidyselect_1.0.0
{code}
> [SparkR] gapply works with arrow disabled, fails with arrow enabled stringsAsFactors=TRUE
> ------------------------------------------------------------------------------------------
>
> Key: SPARK-33113
> URL: https://issues.apache.org/jira/browse/SPARK-33113
> Project: Spark
> Issue Type: Bug
> Components: R
> Affects Versions: 3.0.0, 3.0.1
> Reporter: Jacek Pliszka
> Priority: Major
>
> Running in Databricks on Azure.
> {code}
> library("arrow")
> library("SparkR")
> df <- as.DataFrame(list("A", "B", "C"), schema="ColumnA")
> udf <- function(key, x) data.frame(out=c("dfs"))
> {code}
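>
> Note: under R 3.6.x, data.frame() defaults to stringsAsFactors=TRUE, so the udf above returns a factor column rather than a character column. Plain R shows this, no Spark needed:
> {code}
> str(data.frame(out = c("dfs")))
> # 'data.frame': 1 obs. of 1 variable:
> #  $ out: Factor w/ 1 level "dfs": 1
> str(data.frame(out = c("dfs"), stringsAsFactors = FALSE))
> # 'data.frame': 1 obs. of 1 variable:
> #  $ out: chr "dfs"
> {code}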
>
> This works:
> {code}
> sparkR.session(master = "local[*]",
>                sparkConfig = list(spark.sql.execution.arrow.sparkr.enabled = "false"))
> df1 <- gapply(df, c("ColumnA"), udf, "out String")
> collect(df1)
> {code}
> This fails:
> {code}
> sparkR.session(master = "local[*]",
>                sparkConfig = list(spark.sql.execution.arrow.sparkr.enabled = "true"))
> df2 <- gapply(df, c("ColumnA"), udf, "out String")
> collect(df2)
> {code}
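>
> A possible workaround, assuming the factor column is what breaks the Arrow path (not verified here; udf_chr is just an illustrative name), is to return character data explicitly:
> {code}
> # hypothetical variant of the udf above: force a character column instead of a factor
> udf_chr <- function(key, x) data.frame(out = c("dfs"), stringsAsFactors = FALSE)
> df3 <- gapply(df, c("ColumnA"), udf_chr, "out String")
> collect(df3)
> {code}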
>
> with the error:
> {code}
> Error in readBin(con, raw(), as.integer(dataLen), endian = "big") : invalid 'n' argument
> Error in readBin(con, raw(), as.integer(dataLen), endian = "big") : invalid 'n' argument
> In addition: Warning messages:
> 1: Use 'read_ipc_stream' or 'read_feather' instead.
> 2: Use 'read_ipc_stream' or 'read_feather' instead.
> {code}
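>
> The two warnings come from the arrow package's deprecated read API and are probably not the failure itself. More telling: Arrow encodes R factors as dictionary columns, which the arrow package demonstrates without any Spark involved (a minimal sketch against arrow 1.0.1; printed schemas abbreviated):
> {code}
> library(arrow)
> # a factor column (the R 3.6 data.frame default) maps to an Arrow dictionary column
> Table$create(data.frame(out = "dfs"))$schema
> # out: dictionary<values=string, indices=int8>
> # a character column maps to a plain Arrow string column
> Table$create(data.frame(out = "dfs", stringsAsFactors = FALSE))$schema
> # out: string
> {code}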
>
> Clicking through Failed Stages to Failure Reason shows:
>
> {code}
> Job aborted due to stage failure: Task 49 in stage 1843.0 failed 4 times, most recent failure: Lost task 49.3 in stage 1843.0 (TID 89810, 10.99.0.5, executor 0): java.lang.UnsupportedOperationException
>   at org.apache.spark.sql.vectorized.ArrowColumnVector$ArrowVectorAccessor.getUTF8String(ArrowColumnVector.java:233)
>   at org.apache.spark.sql.vectorized.ArrowColumnVector.getUTF8String(ArrowColumnVector.java:109)
>   at org.apache.spark.sql.vectorized.ColumnarBatchRow.getUTF8String(ColumnarBatch.java:220)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
>   at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.$anonfun$next$1(ArrowConverters.scala:131)
>   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1559)
>   at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:140)
>   at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$1.next(ArrowConverters.scala:115)
>   at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
>   at scala.collection.Iterator.foreach(Iterator.scala:941)
>   at scala.collection.Iterator.foreach$(Iterator.scala:941)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
>   at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
>   at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
>   at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
>   at scala.collection.TraversableOnce.to(TraversableOnce.scala:315)
>   at scala.collection.TraversableOnce.to$(TraversableOnce.scala:313)
>   at scala.collection.AbstractIterator.to(Iterator.scala:1429)
>   at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:307)
>   at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:307)
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1429)
>   at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:294)
>   at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:288)
>   at scala.collection.AbstractIterator.toArray(Iterator.scala:1429)
>   at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToR$3(Dataset.scala:3589)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>   at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
>   at org.apache.spark.scheduler.Task.run(Task.scala:117)
>   at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:639)
>   at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1559)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:642)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Driver stacktrace:
> {code}
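>
> Reading the trace: the UnsupportedOperationException is thrown from ArrowVectorAccessor.getUTF8String, which suggests the accessor built for the incoming Arrow vector does not support string reads. That would be consistent with a dictionary-encoded (factor) column arriving where the declared gapply schema "out String" promises a plain string column.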
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]