LuciferYang commented on pull request #35262:
URL: https://github.com/apache/spark/pull/35262#issuecomment-1022919561
@parthchandra I think we should add some UTs similar to `String with Nulls Scan`, because when I add
```
sparkSession.conf.set(SQLConf.COLUMN_VECTOR_OFFHEAP_ENABLED.key, "true")
```
to `DataSourceReadBenchmark` so that `ColumnVector` uses off-heap memory, the `String with Nulls Scan` related cases fail as follows:
```
14:33:29.271 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 5043.0 (TID 3936)
org.apache.spark.sql.execution.QueryExecutionException: Encountered error while reading file file:///private/var/folders/0x/xj61_dbd0dldn793s6cyb7rr0000gp/T/spark-a6065795-c141-43cd-8ec6-359f3f3a0307/parquetV2/part-00000-7c6de322-95b1-4283-9399-8306753c68ab-c000.snappy.parquet. Details:
	at org.apache.spark.sql.errors.QueryExecutionErrors$.cannotReadFilesError(QueryExecutionErrors.scala:659) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:283) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116) ~[classes/:?]
	at org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:546) ~[classes/:?]
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.columnartorow_nextBatch_0$(Unknown Source) ~[?:?]
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.hashAgg_doAggregateWithoutKey_0$(Unknown Source) ~[?:?]
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) ~[classes/:?]
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760) ~[classes/:?]
	at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) ~[scala-library-2.12.15.jar:?]
	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) ~[classes/:?]
	at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) ~[classes/:?]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) ~[classes/:?]
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) ~[classes/:?]
	at org.apache.spark.scheduler.Task.run(Task.scala:136) ~[classes/:?]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:507) ~[classes/:?]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1475) ~[classes/:?]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510) [classes/:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_292]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_292]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_292]
Caused by: org.apache.parquet.io.ParquetDecodingException: Failed to read 268435456 bytes
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedDeltaLengthByteArrayReader.readBinary(VectorizedDeltaLengthByteArrayReader.java:79) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedDeltaByteArrayReader.initFromPage(VectorizedDeltaByteArrayReader.java:76) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.initDataReader(VectorizedColumnReader.java:293) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPageV2(VectorizedColumnReader.java:362) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.access$100(VectorizedColumnReader.java:52) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:260) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader$1.visit(VectorizedColumnReader.java:247) ~[classes/:?]
	at org.apache.parquet.column.page.DataPageV2.accept(DataPageV2.java:192) ~[parquet-column-1.12.2.jar:1.12.2]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readPage(VectorizedColumnReader.java:247) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.readBatch(VectorizedColumnReader.java:183) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:311) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.nextKeyValue(VectorizedParquetRecordReader.java:209) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:39) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116) ~[classes/:?]
	at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:274) ~[classes/:?]
	... 19 more
```
I manually verified that there was no such problem before this PR.
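
For example, something along these lines would cover it (just a rough sketch; the test name, data, and helper usage are illustrative, assuming a `ParquetIOSuite`-style suite, i.e. `QueryTest` with `SharedSparkSession` and the `withSQLConf`/`withTempPath`/`checkAnswer` helpers):
```scala
// Assumed imports: org.apache.parquet.column.ParquetProperties,
// org.apache.parquet.hadoop.ParquetOutputFormat, org.apache.spark.sql.internal.SQLConf
test("string with nulls scan: DELTA_BYTE_ARRAY with off-heap column vectors") {
  withSQLConf(
      // make the vectorized reader allocate off-heap column vectors
      SQLConf.COLUMN_VECTOR_OFFHEAP_ENABLED.key -> "true",
      // force the Parquet V2 writer so string columns use DELTA_BYTE_ARRAY encoding
      ParquetOutputFormat.WRITER_VERSION ->
        ParquetProperties.WriterVersion.PARQUET_2_0.toString) {
    withTempPath { dir =>
      val path = dir.getCanonicalPath
      // roughly mirrors the benchmark data: about half of the string values are null
      val df = spark.range(1024)
        .selectExpr("IF(id % 2 = 0, CAST(id AS STRING), NULL) AS c1", "id AS c2")
      df.write.parquet(path)
      checkAnswer(spark.read.parquet(path), df.collect())
    }
  }
}
```
With a case like this in place, the off-heap path of the new delta readers would be exercised by the UTs rather than only by the benchmark.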