GitHub user dongjoon-hyun reopened a pull request:
https://github.com/apache/spark/pull/21093
[SPARK-23340][SQL][BRANCH-2.3] Upgrade Apache ORC to 1.4.3
## What changes were proposed in this pull request?
This PR updates the Apache ORC dependencies to 1.4.3, which was released on February 9th.
The Apache ORC 1.4.2 release removed unnecessary dependencies, and 1.4.3 adds 5 more
patches (https://s.apache.org/Fll8).
In particular, ORC-285, reproduced below, is fixed in 1.4.3.
```scala
scala> val df = Seq(Array.empty[Float]).toDF()
scala> df.write.format("orc").save("/tmp/floatarray")
scala> spark.read.orc("/tmp/floatarray")
res1: org.apache.spark.sql.DataFrame = [value: array<float>]
scala> spark.read.orc("/tmp/floatarray").show()
18/02/12 22:09:10 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.io.IOException: Error reading file:
file:/tmp/floatarray/part-00000-9c0b461b-4df1-4c23-aac1-3e4f349ac7d6-c000.snappy.orc
at
org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1191)
at
org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
...
Caused by: java.io.EOFException: Read past EOF for compressed stream Stream
for column 2 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0
```
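For reference, the reproduction above can be packaged as a small self-contained regression check to run after the upgrade. This is only a sketch: the object name, master setting, and output path are illustrative, and it assumes a local Spark build that already carries the upgraded ORC dependency.
```scala
// Minimal regression check for ORC-285: write and read back an empty Array[Float].
// The object name, master setting, and output path are illustrative assumptions.
import org.apache.spark.sql.SparkSession

object OrcEmptyFloatArrayCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("ORC-285 regression check")
      .getOrCreate()
    import spark.implicits._

    val path = "/tmp/floatarray-check" // illustrative output path

    // Write a single row holding an empty float array as ORC.
    Seq(Array.empty[Float]).toDF("value")
      .write.format("orc").mode("overwrite").save(path)

    // With ORC 1.4.3 this read no longer fails with the EOFException shown above.
    val rows = spark.read.orc(path).collect()
    assert(rows.length == 1 && rows(0).getSeq[Float](0).isEmpty)

    spark.stop()
  }
}
```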
## How was this patch tested?
Passes the Jenkins tests.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/dongjoon-hyun/spark SPARK-23340-2
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21093.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21093
----
commit fc5d976ffb33ebec996415ac1296196f8458a01f
Author: Dongjoon Hyun <dongjoon@...>
Date: 2018-02-17T08:25:36Z
[SPARK-23340][SQL][BRANCH-2.3] Upgrade Apache ORC to 1.4.3
This PR updates the Apache ORC dependencies to 1.4.3, which was released on February 9th.
The Apache ORC 1.4.2 release removed unnecessary dependencies, and 1.4.3 adds 5 more
patches (https://s.apache.org/Fll8).
In particular, ORC-285, reproduced below, is fixed in 1.4.3.
```scala
scala> val df = Seq(Array.empty[Float]).toDF()
scala> df.write.format("orc").save("/tmp/floatarray")
scala> spark.read.orc("/tmp/floatarray")
res1: org.apache.spark.sql.DataFrame = [value: array<float>]
scala> spark.read.orc("/tmp/floatarray").show()
18/02/12 22:09:10 ERROR Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.io.IOException: Error reading file:
file:/tmp/floatarray/part-00000-9c0b461b-4df1-4c23-aac1-3e4f349ac7d6-c000.snappy.orc
at
org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1191)
at
org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
...
Caused by: java.io.EOFException: Read past EOF for compressed stream Stream
for column 2 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0
```
Passes the Jenkins tests.
Author: Dongjoon Hyun <[email protected]>
Closes #20511 from dongjoon-hyun/SPARK-23340.
----