GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/21657
[SPARK-24676][SQL] Project required data from CSV parsed data when column pruning disabled
## What changes were proposed in this pull request?
This PR modifies the code to project the required columns from the CSV-parsed row when column pruning is disabled.
In the current master, the exception below is thrown if `spark.sql.csv.parser.columnPruning.enabled` is false, because the required schema and the schema of the CSV-parsed row differ from each other:
```
./bin/spark-shell --conf spark.sql.csv.parser.columnPruning.enabled=false
scala> val dir = "/tmp/spark-csv/csv"
scala> spark.range(10).selectExpr("id % 2 AS p", "id").write.mode("overwrite").partitionBy("p").csv(dir)
scala> spark.read.csv(dir).selectExpr("sum(p)").collect()
18/06/25 13:48:46 ERROR Executor: Exception in task 2.0 in stage 2.0 (TID 7)
java.lang.ClassCastException: org.apache.spark.unsafe.types.UTF8String cannot be cast to java.lang.Integer
  at scala.runtime.BoxesRunTime.unboxToInt(BoxesRunTime.java:101)
  at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getInt(rows.scala:41)
...
```
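To illustrate the idea (this is only a sketch, not the actual patch in this PR): when pruning is disabled the parser materializes every column of the data schema, so the required columns have to be projected out of the full parsed row before it is handed to the rest of the query. The helper name `projectRequired` and the surrounding names below are hypothetical.
```
// Hypothetical sketch, not the code in this PR: project the required
// columns out of a row parsed with the full CSV data schema.
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.GenericInternalRow
import org.apache.spark.sql.types.StructType

def projectRequired(
    fullRow: InternalRow,
    dataSchema: StructType,
    requiredSchema: StructType): InternalRow = {
  // Ordinal of each required field within the fully parsed row.
  val ordinals = requiredSchema.map(f => dataSchema.fieldIndex(f.name))
  val values = ordinals.zip(requiredSchema.fields).map {
    case (ordinal, field) => fullRow.get(ordinal, field.dataType)
  }
  new GenericInternalRow(values.toArray)
}
```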
## How was this patch tested?
Added tests in `CSVSuite`.
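A regression test for this could look roughly like the sketch below (the actual test lives in `CSVSuite`; the test name is illustrative, the `withSQLConf`/`withTempPath`/`checkAnswer` helpers are assumed to come from the suite's test harness, and the expected value follows from the repro above):
```
// Illustrative sketch of a CSVSuite-style regression test, not the exact
// test added by this PR.
test("SPARK-24676: project required data when column pruning is disabled") {
  withSQLConf("spark.sql.csv.parser.columnPruning.enabled" -> "false") {
    withTempPath { path =>
      val dir = path.getAbsolutePath
      spark.range(10).selectExpr("id % 2 AS p", "id")
        .write.mode("overwrite").partitionBy("p").csv(dir)
      // The partition column p holds five 0s and five 1s, so sum(p) is 5.
      checkAnswer(spark.read.csv(dir).selectExpr("sum(p)"), Row(5L))
    }
  }
}
```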
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/maropu/spark SPARK-24676
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21657.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21657
----
commit ecd3c80d9ded538c5c07f65b4f5aa1a4fbf2677b
Author: Takeshi Yamamuro <yamamuro@...>
Date: 2018-06-25T06:17:58Z
Fix
----