[ 
https://issues.apache.org/jira/browse/SPARK-38536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17506698#comment-17506698
 ] 

Huicheng Song commented on SPARK-38536:
---------------------------------------

The issue is that in Spark 3, when reading an ORC partition under a Parquet Hive table, 
Spark incorrectly uses the Parquet InputFormat.

This is due to Spark 3 using the table's InputFormat when creating the HadoopRDD: 
[https://github.com/apache/spark/blob/branch-3.0/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala#L322]

It is a regression, as Spark 2 correctly uses the partition's InputFormat: 
[https://github.com/apache/spark/blob/branch-2.4/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala#L312]
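The Spark 2 behavior amounts to preferring the partition's storage descriptor and falling back to the table's only when the partition does not carry one. A minimal sketch of that selection logic (hypothetical names, not Spark's actual `TableReader` API):

```scala
// Sketch of per-partition InputFormat selection, assuming simplified stand-ins
// for Hive's table/partition storage descriptors (not the real Hive classes).
case class StorageDesc(inputFormatClass: String)
case class HiveTable(storage: StorageDesc)
case class HivePartition(storage: Option[StorageDesc])

// Prefer the partition-level InputFormat; fall back to the table-level one,
// as Spark 2.x's TableReader effectively did.
def inputFormatFor(table: HiveTable, partition: HivePartition): String =
  partition.storage.map(_.inputFormatClass).getOrElse(table.storage.inputFormatClass)

val parquetTable = HiveTable(
  StorageDesc("org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"))
val orcPartition = HivePartition(
  Some(StorageDesc("org.apache.hadoop.hive.ql.io.orc.OrcInputFormat")))
val defaultPartition = HivePartition(None)

// An ORC partition under a Parquet table resolves to the ORC InputFormat;
// a partition without its own descriptor falls back to the table's.
println(inputFormatFor(parquetTable, orcPartition))
println(inputFormatFor(parquetTable, defaultPartition))
```

Spark 3's code at the link above instead reads the InputFormat from the table descriptor for every partition, which is where the mixed-format read breaks.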

 

[~Deegue] let me know if you need more context.

> Spark 3 can not read mixed format partitions
> --------------------------------------------
>
>                 Key: SPARK-38536
>                 URL: https://issues.apache.org/jira/browse/SPARK-38536
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0, 3.2.1
>            Reporter: Huicheng Song
>            Priority: Major
>
> Spark 3.x reads partitions with the table's input format, which fails when a 
> partition has a different input format than the table.
> This is a regression introduced by SPARK-26630. Before that fix, Spark would 
> use the partition's InputFormat when creating the HadoopRDD. With that fix, Spark 
> uses only the table's InputFormat, causing failures.
> Reading mixed-format partitions is an important scenario, especially for format 
> migration. It is also well supported in query engines such as Hive and Presto.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
