[ https://issues.apache.org/jira/browse/HUDI-5807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17689276#comment-17689276 ]

Alexey Kudinkin commented on HUDI-5807:
---------------------------------------

We should do this by rebasing HoodieSparkFileReader onto ParquetFileFormat (to 
make sure we create readers the same way Spark itself does):
{code:java}
val parquetFileFormat = SparkAdapterSupport$.MODULE$.sparkAdapter()
    // TODO this should be based on the table config
    .createHoodieParquetFileFormat(true)
    .get();
{code}

> HoodieSparkParquetReader is not appending partition-path values
> ---------------------------------------------------------------
>
>                 Key: HUDI-5807
>                 URL: https://issues.apache.org/jira/browse/HUDI-5807
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 0.13.0
>            Reporter: Alexey Kudinkin
>            Priority: Blocker
>             Fix For: 0.13.1
>
>
> The current implementation of HoodieSparkParquetReader does not support the 
> case when "hoodie.datasource.write.drop.partition.columns" is set to true.
> In that case, partition-path values are expected to be parsed from the 
> partition path and injected within the File Reader (this is the behavior of 
> Spark's own readers).
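For context, the injection behavior the description refers to can be illustrated with a minimal, self-contained sketch (a hypothetical helper, not actual Hudi or Spark code): when partition columns are dropped from the data files, the reader must recover their values from the Hive-style partition path and append them to every row it reads.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionPathParser {

    // Parses Hive-style partition path segments such as
    // "region=US/date=2023-02-14" into column-name -> value pairs,
    // analogous to what Spark's file readers do before appending
    // the partition values to each row read from the data file.
    static Map<String, String> parsePartitionPath(String relativePath) {
        Map<String, String> values = new LinkedHashMap<>();
        for (String segment : relativePath.split("/")) {
            int eq = segment.indexOf('=');
            if (eq > 0) {
                values.put(segment.substring(0, eq), segment.substring(eq + 1));
            }
        }
        return values;
    }

    public static void main(String[] args) {
        System.out.println(parsePartitionPath("region=US/date=2023-02-14"));
    }
}
{code}

This only sketches the path-parsing half; the actual fix would also have to append the recovered values to the rows produced by the Parquet reader, which is what rebasing onto ParquetFileFormat provides for free.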



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
