[ https://issues.apache.org/jira/browse/HUDI-5807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17763201#comment-17763201 ]

Ethan Guo commented on HUDI-5807:
---------------------------------

[~linliu] You tested the default merging behavior on master, where 
`HoodieAvroRecordMerger` and the record-payload logic are used.  You need to set 
`hoodie.datasource.write.record.merger.impls` to 
`org.apache.hudi.HoodieSparkRecordMerger` so that the Spark record type and the 
Spark record merger, which relies on `HoodieSparkParquetReader`, are used.  Then 
retest with `hoodie.datasource.write.drop.partition.columns` set to `true`.
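
The suggested setup can be sketched as the following Hudi write options (a minimal, hypothetical PySpark-style sketch: the table name, path, record-key field, and partition field are illustrative placeholders; only the two option keys and values come from the comment above):

```python
# Hypothetical sketch of the configuration described above.
# Table/path/field names are illustrative, not from the issue.
hudi_options = {
    "hoodie.table.name": "test_table",                        # hypothetical
    "hoodie.datasource.write.recordkey.field": "uuid",        # hypothetical
    "hoodie.datasource.write.partitionpath.field": "region",  # hypothetical
    # Use the Spark record type and Spark record merger, so that
    # HoodieSparkParquetReader is exercised on the read path:
    "hoodie.datasource.write.record.merger.impls":
        "org.apache.hudi.HoodieSparkRecordMerger",
    # Drop partition columns from the data files; the reader is then
    # expected to re-inject their values from the partition path:
    "hoodie.datasource.write.drop.partition.columns": "true",
}

# With a SparkSession and DataFrame `df` in scope, the write would look like:
# df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/test_table")
```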

> HoodieSparkParquetReader is not appending partition-path values
> ---------------------------------------------------------------
>
>                 Key: HUDI-5807
>                 URL: https://issues.apache.org/jira/browse/HUDI-5807
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: spark
>    Affects Versions: 0.13.0
>            Reporter: Alexey Kudinkin
>            Assignee: Lin Liu
>            Priority: Blocker
>             Fix For: 1.0.0
>
>
> The current implementation of HoodieSparkParquetReader does not support the 
> case where "hoodie.datasource.write.drop.partition.columns" is set to true.
> In that case, partition-path values are expected to be parsed from the 
> partition path and injected within the file reader (this is the behavior of 
> Spark's own readers).
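
To illustrate the expected behavior generically (a standalone hypothetical sketch, not Hudi's actual implementation): when the data files omit partition columns, a reader can recover their values from a Hive-style partition path and append them to each row it returns.

```python
# Hypothetical sketch: recover partition-column values from a Hive-style
# partition path and inject them into rows read from a data file that was
# written with partition columns dropped (mimicking Spark's own readers).

def parse_partition_values(partition_path: str) -> dict:
    """Parse 'region=us/day=2023-01-01' into {'region': 'us', 'day': '2023-01-01'}."""
    values = {}
    for segment in partition_path.strip("/").split("/"):
        key, _, value = segment.partition("=")
        values[key] = value
    return values

def inject_partition_columns(rows: list[dict], partition_path: str) -> list[dict]:
    """Append the parsed partition values to every row read from the file."""
    partition_values = parse_partition_values(partition_path)
    return [{**row, **partition_values} for row in rows]

# Partition column 'region' was dropped from the data file on write:
rows = [{"uuid": "a1", "amount": 10}]
print(inject_partition_columns(rows, "region=us"))
# -> [{'uuid': 'a1', 'amount': 10, 'region': 'us'}]
```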



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
