codope opened a new pull request, #6333:
URL: https://github.com/apache/hudi/pull/6333

   ### Change Logs
   
   Return early from the partition extractor infer function if 
`hoodie.table.partition.fields` is set and does not match 
`hoodie.datasource.write.partitionpath.field`.
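
   The early-return logic can be sketched as below. This is a minimal illustration, not Hudi's actual code: the class `PartitionExtractorInferSketch`, the method `inferExtractorClass`, and the fallback extractor choices are assumptions made for the example; only the two config keys and the `NonPartitionedExtractor` name come from the description above.

   ```java
   import java.util.Optional;
   import java.util.Properties;

   public class PartitionExtractorInferSketch {
     static final String TABLE_PARTITION_FIELDS = "hoodie.table.partition.fields";
     static final String WRITE_PARTITION_PATH_FIELD = "hoodie.datasource.write.partitionpath.field";

     /**
      * Returns the extractor class to infer, or empty to skip inference.
      * Hypothetical helper mirroring the early-return described in this PR.
      */
     public static Optional<String> inferExtractorClass(Properties props) {
       String tableFields = props.getProperty(TABLE_PARTITION_FIELDS);
       String writeField = props.getProperty(WRITE_PARTITION_PATH_FIELD);
       // Return early when the table config is set but disagrees with the
       // write config, so inference never falls through to
       // NonPartitionedExtractor despite hoodie.properties having a partition field.
       if (tableFields != null && !tableFields.isEmpty() && !tableFields.equals(writeField)) {
         return Optional.empty();
       }
       if (tableFields == null || tableFields.isEmpty()) {
         return Optional.of("org.apache.hudi.hive.NonPartitionedExtractor");
       }
       return Optional.of("org.apache.hudi.hive.MultiPartKeysValueExtractor");
     }
   }
   ```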
   
   ### Impact
   
   Without this fix, in the scenario described above, the partition extractor 
would be set to `NonPartitionedExtractor` even though `hoodie.properties` has a 
partition field.
   
   **Risk level: none | low | medium | high**
   
   High
   
   Added a test covering the scenario in `TestHoodieSyncConfig`. Also manually 
verified via spark-sql as follows:
   ```
   spark-sql> create table hudi_cow_pt_tbl (
            >   id bigint,
            >   name string,
            >   ts bigint,
            >   dt string,
            >   hh string
            > ) using hudi
            > tblproperties (
            >   type = 'cow',
            >   primaryKey = 'id',
            >   preCombineField = 'ts'
            >  )
            > partitioned by (dt, hh)
            > location '/tmp/hudi/hudi_cow_pt_tbl';
   spark-sql> insert into hudi_cow_pt_tbl partition (dt, hh)
            > select 1 as id, 'a1' as name, 1000 as ts, '2021-12-09' as dt, 
'10' as hh;
   ...
   ...
   22/08/08 23:15:04 INFO BaseHoodieTableFileIndex: Refresh table 
hudi_cow_pt_tbl, spent: 389 ms
   Time taken: 18.07 seconds
   22/08/08 23:15:04 INFO SparkSQLCLIDriver: Time taken: 18.07 seconds
   ```
   
   ### Contributor's checklist
   
   - [ ] Read through [contributor's 
guide](https://hudi.apache.org/contribute/how-to-contribute)
   - [ ] Change Logs and Impact were stated clearly
   - [ ] Adequate tests were added if applicable
   - [ ] CI passed
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
