n3nash commented on issue #2637:
URL: https://github.com/apache/hudi/issues/2637#issuecomment-810006367


   @Sugamber Your code looks correct. Here is the flow:
   
   1) InputDF -> DF<HoodieRecord> -> DF<HoodieRecord(PartialUpdatePayload(bytes))>
   2) In-batch dedup: records with the same record key are combined via preCombine(..) -> getInsertValue(incremental_schema)
   3) Perform the upsert
   4) combineAndGetUpdateValue(record_from_disk, incremental_schema) -> getInsertValue(incremental_schema)
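   
   To make this concrete, here is a minimal sketch of what such a payload could look like. The class name PartialUpdatePayload and the "latest non-null field wins" merge rule are illustrative assumptions, not your exact code; the method signatures are the standard HoodieRecordPayload ones referenced in the flow above.
   
   ```java
   import java.io.IOException;
   
   import org.apache.avro.Schema;
   import org.apache.avro.generic.GenericRecord;
   import org.apache.avro.generic.IndexedRecord;
   import org.apache.hudi.common.model.OverwriteWithLatestAvroPayload;
   import org.apache.hudi.common.util.Option;
   
   // Illustrative partial-update payload (assumed rule: latest non-null field value wins).
   public class PartialUpdatePayload extends OverwriteWithLatestAvroPayload {
   
     public PartialUpdatePayload(GenericRecord record, Comparable orderingVal) {
       super(record, orderingVal);
     }
   
     // Step 2: in-batch dedup -- keep the payload with the higher precombine/ordering value.
     @Override
     public OverwriteWithLatestAvroPayload preCombine(OverwriteWithLatestAvroPayload oldValue) {
       return oldValue.orderingVal.compareTo(orderingVal) > 0 ? oldValue : this;
     }
   
     // Step 4: merge the incoming record with the record read from disk, both handled
     // against the incremental_schema passed in by the writer.
     @Override
     public Option<IndexedRecord> combineAndGetUpdateValue(IndexedRecord recordFromDisk, Schema schema)
         throws IOException {
       Option<IndexedRecord> incomingOpt = getInsertValue(schema); // deserialize the stored bytes
       if (!incomingOpt.isPresent()) {
         return Option.empty(); // empty payload, nothing to write
       }
       GenericRecord incoming = (GenericRecord) incomingOpt.get();
       GenericRecord onDisk = (GenericRecord) recordFromDisk;
       // For every field in the schema, take the incoming value when it is non-null,
       // otherwise fall back to the value already stored on disk.
       for (Schema.Field field : schema.getFields()) {
         if (incoming.get(field.name()) == null) {
           incoming.put(field.name(), onDisk.get(field.name()));
         }
       }
       return Option.of(incoming);
     }
   }
   ```
   
   On the write path such a class is wired in through hoodie.datasource.write.payload.class, and the ordering value used in preCombine(..) comes from the field configured via hoodie.datasource.write.precombine.field.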
   
   Now, if your target schema (schema of the record_from_disk) is different 
from the incremental_schema, that is not a problem as long as target_schema and 
incremental_schema are backwards compatible. 
   
   At a high level, the incremental_schema should always be a superset of the target schema: all of its existing fields plus any new fields.
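   
   For example (illustrative schemas, not your actual ones), if the records on disk were written with the first schema below, a backwards-compatible incremental_schema keeps every existing field and only adds new, defaulted fields:
   
   ```
   target_schema (what the records on disk were written with):
   
   {"type": "record", "name": "MyRecord", "fields": [
     {"name": "id",   "type": "string"},
     {"name": "name", "type": ["null", "string"], "default": null},
     {"name": "ts",   "type": "long"}
   ]}
   
   incremental_schema (a superset: the same fields plus a new, defaulted field):
   
   {"type": "record", "name": "MyRecord", "fields": [
     {"name": "id",    "type": "string"},
     {"name": "name",  "type": ["null", "string"], "default": null},
     {"name": "ts",    "type": "long"},
     {"name": "email", "type": ["null", "string"], "default": null}
   ]}
   ```
   
   Dropping an existing field or changing its type would break that compatibility.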

