xushiyan commented on PR #5436:
URL: https://github.com/apache/hudi/pull/5436#issuecomment-1156525801

   One last scenario to finalize, as brought up by @danny0405: Flink writes 
all input records to log files for MOR tables without knowing which are 
inserts, updates, or deletes (the bucket index only determines which file 
group to write to). Only at compaction time can we tell the I/U/Ds apart. 
On Spark's side, with the bloom index, both the writer and the compactor 
know the I/U/D operation type of the input records.
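   To make the compaction-time classification concrete, here is a minimal 
sketch (hypothetical names, not Hudi's actual compactor API): the merge step 
sees both the base file's keys and the log records, so it can derive the 
change type per key, which the pure log-append write path cannot.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch only: classify log records as I/U/D during a compaction-style
// merge, by probing the base file's key set. A pure log-appending writer
// (e.g. Flink with bucket index) has no such base-file view at write time.
public class CdcClassifier {
    enum ChangeType { INSERT, UPDATE, DELETE }

    /** A simplified log record: record key plus a delete marker. */
    record LogRecord(String key, boolean isDelete) {}

    static Map<String, ChangeType> classify(Set<String> baseFileKeys,
                                            List<LogRecord> logRecords) {
        Map<String, ChangeType> cdc = new LinkedHashMap<>();
        for (LogRecord r : logRecords) {
            boolean inBase = baseFileKeys.contains(r.key());
            if (r.isDelete()) {
                // A tombstone for a key absent from the base file is a no-op.
                if (inBase) {
                    cdc.put(r.key(), ChangeType.DELETE);
                }
            } else {
                // Key present in the base file => update; otherwise insert.
                cdc.put(r.key(), inBase ? ChangeType.UPDATE : ChangeType.INSERT);
            }
        }
        return cdc;
    }

    public static void main(String[] args) {
        Set<String> base = Set.of("k1", "k2");
        List<LogRecord> log = List.of(
            new LogRecord("k1", false),   // existing key -> UPDATE
            new LogRecord("k3", false),   // new key      -> INSERT
            new LogRecord("k2", true));   // tombstone    -> DELETE
        System.out.println(classify(base, log));
        // prints {k1=UPDATE, k3=INSERT, k2=DELETE}
    }
}
```

   This is why deferring cdc block generation to the compactor works for 
MOR: only the merge has enough context to label each change correctly.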
   
   For MOR tables, we can consider letting only the compactor write cdc log 
blocks, after a successful compaction. This has the following advantages:
   - standardization of the cdc logging mechanism across write engines
   - logically, cdc logging should be part of the compaction transaction
   - it won't compromise MOR write throughput
   

