danny0405 commented on PR #5436:
URL: https://github.com/apache/hudi/pull/5436#issuecomment-1157289026

   > One last scenario to finalize, as brought up by @danny0405: Flink 
writes all input records to log files for MOR tables without knowing which are 
inserts, updates, or deletes (the bucket index only checks which file group to 
write to). Only when compacting can we tell the I/U/Ds apart. On Spark's side, with 
the bloom index, both the writer and the compactor know the I/U/D operation type 
of the input records.
   > 
   > For MOR tables, we can consider only letting the compactor / mergehandle write 
cdc log blocks, after a successful compaction. This has the following advantages:
   > 
   > * standardization of the cdc logging mechanism across write engines
   > * logically cdc logging should be part of the transaction of running 
compaction
   > * it won't compromise the MOR write throughput
   
   And for both COW and MOR tables, the cdc log block is only an optional 
optimization IMHO, because we can always deduce the change logs on the fly. For 
MOR tables, the user may also expect low end-to-end latency, so we cannot 
always wait for the compaction to generate the cdc changes on the reader side.
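   To illustrate the "deduce the change logs on the fly" point, here is a minimal 
sketch in plain Java (not Hudi's actual classes or APIs): by diffing the base 
file's records against the incoming log records during a merge, each record can 
be classified as insert, update, or delete even though the writer never tagged 
it. The map-based model and the use of a `null` value to mark a delete are 
simplifying assumptions for illustration only.

   ```java
   import java.util.*;

   public class CdcDeduceSketch {
       enum Op { INSERT, UPDATE, DELETE }

       // Deduce the change operation per record key by comparing the base
       // snapshot with the log records. A null value in the log map stands in
       // for a delete marker (a simplification of real log-block semantics).
       static Map<String, Op> deduceOps(Map<String, String> base,
                                        Map<String, String> log) {
           Map<String, Op> ops = new LinkedHashMap<>();
           for (Map.Entry<String, String> e : log.entrySet()) {
               String key = e.getKey();
               if (e.getValue() == null) {
                   // Delete of a key absent from the base file is a no-op.
                   if (base.containsKey(key)) {
                       ops.put(key, Op.DELETE);
                   }
               } else if (base.containsKey(key)) {
                   ops.put(key, Op.UPDATE);
               } else {
                   ops.put(key, Op.INSERT);
               }
           }
           return ops;
       }

       public static void main(String[] args) {
           Map<String, String> base = new HashMap<>();
           base.put("k1", "v1");
           base.put("k2", "v2");

           Map<String, String> log = new LinkedHashMap<>();
           log.put("k1", "v1'");  // existing key      -> UPDATE
           log.put("k3", "v3");   // new key           -> INSERT
           log.put("k2", null);   // delete marker     -> DELETE

           System.out.println(deduceOps(base, log));
       }
   }
   ```

   This is exactly the information the compactor already has in hand when it 
merges base and log files, which is why generating the cdc blocks as part of 
compaction comes at little extra cost; the trade-off discussed above is only 
about when that deduction happens, not whether it is possible.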

