zhangyue19921010 commented on pull request #3509:
URL: https://github.com/apache/hudi/pull/3509#issuecomment-919678701


   @satishkotha @vinothchandar Thanks for your attention. Let me explain in more detail:
   First of all, this rejectClustering strategy performs clustering validation three times:
   
   1. Before clustering is performed, check `xxx.replacement.request.reject`.
   2. After clustering finishes, check `xxx.replacement.inflight.reject`.
   3. Before the clustering commit, which creates the clustering commit file.
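
   The three checks above could be sketched roughly as follows. This is an illustrative sketch only: the class, helper names, and marker-file naming are hypothetical, not Hudi's actual API.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the three-stage reject validation; names are
// illustrative and do not reflect Hudi's real implementation.
public class RejectClusteringValidator {
    private final Path metaDir;

    public RejectClusteringValidator(Path metaDir) {
        this.metaDir = metaDir;
    }

    // Check 1: before clustering is performed.
    public boolean requestRejected(String instantTime) {
        return Files.exists(metaDir.resolve(instantTime + ".replacement.request.reject"));
    }

    // Check 2: after clustering has finished running.
    public boolean inflightRejected(String instantTime) {
        return Files.exists(metaDir.resolve(instantTime + ".replacement.inflight.reject"));
    }

    // Check 3: just before the clustering commit file is created.
    public boolean rejectedBeforeCommit(String instantTime) {
        return requestRejected(instantTime) || inflightRejected(instantTime);
    }
}
```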
   
   There's actually a race condition: the reject file could be created **during the clustering commit**. This step creates the clustering commit file, and I believe it's a quick operation:
   ```java
   table.getActiveTimeline().transitionReplaceInflightToComplete(replaceCommitInflightInstant,
       Option.of(metadata.toJsonString().getBytes(StandardCharsets.UTF_8)));
   ```
   
   Also, thanks to @satishkotha for the reminder: I added another validation after the clustering commit is created. If the reject file was created during the clustering commit, a reject file remains after clustering is committed, and we have to throw an exception here to avoid losing update data.
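
   That post-commit guard could look roughly like this. Again a hypothetical sketch, not Hudi's actual API; the class name and marker-file name are assumptions.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical post-commit guard: if a reject marker appeared while the
// clustering commit file was being written, fail loudly so the rejected
// updates are not silently lost.
public class PostCommitRejectCheck {
    public static void failIfRejectedDuringCommit(Path metaDir, String instantTime) {
        Path rejectMarker = metaDir.resolve(instantTime + ".replacement.inflight.reject");
        if (Files.exists(rejectMarker)) {
            // The commit file already exists at this point, so the clustering
            // cannot be rejected anymore; throwing avoids losing update data.
            throw new IllegalStateException(
                "Reject marker " + rejectMarker + " found after clustering commit for instant "
                    + instantTime + "; aborting to avoid losing updates.");
        }
    }
}
```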
   
   In short, this reject clustering strategy can reject requested clustering, inflight clustering, and finished-but-not-yet-committed clustering. Once clustering is committing, however, we can't reject it anymore and have to throw an exception.
   
   
   > Anyway, my high level thought is that it is better to find a way to 
integrate this with Multi-writer conflict resolution mentioned here 
https://cwiki.apache.org/confluence/display/HUDI/RFC+-+22+%3A+Snapshot+Isolation+using+Optimistic+Concurrency+Control+for+multi-writers#RFC22:SnapshotIsolationusingOptimisticConcurrencyControlformultiwriters-OptimisticConcurrencyusingatomicrenameswithConflictResolution
   > 
   > If we integrate with multi-writer, this can be done in a generic for any 
two operations (instead of adding very specific strategy for clustering.) Some 
changes may be needed in multi-writer implementation to enforce priority of 
operations (clustering is lower priority than ingestion). But this is better 
long term IMO. Happy to discuss more details/other alternatives.
   
   I strongly agree that multi-writer can solve this conflict more completely. But the disadvantage of that solution is that it introduces external dependencies like ZooKeeper. As the data lake grows, ZooKeeper (or whichever lock provider is used) may become a bottleneck that needs to be maintained and tuned.
   
   So simply introducing a new strategy may be more convenient to use.
   Anyway, each solution has its own advantages and disadvantages :)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
