YuweiXiao commented on code in PR #6737:
URL: https://github.com/apache/hudi/pull/6737#discussion_r991096928


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/StreamWriteFunction.java:
##########
@@ -129,6 +137,7 @@ public void snapshotState() {
     // it would check the validity.
     // wait for the buffer data flush out and request a new instant
     flushRemaining(false);
+    this.updateStrategy.reset();
   }

Review Comment:
   `Reject` is also a valid solution, similar to how we handle it on the Spark write 
path. Without it, concurrent updates may be lost silently (they would be written to 
an outdated file group). Throwing an exception in this case causes the job to retry 
the updates and routes them to the new clustered file group.
   
   For consistent hashing, these conflicting updates are routed to both the old and 
new file groups. The corresponding code is in 
`FlinkConsistentBucketDuplicateUpdateStrategy`.
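   To make the contrast concrete, here is a minimal sketch (hypothetical class and method names, not the actual Hudi API) of the two strategies described above: rejecting the update so the job retries against the new file group, versus duplicating the write to both the old and new file groups as the consistent-hashing path does.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch contrasting the two conflict-handling strategies for
// updates that target a file group undergoing clustering.
public class UpdateStrategySketch {

  // An update routed to a particular file group.
  record Update(String key, String targetFileGroup) {}

  // Strategy 1 (reject): throw so the job retries; on retry the update is
  // routed to the new clustered file group instead of silently being lost.
  static List<String> reject(Update u, boolean underClustering) {
    if (underClustering) {
      throw new IllegalStateException(
          "File group " + u.targetFileGroup() + " is being clustered; retry the update");
    }
    return List.of(u.targetFileGroup());
  }

  // Strategy 2 (duplicate): write the update to both the old and the new
  // file group, so the update survives whichever version wins.
  static List<String> duplicate(Update u, boolean underClustering, String newFileGroup) {
    List<String> targets = new ArrayList<>();
    targets.add(u.targetFileGroup());
    if (underClustering) {
      targets.add(newFileGroup); // also route to the replacement file group
    }
    return targets;
  }

  public static void main(String[] args) {
    Update u = new Update("k1", "fg-old");
    System.out.println(duplicate(u, true, "fg-new")); // routes to both groups
    try {
      reject(u, true); // surfaces the conflict instead of losing the update
    } catch (IllegalStateException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

   The duplicate strategy trades extra write amplification for availability, while the reject strategy keeps a single copy at the cost of a retry.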


