danny0405 commented on code in PR #6273:
URL: https://github.com/apache/hudi/pull/6273#discussion_r936208671


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/StreamWriteOperatorCoordinator.java:
##########
@@ -378,6 +379,28 @@ private void startInstant() {
        this.conf.getString(FlinkOptions.TABLE_NAME), conf.getString(FlinkOptions.TABLE_TYPE));
   }
 
+  /**
+   * Gets the valid instant time of the last batch from the bootstrap events.
+   * Returns Option.empty() to indicate that the instant from the last batch is invalid.
+   */
+  protected Option<String> bootstrapInstantFromEventBuffer() {
+    ValidationUtils.checkArgument(Arrays.stream(eventBuffer).allMatch(evt -> evt != null && evt.isBootstrap()));
+    List<WriteMetadataEvent> events = Arrays.stream(eventBuffer).filter(evt -> !evt.getInstantTime().equals("")).collect(Collectors.toList());
+    String instant = events.stream().map(WriteMetadataEvent::getInstantTime).reduce((a, b) -> a.equals(b) ? a : "").orElse("");
+    // instant and parallelism should be unique

Review Comment:
   The operator state backend spreads the events evenly across the write tasks, and I want to point out that an empty bootstrap event is also valid for coordinator validation.
   
   But there is a problem: when we decrease the parallelism, one write task may restore several bootstrap events. We should merge these events before sending them to the coordinator; otherwise the coordinator may commit eagerly, before it has received all the bootstrap events.
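
   The merge-before-send idea could be sketched roughly as below. Note this is a simplified stand-in, not the actual Hudi code: the `Event` type mimics only the fields of `WriteMetadataEvent` relevant here, and the `mergeBootstrapEvents` helper is hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: after a decrease in parallelism, one task may restore several
// bootstrap events from operator state; merging them into a single event
// ensures the coordinator sees exactly one bootstrap event per task.
public class BootstrapEventMerge {

  // Simplified stand-in for Hudi's WriteMetadataEvent.
  static final class Event {
    final String instantTime;          // instant the writes belong to
    final List<String> writeStatuses;  // simplified write-status payload
    final boolean bootstrap;

    Event(String instantTime, List<String> writeStatuses, boolean bootstrap) {
      this.instantTime = instantTime;
      this.writeStatuses = writeStatuses;
      this.bootstrap = bootstrap;
    }
  }

  // Hypothetical helper: combine all restored bootstrap events into one,
  // concatenating their write statuses and keeping the non-empty instant.
  static Event mergeBootstrapEvents(List<Event> restored) {
    String instant = "";
    List<String> statuses = new ArrayList<>();
    for (Event evt : restored) {
      if (!evt.bootstrap) {
        throw new IllegalArgumentException("Only bootstrap events can be merged");
      }
      // Non-empty instants restored from the same batch are expected to agree.
      if (!evt.instantTime.isEmpty()) {
        instant = evt.instantTime;
      }
      statuses.addAll(evt.writeStatuses);
    }
    return new Event(instant, statuses, true);
  }

  public static void main(String[] args) {
    // Two events restored by one task after the parallelism was halved.
    List<Event> restored = List.of(
        new Event("20220802120000", List.of("status-a"), true),
        new Event("20220802120000", List.of("status-b"), true));
    Event merged = mergeBootstrapEvents(restored);
    System.out.println(merged.instantTime + " " + merged.writeStatuses.size());
  }
}
```

   With this, the coordinator's buffer fills with one event per current subtask, so its "all events received" check fires only once every restored write status has arrived.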


