danny0405 commented on code in PR #6273:
URL: https://github.com/apache/hudi/pull/6273#discussion_r935191813


##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/StreamWriteOperatorCoordinator.java:
##########
@@ -378,6 +379,28 @@ private void startInstant() {
         this.conf.getString(FlinkOptions.TABLE_NAME), conf.getString(FlinkOptions.TABLE_TYPE));
   }
 
+  /**
+   * Get the valid instant time of the last batch from the bootstrap events.
+   * Returns Option.empty() to indicate the instant from the last batch is invalid.
+   */
+  protected Option<String> bootstrapInstantFromEventBuffer() {
+    ValidationUtils.checkArgument(Arrays.stream(eventBuffer).allMatch(evt -> evt != null && evt.isBootstrap()));
+    List<WriteMetadataEvent> events = Arrays.stream(eventBuffer).filter(evt -> !evt.getInstantTime().equals("")).collect(Collectors.toList());
+    String instant = events.stream().map(WriteMetadataEvent::getInstantTime).reduce((a, b) -> a.equals(b) ? a : "").orElse("");
+    // instant and parallelism should be unique

Review Comment:
   The fix may be right, but there are some thoughts on code engineering:
   
   1. We should not put static config options in `WriteMetadataEvent`, which is a dynamic metadata event from the write task.
   
   2. Do we really need to care about the parallelism of the last run? Can we just merge the bootstrap events before sending them to the coordinator?
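   The merge suggested in point 2 could look roughly like this. The sketch below is a simplified, hypothetical illustration, not Hudi's actual API: it uses a stand-in `BootstrapEvent` class and `java.util.Optional` in place of `WriteMetadataEvent` and Hudi's `Option`, and assumes the coordinator only needs a single agreed-upon instant time from the last batch:

```java
import java.util.Arrays;
import java.util.Optional;

// Hypothetical stand-in for Hudi's WriteMetadataEvent; only the instant time matters here.
class BootstrapEvent {
  final String instantTime;

  BootstrapEvent(String instantTime) {
    this.instantTime = instantTime;
  }
}

public class BootstrapMerge {
  // Collapse the per-task bootstrap events into one instant before handing them
  // to the coordinator: drop events with an empty instant, then return empty if
  // the remaining events disagree (i.e. the last batch's instant is invalid).
  static Optional<String> mergeBootstrapInstant(BootstrapEvent[] events) {
    return Arrays.stream(events)
        .map(evt -> evt.instantTime)
        .filter(t -> !t.isEmpty())
        .reduce((a, b) -> a.equals(b) ? a : "")
        .filter(t -> !t.isEmpty());
  }

  public static void main(String[] args) {
    BootstrapEvent[] agree = {
        new BootstrapEvent("20220801120000"),
        new BootstrapEvent(""), // task with nothing to report
        new BootstrapEvent("20220801120000")
    };
    BootstrapEvent[] disagree = {
        new BootstrapEvent("20220801120000"),
        new BootstrapEvent("20220801130000")
    };
    System.out.println(mergeBootstrapInstant(agree).orElse("invalid"));
    System.out.println(mergeBootstrapInstant(disagree).orElse("invalid"));
  }
}
```

   With a merge step like this, the coordinator never needs to know the writer parallelism of the previous run; it only sees one already-reconciled instant, which is the reviewer's point about keeping static config out of the dynamic event.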



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to