yihua commented on a change in pull request #3741:
URL: https://github.com/apache/hudi/pull/3741#discussion_r726802387



##########
File path: 
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/table/action/HoodieWriteMetadata.java
##########
@@ -46,6 +46,36 @@
   public HoodieWriteMetadata() {
   }
 
+  /**
+   * Clones the write metadata with transformed write statuses.
+   *
+   * @param transformedWriteStatuses transformed write statuses
+   * @param <T>                      type of transformed write statuses
+   * @return Cloned {@link HoodieWriteMetadata<T>} instance
+   */
+  public <T> HoodieWriteMetadata<T> clone(T transformedWriteStatuses) {
+    HoodieWriteMetadata<T> newMetadataInstance = new HoodieWriteMetadata<>();
+    newMetadataInstance.setWriteStatuses(transformedWriteStatuses);
+    if (indexLookupDuration.isPresent()) {

Review comment:
       I followed the existing logic in this refactoring without touching the 
core flow.  @nsivabalan @vinothchandar I guess we want to defer execution as 
late as possible in the write pipeline with this pattern of keeping the `RDD` 
lazy in Spark?
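
To illustrate the pattern being discussed, here is a hedged, self-contained sketch (simplified, hypothetical names; the real `HoodieWriteMetadata` carries more fields than shown) of cloning metadata around a transformed payload without forcing a lazy collection such as an uncollected Spark `RDD`:

```java
import java.util.Optional;

// Minimal stand-in for HoodieWriteMetadata (hypothetical, simplified).
class WriteMetadata<O> {
  private O writeStatuses;
  private Optional<Long> indexLookupDuration = Optional.empty();

  void setWriteStatuses(O statuses) { this.writeStatuses = statuses; }
  void setIndexLookupDuration(Optional<Long> d) { this.indexLookupDuration = d; }
  O getWriteStatuses() { return writeStatuses; }
  Optional<Long> getIndexLookupDuration() { return indexLookupDuration; }

  // Clone with a transformed payload of a different type <T>.
  // Only field references are copied, so if the payload is a lazy
  // collection (e.g. an RDD), nothing is evaluated here.
  <T> WriteMetadata<T> clone(T transformedWriteStatuses) {
    WriteMetadata<T> clone = new WriteMetadata<>();
    clone.setWriteStatuses(transformedWriteStatuses);
    if (indexLookupDuration.isPresent()) {
      clone.setIndexLookupDuration(indexLookupDuration);
    }
    return clone;
  }
}
```

Because the clone only re-points references, the actual write-status computation can stay deferred until the caller collects it, which is the lazy-execution behavior raised above.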




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
