danny0405 commented on a change in pull request #4000:
URL: https://github.com/apache/hudi/pull/4000#discussion_r749122541



##########
File path: hudi-flink/src/main/java/org/apache/hudi/sink/compact/CompactFunction.java
##########
@@ -114,6 +117,17 @@ private void doCompaction(String instantTime, CompactionOperation compactionOper
     collector.collect(new CompactionCommitEvent(instantTime, compactionOperation.getFileId(), writeStatuses, taskID));
   }
 
+  private void refresh() throws Exception {
+    HoodieTableMetaClient metaClient = writeClient.getHoodieTable().getMetaClient();
+    metaClient.reloadActiveTimeline();
+
+    // set table schema
+    CompactionUtil.setAvroSchema(conf, metaClient);
+
+    // refresh writeClient
+    writeClient = StreamerUtil.createWriteClient(conf, getRuntimeContext());
+  }

Review comment:
       No need to create the write client again; just update the schema on the write config. Also, in which case would we need schema evolution? Schema evolution is not supported now ~




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
