rohan-uptycs commented on code in PR #8503:
URL: https://github.com/apache/hudi/pull/8503#discussion_r1172085324


##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/HoodieTimelineArchiver.java:
##########
@@ -441,6 +441,8 @@ private Stream<HoodieInstant> getCommitInstantsToArchive() throws IOException {
       Option<HoodieInstant> oldestInstantToRetainForClustering =
           ClusteringUtils.getOldestInstantToRetainForClustering(table.getActiveTimeline(), table.getMetaClient());
 
+      table.getIndex().updateMetadata(table);
+

Review Comment:
   The archival process archives replace commits off the active timeline. Once that happens, all Hudi writers fall back to the default metadata index file, **00000000000000.hashing_meta** (see the function **loadMetadata(HoodieTable table, String partition)**). That is why it is necessary to trigger the update-metadata call before archival: it brings the 00000000000000.hashing_meta file in sync with the latest metadata commit file.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
