SteNicholas commented on code in PR #8503:
URL: https://github.com/apache/hudi/pull/8503#discussion_r1175030462
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java:
##########
@@ -275,4 +278,46 @@ public Option<HoodieRecordLocation> getRecordLocation(HoodieKey key) {
       throw new HoodieIndexException("Failed to getBucket as hashing node has no file group");
     }
   }
+
+  /**
+   * Update the default metadata file (00000000000000.hashing_meta) with the latest
+   * committed metadata file so that the default file stays in sync with the latest commit.
+   *
+   * @param table
+   */
+  public void updateMetadata(HoodieTable table) {
+    Map<String, Boolean> partitionVisitedMap = new HashMap<>();
+    HoodieTimeline hoodieTimeline = table.getActiveTimeline().getCompletedReplaceTimeline();
+    hoodieTimeline.getInstants().forEach(instant -> {
+      Option<Pair<HoodieInstant, HoodieClusteringPlan>> instantPlanPair =
Review Comment:
@rohan-uptycs, another question: in `HoodieTimelineArchiver`, the
replacecommits after `oldestInstantToRetainForClustering` are not archived, while the
replacecommits before `oldestInstantToRetainForClustering` are. So do all
completed replacecommits need to update the metadata, or only the unarchived
replacecommits?
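
To illustrate the distinction being asked about: Hudi instant times are timestamp strings whose lexicographic order matches their chronological order, so restricting the scan to unarchived replacecommits would amount to dropping instants older than the retention boundary. The sketch below is a hypothetical, self-contained simplification (plain strings instead of `HoodieInstant`, and a made-up helper name `unarchivedReplaceCommits`), not the actual Hudi API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ReplaceCommitFilter {

    // Keep only instants at or after the retention boundary. Hudi instant
    // times sort lexicographically in chronological order, so plain string
    // comparison is sufficient here.
    static List<String> unarchivedReplaceCommits(List<String> completedReplaceInstants,
                                                 String oldestInstantToRetainForClustering) {
        return completedReplaceInstants.stream()
                .filter(ts -> ts.compareTo(oldestInstantToRetainForClustering) >= 0)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> completed = Arrays.asList(
                "20230101000000",  // older than the boundary: would be archived
                "20230201000000",  // the boundary itself: retained
                "20230301000000"); // newer: retained
        // Prints [20230201000000, 20230301000000]
        System.out.println(unarchivedReplaceCommits(completed, "20230201000000"));
    }
}
```

Under this reading, `updateMetadata` could iterate only the retained instants instead of the whole completed replace timeline; whether the archived ones can be safely skipped is exactly the question above.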
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]