danny0405 commented on code in PR #8503:
URL: https://github.com/apache/hudi/pull/8503#discussion_r1182194304
##########
hudi-client/hudi-spark-client/src/main/java/org/apache/hudi/index/bucket/HoodieSparkConsistentBucketIndex.java:
##########
@@ -275,4 +280,49 @@ public Option<HoodieRecordLocation> getRecordLocation(HoodieKey key) {
       throw new HoodieIndexException("Failed to getBucket as hashing node has no file group");
     }
   }
+
+  /**
+   * Update the default metadata file (00000000000000.hashing_meta) with the latest committed
+   * metadata file so that the default file stays in sync with the latest commit.
+   *
+   * @param table
+   */
+  public void updateArchivalDependentIndexMetadata(HoodieTable table, List<HoodieInstant> hoodieArchivalInstants) {
+    Map<String, Boolean> partitionVisitedMap = new HashMap<>();
+    // Update metadata for replace commits that are going to be archived.
+    Stream<HoodieInstant> hoodieListOfReplacedInstants =
+        hoodieArchivalInstants.stream().filter(instant -> instant.getAction().equals(REPLACE_COMMIT_ACTION));
+    hoodieListOfReplacedInstants.forEach(instant -> {
+      Option<Pair<HoodieInstant, HoodieClusteringPlan>> instantPlanPair =
+          ClusteringUtils.getClusteringPlan(table.getMetaClient(),
+              HoodieTimeline.getReplaceCommitRequestedInstant(instant.getTimestamp()));
+      if (instantPlanPair.isPresent()) {
+        HoodieClusteringPlan plan = instantPlanPair.get().getRight();
+        List<Map<String, String>> partitionMapList =
+            plan.getInputGroups().stream().map(HoodieClusteringGroup::getExtraMetadata).collect(Collectors.toList());
+        partitionMapList.forEach(partitionMap -> {
+          String partition = partitionMap.get(SparkConsistentBucketClusteringPlanStrategy.METADATA_PARTITION_KEY);
+          if (!partitionVisitedMap.containsKey(partition)) {
+            Option<HoodieConsistentHashingMetadata> hoodieConsistentHashingMetadataOption = loadMetadata(table, partition);
+            if (hoodieConsistentHashingMetadataOption.isPresent()) {
Review Comment:
   > What if underlying file system is down and updateMetadata fails to sync metadata, then there is no mechanism to bring it in sync with latest committed metadata

   Just put the update into the same transaction as the clustering operation for the consistent hashing index. That is, update the metadata first, then transition the state from inflight to complete for this replace commit.
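
   The ordering suggested above can be sketched as follows. This is only an illustration of the transactional sequence (metadata write before the inflight-to-complete transition); every class and method name here is hypothetical and not an actual Hudi API:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Sketch: a metadata-write failure throws before the state transition,
   // so the replace commit stays inflight (and retryable) instead of
   // completing with stale hashing metadata.
   public class ReplaceCommitOrderingSketch {

     final List<String> steps = new ArrayList<>();
     boolean metadataSynced = false;
     boolean commitCompleted = false;

     // Step 1: persist the consistent-hashing metadata for the affected partitions.
     void updateHashingMetadata() {
       steps.add("updateHashingMetadata");
       metadataSynced = true;
     }

     // Step 2: transition the replace commit from inflight to complete,
     // but only after the metadata write succeeded.
     void transitionInflightToComplete() {
       if (!metadataSynced) {
         throw new IllegalStateException("hashing metadata not synced; keep commit inflight");
       }
       steps.add("transitionInflightToComplete");
       commitCompleted = true;
     }

     void commitReplace() {
       updateHashingMetadata();
       transitionInflightToComplete();
     }

     public static void main(String[] args) {
       ReplaceCommitOrderingSketch txn = new ReplaceCommitOrderingSketch();
       txn.commitReplace();
       System.out.println(txn.steps); // metadata update always precedes completion
     }
   }
   ```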
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]