bvaradar commented on a change in pull request #853: [HUDI-174] Ignore duplicate of a compaction file
URL: https://github.com/apache/incubator-hudi/pull/853#discussion_r318574724
########## File path: hudi-common/src/main/java/org/apache/hudi/common/util/CompactionUtils.java ##########

@@ -150,11 +150,19 @@ public static HoodieCompactionPlan getCompactionPlan(HoodieTableMetaClient metaC
     pendingCompactionPlanWithInstants.stream().flatMap(instantPlanPair -> {
      return getPendingCompactionOperations(instantPlanPair.getKey(), instantPlanPair.getValue());
    }).forEach(pair -> {
-      // Defensive check to ensure a single-fileId does not have more than one pending compaction
+      // Defensive check to ensure a single-fileId does not have more than one pending compaction with different
+      // file slices. If we find a full duplicate we assume it is caused by eventual nature of the move operation
+      // on some DFSs.
       if (fgIdToPendingCompactionWithInstantMap.containsKey(pair.getKey())) {
-        String msg = "Hoodie File Id (" + pair.getKey() + ") has more thant 1 pending compactions. Instants: "
-            + pair.getValue() + ", " + fgIdToPendingCompactionWithInstantMap.get(pair.getKey());
-        throw new IllegalStateException(msg);
+        HoodieCompactionOperation operation = pair.getValue().getValue();
+        HoodieCompactionOperation anotherOperation =
+            fgIdToPendingCompactionWithInstantMap.get(pair.getKey()).getValue();
+
+        if (!operation.equals(anotherOperation)) {

Review comment:
This should be fine in this case, as we get the duplicates from reading the same compaction plan in 2 different files (due to a non-atomic rename).
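The pattern discussed above can be sketched in isolation. The helper below is a hypothetical, simplified stand-in (the class and method names are not from the Hudi codebase): a second registration for the same file-group key is tolerated only when the value is a full duplicate, as happens when the same compaction plan is read from two files left behind by a non-atomic rename, while a genuinely conflicting operation still fails fast.

```java
// Minimal sketch of the duplicate-tolerant defensive check from the diff above.
// All names here are illustrative, not actual Hudi API.
public class DuplicateTolerantMap {

  /**
   * Inserts (key, value) into the map. An exact duplicate of an existing
   * entry is silently accepted; a different value for an existing key
   * throws, mirroring the defensive check in CompactionUtils.
   */
  public static <K, V> void putOrVerify(java.util.Map<K, V> map, K key, V value) {
    V existing = map.putIfAbsent(key, value);
    if (existing != null && !existing.equals(value)) {
      throw new IllegalStateException(
          "Key (" + key + ") has more than 1 distinct pending value: "
              + existing + " vs " + value);
    }
  }
}
```

Relying on `equals` here means the check is only as strict as the operation's `equals` implementation, which is why it is safe specifically when the duplicates are byte-for-byte re-reads of the same plan.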