TengHuo commented on code in PR #6733:
URL: https://github.com/apache/hudi/pull/6733#discussion_r1010254943
##########
hudi-flink-datasource/hudi-flink/src/main/java/org/apache/hudi/sink/compact/CompactionPlanOperator.java:
##########
@@ -129,9 +128,6 @@ private void scheduleCompaction(HoodieFlinkTable<?> table, long checkpointId) th
     List<CompactionOperation> operations = compactionPlan.getOperations().stream()
         .map(CompactionOperation::convertFromAvroRecordInstance).collect(toList());
     LOG.info("Execute compaction plan for instant {} as {} file groups", compactionInstantTime, operations.size());
-    WriteMarkersFactory
-        .get(table.getConfig().getMarkersType(), table, compactionInstantTime)
-        .deleteMarkerDir(table.getContext(), table.getConfig().getMarkersDeleteParallelism());
Review Comment:
cool, thanks for reviewing
I'm testing whether there will be a duplicate marker issue when a compaction fails
and a rollback is performed.
As I understand it, rollback only deletes data files and does not delete marker
files, so if a compaction fails and is retried, it should throw the same error
as [HUDI-4108](https://issues.apache.org/jira/browse/HUDI-4108) with my current
code.
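
For reference, a minimal sketch of the cleanup under discussion: invoking the same `WriteMarkersFactory` call that the diff removes before a failed compaction instant is re-attempted. It only reuses the variables already in scope in `scheduleCompaction` (`table`, `compactionInstantTime`) and is an illustration of the scenario, not a proposed final fix.

```java
// Sketch only: clean up leftover markers for the compaction instant before it
// is scheduled again, using the API shown in the removed lines above.
// `table` and `compactionInstantTime` come from scheduleCompaction's scope.
WriteMarkersFactory
    .get(table.getConfig().getMarkersType(), table, compactionInstantTime)
    .deleteMarkerDir(table.getContext(), table.getConfig().getMarkersDeleteParallelism());
```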
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]