pnowojski commented on code in PR #23425: URL: https://github.com/apache/flink/pull/23425#discussion_r1363482855
########## flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointsCleaner.java:
##########
@@ -71,10 +70,26 @@ public void cleanCheckpoint(
             boolean shouldDiscard,
             Runnable postCleanAction,
             Executor executor) {
-        Checkpoint.DiscardObject discardObject =
-                shouldDiscard ? checkpoint.markAsDiscarded() : Checkpoint.NOOP_DISCARD_OBJECT;
-
-        cleanup(checkpoint, discardObject::discard, postCleanAction, executor);
+        if (shouldDiscard) {
+            incrementNumberOfCheckpointsToClean();
+            checkpoint
+                    .markAsDiscarded()
+                    .discardAsync(executor)
+                    .handle(
+                            (Object outerIgnored, Throwable outerThrowable) -> {
+                                if (outerThrowable != null) {
+                                    LOG.warn(
+                                            "Could not properly discard completed checkpoint {}.",
+                                            checkpoint.getCheckpointID(),
+                                            outerThrowable);
+                                }
+                                decrementNumberOfCheckpointsToClean();

Review Comment:
You have just said it yourself. The old code for `shouldDiscard=false` does:
```
try {
    cleanupAction.run();
} catch (...) {
    ...
} finally {
    decrementNumberOfCheckpointsToClean();
    postCleanupAction.run();
}
```
With no exception thrown and `cleanupAction` being a NOOP, this simplifies to:
```
decrementNumberOfCheckpointsToClean();
postCleanupAction.run();
```
`decrementNumberOfCheckpointsToClean` is clearly there. I would even suspect that the [deadlock in the tests](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53769&view=logs&j=0da23115-68bb-5dcd-192c-bd4c8adebde1&t=24c3384f-1bcb-57b3-224f-51bf973bbee8) could actually be caused by this issue.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
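The behavioral point the reviewer is making can be sketched as a minimal, hypothetical stand-in (this is NOT the actual Flink `CheckpointsCleaner`; class and method names are simplified for illustration): whether or not the checkpoint is discarded, the pending-cleanups counter must be decremented and the post-clean action must run, matching the old `NOOP_DISCARD_OBJECT` path.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical, simplified stand-in for CheckpointsCleaner, illustrating the
// review point only: both branches restore the counter and run postCleanAction.
class MiniCheckpointsCleaner {
    private final AtomicInteger numberOfCheckpointsToClean = new AtomicInteger();

    int getNumberOfCheckpointsToClean() {
        return numberOfCheckpointsToClean.get();
    }

    void cleanCheckpoint(
            Runnable discardAction, // stands in for markAsDiscarded().discardAsync(...)
            boolean shouldDiscard,
            Runnable postCleanAction,
            Executor executor) {
        numberOfCheckpointsToClean.incrementAndGet();
        if (shouldDiscard) {
            CompletableFuture.runAsync(discardAction, executor)
                    .handle(
                            (ignored, throwable) -> {
                                // Real code would log the throwable here; the key
                                // invariant is that the decrement always happens.
                                numberOfCheckpointsToClean.decrementAndGet();
                                postCleanAction.run();
                                return null;
                            });
        } else {
            // The shouldDiscard=false path must ALSO decrement the counter and
            // run the post-clean action; dropping this is the bug the reviewer
            // suspects behind the test deadlock.
            numberOfCheckpointsToClean.decrementAndGet();
            postCleanAction.run();
        }
    }
}
```

With a direct executor (`Runnable::run`) both branches complete synchronously, which makes the invariant easy to check in a unit test.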