wangyang0918 commented on a change in pull request #15396:
URL: https://github.com/apache/flink/pull/15396#discussion_r605430899



##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/entrypoint/ClusterEntrypoint.java
##########
@@ -579,6 +580,19 @@ protected static Configuration loadConfiguration(
 
     public static void runClusterEntrypoint(ClusterEntrypoint clusterEntrypoint) {
 
+        // Register the signal handler so that we could have the chance to execute the shutdown
+        // supplier before handling the signal.
+        SignalHandler.register(
+                LOG,
+                () -> {
+                    if (clusterEntrypoint.isShutDown.get()) {
+                        LOG.info("Waiting for concurrent shutDownAsync finished.");
+                        return clusterEntrypoint.terminationFuture.thenAccept(ignored -> {});
+                    } else {
+                        return FutureUtils.completedVoidFuture();
+                    }
+                });

Review comment:
       I think simply calling `clusterEntrypoint.closeAsync().join()` in the shutdown hook does not solve the problem.
   
   When a `shutDownAsync(... cleanupHaData=true)` is already running and we receive a concurrent `SIGTERM` signal, the JVM will try to exit and start executing the shutdown hooks. If the hook only calls `clusterEntrypoint.closeAsync()`, we may still leave residual ConfigMaps/ZNodes behind, because `haServices.closeAndCleanupAllData` may not have been fully executed by the time the JVM exits. A minimal sketch of that naive wiring is below.
   
   Inspired by your suggestion, I think we could instead make `closeAsync()` wait for a possibly concurrent `shutDownAsync` to finish. After that, simply registering `closeAsync()` as a shutdown hook would work well. It has the same effect as the current implementation with the `SignalHandler`; see the model after this paragraph.
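   
   A rough, self-contained model of that idea (the class, fields, and signatures are simplified stand-ins of my own, not the actual `ClusterEntrypoint` code):
   
   ```java
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.atomic.AtomicBoolean;
   
   // Models only the shutdown/close interaction, nothing else.
   class EntrypointModel {
       private final AtomicBoolean isShutDown = new AtomicBoolean(false);
       private final CompletableFuture<Void> terminationFuture = new CompletableFuture<>();
   
       CompletableFuture<Void> shutDownAsync(boolean cleanupHaData) {
           if (isShutDown.compareAndSet(false, true)) {
               // Stand-in for stopping services and (optionally) cleaning up
               // HA data, after which the termination future completes.
               CompletableFuture.runAsync(() -> terminationFuture.complete(null));
           }
           return terminationFuture;
       }
   
       CompletableFuture<Void> closeAsync() {
           if (isShutDown.get()) {
               // A concurrent shutDownAsync(cleanupHaData=true) may be in
               // flight; wait on its terminationFuture so the HA cleanup can
               // finish before the JVM exits.
               return terminationFuture.thenAccept(ignored -> {});
           }
           return shutDownAsync(false).thenAccept(ignored -> {});
       }
   }
   ```
   
   With that change, the hook registration reduces to `Runtime.getRuntime().addShutdownHook(new Thread(() -> entrypoint.closeAsync().join()))`, with no separate `SignalHandler` bookkeeping.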




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

