akalash commented on a change in pull request #18239:
URL: https://github.com/apache/flink/pull/18239#discussion_r776482352



##########
File path: flink-rpc/flink-rpc-core/src/main/java/org/apache/flink/runtime/rpc/RpcEndpoint.java
##########
@@ -427,13 +427,7 @@ public void validateRunsInMainThread() {
 
         @Override
         public void execute(@Nonnull Runnable command) {
-            if (!scheduledExecutorService.isShutdown()) {

Review comment:
       We cannot ignore even a single runnable here, because MainThreadExecutor can serve as the executor for a CompletableFuture: if scheduledExecutorService is shut down before all futures have completed, we can lose important work such as releasing resources. For example, take a look at org.apache.flink.runtime.scheduler.SchedulerBase#closeAsync, where shutDownCheckpointServices is invoked in exactly this way.
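
   As a minimal, self-contained illustration (made-up names, not Flink code): a completion callback is chained onto a guarded executor that mimics the removed check, and it silently disappears once the underlying pool is shut down.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class DroppedCallbackDemo {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();

        // Mimics the guarded execute(): runnables submitted after shutdown
        // are silently ignored instead of being run (or at least rejected).
        Executor guarded = command -> {
            if (!pool.isShutdown()) {
                pool.execute(command);
            }
        };

        CompletableFuture<Void> stopFuture = new CompletableFuture<>();
        stopFuture.whenCompleteAsync(
                (ignored, error) -> System.out.println("releasing resources"),
                guarded);

        pool.shutdown();           // executor shut down before the future is done
        stopFuture.complete(null); // callback is silently dropped: nothing prints
    }
}
```
   Without the guard, pool.execute would throw RejectedExecutionException and the dependent stage would at least complete exceptionally; with the guard, the cleanup is lost without a trace.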

##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java
##########
@@ -275,7 +275,7 @@ private void handleStartDispatcherServicesException(Exception e) throws Exceptio
     @Override
     public CompletableFuture<Void> onStop() {
         log.info("Stopping dispatcher {}.", getAddress());
-
+        super.onStop();

Review comment:
       Honestly, I don't like this solution since it looks pretty fragile. I tried moving the closing of mainThreadExecutor to `RpcEndpoint#internalCallOnStop`, which I would prefer, but unfortunately that is impossible because there is one more `Executor` in `FencedRpcEndpoint`.
   In my opinion, we should rethink the resource management here. Perhaps we should register all resources in one `CloseableRegistry` and then close it in a single place (for example, in `RpcEndpoint#internalCallOnStop`, after the `onStop` future completes); see the sketch below.
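
   To make that concrete, a rough sketch of the idea, assuming Flink's `org.apache.flink.core.fs.CloseableRegistry` (the surrounding class and hooks are illustrative, not the actual `RpcEndpoint` code):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.core.fs.CloseableRegistry;

// Illustrative sketch only, not the real RpcEndpoint.
abstract class ResourceManagedEndpoint {

    private final CloseableRegistry resourceRegistry = new CloseableRegistry();

    /** Subclasses register each resource instead of closing it ad hoc in onStop(). */
    protected void registerResource(Closeable resource) throws IOException {
        resourceRegistry.registerCloseable(resource);
    }

    protected abstract CompletableFuture<Void> onStop();

    /** Single shutdown path: everything is closed here, after the onStop future. */
    final CompletableFuture<Void> internalCallOnStop() {
        return onStop()
                .whenComplete(
                        (ignored, error) -> {
                            try {
                                resourceRegistry.close();
                            } catch (IOException e) {
                                // log and continue; shutdown should not be blocked
                            }
                        });
    }
}
```
   With something like this, mainThreadExecutor (and the extra `Executor` in `FencedRpcEndpoint`) could be registered through small `Closeable` wrappers that shut them down, so there is exactly one place where they get closed.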



