[ https://issues.apache.org/jira/browse/FLINK-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16567897#comment-16567897 ]
ASF GitHub Bot commented on FLINK-9936:
---------------------------------------

tillrohrmann commented on a change in pull request #6464: [FLINK-9936][mesos] WIP
URL: https://github.com/apache/flink/pull/6464#discussion_r207463934

##########
File path: flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java
##########

@@ -894,17 +900,21 @@ public void grantLeadership(final UUID newLeaderSessionID) {

 			// clear the state if we've been the leader before
 			if (getFencingToken() != null) {
-				clearState();
+				clearStateInternal();
 			}

 			setFencingToken(newResourceManagerId);

 			slotManager.start(getFencingToken(), getMainThreadExecutor(), new ResourceActionsImpl());

-			getRpcService().execute(
-				() ->
+			prepareLeadershipAsync()
+				.exceptionally(t -> {
+					onFatalError(t);
+					return null;
+				})
+				.thenRunAsync(() ->
 					// confirming the leader session ID might be blocking,
-					leaderElectionService.confirmLeaderSessionID(newLeaderSessionID));
+					leaderElectionService.confirmLeaderSessionID(newLeaderSessionID), getRpcService().getExecutor());

Review comment:
   The check would help to guard against a concurrent leadership revocation triggered just before the asynchronous grant-leadership call executes. That way we would not unnecessarily initialize internal components that the revoke-leadership async callback would then stop. At the moment this is unlikely to happen, but if we make `clearState` asynchronous and wait in the `grantLeadership` method for the cleanup to complete before invoking the leadership callback, it becomes increasingly likely.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
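The chain in the diff above relies on standard `CompletableFuture` semantics: `exceptionally` recovers a failed stage (here invoking `onFatalError` and yielding `null`), so the subsequent `thenRunAsync` still executes on the supplied executor. A minimal standalone sketch of that behavior follows; `runChain`, its `events` list, and the stand-in stages are hypothetical and not part of the Flink code, only the chaining pattern matches the diff:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LeadershipChainSketch {

    /** Runs a hypothetical grant-leadership chain and returns the observed events in order. */
    static List<String> runChain(boolean preparationFails) throws Exception {
        List<String> events = new ArrayList<>();
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // Stand-in for prepareLeadershipAsync(): a stage that may fail.
            CompletableFuture<Void> prepare = preparationFails
                    ? CompletableFuture.failedFuture(new RuntimeException("preparation failed"))
                    : CompletableFuture.completedFuture(null);

            prepare
                    .exceptionally(t -> {
                        // Stand-in for onFatalError(t): record the failure and
                        // recover the stage so the confirmation step still runs.
                        Throwable cause = t instanceof CompletionException ? t.getCause() : t;
                        events.add("fatal: " + cause.getMessage());
                        return null;
                    })
                    .thenRunAsync(
                            // Stand-in for leaderElectionService.confirmLeaderSessionID(...),
                            // executed on the RPC service's executor in the real code.
                            () -> events.add("confirm leader session"),
                            executor)
                    .get();  // get() establishes happens-before for reading events
        } finally {
            executor.shutdown();
        }
        return events;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runChain(true));
        System.out.println(runChain(false));
    }
}
```

Note that because `exceptionally` returns a normally completed stage, the confirmation runs in both the failing and the successful case; whether that is desirable after a fatal error is exactly the kind of question the review comment raises.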
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

> Mesos resource manager unable to connect to master after failover
> -----------------------------------------------------------------
>
>                 Key: FLINK-9936
>                 URL: https://issues.apache.org/jira/browse/FLINK-9936
>             Project: Flink
>          Issue Type: Bug
>          Components: Mesos, Scheduler
>    Affects Versions: 1.5.0, 1.5.1, 1.6.0
>            Reporter: Renjie Liu
>            Assignee: Gary Yao
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.5.3, 1.6.0
>
>
> When deployed in Mesos session cluster mode, the connection monitor keeps reporting that it is unable to connect to Mesos after a restart. In fact, the scheduler driver has already connected to the Mesos master, but the "connected" message is lost: because leadership has not been granted yet and the fencing ID is not set, the RPC service ignores the message. We should therefore connect to the Mesos master only after leadership has been granted.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
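The failure mode in the issue description can be illustrated with a small sketch of fencing-token filtering. This is not Flink's actual RPC implementation; the class, method names, and string tokens below are hypothetical and only illustrate why a fenced message arriving before leadership is granted gets dropped:

```java
/**
 * Hypothetical sketch of fencing-token filtering: a fenced RPC endpoint
 * ignores messages until its own fencing token is set, which is why the
 * scheduler driver's "connected" message is lost before leadership is granted.
 */
public class FencingSketch {

    private String fencingToken;  // null until leadership is granted

    /** Delivers a fenced message; returns true if handled, false if ignored. */
    boolean deliver(String messageToken, Runnable handler) {
        // No leadership yet (token unset) or stale token: drop the message
        // instead of processing it.
        if (fencingToken == null || !fencingToken.equals(messageToken)) {
            return false;
        }
        handler.run();
        return true;
    }

    void grantLeadership(String newToken) {
        this.fencingToken = newToken;
    }

    public static void main(String[] args) {
        FencingSketch rpc = new FencingSketch();
        boolean beforeGrant = rpc.deliver("token-1", () -> {});  // dropped
        rpc.grantLeadership("token-1");
        boolean afterGrant = rpc.deliver("token-1", () -> {});   // handled
        System.out.println(beforeGrant + " " + afterGrant);      // false true
    }
}
```

Under this model, the fix described above (connecting to the Mesos master only after leadership is granted) guarantees the "connected" message arrives when the token is already set, so it is no longer silently dropped.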