[ https://issues.apache.org/jira/browse/FLINK-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046195#comment-17046195 ]

YufeiLiu commented on FLINK-16215:
----------------------------------

[~xintongsong]
This sounds good. We can do some actual recovery work, such as releasing unused 
containers, rather than just putting all previous containers into the WorkerMap.

It needs some changes in {{YarnResourceManager}}; it is similar to the 
{{MesosResourceManager}} recovery process. I think putting the recovery work in 
{{prepareLeadershipAsync}} would be nice, since leadership is not confirmed until 
the recovery is done. Do we have plans to improve this?

Besides, I have a question about "RM not assuming all TMs have the same 
resource": in that case, what decides the resource specification of each TM?

> Start redundant TaskExecutor when JM failed
> -------------------------------------------
>
>                 Key: FLINK-16215
>                 URL: https://issues.apache.org/jira/browse/FLINK-16215
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.10.0
>            Reporter: YufeiLiu
>            Priority: Major
>
> TaskExecutors will reconnect to the new ResourceManager leader when the JM fails, 
> and the JobMaster will restart and reschedule the job. If the job's slot requests 
> arrive earlier than the TM registrations, the RM will start new workers rather 
> than reuse the existing TMs.
> It's hard to reproduce because TM registration usually comes first, and the 
> timeout check will stop redundant TMs. 
> But I think it would be better to make {{recoverWokerNode}} an 
> interface, and put recovered slots in {{pendingSlots}} to wait for TM 
> reconnection.
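
To illustrate the quoted proposal, here is a minimal standalone sketch of keeping recovered slots pending until their TM reconnects, so that early slot requests are matched against recovered capacity instead of triggering new workers (simplified types and names, not the actual {{SlotManager}} implementation; {{RecoveringSlotManager}} and its methods are illustrative assumptions):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Simplified model of slot recovery; not the real Flink SlotManager. */
class RecoveringSlotManager {
    // Slots recovered from the previous attempt, keyed by the TM that hosted
    // them, kept pending until that TM reconnects.
    private final Map<String, Integer> pendingRecoveredSlots = new HashMap<>();
    // Slot requests that arrived before any TM (re)registered.
    private final Deque<String> pendingRequests = new ArrayDeque<>();
    private int availableSlots = 0;

    /** Record slots recovered from a previous attempt instead of dropping them. */
    void recoverSlots(String taskManagerId, int numSlots) {
        pendingRecoveredSlots.merge(taskManagerId, numSlots, Integer::sum);
    }

    /** A slot request arriving before TM re-registration is queued rather than
     *  answered by starting a new worker, because pending recovered slots may
     *  still cover it. Returns false only when a new worker would be needed. */
    boolean requestSlot(String jobId) {
        if (availableSlots > 0) {
            availableSlots--;
            return true; // served from an already registered TM
        }
        if (!pendingRecoveredSlots.isEmpty()) {
            pendingRequests.add(jobId);
            return true; // will be served once the recovered TM reconnects
        }
        return false; // only now would the RM start a new worker
    }

    /** When a recovered TM reconnects, its slots become available and the
     *  queued requests are served first. */
    void onTaskManagerRegistered(String taskManagerId, int numSlots) {
        pendingRecoveredSlots.remove(taskManagerId);
        availableSlots += numSlots;
        while (availableSlots > 0 && !pendingRequests.isEmpty()) {
            pendingRequests.poll();
            availableSlots--;
        }
    }
}
{code}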



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
