[ https://issues.apache.org/jira/browse/FLINK-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046207#comment-17046207 ]

Xintong Song edited comment on FLINK-16215 at 2/27/20 6:48 AM:
---------------------------------------------------------------

[~liuyufei] 
Just trying to understand: why do we need to block confirming leadership until 
the recovery is done?

I was thinking about the following approach.
 * In {{getContainersFromPreviousAttempts}}, add all recovered containers to 
the {{workerNodeMap}}, and start an async status query for each container.
 * In {{onContainerStatusReceived}}, if the returned container's state is 
{{NEW}}, release it and remove it from the {{workerNodeMap}} (see the sketch 
below).

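To make the intent concrete, here is a minimal sketch of those two steps. The 
field names ({{workerNodeMap}}, {{resourceManagerClient}}, 
{{nodeManagerClient}}) and the class wiring are assumptions for illustration, 
not the actual Flink implementation:

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerState;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
import org.apache.hadoop.yarn.client.api.async.NMClientAsync;

/** Sketch only; in Flink the map would hold YarnWorkerNode values. */
class ContainerRecoverySketch {

    private final Map<ContainerId, Container> workerNodeMap = new HashMap<>();
    private final AMRMClientAsync<?> resourceManagerClient;
    private final NMClientAsync nodeManagerClient;

    ContainerRecoverySketch(AMRMClientAsync<?> rmClient, NMClientAsync nmClient) {
        this.resourceManagerClient = rmClient;
        this.nodeManagerClient = nmClient;
    }

    void getContainersFromPreviousAttempts(RegisterApplicationMasterResponse response) {
        for (Container container : response.getContainersFromPreviousAttempts()) {
            // Recover the container optimistically, so leadership can be
            // confirmed immediately instead of blocking on the status queries.
            workerNodeMap.put(container.getId(), container);
            // Query the container's status asynchronously; the result comes
            // back via NMClientAsync.CallbackHandler#onContainerStatusReceived.
            nodeManagerClient.getContainerStatusAsync(container.getId(), container.getNodeId());
        }
    }

    // Called back by the NMClientAsync when a status query returns.
    void onContainerStatusReceived(ContainerId containerId, ContainerStatus status) {
        // A container still in NEW state was never launched by the previous
        // attempt, so no TaskExecutor will ever register from it: release it.
        if (status.getState() == ContainerState.NEW) {
            resourceManagerClient.releaseAssignedContainer(containerId);
            workerNodeMap.remove(containerId);
        }
    }
}
{code}
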
If you agree with the approach, I think [~karmagyz] can work on the 
implementation.

Regarding "RM not assuming all TMs having the same resource", as [~karmagyz] 
said, this comes from FLINK-14106. There's a google doc about the proposed 
changes, and welcome to join us in the discussion.


> Start redundant TaskExecutor when JM failed
> -------------------------------------------
>
>                 Key: FLINK-16215
>                 URL: https://issues.apache.org/jira/browse/FLINK-16215
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.10.0
>            Reporter: YufeiLiu
>            Priority: Major
>
> TaskExecutors will reconnect to the new ResourceManager leader when the JM 
> fails, and the JobMaster will restart and reschedule the job. If the job's 
> slot requests arrive earlier than the TM registrations, the RM will start 
> new workers rather than reuse the existing TMs.
> It's hard to reproduce because TM registration usually comes first, and the 
> timeout check will stop the redundant TMs.
> But I think it would be better if we made {{recoverWorkerNode}} an 
> interface, and put recovered slots in {{pendingSlots}} to wait for TM 
> reconnection (a hypothetical sketch follows below).
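
A hypothetical sketch of that proposal, for illustration only (the names come 
from the quoted description, but the signature and the {{pendingSlots}} 
handling are assumptions, not existing Flink APIs):

{code:java}
import java.util.Collection;

// Hypothetical shape of the proposal: recoverWorkerNode would become part of
// an interface so each deployment can recover its previous attempt's workers.
interface WorkerRecovery<WorkerType> {

    /** Recovers the previous attempt's workers instead of requesting new ones. */
    Collection<WorkerType> recoverWorkerNode();
}

// The recovered workers' slots would then be parked in pendingSlots until the
// corresponding TaskExecutors reconnect, instead of triggering new workers:
//
//     for (WorkerType worker : recovery.recoverWorkerNode()) {
//         pendingSlots.put(slotIdOf(worker), worker); // matched on TM registration
//     }
{code}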


