[ 
https://issues.apache.org/jira/browse/FLINK-16215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046335#comment-17046335
 ] 

YufeiLiu commented on FLINK-16215:
----------------------------------

[~xintongsong] I'm thinking about how to reuse the slots that will be 
recovered, and how that interacts with the {{cluster.slots-number.min}} 
configuration. If we do the recovery asynchronously, there is a potential risk 
of allocating more resources than we expect.


 If the {{evenly-spread-out-slots}} config is set, slot allocation will try to 
use every TM rather than fill one of them up. Here is a case: with source 
parallelism 2 and sink parallelism 4, the job needs 2 TMs if each has 2 slots. 
When the JM fails over, if the SlotRequests of the source arrive first, the RM 
will start a new worker and wait for its slot report, so 3 TMs are registered 
at that moment; when the SlotRequests of the sink arrive, all of the TMs will 
be used and none of them can be released by the timeout check (a toy 
simulation of this race follows).
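
To make the race concrete, here is a toy simulation in plain Java. This is 
NOT Flink code: the class name, the TM names, and the greedy 
pick-the-emptiest-TM placement are simplifications of the evenly-spread-out 
strategy, and slot sharing is glossed over (the 4 requests stand for the 4 
slots the job needs in total).

{code:java}
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

public class EvenlySpreadOutRace {

    // Free slots per TM; every TM has 2 slots in this scenario.
    static final Map<String, Integer> freeSlots = new LinkedHashMap<>();

    // evenly-spread-out: pick the registered TM with the most free slots.
    static String pickTaskManager() {
        return freeSlots.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .max(Comparator.comparingInt(Map.Entry::getValue))
                .map(Map.Entry::getKey)
                .orElse(null);
    }

    static void allocate(String request) {
        String tm = pickTaskManager();
        if (tm == null) {
            // No registered slots yet: the RM starts a brand-new worker.
            tm = "TM-new";
            freeSlots.put(tm, 2);
        }
        freeSlots.merge(tm, -1, Integer::sum);
        System.out.println(request + " -> " + tm);
    }

    public static void main(String[] args) {
        // JM failover: the first slot requests arrive before the two old
        // TMs (TM-1, TM-2) have re-registered with the new RM.
        allocate("request-1"); // starts TM-new
        allocate("request-2"); // fills TM-new

        // The old TMs re-register only now.
        freeSlots.put("TM-1", 2);
        freeSlots.put("TM-2", 2);

        // The remaining requests are spread across the emptiest TMs, so
        // every TM ends up hosting slots and the idle-timeout check can
        // release none of them, although 2 TMs would have sufficed.
        allocate("request-3"); // goes to one of the old TMs
        allocate("request-4"); // goes to the other old TM
        System.out.println("free slots per TM: " + freeSlots);
    }
}
{code}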


 Maybe this case is too extreme, but I think recovering the containers in the 
initialization stage would be better, as long as it doesn't take too long.
 Here is the approach I'm thinking of (a rough code sketch follows the list):
 * In {{getContainersFromPreviousAttempts}}, add all recovered containers to 
the {{workerNodeMap}}, bind a {{CompletableFuture<ContainerStatus>}} to each 
{{YarnWorkerNode}}, and start an async status query for each container.
 * In {{onContainerStatusReceived}}, complete the future of that container.
 * In {{prepareLeadershipAsync}}, wait for the container status reports; if a 
returned container's state is {{NEW}}, release it and remove it from the 
{{workerNodeMap}}; if it is {{RUNNING}}, put it into the pending slots.
 * Once all container statuses are confirmed, start the minimum number of 
workers if {{cluster.slots-number.min}} is set, deducting the pending slots 
from that minimum.
 * The preparation work is done and leadership can be confirmed.
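
A rough sketch of how those pieces could fit together. This is illustrative 
only: {{ContainerRecovery}} and its method names are made up for this sketch, 
the {{nodeManagerClient}}/{{resourceManagerClient}} fields mirror the ones in 
{{YarnResourceManager}}, and only the Hadoop YARN calls 
({{getContainerStatusAsync}}, {{releaseAssignedContainer}}) and the container 
states are real APIs.

{code:java}
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerState;
import org.apache.hadoop.yarn.api.records.ContainerStatus;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
import org.apache.hadoop.yarn.client.api.async.NMClientAsync;

public class ContainerRecovery {

    private final NMClientAsync nodeManagerClient;
    private final AMRMClientAsync<?> resourceManagerClient;

    // One future per recovered container, completed by the NM status callback.
    private final Map<ContainerId, CompletableFuture<ContainerStatus>> statusFutures =
            new ConcurrentHashMap<>();

    public ContainerRecovery(NMClientAsync nm, AMRMClientAsync<?> rm) {
        this.nodeManagerClient = nm;
        this.resourceManagerClient = rm;
    }

    // Called for each container from getContainersFromPreviousAttempts,
    // after the container has been added to the workerNodeMap.
    public void recover(Container container) {
        statusFutures.put(container.getId(), new CompletableFuture<>());
        // Fire the async status query; the answer comes back through
        // NMClientAsync.CallbackHandler#onContainerStatusReceived.
        nodeManagerClient.getContainerStatusAsync(container.getId(), container.getNodeId());
    }

    // Delegate of onContainerStatusReceived: complete that container's future.
    public void onStatus(ContainerId containerId, ContainerStatus status) {
        CompletableFuture<ContainerStatus> future = statusFutures.get(containerId);
        if (future != null) {
            future.complete(status);
        }
    }

    // Called from prepareLeadershipAsync: wait until every recovered container
    // has reported, release NEW ones, keep RUNNING ones (the caller turns
    // those into pending slots and drops released ones from the workerNodeMap).
    public CompletableFuture<Void> awaitConfirmed() {
        return CompletableFuture
                .allOf(statusFutures.values().toArray(new CompletableFuture<?>[0]))
                .thenRun(() -> statusFutures.forEach((containerId, future) -> {
                    if (future.join().getState() == ContainerState.NEW) {
                        // Never started under the previous attempt: give it back.
                        resourceManagerClient.releaseAssignedContainer(containerId);
                    }
                }));
    }
}
{code}

One caveat with this sketch: {{awaitConfirmed}} must only be called after 
every recovered container has gone through {{recover}}, otherwise {{allOf}} 
snapshots an incomplete set of futures.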

> Start redundant TaskExecutor when JM failed
> -------------------------------------------
>
>                 Key: FLINK-16215
>                 URL: https://issues.apache.org/jira/browse/FLINK-16215
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.10.0
>            Reporter: YufeiLiu
>            Priority: Major
>
> TaskExecutors will reconnect to the new ResourceManager leader when the JM 
> fails, and the JobMaster will restart and reschedule the job. If the job's 
> slot requests arrive earlier than the TM registrations, the RM will start 
> new workers rather than reuse the existing TMs.
> It's hard to reproduce because TM registration usually comes first, and the 
> timeout check will stop the redundant TMs.
> But I think it would be better if we turned {{recoverWorkerNode}} into an 
> interface method and put the recovered slots into {{pendingSlots}} to wait 
> for TM reconnection.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
