[ https://issues.apache.org/jira/browse/FLINK-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shimin Yang updated FLINK-9567:
-------------------------------
    Description: 
After a restart of the Job Manager in YARN cluster mode, Flink does not release task 
manager containers in some specific cases. According to my observation, the 
reason is that the instance variable *numPendingContainerRequests* in the 
*YarnResourceManager* class does not decrease, because the requested containers 
have not been received yet. After the Job Manager restarts, 
*numPendingContainerRequests* is then increased again by the number of task executors. 

Since the callback function *onContainersAllocated* returns an excess container 
immediately only if *numPendingContainerRequests* <= 0, the number of containers 
keeps growing while only a few of them are actually acting as task managers.
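
For illustration, here is a simplified sketch of the allocation logic as I 
understand it. The field and callback names match the description above; the 
helper method and the rest of the class are assumptions for the sketch, not 
the actual Flink source:

{code:java}
import java.util.List;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;

// Simplified sketch of the callback behaviour described above; not the
// actual Flink code.
class YarnResourceManagerSketch {

    private int numPendingContainerRequests;
    private AMRMClientAsync<?> resourceManagerClient;

    public void onContainersAllocated(List<Container> containers) {
        for (Container container : containers) {
            if (numPendingContainerRequests > 0) {
                // An expected container: count it down and use it for a task executor.
                numPendingContainerRequests--;
                startTaskExecutorInContainer(container); // hypothetical helper
            } else {
                // No outstanding request: hand the excess container back to YARN.
                resourceManagerClient.releaseAssignedContainer(container.getId());
            }
        }
    }

    private void startTaskExecutorInContainer(Container container) {
        // Launch a TaskExecutor in the given container (details omitted).
    }
}
{code}

With a stale *numPendingContainerRequests* > 0 after a restart, the release 
branch is never taken, so every extra container YARN hands over is kept alive 
instead of being returned.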

I think it is important to reset the *numPendingContainerRequests* variable 
after the Job Manager restarts, but it is not clear to me how to do that. There 
is currently no way to decrease *numPendingContainerRequests* other than through 
*onContainersAllocated*. Would it be fine to add a method that operates on the 
*numPendingContainerRequests* variable? Meanwhile, the *ExecutionGraph* restart 
logic holds no handle to the *YarnResourceManager*.
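
Concretely, I am thinking of something along these lines, added to the class 
sketched above (a hypothetical method, not an existing API; where the restart 
logic would obtain a handle to call it is exactly the open question):

{code:java}
// Hypothetical reset hook, not an existing YarnResourceManager method.
// Presumably any container requests still pending with the AMRMClientAsync
// would also need to be removed, otherwise YARN keeps delivering containers
// that are then immediately returned.
void resetPendingContainerRequests() {
    numPendingContainerRequests = 0;
}
{code}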

PS: Another strange thing I noticed is that a request for YARN containers 
sometimes returns many more containers than were requested. Is that normal 
behavior for AMRMClientAsync?



> Flink does not release resource in Yarn Cluster mode
> ----------------------------------------------------
>
>                 Key: FLINK-9567
>                 URL: https://issues.apache.org/jira/browse/FLINK-9567
>             Project: Flink
>          Issue Type: Bug
>          Components: Cluster Management, YARN
>    Affects Versions: 1.5.0
>            Reporter: Shimin Yang
>            Priority: Major
>         Attachments: jobmanager.log
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
