[
https://issues.apache.org/jira/browse/FLINK-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16736924#comment-16736924
]
Till Rohrmann commented on FLINK-10848:
---------------------------------------
I could confirm my suspicion. Apparently, there are Yarn configurations in
which you request a container with {{mem}} memory and {{cores}} vcores but
receive a container with fewer vcores:
{code}
2019-01-07 15:53:04,840 INFO org.apache.flink.yarn.YarnResourceManager -
Requesting new TaskExecutor container with resources <memory:2048, vCores:3>.
Number pending requests 10.
2019-01-07 15:53:05,627 INFO org.apache.flink.yarn.YarnResourceManager -
Received new container: container_1546876305579_0001_02_000002 with capacity
<memory:2048, vCores:1> - Remaining pending container requests: 10
{code}
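The mismatch in the log above matters if the application master removes pending requests by exact resource match: a container granted with fewer vcores than requested never matches the original request, which then stays pending and keeps producing containers. A minimal sketch of that failure mode (hypothetical names, not the real AMRMClient matching logic):

```java
import java.util.ArrayList;
import java.util.List;

public class MatchDemo {
    // Simplified stand-in for a YARN Resource (memory + vcores).
    record Resource(int memoryMb, int vcores) {}

    // Exact-match removal, analogous to looking up the original request
    // for a granted container. Returns true if a request was removed.
    static boolean removeMatching(List<Resource> pending, Resource granted) {
        return pending.remove(granted);
    }

    public static void main(String[] args) {
        List<Resource> pending = new ArrayList<>();
        pending.add(new Resource(2048, 3));       // requested <memory:2048, vCores:3>
        Resource granted = new Resource(2048, 1); // received  <memory:2048, vCores:1>

        // The grant does not match the request, so nothing is removed
        // and the request stays pending.
        System.out.println(removeMatching(pending, granted)); // prints "false"
        System.out.println(pending.size());                   // prints "1"
    }
}
```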
> Flink's Yarn ResourceManager can allocate too many excess containers
> --------------------------------------------------------------------
>
> Key: FLINK-10848
> URL: https://issues.apache.org/jira/browse/FLINK-10848
> Project: Flink
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.3.3, 1.4.2, 1.5.5, 1.6.2
> Reporter: Shuyi Chen
> Assignee: Shuyi Chen
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Currently, neither the YarnFlinkResourceManager nor the YarnResourceManager
> calls removeContainerRequest() when a container is successfully allocated.
> Because the YARN AM-RM protocol is not a delta protocol (please see
> YARN-1902), AMRMClient keeps every added ContainerRequest and resends all of
> them to the RM. In production we observed behavior that verifies this theory:
> 16 containers are allocated and used at cluster startup; when a TM is killed,
> 17 containers are allocated, 1 is used, and 16 excess containers are
> returned; when another TM is killed, 18 containers are allocated, 1 is used,
> and 17 excess containers are returned.
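The arithmetic in the report above can be sketched as a toy simulation of the non-delta protocol (assumed names, not the real Hadoop AMRMClient API): every request ever added is resent on each heartbeat until it is explicitly removed, so one stale batch of 16 requests turns each single replacement request into 17, then 18, allocations.

```java
import java.util.ArrayList;
import java.util.List;

public class ExcessContainerDemo {
    // Requests the client keeps and resends on every heartbeat until
    // they are explicitly removed (which the buggy code never does).
    private final List<String> pendingRequests = new ArrayList<>();
    private final boolean removeOnAllocation;

    ExcessContainerDemo(boolean removeOnAllocation) {
        this.removeOnAllocation = removeOnAllocation;
    }

    void request(int n) {
        for (int i = 0; i < n; i++) {
            pendingRequests.add("container-request");
        }
    }

    // The RM grants one container per outstanding request it sees.
    int heartbeat() {
        int allocated = pendingRequests.size();
        if (removeOnAllocation) {
            pendingRequests.clear(); // the fix: remove each request once satisfied
        }
        return allocated;
    }

    public static void main(String[] args) {
        ExcessContainerDemo buggy = new ExcessContainerDemo(false);
        buggy.request(16);
        int a = buggy.heartbeat(); // 16 allocated at startup, all used
        buggy.request(1);          // one TM killed, one replacement requested
        int b = buggy.heartbeat(); // 17 allocated: 1 used + 16 excess
        buggy.request(1);          // another TM killed
        int c = buggy.heartbeat(); // 18 allocated: 1 used + 17 excess
        System.out.println(a + " " + b + " " + c); // prints "16 17 18"
    }
}
```

With `removeOnAllocation` set to true, the same sequence yields 16, 1, 1 allocations: each heartbeat only sees the requests that are genuinely outstanding.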
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)