[
https://issues.apache.org/jira/browse/YARN-1902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961449#comment-13961449
]
Hadoop QA commented on YARN-1902:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12638692/YARN-1902.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-YARN-Build/3517//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/3517//console
This message is automatically generated.
> Allocation of too many containers when a second request is done with the same
> resource capability
> -------------------------------------------------------------------------------------------------
>
> Key: YARN-1902
> URL: https://issues.apache.org/jira/browse/YARN-1902
> Project: Hadoop YARN
> Issue Type: Bug
> Components: client
> Affects Versions: 2.2.0, 2.3.0
> Reporter: Sietse T. Au
> Labels: patch
> Attachments: YARN-1902.patch
>
>
> Regarding AMRMClientImpl
> Scenario 1:
> Given a ContainerRequest x with Resource y: when addContainerRequest is
> called z times with x, allocate is called, and at least one of the z
> allocated containers is started, then a subsequent addContainerRequest
> call followed by an allocate call to the RM allocates (z+1) containers,
> where only 1 container is expected.
> Scenario 2:
> The same sequence, but no containers are started between the allocate
> calls.
> Analyzing debug logs of the AMRMClientImpl, I have found that (z+1)
> containers are indeed requested in both scenarios, but that only in the
> second scenario is the correct behavior observed.
> Looking at the implementation, I have found that this (z+1) request is
> caused by the structure of the remoteRequestsTable. Because the table is a
> Map<Resource, ResourceRequestInfo>, the ResourceRequestInfo does not hold
> any information about whether a request has already been sent to the RM.
> There are workarounds for this, such as releasing the excess containers
> received.
> The solution implemented is to initialize a new ResourceRequest in
> ResourceRequestInfo when a request has been successfully sent to the RM.
> The patch includes a test covering scenario 1.
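The table structure described above can be sketched with a toy model (this is not the real AMRMClientImpl code; the class name, method signatures, and the String capability key are simplifications for illustration). Because the pending count for a capability is never marked as already sent, a later addContainerRequest re-sends the whole accumulated count:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical toy model of the remoteRequestsTable keyed only by the
// resource capability; the real table also nests priority and location.
public class AskModel {
    private final Map<String, Integer> pendingAsk = new HashMap<>();

    // Each call bumps the pending count for that capability.
    public void addContainerRequest(String capability) {
        pendingAsk.merge(capability, 1, Integer::sum);
    }

    // allocate() ships the current count to the RM. Nothing records that
    // the count was already sent, so the next allocate re-sends it all.
    public int allocate(String capability) {
        return pendingAsk.getOrDefault(capability, 0);
    }
}
```

With z = 3, the first allocate asks for 3 containers; one further addContainerRequest makes the next allocate ask for 4, i.e. (z+1), where only 1 was intended. The fix described in the patch corresponds to starting a fresh ResourceRequest once the ask has been sent, so only the delta is transmitted on the next allocate.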
--
This message was sent by Atlassian JIRA
(v6.2#6252)