[
https://issues.apache.org/jira/browse/YARN-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789977#comment-13789977
]
Hadoop QA commented on YARN-1284:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12607499/YARN-1284.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test files.
{color:red}-1 javac{color}. The patch appears to cause the build to
fail.
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/2149//console
This message is automatically generated.
> LCE: Race condition leaves dangling cgroups entries for killed containers
> -------------------------------------------------------------------------
>
> Key: YARN-1284
> URL: https://issues.apache.org/jira/browse/YARN-1284
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.2.0
> Reporter: Alejandro Abdelnur
> Assignee: Alejandro Abdelnur
> Priority: Blocker
> Attachments: YARN-1284.patch, YARN-1284.patch, YARN-1284.patch,
> YARN-1284.patch
>
>
> When LCE & cgroups are enabled and a container is killed (in this case by
> its owning AM, an MRAM), there seems to be a race condition at the OS level
> between the SIGTERM/SIGKILL being delivered and the OS finishing all the
> necessary cleanup.
> The LCE code, after sending the SIGTERM/SIGKILL and getting the exit code,
> immediately attempts to clean up the cgroups entry for the container. But
> this fails with an error like:
> {code}
> 2013-10-07 15:21:24,359 WARN
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code
> from container container_1381179532433_0016_01_000011 is : 143
> 2013-10-07 15:21:24,359 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Processing container_1381179532433_0016_01_000011 of type
> UPDATE_DIAGNOSTICS_MSG
> 2013-10-07 15:21:24,359 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
> deleteCgroup:
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> 2013-10-07 15:21:24,359 WARN
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
> Unable to delete cgroup at:
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> {code}
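> To make the failure mode concrete: a cgroup directory can only be removed
> once the kernel has drained its tasks file, which happens asynchronously
> after the SIGKILL. The following is only an illustrative sketch of that
> constraint, not the actual CgroupsLCEResourcesHandler code:
> {code}
> import java.io.BufferedReader;
> import java.io.File;
> import java.io.FileReader;
> import java.io.IOException;
>
> class CgroupSketch {
>   // A cgroup dir can be rmdir'ed only when its "tasks" file lists no PIDs.
>   // Right after SIGKILL the file is often still non-empty, so an immediate
>   // delete of /run/cgroups/cpu/hadoop-yarn/container_XXX fails.
>   static boolean cgroupIsEmpty(File cgroupDir) throws IOException {
>     try (BufferedReader r =
>         new BufferedReader(new FileReader(new File(cgroupDir, "tasks")))) {
>       return r.readLine() == null;   // no PIDs left -> safe to delete
>     }
>   }
> }
> {code}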
> CgroupsLCEResourcesHandler.clearLimits() has logic to wait 500 ms for AM
> containers to avoid this problem. It seems this should be done for all
> containers.
> Still, always waiting an extra 500 ms seems too expensive.
> We should look at doing this in a more time-efficient way, maybe spinning
> until deleteCgroup() succeeds, with a minimal sleep between attempts and an
> overall timeout; a rough sketch follows.
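> A minimal sketch of such a retry loop, assuming the cleanup boils down to
> deleting the container's cgroup directory (the helper name and parameters
> here are illustrative, not the actual CgroupsLCEResourcesHandler API):
> {code}
> import java.io.File;
>
> class CgroupDeleteSketch {
>   // Retry the delete with a short sleep between attempts and an overall
>   // timeout, instead of unconditionally sleeping 500 ms before one attempt.
>   static boolean deleteWithTimeout(File cgroupDir, long timeoutMs, long sleepMs)
>       throws InterruptedException {
>     long deadline = System.currentTimeMillis() + timeoutMs;
>     while (System.currentTimeMillis() < deadline) {
>       // rmdir succeeds only once the kernel has released every task in the cgroup
>       if (cgroupDir.delete()) {
>         return true;
>       }
>       Thread.sleep(sleepMs);   // e.g. 20 ms, far cheaper than a fixed 500 ms wait
>     }
>     return cgroupDir.delete(); // one last attempt before giving up
>   }
> }
> {code}
> In the common case the container's processes exit within a few milliseconds
> of the SIGKILL, so this would typically return well before a fixed 500 ms
> wait would have elapsed.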
--
This message was sent by Atlassian JIRA
(v6.1#6144)