[
https://issues.apache.org/jira/browse/YARN-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13790320#comment-13790320
]
Hudson commented on YARN-1284:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk #1573 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1573/])
Add missing file TestCgroupsLCEResourcesHandler for YARN-1284. (sandy:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1530493)
*
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/util/TestCgroupsLCEResourcesHandler.java
YARN-1284. LCE: Race condition leaves dangling cgroups entries for killed
containers. (Alejandro Abdelnur via Sandy Ryza) (sandy:
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1530492)
* /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt
*
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
*
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/util/CgroupsLCEResourcesHandler.java
> LCE: Race condition leaves dangling cgroups entries for killed containers
> -------------------------------------------------------------------------
>
> Key: YARN-1284
> URL: https://issues.apache.org/jira/browse/YARN-1284
> Project: Hadoop YARN
> Issue Type: Bug
> Components: nodemanager
> Affects Versions: 2.2.0
> Reporter: Alejandro Abdelnur
> Assignee: Alejandro Abdelnur
> Priority: Blocker
> Fix For: 2.3.0
>
> Attachments: YARN-1284.patch, YARN-1284.patch, YARN-1284.patch,
> YARN-1284.patch, YARN-1284.patch
>
>
> When LCE & cgroups are enabled and a container is killed (in this case
> by its owning AM, an MRAM), there seems to be a race condition at the OS
> level between sending the SIGTERM/SIGKILL and the OS completing all
> necessary cleanup.
> LCE code, after sending the SIGTERM/SIGKILL and getting the exit code,
> immediately attempts to clean up the cgroups entry for the container, but
> this fails with an error like:
> {code}
> 2013-10-07 15:21:24,359 WARN
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code
> from container container_1381179532433_0016_01_000011 is : 143
> 2013-10-07 15:21:24,359 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
> Processing container_1381179532433_0016_01_000011 of type
> UPDATE_DIAGNOSTICS_MSG
> 2013-10-07 15:21:24,359 DEBUG
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
> deleteCgroup:
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> 2013-10-07 15:21:24,359 WARN
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler:
> Unable to delete cgroup at:
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> {code}
> CgroupsLCEResourcesHandler.clearLimits() has logic to wait 500 ms for AM
> containers to avoid this problem; it seems this should be done for all
> containers.
> Still, waiting an extra 500 ms seems too expensive.
> We should look at a more time-efficient way of doing this, perhaps
> spinning with a minimal sleep and a timeout while deleteCgroup() cannot
> complete, as in the sketch after this quoted description.
--
This message was sent by Atlassian JIRA
(v6.1#6144)