[ 
https://issues.apache.org/jira/browse/YARN-1284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13789978#comment-13789978
 ] 

Alejandro Abdelnur commented on YARN-1284:
------------------------------------------

For the record, I've spent a couple of hours trying an alternate approach 
suggested by [~rvs] while chatting offline about this. His suggestion was to 
initialize a trash cgroup next to the container cgroups and, when a container 
is cleaned up, transition the <container>/tasks to trash/tasks, doing the 
equivalent of a {{cat <container>/tasks >> trash/tasks}}. I tried doing that, 
but it seems some of the Java IO native calls make a system call which is not 
supported by the cgroups filesystem implementation, and I was getting the 
following stack trace:

{code}
java.io.IOException: Argument list too long
java.io.IOException: Argument list too long
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:318)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
...
{code}
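
The error suggests the cgroups tasks pseudo-file only accepts a single PID per 
write(2) call, so a bulk byte-stream copy via IOUtils.copyBytes() gets rejected 
(E2BIG reads as "Argument list too long"). A rough sketch of what a per-PID move 
could look like (illustrative only, this is not the attached patch; class name 
and paths are made up):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.List;

/**
 * Illustrative sketch only: move every task from a container cgroup into a
 * "trash" cgroup by writing one PID per write, assuming the tasks pseudo-file
 * rejects multi-PID writes.
 */
public class CgroupTaskMover {

  static void moveTasks(Path containerTasks, Path trashTasks) throws IOException {
    // Read the PIDs currently attached to the container cgroup.
    List<String> pids = Files.readAllLines(containerTasks, StandardCharsets.UTF_8);
    for (String pid : pids) {
      if (pid.trim().isEmpty()) {
        continue;
      }
      try {
        // Each write carries a single PID; the kernel attaches that task to
        // the target cgroup as a side effect of the write.
        Files.write(trashTasks, pid.trim().getBytes(StandardCharsets.UTF_8),
            StandardOpenOption.WRITE);
      } catch (IOException e) {
        // The task may have exited between the read and the write; skip it.
      }
    }
  }

  public static void main(String[] args) throws IOException {
    // Hypothetical paths, following the layout from the log below.
    moveTasks(
        Paths.get("/run/cgroups/cpu/hadoop-yarn/container_X/tasks"),
        Paths.get("/run/cgroups/cpu/hadoop-yarn/trash/tasks"));
  }
}
{code}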

Given this, and besides the fact that I didn't get it to work properly, I would 
not be comfortable with this approach as it may behave differently across Linux versions.

> LCE: Race condition leaves dangling cgroups entries for killed containers
> -------------------------------------------------------------------------
>
>                 Key: YARN-1284
>                 URL: https://issues.apache.org/jira/browse/YARN-1284
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.2.0
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>            Priority: Blocker
>         Attachments: YARN-1284.patch, YARN-1284.patch, YARN-1284.patch, 
> YARN-1284.patch
>
>
> When LCE & cgroups are enabled and a container is killed (in this case by its 
> owning AM, an MRAM), there seems to be a race condition at the OS level 
> between delivering the SIGTERM/SIGKILL and the OS completing all the necessary cleanup. 
> The LCE code, after sending the SIGTERM/SIGKILL and getting the exit code, 
> immediately attempts to clean up the cgroups entry for the container. But 
> this fails with an error like:
> {code}
> 2013-10-07 15:21:24,359 WARN 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code 
> from container container_1381179532433_0016_01_000011 is : 143
> 2013-10-07 15:21:24,359 DEBUG 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container:
>  Processing container_1381179532433_0016_01_000011 of type 
> UPDATE_DIAGNOSTICS_MSG
> 2013-10-07 15:21:24,359 DEBUG 
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> deleteCgroup: 
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> 2013-10-07 15:21:24,359 WARN 
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler: 
> Unable to delete cgroup at: 
> /run/cgroups/cpu/hadoop-yarn/container_1381179532433_0016_01_000011
> {code}
> CgroupsLCEResourcesHandler.clearLimits() has logic to wait 500 ms for AM 
> containers to avoid this problem. It seems this should be done for all 
> containers.
> Still, waiting an extra 500 ms seems too expensive.
> We should look at a more time-efficient way of doing this, perhaps spinning 
> until the deleteCgroup() succeeds, with a minimal sleep between attempts and 
> a timeout.
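> A rough sketch of what such a spin-with-timeout delete might look like 
> (illustrative only, not the attached patch; method and parameter names are made up):
> {code}
> // Retry cgroup directory deletion with a short sleep between attempts and an
> // overall timeout, instead of a fixed 500 ms wait.
> boolean deleteCgroupWithRetry(File cgroupDir, long timeoutMs, long sleepMs)
>     throws InterruptedException {
>   long deadline = System.currentTimeMillis() + timeoutMs;
>   while (System.currentTimeMillis() < deadline) {
>     // rmdir only succeeds once the kernel has released all tasks in the cgroup.
>     if (cgroupDir.delete()) {
>       return true;
>     }
>     Thread.sleep(sleepMs);
>   }
>   // One last attempt after the deadline.
>   return cgroupDir.delete();
> }
> {code}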



--
This message was sent by Atlassian JIRA
(v6.1#6144)
