[ https://issues.apache.org/jira/browse/YARN-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16765895#comment-16765895 ]

Zhankun Tang commented on YARN-9294:
------------------------------------

[[email protected]],

Have you tried a manual cgroup isolation test without YARN to reproduce it?

For example, create a directory under /sys/fs/cgroup/devices/hadoop-yarn, echo the 
device values into that cgroup's devices.deny file, and repeatedly verify that the 
process is isolated as expected; a sketch of this is below.
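A minimal sketch of such a manual test, assuming a cgroup v1 devices hierarchy 
mounted at /sys/fs/cgroup/devices and /dev/nvidia1 as the GPU to hide (the 
test-gpu cgroup name is just an example):
{code:bash}
# create a test cgroup under the NM's devices hierarchy (name is illustrative)
sudo mkdir -p /sys/fs/cgroup/devices/hadoop-yarn/test-gpu

# deny read/write/mknod access to /dev/nvidia1 (NVIDIA character devices use major 195)
echo 'c 195:1 rwm' | sudo tee /sys/fs/cgroup/devices/hadoop-yarn/test-gpu/devices.deny

# move the current shell into the cgroup, then check what it can still reach
echo $$ | sudo tee /sys/fs/cgroup/devices/hadoop-yarn/test-gpu/tasks
nvidia-smi    # GPU 1 should no longer be usable from this shell
cat /sys/fs/cgroup/devices/hadoop-yarn/test-gpu/devices.list
{code}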

I used to verify cgroup parameters with cgget and cgdelete. The tools can be 
installed with:
{code:java}
yum install libcgroup
yum install libcgroup-tools
cgget -r memory.limit_in_bytes -g memory:hadoop-yarn/container_1542945107795_0003_01_000002{code}
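After such a manual check, cgdelete can clean up the test cgroup; the 
devices:hadoop-yarn/test-gpu path below is just the example cgroup from the 
sketch above:
{code:bash}
# remove the example cgroup created for the manual test
sudo cgdelete -g devices:hadoop-yarn/test-gpu
{code}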
 

But I just verified in my Ubuntu VM that "cgget" cannot show the denied devices 
even though the isolation is working. Maybe "cgget" has the same issue on RHEL.
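If cgget does not display the devices controller entries, the cgroup files can 
also be read directly. A sketch, assuming the container's device cgroup lives 
under devices:hadoop-yarn (the container id is the one from the cgget example above):
{code:bash}
# devices.list shows what the cgroup may still access; a denied GPU entry should not appear
cat /sys/fs/cgroup/devices/hadoop-yarn/container_1542945107795_0003_01_000002/devices.list
{code}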

From your description, it seems we should first dig into reproducing the flaky 
GPU isolation? Or try a different OS kernel version?

 

> Potential race condition in setting GPU cgroups & execute command in the 
> selected cgroup
> ----------------------------------------------------------------------------------------
>
>                 Key: YARN-9294
>                 URL: https://issues.apache.org/jira/browse/YARN-9294
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>    Affects Versions: 2.10.0
>            Reporter: Keqiu Hu
>            Assignee: Keqiu Hu
>            Priority: Critical
>
> Environment is latest branch-2 head
> OS: RHEL 7.4
> *Observation*
> Out of ~10 container allocations with GPU requirement, at least 1 of the 
> allocated containers would lose GPU isolation. Even if I asked for 1 GPU, I 
> could still have visibility to all GPUs on the same machine when running 
> nvidia-smi.
> The funny thing is that even though the process has visibility to all GPUs at 
> the moment container-executor is executed (say ordinals 0,1,2,3), cgroups jails 
> the process's access down to that single GPU after some time.
> The underlying process trying to access the GPU takes that initial information 
> as the source of truth and tries to access physical GPU 0, which is not actually 
> available to the process. This results in a 
> [CUDA_ERROR_INVALID_DEVICE: invalid device ordinal] error.
> Validated the container-executor commands are correct:
> {code:java}
> PrivilegedOperationExecutor command: 
> [/export/apps/hadoop/nodemanager/latest/bin/container-executor, --module-gpu, 
> --container_id, container_e22_1549663278916_0249_01_000001, --excluded_gpus, 
> 0,1,2,3]
> PrivilegedOperationExecutor command: 
> [/export/apps/hadoop/nodemanager/latest/bin/container-executor, khu, khu, 0, 
> application_1549663278916_0249, 
> /grid/a/tmp/yarn/nmPrivate/container_e22_1549663278916_0249_01_000001.tokens, 
> /grid/a/tmp/yarn, /grid/a/tmp/userlogs, 
> /export/apps/jdk/JDK-1_8_0_172/jre/bin/java, -classpath, ..., -Xmx256m, 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer,
>  khu, application_1549663278916_0249, 
> container_e22_1549663278916_0249_01_000001, ltx1-hcl7552.grid.linkedin.com, 
> 8040, /grid/a/tmp/yarn]
> {code}
> So most likely a race condition between these two operations? 
> cc [~jhung]
> Another potential theory is the cgroups creation for the container actually 
> failed but the error was swallowed silently.



