[
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216956#comment-16216956
]
Sunil G commented on YARN-7224:
-------------------------------
Thanks [~leftnoteasy] for the effort.
A few comments:
# {{YarnConfiguration.NVIDIA_DOCKER_PLUGIN_ENDPOINT}}: this is vendor specific,
hence would it be better to keep it in a separate config file altogether?
Something like a JSON file:
{code}
{
  "devices" : ["gpu", "fpga"],
  "gpu" : {
    "make" : "nvidia",
    "endpoint" : "http://localhost:3048"
  }
}
{code}
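If we go this way, loading the file on the NM side is simple. Below is a minimal
sketch assuming Jackson is on the classpath; the {{GpuVendorConfig}} holder
class, its field names and the file location are hypothetical, not part of the
patch.
{code}
// Minimal sketch, not part of the patch: reading the hypothetical vendor
// config JSON above with Jackson. Class and key names are illustrative.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.io.IOException;

public class GpuVendorConfig {
  private final String make;
  private final String endpoint;

  private GpuVendorConfig(String make, String endpoint) {
    this.make = make;
    this.endpoint = endpoint;
  }

  // Loads e.g. a gpu-vendor-config.json placed in the NM conf dir
  // (hypothetical path, for illustration only).
  public static GpuVendorConfig load(File configFile) throws IOException {
    JsonNode gpu = new ObjectMapper().readTree(configFile).path("gpu");
    return new GpuVendorConfig(gpu.path("make").asText(),
        gpu.path("endpoint").asText());
  }

  public String getMake() { return make; }
  public String getEndpoint() { return endpoint; }
}
{code}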
# In {{GpuDockerCommandPlugin#init}}, the *else* is not needed since the *if*
branch already returns (see the sketch below).
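Just to illustrate the shape I mean; the names here are stand-ins, not the
patch code:
{code}
// Hypothetical sketch: with the early return, the else wrapper is
// unnecessary and the GPU-specific path stays un-nested.
private void init(Container container) throws ContainerExecutionException {
  if (!requestsGpu(container)) {   // requestsGpu(...) is a stand-in name
    return;                        // early return replaces the else branch
  }
  // GPU-specific initialization continues here without extra nesting.
  initializeGpuVolumes(container); // stand-in for the real init logic
}
{code}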
# To me, {{GpuDockerCommandPlugin}} is tightly coupled with nvidia. Could we
rename it to {{NvidiaGpuDockerCommandPlugin}}?
# I could see a validation to ensure that the volume always has ":ro". Do we
need to validate this?
{{-volume=nvidia_driver_352.68:/usr/local/nvidia:ro}}
# In {{getGpuIndexFromDeviceName}}, we might need to handle the exception from
{{parseInt}} (see the sketch below).
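Something along these lines would do. This is only a sketch: the method name
matches the patch, but the body and the assumed {{/dev/nvidia<N>}} naming
convention are mine.
{code}
// Sketch: parse the trailing digits and translate a bad device name into a
// ContainerExecutionException instead of an unchecked NumberFormatException.
private int getGpuIndexFromDeviceName(String deviceName)
    throws ContainerExecutionException {
  int i = deviceName.length();
  while (i > 0 && Character.isDigit(deviceName.charAt(i - 1))) {
    i--;
  }
  try {
    return Integer.parseInt(deviceName.substring(i));
  } catch (NumberFormatException e) {
    // Hit when there are no trailing digits, e.g. "/dev/nvidiactl".
    throw new ContainerExecutionException(
        "Failed to parse GPU index from device name: " + deviceName);
  }
}
{code}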
# Since we use Serializable for storing the resource mapping, we convert back
and forth between int and String. I think it's better to keep all GPU device
numbers as Strings and access them via a map (see the sketch below).
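A hypothetical sketch of what I mean, with the mapping kept as Strings end to
end so nothing needs re-parsing after deserialization (all names are
illustrative):
{code}
import java.io.Serializable;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: containerId -> assigned GPU device numbers, stored as Strings
// so no int<->String conversion is needed when the mapping is read back.
public class AssignedGpuMapping implements Serializable {
  private static final long serialVersionUID = 1L;

  private final Map<String, List<String>> assignedGpus = new HashMap<>();

  public void assign(String containerId, List<String> gpuDeviceNumbers) {
    assignedGpus.put(containerId, gpuDeviceNumbers);
  }

  public List<String> getAssignedGpus(String containerId) {
    return assignedGpus.get(containerId);
  }
}
{code}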
# In {{updateDockerRunCommand}}, why are we setting the source and destination
device to the same value?
{{dockerRunCommand.addDevice(value, value);}}
# Could {{getAssignedGpus}} return a {{Set<String>}}? Then in
{{getGpuIndexFromDeviceName}} we could pass the device name and look it up
directly, as sketched below.
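That would reduce the lookup to a plain membership check, roughly like this
(illustrative only, not the patch code):
{code}
import java.util.Set;

// Sketch: with a Set<String> of assigned device names, deciding whether a
// device from the plugin output was assigned is a plain membership check.
final class AssignedGpuLookup {
  private AssignedGpuLookup() { }

  static boolean isAssigned(Set<String> assignedGpuDevices, String deviceName) {
    // e.g. assignedGpuDevices = {"/dev/nvidia0", "/dev/nvidia1"}
    return assignedGpuDevices.contains(deviceName);
  }
}
{code}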
# One doubt:
{code}
// Cannot get all assigned Gpu devices from docker plugin output
if (foundGpuDevices < assignedResources.size()) {
  // TODO: We can do better for this, instead directly compare device
  // name, we should compare device's minor number with specified GPU
  // minor number.
  throw new ContainerExecutionException(
      "Cannot get all assigned Gpu devices from docker plugin output");
{code}
Instead of having {{foundGpuDevices}}, this should be part of the NM's resource
capability vector. Ideally it's better to associate device availability and its
usage with the NM's resource usage vector itself rather than recomputing it
here?
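For reference, a minimal sketch of the set-based comparison the TODO above
hints at, wherever it ends up living. All names here are illustrative, not from
the patch; {{ContainerExecutionException}} is the NM runtime exception already
used in the quoted code.
{code}
import java.util.HashSet;
import java.util.Set;

// Sketch: compare the assigned GPU minor numbers against those reported by
// the docker plugin, instead of only counting how many matched.
final class GpuDeviceCheck {
  private GpuDeviceCheck() { }

  static void verifyAllAssignedGpusFound(Set<Integer> assignedMinorNumbers,
      Set<Integer> minorNumbersFromPluginOutput)
      throws ContainerExecutionException {
    Set<Integer> missing = new HashSet<>(assignedMinorNumbers);
    missing.removeAll(minorNumbersFromPluginOutput);
    if (!missing.isEmpty()) {
      throw new ContainerExecutionException(
          "Docker plugin output is missing assigned GPU minor numbers: "
              + missing);
    }
  }
}
{code}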
> Support GPU isolation for docker container
> ------------------------------------------
>
> Key: YARN-7224
> URL: https://issues.apache.org/jira/browse/YARN-7224
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Wangda Tan
> Assignee: Wangda Tan
> Attachments: YARN-7224.001.patch, YARN-7224.002-wip.patch,
> YARN-7224.003.patch, YARN-7224.004.patch, YARN-7224.005.patch
>
>
> This patch is to address issues when docker containers are being used:
> 1. GPU driver and nvidia libraries: If GPU drivers and NV libraries are
> pre-packaged inside the docker image, they could conflict with the driver and
> nvidia libraries installed on the host OS. An alternative solution is to detect
> the host OS's installed drivers and devices and mount them when launching the
> docker container. Please refer to \[1\] for more details.
> 2. Image detection:
> From \[2\], the challenge is:
> bq. Mounting user-level driver libraries and device files clobbers the
> environment of the container, it should be done only when the container is
> running a GPU application. The challenge here is to determine if a given
> image will be using the GPU or not. We should also prevent launching
> containers based on a Docker image that is incompatible with the host NVIDIA
> driver version, you can find more details on this wiki page.
> 3. GPU isolation.
> *Proposed solution*:
> a. Use nvidia-docker-plugin \[3\] to address issue #1; this is the same
> solution used by K8S \[4\]. Issue #2 could be addressed in a separate JIRA.
> We won't ship nvidia-docker-plugin with our releases and we require the
> cluster admin to preinstall nvidia-docker-plugin to use GPU+docker support on
> YARN. "nvidia-docker" is a wrapper around the docker binary which can address
> #3 as well; however, "nvidia-docker" doesn't provide the same semantics as
> docker, and it needs additional environment setup such as PATH/LD_LIBRARY_PATH
> to use it. To avoid introducing additional issues, we plan to use the
> nvidia-docker-plugin + docker binary approach.
> b. To address GPU drivers and nvidia libraries, we use nvidia-docker-plugin
> \[3\] to create a volume which includes the GPU-related libraries and mount it
> when the docker container is launched. Changes include:
> - Instead of using {{volume-driver}}, this patch adds a {{docker volume
> create}} command to c-e and the NM Java side. The reason is that
> {{volume-driver}} can only use a single volume driver for each launched docker
> container.
> - Updated {{c-e}} and the Java side: if a mounted volume is a named volume in
> docker, skip checking file existence. (Named volumes still need to be added to
> the permitted list of container-executor.cfg.)
> c. To address the isolation issue:
> We found that cgroups + docker doesn't work under newer docker versions which
> use {{runc}} as the default runtime. Setting {{--cgroup-parent}} to a cgroup
> which includes any {{devices.deny}} rule causes the docker container to fail
> to launch. Instead, this patch passes the allowed GPU devices via {{--device}}
> to the docker launch command.
> References:
> \[1\] https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-driver
> \[2\] https://github.com/NVIDIA/nvidia-docker/wiki/Image-inspection
> \[3\] https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker-plugin
> \[4\] https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/