[ https://issues.apache.org/jira/browse/YARN-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16512022#comment-16512022 ]

Wangda Tan commented on YARN-8423:
----------------------------------

A possible simple fix to work around the issue is to mark GPUs as "releasing" for a 
container in the killing stage, and add wait logic inside 
{{GpuResourceAllocator#assignGpus}} before throwing the exception. But we may need a 
more comprehensive solution, since we have more resources to add and this is a 
common, severe issue for any resource that needs hard binding (like GPU / FPGA / CPU 
hard-binding: YARN-8320, cc: [~cheersyang]). 
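
To make the idea concrete, here is a minimal, self-contained sketch of the 
wait-before-throw logic. This is not the actual {{GpuResourceAllocator}} code; the 
field names, timeout values, and the {{markReleasing}} / {{completeRelease}} hooks 
are hypothetical and only illustrate the approach:

{code}
// Sketch only, NOT the real GpuResourceAllocator. Idea: if a container in the
// killing stage still holds GPUs, poll for a bounded time before giving up.
import java.util.HashSet;
import java.util.Set;

public class GpuWaitSketch {
  private static final long MAX_WAIT_MS = 10_000;      // assumed bound, tunable
  private static final long POLL_INTERVAL_MS = 500;    // assumed poll interval

  // GPUs currently free, and GPUs held by containers that are being killed.
  private final Set<Integer> availableGpus = new HashSet<>();
  private final Set<Integer> releasingGpus = new HashSet<>();

  // Hypothetical hook: called when a container enters the killing stage.
  public synchronized void markReleasing(Set<Integer> gpusOfDyingContainer) {
    releasingGpus.addAll(gpusOfDyingContainer);
  }

  // Hypothetical hook: called once the container is gone and its GPUs are reusable.
  public synchronized void completeRelease(Set<Integer> gpus) {
    releasingGpus.removeAll(gpus);
    availableGpus.addAll(gpus);
    notifyAll();
  }

  // Sketch of assignGpus: wait while the request could still be satisfied by
  // GPUs in "releasing" state, instead of failing immediately.
  public synchronized Set<Integer> assignGpus(int numRequested)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + MAX_WAIT_MS;
    while (availableGpus.size() < numRequested
        && availableGpus.size() + releasingGpus.size() >= numRequested
        && System.currentTimeMillis() < deadline) {
      wait(POLL_INTERVAL_MS);
    }
    if (availableGpus.size() < numRequested) {
      // The real code would throw ResourceHandlerException here.
      throw new IllegalStateException(
          "Failed to allocate " + numRequested + " GPUs after waiting");
    }
    Set<Integer> assigned = new HashSet<>();
    for (Integer gpu : availableGpus) {
      assigned.add(gpu);
      if (assigned.size() == numRequested) {
        break;
      }
    }
    availableGpus.removeAll(assigned);
    return assigned;
  }
}
{code}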

Attached kill-container-nm.log, which shows the NM takes 2 minutes to kill the 
(Docker) container. 

[~eyang], [~ebadger], [[email protected]], is it normal for it to take minutes to 
kill a Docker container? If yes, is there any way to speed it up? If not, what can I 
provide to troubleshoot the issue? I see this happen frequently in our 
docker-in-docker setup.

> GPU does not get released even though the application gets killed.
> ------------------------------------------------------------------
>
>                 Key: YARN-8423
>                 URL: https://issues.apache.org/jira/browse/YARN-8423
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: yarn
>            Reporter: Sumana Sathish
>            Assignee: Wangda Tan
>            Priority: Critical
>         Attachments: kill-container-nm.log
>
>
> Run a TensorFlow app requesting one GPU.
> Kill the application once the GPU is allocated.
> Query the NodeManager once the application is killed. We see that the GPU is not 
> being released.
> {code}
>  curl -i <NM>/ws/v1/node/resources/yarn.io%2Fgpu
> {"gpuDeviceInformation":{"gpus":[{"productName":"<productName>","uuid":"GPU-<UID>","minorNumber":0,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}},{"productName":"<productName>","uuid":"GPU-<UID>","minorNumber":1,"gpuUtilizations":{"overallGpuUtilization":0.0},"gpuMemoryUsage":{"usedMemoryMiB":73,"availMemoryMiB":12125,"totalMemoryMiB":12198},"temperature":{"currentGpuTemp":28.0,"maxGpuTemp":85.0,"slowThresholdGpuTemp":82.0}}],"driverVersion":"<version>"},"totalGpuDevices":[{"index":0,"minorNumber":0},{"index":1,"minorNumber":1}],"assignedGpuDevices":[{"index":0,"minorNumber":0,"containerId":"container_<containerID>"}]}
> {code}


