[ https://issues.apache.org/jira/browse/MESOS-8038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17089005#comment-17089005 ]

Charles Natali edited comment on MESOS-8038 at 4/21/20, 7:46 PM:
-----------------------------------------------------------------

[~bmahler]

I have a way to reproduce it systematically, albeit a very contrived one: 
syscall fault injection.

 

Basically I just continuously start tasks that each allocate 1 GPU and run 
"exit 0" (see the attached Python framework).
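
For reference, the gist of the framework is roughly the following. This is 
just a sketch using the legacy Python scheduler bindings, not the attached 
start_short_tasks_gpu.py; the master address and the cpus/mem sizes are 
placeholders:
{noformat}
#!/usr/bin/env python

import uuid

import mesos.interface
import mesos.native
from mesos.interface import mesos_pb2


def scalar(resources, name, value):
    # Helper to append a scalar resource (cpus/mem/gpus) to a TaskInfo.
    r = resources.add()
    r.name = name
    r.type = mesos_pb2.Value.SCALAR
    r.scalar.value = value


class GpuChurnScheduler(mesos.interface.Scheduler):
    """Keeps launching short-lived tasks that each grab 1 GPU and exit."""

    def resourceOffers(self, driver, offers):
        for offer in offers:
            gpus = sum(r.scalar.value for r in offer.resources
                       if r.name == "gpus")
            if gpus < 1:
                driver.declineOffer(offer.id)
                continue

            task = mesos_pb2.TaskInfo()
            task.task_id.value = str(uuid.uuid4())
            task.slave_id.value = offer.slave_id.value
            task.name = "gpu-exit-0"
            task.command.value = "exit 0"
            scalar(task.resources, "cpus", 0.1)  # placeholder sizes
            scalar(task.resources, "mem", 32)
            scalar(task.resources, "gpus", 1)

            driver.launchTasks(offer.id, [task])


if __name__ == "__main__":
    framework = mesos_pb2.FrameworkInfo()
    framework.user = ""  # let Mesos pick the current user
    framework.name = "gpu-churn"
    # Frameworks must opt in to be offered GPU resources.
    framework.capabilities.add().type = \
        mesos_pb2.FrameworkInfo.Capability.GPU_RESOURCES

    mesos.native.MesosSchedulerDriver(
        GpuChurnScheduler(), framework, "127.0.0.1:5050").run()
{noformat}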

 

Then I run the following to inject a few seconds' delay into every rmdir 
syscall made by the agent (the idea being that container cleanup, i.e. cgroup 
and sandbox removal, goes through rmdir, so this slows down container 
destruction):

 
{noformat}
# strace -p $(pgrep -f mesos-agent) -f -e inject=rmdir:delay_enter=3000000 -o /dev/null
{noformat}
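
(Note: delay_enter is in microseconds, so that's a 3 second pause on entry to 
each rmdir.)

While this is running, I keep an eye on what the agent thinks it has with a 
quick polling script along these lines. Again only a sketch: it assumes the 
agent listens on localhost:5051, and the /state JSON layout (top-level 
"resources", per-executor "resources") is from memory:
{noformat}
#!/usr/bin/env python3
"""Poll the agent's /state endpoint and print total vs allocated GPUs."""

import json
import time
import urllib.request

AGENT = "http://127.0.0.1:5051/state"  # assumed agent address

while True:
    state = json.load(urllib.request.urlopen(AGENT))
    total = state.get("resources", {}).get("gpus", 0)
    # Approximate allocation by summing the gpus held by live executors.
    allocated = sum(
        e.get("resources", {}).get("gpus", 0)
        for f in state.get("frameworks", [])
        for e in f.get("executors", []))
    print("gpus: %s total, %s allocated" % (total, allocated))
    time.sleep(1)
{noformat}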
 

After less than a minute, tasks start failing with this error:
{noformat}
Failed to launch container: Requested 1 gpus but only 0 available
{noformat}
 

I'll try to see if I can find a simpler reproducer, but this seems to fail 
systematically for me.

 



> Launching GPU task sporadically fails.
> --------------------------------------
>
>                 Key: MESOS-8038
>                 URL: https://issues.apache.org/jira/browse/MESOS-8038
>             Project: Mesos
>          Issue Type: Bug
>          Components: containerization, gpu
>    Affects Versions: 1.4.0
>            Reporter: Sai Teja Ranuva
>            Assignee: Zhitao Li
>            Priority: Critical
>         Attachments: mesos-master.log, mesos-slave-with-issue-uber.txt, 
> mesos-slave.INFO.log, start_short_tasks_gpu.py
>
>
> I was running a job which uses GPUs. It runs fine most of the time. 
> But occasionally I see the following message in the mesos log.
> "Collect failed: Requested 1 but only 0 available"
> Followed by the executor getting killed and the tasks getting lost. This 
> happens even before the job starts. A little search in the code base points 
> me to something related to the GPU resource being the probable cause.
> There is no deterministic way that this can be reproduced. It happens 
> occasionally.
> I have attached the slave log for the issue.
> Using 1.4.0 Mesos Master and 1.4.0 Mesos Slave.



