>If two or more containers are sharing the same physical GPU, how does the
>time slicing work?

I think this is the same as multiple processes sharing the same physical GPU.
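For example (an untested sketch of my own, nothing Mesos- or
Docker-specific): if each process, containerized or not, runs a long
kernel on device 0, the driver multiplexes their CUDA contexts on the
one GPU:

    // spin.cu -- keep the GPU busy long enough to watch two processes share it
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void spin(long long cycles) {
        long long start = clock64();
        while (clock64() - start < cycles) { }  // busy-wait on the device
    }

    int main() {
        cudaSetDevice(0);            // both processes target the same physical GPU
        spin<<<1, 1>>>(1LL << 31);   // a couple of seconds on typical clocks
        cudaError_t err = cudaDeviceSynchronize();
        printf("kernel finished: %s\n", cudaGetErrorString(err));
        return 0;
    }

Start two copies at once and nvidia-smi shows both processes on GPU 0;
the driver switches between their contexts (typically at kernel
boundaries), exactly as it would without containers.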

On Tue, Jan 5, 2016 at 1:48 AM, Anshuman Goswami <[email protected]
> wrote:

> Sorry, I am not very familiar with Docker terminology.
> If two or more containers are sharing the same physical GPU, how does the
> time slicing work?
>
> On Mon, Jan 4, 2016 at 9:17 AM, haosdent <[email protected]> wrote:
>
>> As far as I know, GPUs have different nodes; the nvidia-docker image
>> isolates containers by specifying which node to use through the GPU=
>> environment variable.
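>>
>> For illustration only (my own untested sketch; it reuses the GPU
>> variable named above, though nvidia-docker itself may wire this up
>> differently), a program in the container could pick its device like so:
>>
>>     #include <cstdio>
>>     #include <cstdlib>
>>     #include <cuda_runtime.h>
>>
>>     int main() {
>>         // Read the node index from the GPU environment variable, e.g. GPU=1.
>>         const char *gpu = getenv("GPU");
>>         int device = gpu ? atoi(gpu) : 0;   // fall back to device 0
>>         cudaError_t err = cudaSetDevice(device);
>>         printf("using device %d: %s\n", device, cudaGetErrorString(err));
>>         return 0;
>>     }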
>>
>> On Tue, Jan 5, 2016 at 1:08 AM, Anshuman Goswami <
>> [email protected]> wrote:
>>
>>> I would like to know the granularity of the isolation. Is it at kernel
>>> boundaries?
>>>
>>> On Mon, Jan 4, 2016 at 8:46 AM, tommy xiao <[email protected]> wrote:
>>>
>>>> Currently, I use Docker to support GPUs.
>>>>
>>>> 2016-01-04 15:15 GMT+08:00 Nan Xiao <[email protected]>:
>>>>
>>>>> Hi all,
>>>>>
>>>>> I am investigating the GPU feature in Mesos and want to clarify some
>>>>> doubts:
>>>>>
>>>>> (1) This post (
>>>>> http://stackoverflow.com/questions/27872558/does-apache-mesos-recognize-gpu-cores
>>>>> )
>>>>> says "Mesos does not yet have support for gpu isolation". What does
>>>>> "GPU isolation" mean here?
>>>>>
>>>>> (2) A recent post
>>>>> (https://mesosphere.com/blog/2015/11/10/mesos-nvidia-gpus/) says
>>>>> "Mesos now supports GPUs."
>>>>> But in the Mesos release notes
>>>>> (
>>>>> https://git-wip-us.apache.org/repos/asf?p=mesos.git;a=blob_plain;f=CHANGELOG;hb=0.26.0
>>>>> ),
>>>>> I can't see any entries about GPUs. In the v0.26 code, I also can't
>>>>> find any GPU-specific code. So what is the current status of GPU
>>>>> support in Mesos?
>>>>>
>>>>> Thanks very much in advance!
>>>>>
>>>>> Best Regards
>>>>> Nan Xiao
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Deshi Xiao
>>>> Twitter: xds2000
>>>> E-mail: xiaods(AT)gmail.com
>>>>
>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Haosdent Huang
>>
>
>


-- 
Best Regards,
Haosdent Huang
