Hello,

  I have a card with two NVIDIA GPUs. Currently I'm using it in one 
LXC container. I compiled the NVIDIA drivers from the official web site 
inside the container, and created the /dev/nvidia0, /dev/nvidia1 and 
/dev/nvidiactl devices in it. From the container I can start an X server 
on :0, then use TurboVNC and VirtualGL to access the 3D graphics 
capabilities of the card.
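For reference, this is how I create the device nodes (NVIDIA's character devices use major 195, with one minor per GPU and minor 255 for nvidiactl):

```shell
# NVIDIA character devices: major 195, minor = GPU index, nvidiactl = minor 255
mknod -m 666 /dev/nvidia0   c 195 0
mknod -m 666 /dev/nvidia1   c 195 1
mknod -m 666 /dev/nvidiactl c 195 255
```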

  As I have two GPUs, I'd like to dedicate one GPU to one container and 
the other GPU to a second container. My approach is to compile the 
NVIDIA drivers in both containers, create /dev/nvidia0 and /dev/nvidiactl 
in one container and /dev/nvidia1 and /dev/nvidiactl in the other. 
Then I should be able to start an X server in each container. The main 
problem is that both containers try to use display :0, even if I start 
one with xinit -display :2.
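If I read the xinit man page right, options placed before the `--` separator go to the client, not the server, so `-display :2` wouldn't change which display the server starts on. A sketch of what I think the invocations should look like (client path and VT numbers are guesses):

```shell
# Container 1: X server on display :0, virtual terminal 1
xinit /usr/bin/xterm -- /usr/bin/X :0 vt1

# Container 2: X server on display :2, virtual terminal 2
xinit /usr/bin/xterm -- /usr/bin/X :2 vt2
```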

So I'd like to know whether this approach seems doable, and whether 
people who have already achieved this can share their configuration for 
cgroups, ttys and the nvidia devices.

Currently I'm using:

lxc1 config:
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0 used for X
lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1 used for TurboVNC
lxc.cgroup.devices.allow = c 195:* rwm # nvidia device

Xorg is configured to use nvidia0
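One thing I noticed: `c 195:* rwm` allows every nvidia minor, so both containers can still open both GPUs. To really dedicate a GPU, I suppose the allow lines could be narrowed to just the minors each container needs, e.g. for lxc1:

```
lxc.cgroup.devices.allow = c 195:0 rwm   # /dev/nvidia0 only
lxc.cgroup.devices.allow = c 195:255 rwm # /dev/nvidiactl
```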

lxc2 config:
lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2 used for X (not working yet)
lxc.cgroup.devices.allow = c 4:3 rwm # /dev/tty3 used for TurboVNC
lxc.cgroup.devices.allow = c 195:* rwm # nvidia device

Xorg is configured to use nvidia1
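To pin each Xorg server to its own GPU, I believe the usual way is a BusID line in the Device section of xorg.conf (the PCI address below is only an example; the real ones come from lspci):

```
Section "Device"
    Identifier "nvidia1"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"    # example address; take the real one from lspci
EndSection
```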


Regards,
Guillaume



_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
