OK.  In that case, your general strategy will be:

- On the host, set up the headless 3D X server on Display :0, with each
GPU attached to a different screen (see the xorg.conf sketch below.)

- Figure out how to share the 3D X server connection from the host to a
container instance.  My first approach would be to try sharing the files
related to the 3D X server's Unix domain socket (see the bind-mount
sketch below.)  Again, I have not personally experimented with this yet,
so I do not yet know what issues might be encountered.

- To launch an application in a container instance (see the concrete
example below):
  - Launch Xvfb
  - export DISPLAY=:{d}.0  # {d} = X display of Xvfb instance
  - export VGL_DISPLAY=:0.{s}  # {s} = screen number of desired GPU
  - vglrun {application}
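
To make the first step concrete, here is a minimal (untested) xorg.conf
sketch for a headless machine with two NVIDIA GPUs, each attached to its
own screen of Display :0.  The BusID values are placeholders; substitute
the PCI bus IDs that lspci or nvidia-smi report for your GPUs.
(nvidia-xconfig can generate an equivalent file, and vglserver_config
still has to be run so that users are allowed to access the 3D X server.)

    Section "ServerLayout"
        Identifier "Layout0"
        Screen 0 "Screen0"
        Screen 1 "Screen1"
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver     "nvidia"
        BusID      "PCI:3:0:0"    # placeholder: first GPU
    EndSection

    Section "Device"
        Identifier "Device1"
        Driver     "nvidia"
        BusID      "PCI:4:0:0"    # placeholder: second GPU
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "Device0"
        Option     "UseDisplayDevice" "none"    # headless; no monitor attached
    EndSection

    Section "Screen"
        Identifier "Screen1"
        Device     "Device1"
        Option     "UseDisplayDevice" "none"
    EndSection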

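For the second step, my first (equally untested) idea would be to
bind-mount the directory that contains the 3D X server's Unix domain
socket into the container.  With Singularity, that might look something
like this (the image name is just a placeholder):

    # Display :0's Unix domain socket lives in /tmp/.X11-unix on the host.
    singularity exec --nv \
        --bind /tmp/.X11-unix:/tmp/.X11-unix \
        myimage.sif /bin/bash

    # If access to the 3D X server is restricted (e.g. by
    # vglserver_config), then the container will also need whatever X
    # authority credentials the host uses to open Display :0.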

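The third step, spelled out with example numbers (Xvfb on Display :1 and
OpenGL rendering on Screen 1 of the 3D X server, i.e. the second GPU):

    # Inside the container instance:
    Xvfb :1 -screen 0 1920x1080x24 &   # 2D X server; display number is arbitrary
    export DISPLAY=:1.0                # send X11 output to the Xvfb instance
    export VGL_DISPLAY=:0.1            # render OpenGL on screen 1 of the 3D X server
    vglrun /path/to/app

    # or, equivalently, letting xvfb-run pick a free display number:
    xvfb-run -a -s "-screen 0 1920x1080x24" \
        env VGL_DISPLAY=:0.1 vglrun /path/to/app
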
On 8/24/20 2:53 PM, Martin Pecka wrote:
> Ah, I'm sorry, it isn't evident from my first post - I'm not
> interested in transferring the "X buffer" anywhere. The apps we need
> to run only do offscreen rendering. But they need accelerated OpenGL
> for that, and can only do it through GLX (though EGL would be more
> appropriate). Does that help clarify the issue?
>
> If I understand it correctly, people who need to see what X draws to
> the "onscreen buffer" will use TurboVNC, and people who only need
> offscreen rendering are fine with Xvfb.
>
>     On Monday, August 24, 2020 at 19:45:26 UTC+2, DRC wrote:
>
>     I don't understand how you plan to get the rendered pixels from
>     the X proxy to the client machine.  Xvfb won't do that.  You would
>     need an X proxy that has an image transport layer attached
>     (TurboVNC, as an example, is essentially Xvfb with an attached VNC
>     server.)  I also don't understand why you're still mentioning
>     vglclient, since that has no relevance to the server-side
>     configuration.  I can almost guarantee you that the technical
>     specifics you listed below are not correct, but in order to
>     correct them, I need a better understanding of what you are
>     ultimately trying to accomplish.  Let's bring the discussion up a
>     level and develop a mutual understanding of the proposed solution
>     architecture before we get mired down in the specifics of command
>     lines and environment variables and such.
>
>     I'm not sure, for instance, what you are expecting in terms of a
>     window popping up.  That will never happen on the 3D X server,
>     since VirtualGL is redirecting all OpenGL rendering into
>     off-screen Pbuffers.  It would only happen on the 2D X server, but
>     again, if you're trying to use Xvfb as a 2D X server, then I don't
>     understand how you expect to get the pixels from that X server to
>     the client machine without an image transport layer.  Also, unless
>     you change the DISPLAY environment variable, the application will
>     not display to the Xvfb instance anyhow.
>
>     On 8/24/20 10:29 AM, Martin Pecka wrote:
>>     Yes, I'm working with containers, except it's Singularity HPC and
>>     not Docker, which makes some things more complicated (and some less).
>>
>>     So, to get back to the original question: If I don't run the
>>     vglclient, are the correct steps the following?
>>
>>     1) Get the headless 3D X server running on :0 (should be quite
>>     easy once the cluster admin agrees to do that)
>>     2) Each user runs:
>>         xvfb-run -a -s "-screen 0 1920x1080x24" \
>>           env VGL_DISPLAY=:0 vglrun /path/to/app arg1 arg2 ...
>>
>>     I tested this on my (non-headless) laptop, and it seemed to work:
>>
>>     No window popped up, so apparently :0 wasn't used. And nvidia-smi
>>     showed that the GPU is being fully utilized. In glxgears, I get
>>     around 700 FPS on my GeForce GTX 1050 via VGL compared to 3000
>>     FPS with direct on-screen rendering, but that's probably okay;
>>     we'll see when I test the full-blown apps.
>>
>>     If this approach is correct for this use case, could I help turn
>>     it into a piece of documentation? We could also add the server-side
>>     setup instructions that Jason posted.
>>
>>         On Monday, August 24, 2020 at 16:57:49 UTC+2, DRC wrote:
>>
>>         Yes, that is exactly what VirtualGL does, but the VirtualGL
>>         Client is not a required component of VirtualGL.  The
>>         VirtualGL Client is only used if you use the built-in VGL
>>         Transport, which is only useful in a remote X environment
>>         (i.e. when the 2D X server is on the client machine.)  Most
>>         users use VirtualGL with an X proxy these days, in which case
>>         the VirtualGL Client is not used.
>>
>>         Apart from that, it sounds like what you are trying to
>>         accomplish is the same as
>>         https://github.com/VirtualGL/virtualgl/issues/98: sharing the
>>         3D X server connection from host to guest in a container
>>         environment such as Docker.
>>
>>         On 8/24/20 9:15 AM, Martin Pecka wrote:
>>>         As the GPUs are shared among multiple users, I find it useful
>>>         to give a separate (but still accelerated) X display to each
>>>         user. I suppose telling everybody to use :0 wouldn't end well
>>>         (as long as everyone rendered offscreen it would work, but I
>>>         can't guarantee that). So I thought that VirtualGL could be
>>>         the thing that guarantees that nobody renders onscreen.
>>>
>>>         On Monday, August 24, 2020 at 16:02:20 UTC+2, DRC wrote:
>>>
>>>             I don't fully understand what you're proposing.  The 3D X
>>>             server part of your proposal should be no problem, as long
>>>             as you connect each GPU to a separate screen on that X
>>>             server (presumably, the 3D X server would be headless.)
>>>             But why is the VirtualGL Client involved?
>>>
>>>             Conceptually, it should be possible to share the 3D X
>>>             server connection with, say, a Docker container, but given
>>>             the extremely limited resources of this project, I have
>>>             thus far been unable to dedicate the time toward
>>>             researching how best to accomplish that
>>>             (https://github.com/VirtualGL/virtualgl/issues/98).
>>>
>>>             On 8/21/20 7:44 AM, Martin Pecka wrote:
>>>             > Hi, we're thinking about getting GLX support on our HPC
>>>             > cluster, which is (currently) completely headless. The
>>>             > idea is that users should be able to run containers that
>>>             > are given access to hardware-accelerated OpenGL
>>>             > rendering. EGL would be better, but we're stuck with the
>>>             > OGRE rendering engine, which doesn't have proper support
>>>             > for EGL on NVIDIA GPUs.
>>>             >
>>>             > Could you comment on my idea? Is it a supported scenario?
>>>             >
>>>             > The multi-GPU server would run a single "3D X server",
>>>             > probably Xorg. It would also run the VirtualGL Client.
>>>             > Containers that want to do OpenGL work would call a
>>>             > combination of Xvfb and vglrun, i.e. the whole setup
>>>             > would involve only a single machine, not a pair connected
>>>             > via ssh -X.
>>>             >
>>>             > Is that possible? Is there a tutorial for this kind of
>>>             > setup?
>>>