Hello everyone,

Does anyone know how to solve the issue where running a headless Xorg
server blocks access to the virtual consoles on HPE servers when connecting
through iLO? So far, I've been working around it by using the serial
console support in iLO.

Thanks,
Jason

---------------------------------------------------------------------------
 Jason Edgecombe | Linux Administrator
 UNC Charlotte | Office of OneIT
 9201 University City Blvd. | Charlotte, NC 28223-0001
 Phone: 704-687-1943
 [email protected] | oneit.charlotte.edu



On Wed, Sep 21, 2022 at 5:40 PM 'DRC' via VirtualGL User Discussion/Support
<[email protected]> wrote:

>
> Follow-up questions:
>
> - Are your 3D applications purely non-interactive?  That is, do users
> ever need to interact with or see the output of the applications in real
> time?  If the answer is "never", then that simplifies the solution.
> - Do these applications use GLX?  EGL?  Vulkan?  Do they have X11 GUIs or
> need to use X11 for anything other than accessing the GPU?
>
>
> If the applications are sometimes or always interactive, have X11 GUIs,
> and use GLX or EGL to access a GPU, then ideally you would use TurboVNC
> with VirtualGL and the VirtualGL EGL back end.  The way that would work is:
> 1. The user submits a batch job.
>
> 2. The job scheduler picks an execution node and a GPU on that node.
>
> 3. The job scheduler starts a new TurboVNC session on the execution node.
> (Note that some job schedulers require the -fg switch to be passed to
> /opt/TurboVNC/bin/vncserver in order to prevent TurboVNC from immediately
> backgrounding itself.)
>
> 4. The job scheduler temporarily changes the permissions and ownership for
> the devices (/dev/dri/card*, /dev/dri/render*, /dev/nvidia*) corresponding
> to the chosen GPU so that only the submitting user can access the GPU.
>
> 5. The job scheduler executes the 3D application with DISPLAY pointed to
> the newly created TurboVNC session and VGL_DISPLAY pointed to the chosen
> GPU's DRI device.  (A minimal sketch of steps 3-5 follows this list.)
>
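> As a rough illustration, here is how steps 3-5 might look in a Slurm
> batch script.  This is only a sketch: GPU_INDEX, display :1, my_3d_app,
> and the renderD numbering are assumptions that a real site would derive
> from Slurm and its hardware.
>
>     #!/bin/sh
>     GPU_INDEX=0   # placeholder; derive from the Slurm allocation
>
>     # Step 4 belongs in the root-owned Slurm prolog, e.g.:
>     #   chown "$SLURM_JOB_USER" /dev/nvidia${GPU_INDEX} \
>     #       /dev/dri/card${GPU_INDEX} /dev/dri/renderD$((128 + GPU_INDEX))
>     # (The card-index-to-renderD mapping is an assumption.)
>
>     # Step 3: -fg prevents vncserver from daemonizing, which some
>     # schedulers require; '&' lets this script continue.
>     /opt/TurboVNC/bin/vncserver :1 -fg &
>
>     # Step 5: display into the new session; render on the chosen GPU
>     # via the VirtualGL EGL back end.
>     export DISPLAY=:1
>     export VGL_DISPLAY=/dev/dri/card${GPU_INDEX}
>     vglrun ./my_3d_app
>
>     /opt/TurboVNC/bin/vncserver -kill :1
>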
> There are multiple ways in which TurboVNC sessions can be managed:
>
> - Some sites use custom web portals that create a new TurboVNC session
> through a job scheduler; populate a VNC "connection info" file with the
> TurboVNC session's hostname, display number, and one-time password; and
> download the connection info file to the user's browser, where it can be
> opened with the TurboVNC Viewer.  Re-connecting to the TurboVNC session
> involves much the same process, except that the job scheduler simply
> generates a new OTP for the existing session rather than starting a new
> session.  (A sketch of OTP generation follows these bullets.)
>
> - Some sites do basically the same thing without the web portal.  In that
> case, the job scheduler prints the hostname and display number of the
> newly-created TurboVNC session, and users are required to enter that
> information into the TurboVNC Viewer manually and authenticate with the
> TurboVNC session using an authentication mechanism of the SysAdmin's
> choosing.  (TurboVNC supports static VNC passwords, Unix login credentials
> or any other PAM-based authentication mechanism, one-time passwords,
> time-based one-time passwords, X.509 certificates, and SSH, and SysAdmins
> can force a particular authentication and encryption mechanism to be used
> on a system-wide basis.)
>
> - If the users have direct SSH access to the execution nodes, then they
> could also use the TurboVNC Session Manager, which handles authentication,
> encryption, and session management through SSH.  (In that case, a user
> would only need to know the hostname of the execution node on which their
> session is running.)
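>
> For reference, a rough sketch of how a portal or wrapper script might
> mint a new OTP for an existing session, using TurboVNC's vncpasswd
> (the display number :1 is a placeholder):
>
>     # Generate a one-time password for the TurboVNC session on :1.
>     # The OTP is printed once and becomes invalid after first use.
>     /opt/TurboVNC/bin/vncpasswd -o -display :1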
>
> Potential wrinkles:
>
> - The VirtualGL EGL back end generally works fine with straightforward
> OpenGL applications, but there are a couple of esoteric application
> behaviors (generally related to complex X11/OpenGL interactions) that still
> trip it up.  You would need to test your applications and make sure that
> they all work properly with the EGL back end before declaring that a 3D X
> server will never be necessary.
>
> - If you are dealing with multi-GPU applications that expect to be able to
> directly connect to separate GPU-attached X servers/screens in order to
> access GPUs for the secondary rendering processes (e.g. ParaView back in
> the day, before it supported EGL), then that complicates things.  It
> should still be possible to use VirtualGL as a GLX-to-EGL translator in
> that case.  It would just require special values of the VGL_DISPLAY and
> VGL_READBACK environment variables to be set for each rendering process
> (a brief sketch follows).
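>
> A brief sketch (render_worker, the ranks, and the device paths are
> hypothetical): each secondary rendering process gets its own
> VGL_DISPLAY, with readback disabled for the secondary processes:
>
>     # One GLX-to-EGL-translated rendering process per GPU:
>     VGL_DISPLAY=/dev/dri/card0 VGL_READBACK=none vglrun ./render_worker --rank 0 &
>     VGL_DISPLAY=/dev/dri/card1 VGL_READBACK=none vglrun ./render_worker --rank 1 &
>     wait
>
> (The exact values your application needs may differ.)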
>
> If the 3D applications are purely non-interactive, then you wouldn't
> necessarily need VirtualGL.  VirtualGL is basically only useful for
> displaying to a remote system, because the two most common Un*x remote
> display use cases are:  (1) remote X11 (client-side physical X display), in
> which case you need VirtualGL in order to avoid sending OpenGL primitives
> and data over the network, and (2) an X proxy (server-side virtual X
> display), in which case you need VirtualGL because X proxies lack GPU
> acceleration.  You generally only need VGL if a 3D application is
> displaying something that a user needs to see or interact with in real
> time.  However, you could still use VirtualGL as a GLX-to-EGL translator if
> your non-interactive 3D applications use GLX to access a GPU.  If the
> non-interactive 3D application needs an X server for some purpose, such as
> creating a dummy window or a Pixmap, then you could start an Xvfb instance
> instead of TurboVNC, since the user would never need to see or interact
> with the application's output in real time.
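>
> A short sketch of that case (the display number, resolution, device
> path, and batch_render are placeholders):
>
>     # GPU-less virtual X server, purely to satisfy the app's X11 needs:
>     Xvfb :2 -screen 0 1920x1080x24 &
>     XVFB_PID=$!
>
>     # Translate the app's GLX calls to EGL on the chosen GPU:
>     DISPLAY=:2 vglrun -d /dev/dri/card0 ./batch_render
>
>     kill $XVFB_PID   # tear down Xvfb when the job is done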
>
>
> tl;dr: I don't actually know how to start independent 3D X servers using a
> job scheduler, and I'm not sure if starting GPU-attached Xorg instances
> under non-root accounts is even possible.  (Someone else please correct me
> if I'm wrong.)  Sites that use job schedulers and need to use the VirtualGL
> GLX back end will typically run a full-time 3D X server with a dedicated
> screen for each GPU.  In that case, everything I said above applies, except
> that you would point VGL_DISPLAY to the GPU's screen rather than its DRI
> device.  The full-time 3D X server shouldn't use any GPU compute resources
> when it is idle, but it will use some GPU memory (not a lot; roughly
> 32-64 MB if the GPU is configured for headless operation).  However, the
> security situation is less palatable, since nothing would technically
> prevent another user from pointing VGL_DISPLAY to a screen attached to a
> GPU that has been allocated for another user.  I really think that the EGL
> back end is your best bet, if you can make it work.
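>
> In that configuration, the only change to the earlier sketch is the
> value of VGL_DISPLAY: point it at the GPU's screen, e.g. (assuming the
> second GPU is attached to screen 1 of display :0):
>
>     VGL_DISPLAY=:0.1 vglrun ./my_3d_app   # my_3d_app is a placeholder
>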
> DRC
>
> On 9/21/22 1:37 PM, Doug O'Neal wrote:
>
> The cluster has nodes containing 3-8 NVIDIA GPUs each, with Slurm as the
> scheduler. The GPUs are used mainly for AI and image processing; display to
> a remote system is a secondary use. Requirements will include:
>
>    - Only the user submitting the batch job has access to the GPU, and the
>    user has access only to the GPU(s) allocated through the batch system.
>    - Ideally, Xorg or an equivalent daemon is started when the batch job
>    starts and killed when the job exits. The daemon should run as the user,
>    possibly with /dev/nvidia? owned by the user. A chown can be included in
>    the Slurm prolog script.
>    - If Xorg has to run continuously, it should not take resources (GPU
>    time or memory) away from the non-display jobs when they have the GPU
>    allocated. Do we need one daemon per GPU, and how do we restrict access
>    based on Slurm resource requests?
>    - More minor, but still a problem: running Xorg headless still blocks
>    access to the virtual consoles on HPE servers when connecting through
>    iLO.
>
