Hi,
DRC writes:
> Something just occurred to me. In the remote (vglconnect) shell
> session, run glxinfo and look at the server GLX extensions. If
> GLX_EXT_libglvnd is among them, then that probably explains why you're
> seeing a Mesa error message.
To avoid confusion, I will call my workstation the client (in this
case named "com") and the remote machine with the P100 NVIDIA card the
server (in this case named "deim"). From my client I run
vglconnect angelv@deim, and on deim is where I want to run
vglrun -cl com.
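To summarize, the basic workflow looks like this (just a sketch with
my hostnames, not a verbatim log):
,----
| # On the client (com): open the VGL connection to the server
| [angelv@com ~]$ vglconnect angelv@deim
|
| # On the server (deim): run the OpenGL application, images sent back to com
| [angelv@deim ~]$ vglrun -cl com glxspheres64
`----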
This doesn't seem to be the problem. I get this on the client and on
the server:
,----
| [angelv@com ~]$ glxinfo | grep -i glvnd
`----
,----
| [angelv@deim ~]$ vglrun glxinfo | grep -i glvnd
| libGL error: No matching fbConfigs or visuals found
| libGL error: failed to load driver: swrast
| [angelv@deim ~]$
`----
> If this hypothesis is correct, then setting VGL_PROBEGLX=0 in the
> environment should make the swrast message go away. If so, then that
> message truly is innocuous, but you can probably make it go away for
> good by installing the Mesa swrast driver (I forget which package
> contains that on Fedora.)
I wasn't sure whether I had to set VGL_PROBEGLX on the client or on
the server, so I tried both. If I set it on the client, the swrast
message doesn't go away, but if I set it on the server it does:
,----
| [angelv@com ~]$ export VGL_PROBEGLX=0 ; vglconnect angelv@deim
|
| [angelv@deim ~]$ vglrun +v -cl com glxspheres64
| [VGL] NOTICE: Added /usr/lib64/VirtualGL to LD_LIBRARY_PATH
| Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
| [VGL] Shared memory segment ID for vglconfig: 262146
| [VGL] VirtualGL v2.4 64-bit (Build 20170210)
| [VGL] Opening connection to 3D X server :0
| [VGL] NOTICE: Replacing dlopen("/usr/lib64/libdl.so.2") with
| dlopen("libdlfaker.so")
| libGL error: No matching fbConfigs or visuals found
| libGL error: failed to load driver: swrast
`----
,----
| [angelv@com ~]$ vglconnect angelv@deim
|
| [angelv@deim ~]$ setenv VGL_PROBEGLX 0
| [angelv@deim ~]$ vglrun +v -cl com glxspheres64
| [VGL] NOTICE: Added /usr/lib64/VirtualGL to LD_LIBRARY_PATH
| Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
| [VGL] Shared memory segment ID for vglconfig: 294914
| [VGL] VirtualGL v2.4 64-bit (Build 20170210)
| [VGL] Opening connection to 3D X server :0
| [VGL] NOTICE: Replacing dlopen("/usr/lib64/libdl.so.2") with
| dlopen("libdlfaker.so")
| Visual ID of window: 0x21
| Context is Direct
| [VGL] Using Pbuffers for rendering
| OpenGL Renderer: Tesla P100-PCIE-12GB/PCIe/SSE2
| [VGL] Using 1 / 20 CPU's for compression
| [VGL] Using pixel buffer objects for readback (BGR --> BGR)
| [VGL] Client version: 2.1
| 1097.198122 frames/sec - 1224.473105 Mpixels/sec
`----
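Regarding the Mesa swrast driver: if I'm not mistaken, on Fedora it is
provided by the mesa-dri-drivers package (I'm not 100% sure that's the
right one), so I guess I could also try installing it and see whether
the message goes away for good:
,----
| # My guess at the Fedora package containing swrast_dri.so
| [root@deim ~]# dnf install mesa-dri-drivers
`----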
> If my hypothesis is incorrect, then here are other things to try:
So, at this point I don't know if your hypothesis is correct, incorrect
or both at the same time :-)
> -- Check LD_LIBRARY_PATH and LD_PRELOAD and make sure they aren't
> pointing to a directory that contains another libGL DSO.
> -- 'vglrun ldd /opt/VirtualGL/bin/glxspheres64' and make sure that it
> isn't picking up a non-nVidia libGL DSO somehow.
I unset LD_LIBRARY_PATH and LD_PRELOAD (the latter was already unset
in my case), and I get the same problem with swrast.
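For reference, this is roughly what I did on the server before
re-running the test (a sketch in bash syntax, not a verbatim log):
,----
| [angelv@deim ~]$ echo $LD_PRELOAD        # empty, already unset
| [angelv@deim ~]$ unset LD_LIBRARY_PATH
| [angelv@deim ~]$ vglrun glxspheres64     # still prints the two swrast errors
`----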
,----
| [angelv@deim ~]$ vglrun ldd /usr/bin/glxspheres64
| [VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
| [VGL] 50ce, the IP address of your SSH client.
| linux-vdso.so.1 (0x00007ffc5a8fa000)
| libdlfaker.so => /usr/lib64/VirtualGL/libdlfaker.so (0x0000152579a1d000)
| librrfaker.so => /usr/lib64/VirtualGL/librrfaker.so (0x0000152579794000)
| libGL.so.1 => /usr/lib64/libGL.so.1 (0x0000152579508000)
| libX11.so.6 => /usr/lib64/libX11.so.6 (0x00001525791ca000)
| libGLU.so.1 => /usr/lib64/libGLU.so.1 (0x0000152578f5d000)
| libm.so.6 => /usr/lib64/libm.so.6 (0x0000152578c47000)
| libc.so.6 => /usr/lib64/libc.so.6 (0x0000152578872000)
| libdl.so.2 => /usr/lib64/libdl.so.2 (0x000015257866e000)
| libxcb.so.1 => /usr/lib64/libxcb.so.1 (0x0000152578446000)
| libxcb-glx.so.0 => /usr/lib64/libxcb-glx.so.0 (0x000015257822b000)
| libxcb-keysyms.so.1 => /usr/lib64/libxcb-keysyms.so.1 (0x0000152578028000)
| libX11-xcb.so.1 => /usr/lib64/libX11-xcb.so.1 (0x0000152577e26000)
| libturbojpeg.so.0 => /usr/lib64/libturbojpeg.so.0 (0x0000152577bb3000)
| libXv.so.1 => /usr/lib64/libXv.so.1 (0x00001525779ae000)
| libXext.so.6 => /usr/lib64/libXext.so.6 (0x000015257779c000)
| libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x000015257757d000)
| libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00001525771f4000)
| /lib64/ld-linux-x86-64.so.2 (0x0000152579e27000)
| libGLX.so.0 => /usr/lib64/libGLX.so.0 (0x0000152576fc2000)
| libGLdispatch.so.0 => /usr/lib64/libGLdispatch.so.0 (0x0000152576d0c000)
| libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x0000152576af5000)
| libXau.so.6 => /usr/lib64/libXau.so.6 (0x00001525768f1000)
| [angelv@deim ~]$
`----
What I found is that the file libGL.so.1.0.0 is owned by the
libglvnd-glx package, though at this point I'm not sure whether that
is a problem or not.
,----
| [root@deim ~]# rpm -qf /usr/lib64/libGL.so.1.0.0
| libglvnd-glx-0.2.999-24.20170818git8d4d03f.fc26.x86_64
|
| [root@deim ~]# rpm -qf /usr/lib64/libGLX_nvidia.so.384.81
| xorg-x11-drv-nvidia-libs-384.81-2.fc25.x86_64
`----
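If it is of any use, one more check I thought of (assuming the
standard libglvnd __GLX_VENDOR_LIBRARY_NAME override applies here; I
haven't verified it with this driver) would be to force libglvnd to
dispatch to the NVIDIA vendor library and see whether the swrast
message and the renderer string change:
,----
| # Assumption: libglvnd honors __GLX_VENDOR_LIBRARY_NAME for GLX vendor selection
| [angelv@deim ~]$ env __GLX_VENDOR_LIBRARY_NAME=nvidia vglrun glxinfo | grep -i "opengl renderer"
`----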
> -- Re-install the nVidia drivers with 32-bit support, if you haven't
> already, and install the 32-bit VirtualGL package along with the 64-bit
> package.
Do you think this could solve it even though glxspheres64 is a 64-bit
binary?
Many thanks for your help,
AdV