I ran the commands you suggested (I went with -p 1000000) and am still 
seeing a big difference. I just find it strange that the acceleration 
clearly shows up in glxspheres64 but not in much else.

$ *glxspheres64 -p 1000000*
Polygons in scene: 999424 (61 spheres * 16384 polys/spheres)
GLX FB config ID of window: 0xfe (8/8/8/0)
Visual ID of window: 0x2bf
Context is Direct
OpenGL Renderer: llvmpipe (LLVM 9.0.1, 256 bits)
3.292760 frames/sec - 2.370366 Mpixels/sec
3.317006 frames/sec - 2.387820 Mpixels/sec
$ *vglrun -sp glxspheres64 -p 1000000*
Polygons in scene: 999424 (61 spheres * 16384 polys/spheres)
GLX FB config ID of window: 0x6b (8/8/8/0)
Visual ID of window: 0x288
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) HD Graphics P4600/P4700 (HSW GT2)
62.859812 frames/sec - 45.251019 Mpixels/sec
59.975806 frames/sec - 43.174903 Mpixels/sec

BTW, GNOME is now working (that's where I ran the above). I'm trying to run 
the whole desktop in VGL, but *vncserver -wm ~/gnome -vgl* doesn't seem to 
behave any differently than it does without -vgl. Again, my gnome script is:

#!/bin/sh
dbus-launch gnome-session

That said, the desktop isn't broken now, so that's an improvement over KDE. 
But how can I run the whole of GNOME under VGL?
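
Would wrapping the session in vglrun directly inside the ~/gnome script be 
the right approach? Something like this (just a guess on my part, reusing 
the 'vglrun +wm' form you described):

#!/bin/sh
# hypothetical ~/gnome variant: wrap the whole session in VirtualGL;
# +wm disables VGL's StructureNotify monitoring, which you said
# interferes with window managers
exec vglrun +wm dbus-launch gnome-session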

If I can get the desktop running in VGL and still don't see the performance 
in apps that I see locally (apart from in glxspheres!), I'll take that as 
the most I can get out of my system over VNC (unless you'd find it helpful 
for me to debug further).

Thanks,


On Friday, 17 April 2020 19:04:48 UTC+1, DRC wrote:
>
> On 4/17/20 10:36 AM, Shak wrote:
>
> I ran glmark2 on the host display normally and then with software 
> rendering. I've attached the results at the end of this message, for 
> completeness rather than to contradict your hunch, but they do tie in with 
> the numbers I see via VGL, so I don't think this is a CPU/VNC issue.
>
> Hmmm...  Well, you definitely are seeing a much greater speedup with 
> glmark2 absent VirtualGL, so I can only guess that the benchmark is 
> fine-grained enough that it's being affected by VGL's per-frame overhead.  
> A more realistic way to compare the two drivers would be to use '[vglrun 
> -sp] /opt/VirtualGL/bin/glxspheres -p {n}', where {n} is a fairly high 
> number of polygons (at least 100,000). 
>
>
> I've tried repeating my experiments using GNOME, in case the issue is with 
> KDE. However, I get the following when trying to run vglrun:
>
> $ *vglrun glxspheres64*
> /usr/bin/vglrun: line 191: hostname: command not found
> [VGL] NOTICE: Automatically setting VGL_CLIENT environment variable to
> [VGL]    10.10.7.1, the IP address of your SSH client.
> Polygons in scene: 62464 (61 spheres * 1024 polys/spheres)
> libGL error: failed to authenticate magic 1
> libGL error: failed to load driver: i965
> GLX FB config ID of window: 0x6b (8/8/8/0)
> Visual ID of window: 0x21
> Context is Direct
> OpenGL Renderer: llvmpipe (LLVM 9.0.1, 256 bits)
> 17.228616 frames/sec - 17.859872 Mpixels/sec
> 16.580449 frames/sec - 17.187957 Mpixels/sec
>
> You need to install whatever package provides /usr/bin/hostname for your 
> Linux distribution.  That will eliminate the vglrun error, although it's 
> probably unrelated to this problem. (Because of the error, vglrun is 
> falsely detecting an X11-forwarded SSH environment and setting VGL_CLIENT, 
> which would normally be used for the VGL Transport.  However, since 
> VirtualGL auto-detects an X11 proxy environment and enables the X11 
> Transport, the value of VGL_CLIENT should be ignored in this case.)
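>
> I can't test Arch, but I believe the package that ships /usr/bin/hostname 
> there is inetutils (an assumption on my part; pacman's file database will 
> tell you for sure):
>
>   sudo pacman -Fy                  # sync pacman's file database
>   pacman -F /usr/bin/hostname      # show which package owns the file
>   sudo pacman -S inetutils         # install it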
>
> I honestly have no clue how to proceed.  I haven't observed these problems 
> in any of the distributions I officially support, and I have no way to test 
> Arch.
>
> I'm not sure what to make of these. I am using *vncserver -wm ~/gnome*, 
> where gnome is the following script.
>
> #!/bin/sh
> dbus-launch gnome-session
>
> I feel that I am close but still a way off. 
>
> FWIW, I have previously tried using NoMachine, which is able to give me 
> the perceived GL acceleration by "mirroring" my host display, but that just 
> feels like the wrong way to achieve this (not least because it requires a 
> monitor to be attached).
>
> Thanks,
>
> ==== RENDER TESTS ====
>
> $ *glmark2*
> =======================================================
>     glmark2 2017.07
> =======================================================
>     OpenGL Information
>     GL_VENDOR:     Intel Open Source Technology Center
>     GL_RENDERER:   Mesa DRI Intel(R) HD Graphics P4600/P4700 (HSW GT2)
>     GL_VERSION:    3.0 Mesa 20.0.4
> =======================================================
> [build] use-vbo=false: FPS: 2493 FrameTime: 0.401 ms
> =======================================================
>                                   glmark2 Score: 2493
> =======================================================
>
> $ *LIBGL_ALWAYS_SOFTWARE=1 glmark2*
> ** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
> ** Failed to set swap interval. Results may be bounded above by refresh 
> rate.
> =======================================================
>     glmark2 2017.07
> =======================================================
>     OpenGL Information
>     GL_VENDOR:     VMware, Inc.
>     GL_RENDERER:   llvmpipe (LLVM 9.0.1, 256 bits)
>     GL_VERSION:    3.1 Mesa 20.0.4
> =======================================================
> ** GLX does not support GLX_EXT_swap_control or GLX_MESA_swap_control!
> ** Failed to set swap interval. Results may be bounded above by refresh 
> rate.
> [build] use-vbo=false: FPS: 420 FrameTime: 2.381 ms
> =======================================================
>                                   glmark2 Score: 420
> =======================================================
>
>
> On Thursday, 16 April 2020 23:21:59 UTC+1, DRC wrote: 
>>
>> On 4/16/20 3:19 PM, Shak wrote:
>>
>> Thank you for the quick tips. I have posted some results at the end of 
>> this post, but they seem inconsistent. glxspheres64 reports the correct 
>> renderer in each case, and its performance shows the 6x gain I was 
>> expecting. However, I do not see the same gains in glmark2, even though it 
>> also reports the correct renderer in each case. Again, I see a glmark2 
>> score of 2000+ when running on display :0.
>>
>> I don't know much about glmark2, but as with any benchmark, Amdahl's Law 
>> applies.  That means that the total speedup from any enhancement (such as a 
>> GPU) is limited by the percentage of clock time during which that 
>> enhancement is used.  Not all OpenGL workloads are GPU-bound in terms of 
>> performance.  If the geometry and window size are both really small, then 
>> the performance could very well be CPU-bound.  That's why, for instance, 
>> GLXgears is a poor OpenGL benchmark.  Real-world applications these days 
>> assume the presence of a GPU, so they're going to have no qualms about 
>> trying to render geometries with hundreds of thousands or even millions of 
>> polygons.  When you try to do that with software OpenGL, you'll see a big 
>> difference vs. GPU acceleration-- a difference that won't necessarily show 
>> up with tiny geometries. 
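>>
>> To put rough numbers on that: Amdahl's Law gives a total speedup of 
>> 1 / ((1 - p) + p/s), where p is the fraction of clock time spent in 
>> GPU-accelerated rendering and s is the GPU's speedup on that fraction. 
>> With p = 0.5, even an infinitely fast GPU yields at most a 2x overall 
>> speedup; with p = 0.9 and s = 20, the overall speedup is 
>> 1 / (0.1 + 0.045) = ~6.9x. (Illustrative values, of course.)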
>>
>> You can confirm that that's the case by running glmark2 on your local 
>> display without VirtualGL and forcing the use of the swrast driver.  I 
>> suspect that the difference between swrast and i965 won't be very great in 
>> that scenario, either.  (I should also mention that Intel GPUs aren't the 
>> fastest in the world, so you're never going to see as much of a speedup-- 
>> nor as large of a speedup in as many cases-- as you would see with AMD or 
>> nVidia.)
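>>
>> For example, running 'LIBGL_ALWAYS_SOFTWARE=1 glmark2' on the local 
>> display forces Mesa to fall back to llvmpipe/swrast, which you can compare 
>> directly against a plain 'glmark2' run on the same display.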
>>
>> The other thing is, if the benchmark is attempting to measure unrealistic 
>> frame rates-- like hundreds or thousands of frames per second-- then there 
>> is a small amount of per-frame overhead introduced by VirtualGL that may be 
>> limiting that frame rate.  But the reality is that human vision can't 
>> usually detect more than 60 fps anyhow, so the difference between, say, 200 
>> fps and 400 fps is not going to matter to an application user.  At more 
>> realistic frame rates, VGL's overhead won't be noticeable.
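>>
>> To illustrate with made-up numbers: if VGL's readback and transport added, 
>> say, 1 ms per frame, a benchmark running at 1,000 fps (1 ms/frame) would 
>> drop to ~500 fps, but an application running at 60 fps (16.7 ms/frame) 
>> would only drop to ~56 fps -- a change no user would notice.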
>>
>> Performance measurement in a VirtualGL environment is more complicated 
>> than performance measurement in a local display environment, which is why 
>> there's a whole section of the VirtualGL User's Guide dedicated to it.  
>> Basically, since VGL introduces a small amount of per-frame overhead but no 
>> per-vertex overhead, at realistic frame rates and with modern server and 
>> client hardware, it will not appear any slower than a local display.  
>> However, some synthetic benchmarks may record slower performance due to the 
>> aforementioned overhead.
>>
>>
>> In the meantime I have been trying to get the DE as a whole to run under 
>> acceleration. I record my findings here as a possible clue to my VGL issues 
>> above. In my .vnc/xstartup.turbovnc I use the following command:
>>
>> #normal start - works with llvmpipe and vglrun
>> #exec startplasma-x11
>>
>> #VGL start
>> exec vglrun +wm startplasma-x11
>>
>> And I also start tvnc with:
>>
>> $vncserver -3dwm
>>
>> I'm not sure if vglrun, +wm or -3dwm are redundant or working against 
>> each other, but I've also tried various combinations to no avail.
>>
>> Just use the default xstartup.turbovnc script ('rm 
>> ~/.vnc/xstartup.turbovnc' and re-run /opt/TurboVNC/bin/vncserver to create 
>> it) and start TurboVNC with '-wm startplasma-x11 -vgl'.
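>>
>> In other words, something like this (display number assumed to be :1):
>>
>>   rm ~/.vnc/xstartup.turbovnc
>>   /opt/TurboVNC/bin/vncserver            # re-creates the default script
>>   /opt/TurboVNC/bin/vncserver -kill :1   # stop the throwaway session
>>   /opt/TurboVNC/bin/vncserver -wm startplasma-x11 -vgl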
>>
>> * -3dwm is deprecated.  Use -vgl instead.  -3dwm/-vgl (or setting 
>> '$useVGL = 1;' in /etc/turbovncserver.conf or ~/.vnc/turbovncserver.conf) 
>> simply instructs xstartup.turbovnc to run the window manager startup script 
>> using 'vglrun +wm'.  (See the config sketch after this list.)
>>
>> * Passing -wm to /opt/TurboVNC/bin/vncserver (or setting '$wm = 
>> {script};' in turbovncserver.conf) instructs xstartup.turbovnc to execute 
>> the specified window manager startup script rather than 
>> /etc/X11/xinit/xinitrc.
>>
>> * +wm is a feature of VirtualGL, not TurboVNC.  Normally, if VirtualGL 
>> detects that an OpenGL application is not monitoring StructureNotify 
>> events, VGL will monitor those events on behalf of the application (which 
>> allows VGL to be notified when the window changes size, thus allowing VGL 
>> to change the size of the corresponding Pbuffer.)  This is, however, 
>> unnecessary with window managers and interferes with some of them (compiz, 
>> specifically), so +wm disables that behavior in VirtualGL.  It's also a 
>> placeholder in case future issues are discovered that are specific to 
>> compositing window managers (+wm could easily be extended to handle those 
>> issues as well.)
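>>
>> Putting the first two of those together, here is a sketch of a persistent 
>> config (my illustration, assuming ~/.vnc/turbovncserver.conf; the same 
>> lines should work in /etc/turbovncserver.conf):
>>
>>   # ~/.vnc/turbovncserver.conf -- Perl syntax
>>   $useVGL = 1;              # equivalent to passing -vgl
>>   $wm = "startplasma-x11";  # equivalent to passing -wm startplasma-x11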
>>
>> Interestingly, I had to update the vglrun script to use the full paths to 
>> /usr/lib/libdlfaker.so and the others; otherwise I see the following in 
>> the TVNC logs:
>>
>> ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded 
>> (cannot open shared object file): ignored.
>> ERROR: ld.so: object 'libvglfaker.so' from LD_PRELOAD cannot be preloaded 
>> (cannot open shared object file): ignored.
>>
>> That said, my desktop is still broken even when these errors disappear.
>>
>> Could my various issues be to do with KDE? 
>>
>> The LD_PRELOAD issues can be fixed as described here:
>>
>> https://cdn.rawgit.com/VirtualGL/virtualgl/2.6.3/doc/index.html#hd0012
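>>
>> As a quick check that the loader can resolve the fakers by bare name after 
>> applying that fix (illustrative; assumes the libraries live in a directory 
>> covered by the ldconfig cache):
>>
>>   ldconfig -p | grep -E 'lib(dl|vgl)faker'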
>>
>> All of that aside, I have not personally tested the bleeding-edge KDE 
>> Plasma release, which is what Arch presumably ships, so I have no idea 
>> whether it works with VirtualGL or TurboVNC.  The window managers I have 
>> tested are listed here:
>>
>> https://turbovnc.org/Documentation/Compatibility22
>>
>>
>> On Thursday, 16 April 2020 20:02:13 UTC+1, DRC wrote: 
>>>
>>> You can't really determine which OpenGL renderer is in use by just 
>>> looking at dlopen() calls.  In a TurboVNC environment, swrast will be used 
>>> for any GLX/OpenGL commands sent to the TurboVNC X server (the "2D X 
>>> server"), and VirtualGL does send a couple of GLX/OpenGL commands to the 2D 
>>> X server to probe its capabilities.  That's probably why swrast is being 
>>> loaded, but if everything is working properly, the OpenGL renderer string 
>>> should still report that the Intel driver is in use for actual rendering.  
>>> Compare the output of /opt/VirtualGL/bin/glxinfo on the local display with 
>>> 'vglrun /opt/VirtualGL/bin/glxinfo' in TurboVNC, or just run 
>>> /opt/VirtualGL/bin/glxspheres64 (using vglrun in TurboVNC), which reports 
>>> the OpenGL renderer string as well.
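>>>
>>> For example (assuming the default install prefix and local display :0):
>>>
>>>   DISPLAY=:0 /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"
>>>   # ...versus, inside the TurboVNC session:
>>>   vglrun /opt/VirtualGL/bin/glxinfo | grep "OpenGL renderer"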
>>>
>>> On 4/16/20 10:34 AM, Shak wrote:
>>>
>>> I am trying to set up VirtualGL + TurboVNC on Arch with the KDE Plasma 
>>> desktop. The host is itself a VM using a passed-through Intel P4600 IGD. I 
>>> believe that the passthrough itself is successful, as I see the expected 
>>> GL performance when running a desktop on display :0. 
>>>
>>> I am using glmark2 to check what is happening. As a baseline, I get a 
>>> score of around 2000 on the host, about 300 when using llvmpipe, and a 
>>> similar figure when using vglrun. 'vglrun glxinfo' reports that it's using 
>>> the Intel Mesa driver (vs llvmpipe), so the setup at least looks okay, but 
>>> the performance suggests vglrun is still effectively using software 
>>> rendering.
>>>
>>> Passing +v +tr to vglrun in both cases shows me the traces at the end of 
>>> this post, which align with what I am seeing (highlighted yellow). I've 
>>> also highlighted in green some other lines that confuse me.
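>>>
>>> (For reference, the invocations took the form 'vglrun +v +tr <app>', 
>>> where '<app>' stands for whichever OpenGL program is being traced.)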
>>>
>>> I read this 
>>> <https://sourceforge.net/p/virtualgl/mailman/message/31296895/>, which 
>>> indicated that Mesa isn't supported, and in the docs that Arch's VirtualGL 
>>> package might be broken. I do not know whether either applies here (or is 
>>> still valid).
>>>
>>> Also, when trying to accelerate the whole DE using 'exec vglrun 
>>> startplasma-x11' in my xstartup.turbovnc, I seem to get a broken desktop. 
>>> I am not sure if this is related to the above, but I've mentioned it as 
>>> another data point (if it's unrelated, then tips to fix it would be 
>>> appreciated!).
>>>
>>> Thanks,
>>>
>>> vglrun traces:
>>>
>>> ==== on host ====
>>>
>>> [VGL] NOTICE: Added /usr/lib to LD_LIBRARY_PATH
>>> [VGL] Shared memory segment ID for vglconfig: 131106
>>> [VGL] VirtualGL v2.6.3 64-bit (Build 20200214)
>>> [VGL 0x92aa5780] XOpenDisplay (name=NULL dpy=0x557453a35740(:0) ) 
>>> 0.335932 ms
>>> [VGL] dlopen (filename=libGL.so flag=4098[VGL] NOTICE: Replacing 
>>> dlopen("libGL.so") with dlopen("libvglfaker.so")
>>>  retval=0x7f2c935804f0)
>>> [VGL] Opening connection to 3D X server :0
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=1 retval=0x557453a53170)
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=258 retval=0x557453a53170)
>>> [VGL] dlopen (filename=/usr/lib/dri/tls/i965_dri.so flag=258 
>>> retval=0x00000000)
>>> [VGL] dlopen (filename=/usr/lib/dri/i965_dri.so flag=258 
>>> retval=0x557453a69340)
>>> [VGL 0x92aa5780] glXGetProcAddressARB ((char 
>>> *)procName=glXSwapIntervalEXT [INTERPOSED]) 0.005960 ms
>>> [VGL 0x92aa5780] glXChooseFBConfig (dpy=0x557453a35740(:0) screen=0 
>>> attrib_list=[0x8012=0x0001 0x8010=0x0001 0x8011=0x0001 0x0022=0x8002 
>>> 0x0008=0x0001 0x0009=0x0001 0x000a=0x0001 0x000b=0x0001 0x000c=0x0001 
>>> 0x000d=0x0000 0x0002=0x0001 0x0005=0x0001 ] glxattribs=[0x8010=0x0001 
>>> 0x000c=0x0001 0x000d=0x0000 0x0002=0x0001 0x0005=0x0001 0x0008=0x0001 
>>> 0x0009=0x0001 0x000a=0x0001 0x000b=0x0001 0x8011=0x0001 0x0022=0x8002 ] 
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=258 retval=0x557453a53170)
>>> [VGL] dlopen (filename=/usr/lib/dri/tls/i965_dri.so flag=258 
>>> retval=0x00000000)
>>> [VGL] dlopen (filename=/usr/lib/dri/i965_dri.so flag=258 
>>> retval=0x557453a69340)
>>> configs[0]=0x557453b9a040(0x67) configs[1]=0x557453b9a7c0(0x6f) 
>>> configs[2]=0x557453b9af40(0x77) configs[3]=0x557453b9b300(0x7b) 
>>> configs[4]=0x557453b9bc60(0x85) configs[5]=0x557453b9c3e0(0x8d) 
>>> configs[6]=0x557453b9bd50(0x86) configs[7]=0x557453b9c4d0(0x8e) 
>>> configs[8]=0x557453b9b030(0x78) configs[9]=0x557453b9b3f0(0x7c) 
>>> configs[10]=0x557453b9fa40(0xc7) configs[11]=0x557453b9fe00(0xcb) 
>>> configs[12]=0x557453b9ffe0(0xcd) configs[13]=0x557453ba00d0(0xce) 
>>> *nelements=14 ) 6.473064 ms
>>> [VGL 0x92aa5780] glXGetFBConfigAttrib (dpy=0x557453a35740(:0) 
>>> config=0x557453b9a040(0x67) attribute=2(0x2) *value=32(0x20) ) 0.008106 ms
>>>
>>>
>>> ==== on tvnc ====
>>>
>>> [VGL] NOTICE: Added /usr/lib to LD_LIBRARY_PATH
>>> [VGL] Shared memory segment ID for vglconfig: 131076
>>> [VGL] VirtualGL v2.6.3 64-bit (Build 20200214)
>>> [VGL 0x9f624780] XOpenDisplay (name=NULL dpy=0x555b28562740(:1) ) 
>>> 2.894163 ms
>>> [VGL] dlopen (filename=libGL.so flag=4098[VGL] NOTICE: Replacing 
>>> dlopen("libGL.so") with dlopen("libvglfaker.so")
>>>  retval=0x7f90a00ff4f0)
>>> [VGL] Opening connection to 3D X server :0
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=1 retval=0x555b28583a20)
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=258 retval=0x555b28583a20)
>>> [VGL] dlopen (filename=/usr/lib/dri/tls/i965_dri.so flag=258 
>>> retval=0x00000000)
>>> [VGL] dlopen (filename=/usr/lib/dri/i965_dri.so flag=258 
>>> retval=0x555b285964a0)
>>> [VGL 0x9f624780] glXGetProcAddressARB ((char 
>>> *)procName=glXSwapIntervalEXT [INTERPOSED]) 0.045061 ms
>>> [VGL 0x9f624780] glXChooseFBConfig (dpy=0x555b28562740(:1) screen=0 
>>> attrib_list=[0x8012=0x0001 0x8010=0x0001 0x8011=0x0001 0x0022=0x8002 
>>> 0x0008=0x0001 0x0009=0x0001 0x000a=0x0001 0x000b=0x0001 0x000c=0x0001 
>>> 0x000d=0x0000 0x0002=0x0001 0x0005=0x0001 ] glxattribs=[0x8010=0x0001 
>>> 0x000c=0x0001 0x000d=0x0000 0x0002=0x0001 0x0005=0x0001 0x0008=0x0001 
>>> 0x0009=0x0001 0x000a=0x0001 0x000b=0x0001 0x8011=0x0001 0x0022=0x8002 ] 
>>> [VGL] dlopen (filename=libGLX_mesa.so.0 flag=258 retval=0x555b28583a20)
>>> [VGL] dlopen (filename=/usr/lib/dri/tls/swrast_dri.so flag=258 
>>> retval=0x00000000)
>>> [VGL] dlopen (filename=/usr/lib/dri/swrast_dri.so flag=258 
>>> retval=0x555b28705370)
>>> configs[0]=0x555b286c7170(0x67) configs[1]=0x555b286c78f0(0x6f) 
>>> configs[2]=0x555b286c8070(0x77) configs[3]=0x555b286c8430(0x7b) 
>>> configs[4]=0x555b286c8d90(0x85) configs[5]=0x555b286c9510(0x8d) 
>>> configs[6]=0x555b286c8e80(0x86) configs[7]=0x555b286c9600(0x8e) 
>>> configs[8]=0x555b286c8160(0x78) configs[9]=0x555b286c8520(0x7c) 
>>> configs[10]=0x555b286ccb70(0xc7) configs[11]=0x555b286ccf30(0xcb) 
>>> configs[12]=0x555b286cd110(0xcd) configs[13]=0x555b286cd200(0xce) 
>>> *nelements=14 ) 44.402122 ms
>>> [VGL 0x9f624780] glXGetFBConfigAttrib (dpy=0x555b28562740(:1) 
>>> config=0x555b286c7170(0x67) attribute=2(0x2) *value=32(0x20) ) 0.003815 ms
