Thanks for linking me to the background reading; that was very 
educational. I see that I can use the VirtualGL Transport to run my GL 
applications without a remote desktop, but this will introduce latency 
between the client-side 2D X server and the remote 3D X server. Perhaps 
a startup script on the client that transparently launches a remote 
desktop session on the TurboVNC server (something along the lines of the 
sketch below) is a better solution, because the 3D and 2D X servers then 
sit on the same machine and have access to the same shared memory for 
the Pbuffer swaps. Did I understand the advantage of the X proxy 
correctly? Collaborative visualization is not a concern for me, but the 
X proxy seems like a better solution in any case.
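Just to make the startup-script idea concrete, here is the kind of thing 
I had in mind: a small Python launcher that starts a TurboVNC session on 
the server over ssh and then points the local viewer at it. The host 
name, the /opt/TurboVNC/bin paths, and the geometry are assumptions on 
my part, and the parsing of vncserver's startup message is best-effort, 
so please treat this as a rough sketch rather than anything tested:

    #!/usr/bin/env python3
    """Rough sketch of a client-side launcher: start a TurboVNC session
    on the remote host over ssh, then attach the local TurboVNC viewer
    to it. Host name, paths, and geometry are placeholders."""

    import re
    import subprocess
    import sys

    HOST = "gl-server.example.com"                      # hypothetical server
    REMOTE_VNCSERVER = "/opt/TurboVNC/bin/vncserver"    # assumed default install path
    LOCAL_VNCVIEWER = "/opt/TurboVNC/bin/vncviewer"     # assumed default install path

    def start_remote_session() -> str:
        """Start a TurboVNC session on the server and return its display, e.g. ':1'."""
        result = subprocess.run(
            ["ssh", HOST, REMOTE_VNCSERVER, "-geometry", "1920x1080"],
            capture_output=True, text=True, check=True,
        )
        # vncserver normally reports the new session with a line like
        # "... started on display gl-server:1"; the exact wording may vary.
        match = re.search(r"display\s+\S*(:\d+)", result.stdout + result.stderr)
        if not match:
            sys.exit("Could not determine the new VNC display:\n" + result.stderr)
        return match.group(1)

    def main() -> None:
        display = start_remote_session()
        # Attach the local viewer to the new session; inside the session
        # the actual GL applications would be started with vglrun.
        subprocess.run([LOCAL_VNCVIEWER, f"{HOST}{display}"], check=True)

    if __name__ == "__main__":
        main()

Inside that session I would then launch the GL applications with vglrun, 
which, as I understand it, is what keeps the 3D rendering on the 
server-side GPU.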

Regarding the modifications to VirtualGL that would obviate the 3D X 
server, in the background reading you mention:

> ... the application must still use indirect OpenGL rendering to send 3D 
> commands and data to the X proxy. It is, of course, much faster to use 
> indirect rendering over a local socket rather than a remote socket, but 
> there is still some overhead involved.

Is the chatter between the 3D and 2D X servers the bottleneck, is the 
gigabit network the bottleneck, or are the two costs cumulative?
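For what it is worth, I was planning to get a rough feel for the 
local-socket overhead of indirect rendering with the little probe below: 
run glxgears on the local display once with direct rendering and once 
with LIBGL_ALWAYS_INDIRECT=1, and compare the reported frame rates. This 
assumes a Mesa client library, an X server that still permits indirect 
GLX contexts (+iglx), and vblank_mode=0 to defeat vsync; those 
assumptions are mine, not anything from your write-up, so it is only a 
back-of-the-envelope sketch:

    #!/usr/bin/env python3
    """Crude comparison of direct vs. indirect GLX rendering on a local
    display. Assumes glxgears is installed, the X server permits indirect
    GLX contexts (+iglx), and a Mesa client library that honors
    LIBGL_ALWAYS_INDIRECT and vblank_mode."""

    import os
    import re
    import subprocess

    RUN_SECONDS = "15"   # long enough for glxgears to print a few FPS samples

    def average_fps(extra_env: dict) -> float:
        env = dict(os.environ, vblank_mode="0", **extra_env)
        # 'timeout' stops glxgears after RUN_SECONDS; 'stdbuf -oL' keeps its
        # FPS lines ("... frames in 5.0 seconds = 246.800 FPS") from being
        # lost in a block buffer when the process is killed.
        out = subprocess.run(
            ["timeout", RUN_SECONDS, "stdbuf", "-oL", "glxgears"],
            capture_output=True, text=True, env=env,
        ).stdout
        samples = [float(m) for m in re.findall(r"=\s*([\d.]+)\s*FPS", out)]
        return sum(samples) / len(samples) if samples else 0.0

    direct = average_fps({})
    indirect = average_fps({"LIBGL_ALWAYS_INDIRECT": "1"})
    print(f"direct:   {direct:8.1f} FPS")
    print(f"indirect: {indirect:8.1f} FPS  (GLX commands over a local socket)")

If the indirect run fails outright, that would presumably just mean my X 
server has indirect contexts disabled, which is informative in itself.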

You mentioned a few open- and closed-source solutions such as TurboVNC, 
and I noticed that you did not mention NVIDIA's own remote visualization 
offerings, GeForce Experience and the Moonlight client. Remote 
visualization with GL capability appears to be an area where NVIDIA 
should be leading, but it seems they are not... am I wrong? I do not 
work for NVIDIA, so speak freely, ha!
