It is worth pointing out that DCV, a proprietary solution that IBM developed in parallel with VirtualGL and that Amazon now owns, does support OpenGL protocol compression and streaming, but it mainly uses that capability to allow OpenGL rendering to occur on a different server from the application server. When that capability is used, the OpenGL commands are intercepted on the application server, compressed, and forwarded to a rendering server, and the OpenGL-rendered frames are compressed on the rendering server and streamed directly to the client. My experience with this feature (which is admittedly 5-6 years out of date) is that it entails a lot of performance compromises, much as you describe.
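For anyone curious about the mechanics: on Linux, this kind of interception is typically done by preloading a shared library that shadows the GLX entry points. Below is a minimal sketch of the technique, not DCV's or VirtualGL's actual code; the logging is just a stand-in for whatever serialization and forwarding a real solution would perform.

    /* intercept.c -- toy LD_PRELOAD interposer for glXSwapBuffers
       Build: gcc -shared -fPIC -o libintercept.so intercept.c -ldl
       Run:   LD_PRELOAD=./libintercept.so glxgears */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <GL/glx.h>

    void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
    {
        /* Look up the real glXSwapBuffers in the underlying libGL. */
        static void (*real_swap)(Display *, GLXDrawable);
        if (!real_swap)
            real_swap = (void (*)(Display *, GLXDrawable))
                        dlsym(RTLD_NEXT, "glXSwapBuffers");

        /* A forwarding solution would serialize and transmit the pending
           GL commands (or the rendered frame) here; this toy version just
           logs the frame boundary. */
        fprintf(stderr, "glXSwapBuffers intercepted (drawable 0x%lx)\n",
                (unsigned long)drawable);

        real_swap(dpy, drawable);
    }

A production interposer has to shadow many more entry points and handle applications that resolve functions through glXGetProcAddress(), which hints at the compatibility and maintainability burden discussed below.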
On a side note, I was the one who sent IBM the U. Stuttgart research paper that inspired both VGL and DCV. At the time, Landmark was going down the path of working with IBM to develop the solution, but IBM was not receptive to the code contributions that would have been necessary to make their solution compatible with our applications, so we developed our own. I chose not to implement an OpenGL forwarding feature because it would have introduced the compatibility and maintainability problems previously described, and the performance issues would have limited the usefulness of such a feature anyhow.

> On Jun 11, 2019, at 3:30 AM, falde <[email protected]> wrote:
>
> There are already solutions that stream OpenGL commands. I will give you a
> hint: a 1x PCIe 1.0 slot transfers 250 MB/s. That is megabytes, not
> megabits, so it is about 2 Gbit/s, and that isn't considered enough for
> anything but a very low-end GPU. How many people do you think have internet
> connections of 2 Gbit or above? And even if you have more than a 2 Gbit
> internet connection, the latency would be too high. OpenGL as a network
> protocol requires extremely low latency; it is designed for the PCIe bus.
> InfiniBand is a low-latency network technology designed for supercomputing.
> I tried running GL pipelining over that. It lagged. Not as much as
> pipelining it over Ethernet, but it still lagged a lot.
>
> It would be possible to have some scene graph API that you run over a
> network and that renders with OpenGL client-side. This is basically what
> X11 is, and remote X11 can be pretty fast. It doesn't have any 3D graphics,
> but raw X11 graphics can be fast. That is what NX does: it runs an X11
> proxy on the "terminal server" that does X11 compression and another proxy
> on the client that uses an X11 server, which in the right configuration
> uses OpenGL to render. Modern applications use a lot of things on top of
> X11 that can slow this down considerably, such as antialiased fonts, which
> are rendered to an image before being sent to the X11 server.
>
> So yes, it is possible to use client-side GPU power, but if you want
> anything beyond what is fast in X11, then you will have to create your own
> high-level protocol that minimizes traffic between client and server, and
> that in turn means that you will have to port applications to your
> protocol. You could, for example, take JavaFX and implement the API as a
> network protocol, and perhaps even run all JavaFX applications without
> modification with full client-side rendering. There can, of course, be
> things that some applications do that would slow this down a lot.
>
> https://www.mcpressonline.com/programming-other/java/ibm-explains-how-to-use-the-remote-abstract-windowing-toolkit-rawt
> Remote Java AWT. Note that this was released when many developers had
> already moved on to Java Swing. However, it is still maintained by IBM, so
> they are using it for something.
>
> The most optimized remote application clients are multiplayer game engines.
> They usually have a protocol designed specifically for that game engine,
> and many game engines are game-specific; an example of that would be
> StarCraft.
>
> And of course, another way to do remote applications, which often performs
> way better than any thin client solution, is HTML5 + JavaScript + WebGL.
> Again, this requires you to port applications to it.
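falde's point about raw X11 being network-efficient is worth illustrating: each core Xlib drawing call becomes a compact X11 protocol request on the wire rather than a stream of pixels, which is exactly what NX-style proxies can compress so effectively. Here is a minimal sketch; the window geometry and strings are arbitrary, and it works over a remote $DISPLAY as-is.

    /* xdemo.c -- raw Xlib drawing; every call below is a small X11
       protocol request, so this remains fast over a network.
       Build: gcc -o xdemo xdemo.c -lX11
       Run:   DISPLAY=remotehost:0 ./xdemo  (or via SSH X11 forwarding) */
    #include <X11/Xlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);  /* honors $DISPLAY */
        if (!dpy) return 1;

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 300, 200, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        GC gc = XCreateGC(dpy, win, 0, NULL);
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose) {
                /* A few bytes each on the wire, vs. a full bitmap. */
                XDrawRectangle(dpy, win, gc, 20, 20, 120, 80);
                XDrawString(dpy, win, gc, 30, 130, "core X11 text", 13);
                XFlush(dpy);
            }
        }
    }

By contrast, the antialiased fonts falde mentions are rasterized inside the application and shipped to the X server as images, which is where much of the modern slowdown comes from.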
>
>> On Wednesday, February 27, 2019 at 5:39:16 AM UTC+1, Iar De wrote:
>>
>> I understand that VGL is for offloading OpenGL to an X server with 3D
>> support, but do you have a solution to do it (almost) the other way
>> around: Is it possible, while serving a remote application, to get OpenGL
>> commands instead of a rendered image on the server, so that the OpenGL
>> portion of the screen gets rendered on the client's GPU locally, not on
>> the remote server?
