On 8/16/12 12:05 PM, marty scholes wrote:
> I tested on the console before installing VGL. Everything "just
> worked" and still does. I never thought of testing on the console
> under VGL. I will try to do that. Honestly, I am still confused about
> the whole OpenGL vs. GLX thing. From what I can gather, OpenGL is an
> API on the host, kind of like an API for disk access, while GLX is
> OpenGL with an API through an X server using the X protocol. Assuming
> I have that right (and I probably don't), then I would expect that
> VirtualBox or anything else that wants to use OpenGL can do so with or
> without an X server, but that clearly is not the case. Everything
> needs an X server. This whole thing seems rather strange.
Sun used to have the GLP extension on SPARC platforms, which allowed direct access to OpenGL without an X server, but it never gained any adoption outside of Sun, so it died when Sun OpenGL and the XVR graphics cards died. EGL is a potential alternative, but nVidia and ATI don't seem to have any interest in exposing that API. (BTW, I'm not sure there is a standardized way to capitalize nVidia. "nVIDIA" is an attempt to duplicate their logo, and I use "nVidia" because I don't like typing all of those capitals. Wikipedia uses "Nvidia"; nVidia's web site uses "NVIDIA". I've also seen it as "NVidia.") At least with nVidia, their position has always been to use headless X servers as an alternative (although VGX may provide some form of X-server-less access. Not sure.)

As far as the APIs go, it's a little more complicated than what you say above. OpenGL is all about "contexts", which are basically buckets in which it stores all of the various rendering state. Each context has to be bound to a "drawable" before it can be used to actually render anything, and that's where GLX comes in. GLX interfaces OpenGL to X, so using that API, you can create an OpenGL context from an X visual and then bind that context to an X drawable (a window or a pixmap.) Microsoft has a similar API (WGL) that interfaces OpenGL contexts with Win32 window objects, device contexts, etc. You can't just write a bare OpenGL application. You have to use some sort of interfacing API -- GLX, WGL, or, on mobile devices, EGL -- to allow your application to interface properly with the display system. On Unix/Linux, all of the 3D applications use GLX, because they are designed to run in windows on the local X server. Needless to say, without GLX or some equivalent (and "some equivalent" doesn't exist yet in any broadly accepted form), you can't render OpenGL to a window on Unix machines.

When 3D hardware acceleration is involved, GLX will typically use DRI (Direct Rendering Infrastructure) so that it doesn't have to send the OpenGL commands through the X server once the context has been established. When DRI is not used, every single OpenGL command has to be sent to the X server and then on to the 3D hardware. That approach is called "indirect OpenGL rendering", and prior to VirtualGL and other server-side rendering solutions, indirect OpenGL was the only way to display 3D apps remotely with hardware acceleration. In that case, the hardware acceleration was on the client machine, so every OpenGL command had to be sent over the network. As you can imagine, this becomes a performance nightmare for heavy-hitting applications. The Background article on VirtualGL.org goes into more detail on this. Even if the X server and the application are running on the same machine, there is significant overhead to funneling all of the OpenGL calls through the X server, which is why DRI is used.

However, it should also be noted that VirtualBox uses a form of indirect OpenGL rendering to provide 3D hardware acceleration. It intercepts 3D calls within the guest and forwards them to the host using indirect OpenGL. The host then, ideally, uses DRI to communicate the OpenGL commands down to the 3D graphics card. Thus, there is some CPU overhead to using hardware-accelerated 3D within VirtualBox, but it generally shouldn't create a perceptible performance hit unless the geometry being rendered is exceedingly complex (millions of polygons.)
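To make the context/drawable dance a bit more concrete, the skeleton of a GLX application looks something like this. (This is a from-memory sketch with no error checking, so don't expect it to compile as-is; the order of operations -- choose a visual, create a context, create a drawable, bind the two -- is the important part. WGL and EGL apps have the same basic shape; only the interfacing calls differ.)

/* Sketch of a minimal GLX application.  No error checking, and a real
   app would wait for the first Expose event before drawing. */
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);      /* connect to the X server */

    /* Ask GLX for a double-buffered RGBA visual. */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* Create an OpenGL context from that visual.  The final argument
       requests direct rendering (DRI) if it is available. */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

    /* Create the X drawable (a window) with a matching visual. */
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.event_mask = ExposureMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0,
                               300, 300, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    /* Bind the context to the drawable.  Only now can OpenGL render. */
    glXMakeCurrent(dpy, win, ctx);

    glClearColor(0.0f, 0.0f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);               /* display the back buffer */

    XSync(dpy, False);
    /* ... event loop and cleanup omitted ... */
    return 0;
}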
> Which brings me to a question I am trying to understand: how is the
> card virtualized? How do multiple users / servers / processes use the
> same hardware? Do they all use the same hardware? I understand how a
> CPU or disk drive or network port or memory is virtualized and shared
> among multiple processes, but does the same happen with a GPU? I also
> know the degradation characteristics as CPU / disk / network / memory
> become overprovisioned. How does a GPU degrade with more usage?

VirtualGL is basically a GLX implementation. It sits between the 3D application and the system's real GLX implementation, and when it receives GLX function calls, it rewrites the arguments to those calls. It does this so that it can redirect all of the GLX calls to the "3D X server" (usually display :0), which ensures that the rendering will be hardware-accelerated and that the GLX calls won't be sent over the network or to an X proxy that doesn't support GLX. (A rough sketch of this sort of interposition appears at the end of this message.)

VirtualGL also redirects all of the rendering that was originally intended for an X window into a corresponding pixel buffer (Pbuffer), an off-screen rendering buffer that it maintains on the 3D graphics card. There is one of these for every OpenGL window the application displays. Because all of the rendering occurs in Pbuffers, you can have multiple users banging away on the same GPU, and their 3D rendering jobs won't interfere with each other, because none of them is using the visible screen.

In terms of performance, it all depends on the type of apps you're using. I think the biggest danger is probably running out of video memory, which could occur if multiple users were running a lot of really texture-intensive applications. For typical CAD/visualization applications, though, you can easily share a GPU among multiple users. As a reference, my biggest customer runs about 1 GPU for every 10 users, although granted, they aren't all banging 100% on the system at once.

How does a GPU degrade? When everything is resident in video memory, performance will degrade somewhat linearly with the number of 100% active users, but it can quickly become very non-linear if video memory is exhausted. "100% active" is the key there, though. There is a lot of "think time" in most applications. You rotate/pan/zoom the display, which causes 3D activity, then you sit and stare at it for a while. Games have a heavier workload, but even they aren't at 100%.

The above explanations are probably fraught with technical inaccuracies, so I apologize in advance. Best I can do off the cuff.

>> Also, looking more closely at your post on VirtualBox.org
>> <http://virtualbox.org/>, are you still not using the nVidia drivers?
>> I don't really expect VGL on Solaris to work properly at all without
>> them.
>
> Solaris 11 shipped with nVidia (is that how the word is cased?)
> drivers, but in a moment of stupidity I tricked myself into installing
> newer drivers from nVidia's web site, thoroughly borking the console.
> I have since backed down to the nVidia drivers shipped with Solaris 11.
>
>> Also, you are trying to run both VirtualGL and SRSS on the same
>> machine, correct?
>
> Good question. I have read about high bandwidth needs on the
> interconnect between VGL and SRSS. Both services are run on the same
> hardware and same OS instance in the global zone.
>
> Thanks again for all of your help here. If / when I become too
> annoying, just let me know.
>
> Cheers,
> Marty
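P.S. Here is the interposition sketch I mentioned above. VirtualGL inserts itself using LD_PRELOAD, and the toy below uses the same mechanism, although it is emphatically not VirtualGL's actual code -- it just logs glXSwapBuffers() calls so you can see the preload trick at work:

/* fakeglx.c -- the generic LD_PRELOAD interposer pattern.
   Build:  cc -shared -fPIC -o libfakeglx.so fakeglx.c -ldl
   Run:    LD_PRELOAD=$PWD/libfakeglx.so glxgears                     */
#define _GNU_SOURCE   /* for RTLD_NEXT on Linux; built in on Solaris */
#include <stdio.h>
#include <dlfcn.h>
#include <GL/glx.h>

void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
{
    /* Look up the "real" glXSwapBuffers in the next library on the
       search chain (normally libGL.so.) */
    static void (*realSwap)(Display *, GLXDrawable) = NULL;
    if (!realSwap)
        realSwap = (void (*)(Display *, GLXDrawable))
                   dlsym(RTLD_NEXT, "glXSwapBuffers");

    /* A real interposer like VirtualGL would rewrite the arguments
       here -- e.g. substitute the Pbuffer that corresponds to the
       window and read back the rendered frame.  We just log the call. */
    fprintf(stderr, "glXSwapBuffers(drawable = 0x%lx)\n",
            (unsigned long)drawable);

    realSwap(dpy, drawable);
}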