On 3/14/12 12:42 PM, Arthur Huillet wrote:
> My objective is to be able to compare to other solutions (and, if other
> solutions happen to beat VirtualGL, try to figure out why and do something
> about it).
> I heard various people claim "better performance than VirtualGL" when
> presenting their own GPU "virtualization" solution. It's important to be able
> to know if they're right and if so by how much.

Well, I'm personally proud that commercial vendors feel like we're
enough of a gold standard that they have to measure themselves against us.

However, I'm really dubious about such claims unless I know what they
mean by "performance."  I mean, at the end of the day, what is VirtualGL
really doing?  It's marshaling GLX commands, emulating 3D features of
the "remote" X server without requiring said X server to have 3D, and
reading back pixels.  The first two are basically zero-overhead, and the
only company I know of that would be in a position to improve upon the
last one is nVidia (by including VirtualGL-like hooks into libGL, thus
eliminating the need to read back the pixels at all.)  Apart from that,
VirtualGL's performance is entirely dependent on the method by which
images are delivered to the client.  If someone wants to claim "better
performance than TurboVNC" or "better performance than the VGL
Transport", then fair enough.  Show me how much better their algorithm
compresses data or how much less CPU time it uses.  The VGL Transport is a
pure motion-JPEG protocol, so it's always going to be fast, but it's not
always going to be the most efficient in terms of bandwidth usage.  On
low-color output, TurboVNC does a much better job, since it can adapt
the subencoding type to the number of colors in a particular tile, but
that's a more critical capability for TurboVNC, since it's sending both
2D and OpenGL pixels, whereas the VGL Transport is sending only OpenGL
pixels.
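
To make the "adapt the subencoding to the tile" idea concrete, here's a toy
sketch in C.  It is not TurboVNC's actual Tight encoder logic (the function
name and the 24-color cutoff are made up for illustration); it just shows how
counting the distinct colors in a tile can drive the choice between a
palette-style encoding and JPEG:

/* Illustrative only: count the distinct colors in a 32-bit RGBX tile and
   pick a hypothetical subencoding.  TurboVNC's real encoder uses a more
   involved heuristic; this just shows the basic idea of adapting the
   encoding to the tile's color content. */
#include <stdint.h>

typedef enum { ENC_SOLID, ENC_INDEXED, ENC_JPEG } subencoding;

static subencoding pick_subencoding(const uint32_t *tile, int w, int h,
                                    int stride /* in pixels */)
{
    enum { MAX_PALETTE = 24 };          /* arbitrary cutoff for this sketch */
    uint32_t palette[MAX_PALETTE];
    int ncolors = 0;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            uint32_t c = tile[y * stride + x];
            int i;
            for (i = 0; i < ncolors; i++)
                if (palette[i] == c) break;
            if (i == ncolors) {
                if (ncolors == MAX_PALETTE)
                    return ENC_JPEG;     /* too many colors: photographic */
                palette[ncolors++] = c;
            }
        }
    }
    if (ncolors == 1) return ENC_SOLID;  /* flat region: send one color */
    return ENC_INDEXED;                  /* low-color region: palette + bitmap */
}

A solid or low-color tile compresses losslessly into a handful of bytes,
whereas a photographic tile is better off going through JPEG; that is the
sort of adaptivity the pure motion-JPEG VGL Transport gives up in exchange
for simplicity.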

However, regardless of all of that, librrfaker, in and of itself, is
typically not where the main bottleneck is.  The main bottleneck is
usually either the network (if on a WAN) or the client CPU (if on a LAN.)

VirtualGL is the way it is largely so it can avoid being
hardware-dependent.  I could envision an X proxy solution that did all
of its 2D rendering to a PBO and provided a DRI driver for bringing in
OpenGL pixels, then some form of hook for a VNC server to use to send
updates out.  However, that X proxy would be useless on systems that
didn't have 3D hardware.  I could also envision VirtualGL becoming part
of the 3D driver infrastructure, but of course that would be a
proprietary solution at the moment.
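
For concreteness, this is roughly what pulling rendered pixels back through a
pixel buffer object looks like at the OpenGL level.  It's a minimal sketch,
not VirtualGL's actual readback code, and it assumes a current context with
OpenGL 2.1+ (or ARB_pixel_buffer_object) and an initialized GLEW (or
equivalent) for the buffer-object entry points:

/* Minimal sketch of reading back the back buffer through a PBO. */
#include <GL/glew.h>
#include <string.h>

void readback_via_pbo(int width, int height, unsigned char *dest)
{
    GLuint pbo;
    size_t size = (size_t)width * height * 4;   /* RGBA, 8 bits/channel */

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, size, NULL, GL_STREAM_READ);

    /* With a pack PBO bound, glReadPixels() returns immediately; the
       transfer into the buffer object proceeds asynchronously. */
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    /* Mapping the buffer synchronizes with the pending transfer. */
    void *ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (ptr) {
        memcpy(dest, ptr, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }

    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}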


>> Statements like "VirtualGL's readback reduces performance by 20%" aren't
>> very meaningful.  20% relative to what?  It's a moot point unless we can
>> somehow reduce that overhead and still provide the same functionality.
>> It would be like saying "my car's tires reduce gas mileage by 20%."
> 
> It's funny you should mention it, as my previous field of work was
> in tire design. (And you don't necessarily want your tires to make you "save
> gas"... because if they do it usually means they'll have poorer grip.)

LOL


> In my opinion, *measuring* performance of a remote graphics solution is a
> lost cause. There are too many variables, different use cases, and the mere
> definition of "acceptable situation" (in terms of number of updates per
> second, bandwidth and so on) depends on the physical person using the
> solution. I often get the question "how many sessions can I put at the same
> time on hardware XXX?", and there's no generic answer based on a magical
> formula.

On this we definitely agree.  At Sun, we actually did do some empirical
studies on this so we could come up with rough estimates of users per
GPU that were appropriate for specific applications, but those numbers
generally did not apply outside of the application in question.


> The reason why I want to do this is for comparison purposes (and I'll do the
> FBO vs. pbuffer test as well, just to get an idea).
> 
> Also, in terms of "GPU sharing", is there any kind of measurement that can be
> conducted to determine for example how much of the GPU a given session is
> using? I know about nvidia-smi, but I don't believe such a tool gives a usable
> metric.

I've been looking for such a tool for years.  You can monitor VRAM usage
straightforwardly using CUDA, but I think what we really both want is
the GPU equivalent of top.
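
For reference, the CUDA approach I mean is just the runtime's memory query.
A minimal sketch (build with nvcc); note that it reports memory for the whole
device, not per session, which is part of why it falls short of a real
"GPU top":

/* Query free/total device memory via the CUDA runtime. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int ndev = 0;
    if (cudaGetDeviceCount(&ndev) != cudaSuccess || ndev == 0) {
        fprintf(stderr, "No CUDA devices found\n");
        return 1;
    }
    for (int d = 0; d < ndev; d++) {
        size_t freeMem = 0, totalMem = 0;
        cudaSetDevice(d);
        cudaMemGetInfo(&freeMem, &totalMem);
        printf("GPU %d: %zu MB used of %zu MB\n", d,
               (totalMem - freeMem) >> 20, totalMem >> 20);
    }
    return 0;
}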
