On May 4, 2012, at 5:16 AM, Florian Kemmer <florian.kem...@hs-furtwangen.de> wrote:

> Hello everybody,
> 
> first of all: I have read the post "VirtualGL performance impact" [1] and also
> seen the conclusion that benchmarking is a "lost cause".
> 
> However, this case is somewhat different. I do not (well, not really) care about
> theoretical details or performance impact. I simply want to be able to compare
> multiple different remote graphics solutions, i.e. which product is capable of
> delivering the best performance (in FPS) to the end user. As these tests should
> be done under different network conditions, I'd like to automate them as far as
> possible (bash scripts, etc.).
> 
> For the beginning I decided to try VirtualGL as it comes with a few tools for
> that (glxspheres64, tcbench, ...). However, I'm having some trouble in
> understanding and using these tools -- and maybe the whole thing.
> 
> My testing environment consists of two computers ("server" and "client")
> connected via gigabit LAN. Setting up VirtualGL and TurboVNC was
> straightforward, and both are up and running. Both machines run CentOS 6.2 with
> the latest versions of VGL (2.3) and TVNC (1.0.90).
> 
> glxspheres:
> When I remotely run glxspheres, it shows very "random" FPS numbers. Random in
> the sense that it sometimes reports what is really shown via VNC (e.g. about
> 3.x FPS with reduced network throughput) and sometimes a much higher value
> under the same network conditions (probably the number of frames that are
> really being rendered on the hardware). I have not yet found a way to enforce
> giving the first values, which I'd be more interested in. *Sometimes* it
> *seems* that after changing the display options of the VNC viewer (especially
> when switching to "Lossless Tight") the values tend to represent the displayed
> FPS.
> What is the intended behavior here, and/or how do I get the number of FPS that
> are seen in VNC? If possible at all.

VNC basically delivers updates whenever the client requests them, so this can 
effectively act as a sort of frame spoiling mechanism under certain 
circumstances. Refer to the VirtualGL 2.1/TurboVNC 0.4 performance study under 
"Reports" on VirtualGL.org for a more long-winded explanation, but for your 
purposes, it is sufficient to just understand that both VirtualGL and TurboVNC 
can spoil frames if any stage of the image delivery pipeline isn't fast enough 
to keep up with the 3D application. The circumstances under which TurboVNC does 
this can be a little hard to identify sometimes.

However, what you're most interested in is the frame rate as perceived by the 
client, so disabling frame spoiling in VirtualGL ('vglrun -sp') and using 
TCBench should be sufficient to measure that.
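A minimal sketch of that measurement, assuming the default /opt/VirtualGL/bin install location (adjust the paths for your installation):

```shell
# On the server, inside the TurboVNC session:
# -sp disables VirtualGL's frame spoiling, so every rendered frame is delivered
# to the X server rather than being dropped when the pipeline falls behind.
vglrun -sp /opt/VirtualGL/bin/glxspheres64

# On the client, measure the frame rate actually drawn in the viewer window.
# TCBench will prompt you to click on the window you want to benchmark.
/opt/VirtualGL/bin/tcbench
```

With spoiling disabled, the glxspheres-reported FPS and the TCBench-measured FPS should converge on the rate the client actually perceives.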

Arthur is right that VNC doesn't really know what a frame is, but when running 
a full-screen 3D application in VirtualGL, each frame should be sent as a 
single RFB update, so updates/second and frames/second are almost always the 
same in this case.

You can measure updates/second in the TurboVNC viewer very simply by setting 
the TVNC_PROFILE environment variable to 1.
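For example, a sketch assuming a TurboVNC session on display :1 of a host named myserver (both placeholders for your own setup):

```shell
# Launch the TurboVNC viewer with per-update profiling statistics enabled;
# the viewer will report metrics such as updates/second while it runs.
TVNC_PROFILE=1 vncviewer myserver:1
```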

> 
> tcbench:
> As the FPS from glxspheres aren't "usable" (at least not by me), I wanted to
> use tcbench. Unfortunately, it sometimes (~95% of the cases?) ignores all
> parameters, such as sample rate or benchmark runtime. Just starting it two
> times in a row, and this might occur. I tried different versions of tcbench;
> every time the same.
> Secondly, I did not find a way to automatically select a window to be
> benchmarked. As stated above, I'd like to automate it as much as possible.
> Manually selecting windows isn't exactly what I wish ;)

I'm not sure why it would be ignoring parameters. I've certainly never seen 
that. You can automatically start TCBench if you know the X window ID you want 
to measure. That's about as automated as it gets, though. Benchmarking remote 
display systems is tricky, and I caution against doing it without keeping a 
keen eye on what is happening. You can't just blindly report frames/second and 
expect it to be meaningful. You have to also report metrics such as 
"CPU-limited fps" (for both client and server) and "network-limited fps", which 
show where the real bottlenecks are.
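On the reporting side, the per-run numbers still have to be collected and summarized per test condition (the X window ID needed for non-interactive TCBench runs can be looked up with a standard tool such as xwininfo). A minimal sketch using stand-in values:

```shell
# Stand-in frames/second samples; in a real run these would be the figures
# reported by repeated TCBench runs under one network condition.
printf '10.0\n20.0\n30.0\n' > fps.log

# Summarize the runs so results from different network conditions, and the
# corresponding CPU-limited and network-limited figures, can be compared.
awk '{ sum += $1; n++ } END { if (n) printf "mean fps: %.2f\n", sum / n }' fps.log
```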

> 
> If you have any other spontaneous idea on how to set up a benchmark course,
> I'd happily listen to all suggestions :)

First read the aforementioned report, which explains my basic methodology. 
Also read the Performance Measurement sections in the VGL User's Guide.

> 
> Thanks in advance
> Florian
> 
> [1] http://comments.gmane.org/gmane.comp.video.opengl.virtualgl.user/712
> 
> ------------------------------------------------------------------------------
> Live Security Virtual Conference
> Exclusive live event will cover all the ways today's security and 
> threat landscape has changed and how IT managers can respond. Discussions 
> will include endpoint security, mobile security and the latest in malware 
> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> _______________________________________________
> VirtualGL-Users mailing list
> VirtualGL-Users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/virtualgl-users
