It's possible that, for whatever reason, the application isn't getting a
visual with a depth buffer.  You can use 'vglrun +tr' and examine the
trace output to see which FB config ID is being mapped to the X visual
when the application calls glXCreateContext(), then run
'/opt/VirtualGL/bin/glxinfo -c' on display :0 (or whatever display
you're using as your 3D X server) to see what attributes that FB config
has.
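
For example, something along these lines should show both pieces of
information (the application name is just a placeholder, and the exact
trace format may vary between VirtualGL versions):

  vglrun +tr ./your_3d_app
  DISPLAY=:0 /opt/VirtualGL/bin/glxinfo -c

The first command prints the interposed GLX calls as the application
makes them; the second lists the attributes (depth size, color depth,
visual class, etc.) of every FB config on the 3D X server, so you can
look up the FB config ID that appears in the trace.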

I just checked in a fix yesterday which works around an issue whereby,
under rare and somewhat hard-to-define circumstances, the nVidia GLX
client library would return a bizarre 128-bit StaticGray FB config
whenever VirtualGL tried to call glXChooseFBConfig() to obtain an FB
config for one of its Pbuffers.  The workaround is currently in the CVS
head and 2.2 stable branch (tag "stablebranch_2_2") if you want to try
it out.

As far as performance goes, I'll give my standard spiel:

VirtualGL's default behavior is to spoil frames so that, in interactive
applications, the 3D rendering on the server keeps pace with user
interaction.  Thus, as long as a sufficient frame rate (typically 15-20
fps or above) is maintained so that the human eye doesn't perceive the
individual frame changes, the application will "feel local".  There are
definite exceptions to this, such as immersive environments in which a
higher frame rate has to be maintained to avoid nausea, but I'm
referring to the normal case of running desktop 3D applications.  In
fact, with frame spoiling, applications are still quite usable even
below 15 fps-- although you will be able to perceive the frames
changing, the rendering still keeps pace with your mouse movements.

It is precisely this frame spoiling mechanism, however, that makes
benchmarks meaningless in a VirtualGL environment, so frame spoiling
must be disabled (via 'vglrun -sp') when running benchmarks.  Otherwise,
the benchmark is going to be running nearly unimpeded on the server, but
only a very small percentage of those frames will actually make it to
the client.  In an interactive application, the frame rate is gated by
the speed of user interaction, so letting the 3D app run unimpeded
doesn't really tell you anything useful.
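
For instance, either of the following should do it (the benchmark
binary here is just a placeholder, and if I recall the option mapping
correctly, VGL_SPOIL=0 is the environment-variable equivalent of -sp):

  vglrun -sp ./your_benchmark
  VGL_SPOIL=0 vglrun ./your_benchmark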

Once frame spoiling is disabled, you can use a 3D benchmark to measure
the frame rate of the entire VGL system.  It's important to understand
what you're measuring, though.  Since VGL is a pipeline, 3D rendering
and pixel readback occur in parallel with compression and transmission
on the server, so in most cases, compression will be the primary
bottleneck limiting the frame rate (depending on the client, you may
find that decompression is the bottleneck instead, and on slow
networks, the network itself can become the bottleneck).  However, I
re-emphasize that you can't simply hold this up next to local rendering
and say "local is 100 fps, but VirtualGL is only 40 fps, thus local is
better."  That is an apples-to-oranges comparison.  Because of frame
spoiling, VGL will feel local, even though the actual number of frames
being drawn in front of the user is not as high as in the local case.
It also goes without saying that anything much higher than 60 fps is
meaningless, since that exceeds the refresh rate of most monitors.
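
If you want to see which stage of the pipeline is the limiter, vglrun
can also print per-frame profiling output on the server side (readback
and compression throughput); if memory serves, the switch is +pr:

  vglrun -sp +pr /opt/VirtualGL/bin/glxspheres -p 1000000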

Thus, the notion of benchmarks in a VirtualGL environment is a bit
different.  Typically, people run 3D benchmarks to quantify whether 3D
rendering will become a bottleneck to user interaction.  In VirtualGL,
however, 3D rendering is usually not the bottleneck, so you need an
understanding of what the other likely bottlenecks are in order to get
meaningful data out of the benchmarks.  You also have to really examine
the user's experience, which is harder to quantify.  We have some
tools, such as TCBench (which measures the frame rate as seen on the
client), that can be helpful there.  It's unavoidable, however, that
nebulous concepts such as "perceptual performance" enter the
conversation.

Further, GLXGears is a horrible benchmark to use under any
circumstances.  It has a very low triangle count and a small window
size, so its execution time is dominated by overhead rather than actual
OpenGL rendering time.  Even when running locally, the numbers it
generates have little correlation to actual 3D performance.  In a
VirtualGL environment, the images it generates are not complex enough to
really give the JPEG compressor a workout (too much solid color, and
most of the window is static from frame to frame).

That's why we have GLXSpheres.  GLXSpheres generates more complicated
images, with a more realistic percentage of change between frames as
well as a more realistic use of color.  It also defaults to a somewhat
larger geometry, and you can increase the number of triangles even
further via command line options to simulate the types of geometry
workloads you may encounter in a real-world application.  It has other
features designed specifically to give VirtualGL a workout.

Try comparing, for instance,

  vglrun -sp /opt/VirtualGL/bin/glxspheres -p 1000000

vs. running

  /opt/VirtualGL/bin/glxspheres -p 1000000

in Mesa, and I think you'll find that the difference between VirtualGL
and Mesa is a lot more than 4x.  :)
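
In case it helps, one way to make reasonably sure the second command
really goes through Mesa's software renderer (assuming a Mesa libGL is
installed) is to force it with Mesa's LIBGL_ALWAYS_SOFTWARE environment
variable:

  LIBGL_ALWAYS_SOFTWARE=1 /opt/VirtualGL/bin/glxspheres -p 1000000

Whether that's needed at all depends on which libGL your X session
picks up by default.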


On 7/15/11 12:51 PM, Mark Howison wrote:
> Hi,
> 
> We've been trying to install VirtualGL with TurboVNC on our GPU cluster that 
> uses NVIDIA M2050 cards. We can successfully run glxgears through vglrun, 
> access the pbuffer, etc. but we see these strange artifacts around the 
> triangles, like in the attached screenshot. If you zoom in, it looks as if 
> the borders of the triangles aren't being rasterized correctly. This happens 
> both when we use the NVIDIA OpenGL driver and with MESA.
> 
> Do you have any ideas what might be causing this? We have all of this 
> installed on a stripped-down CentOS image, with some libraries provided over 
> a parallel file system, so this is by no means a standard environment.
> 
> Also, we are seeing frame rates for glxgears in the 1100-1200 FPS range. Does 
> that sound like reasonable performance? With MESA, we get more like 250 FPS 
> (dual quad-core Nehalems).
> 
> Thanks,
> 
> Mark Howison
> Application Scientist
> Center for Computation & Visualization
> Brown University

_______________________________________________
VirtualGL-Users mailing list
VirtualGL-Users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtualgl-users
