Hi Max.

On 17 February 2015 at 09:31, Max Reimann <[email protected]> wrote:

> I found a working solution by disabling culling alltogether on the main
> camera by setting
> Code:
> setCullingMode(osg::CullStack::NO_CULLING);
>
> I guess its not a very scalable solution but it works :D
>

The problem isn't due to culling; it's due to the combination of the
driver and OS trying to be "clever" by making GL objects that haven't been
used in the previous frame no longer resident on the graphics card.  When
you do eventually need those previously unseen GL objects, the driver then
has to copy them back over.

In your case your graphics card does seem able to hold everything resident,
so the driver shouldn't be removing these unneeded GL objects, but even if
it did remove them the actual cost of copying should be relatively small,
so a big frame drop occurring comes down to a driver/OS problem.  Yes,
it's pretty daft behaviour from the driver/OS, but certain combinations have
historically been bad at this.  Windows seems particularly bad at managing a
large number of GL objects, so I'll take a punt and suggest you are using
Windows.  If you can test other OSes it's a good sanity check, so well worth
doing, as often you'll see peculiarities in the performance that point the
finger at particular places.

Now if you are stuck on Windows with a crappy driver/OS combination that is
introducing these glitches, then you'll need to look at what might be
causing the driver/OS to have a fit.  The biggest performance hit
you typically get is from utilization of virtual memory: if the OS starts
swapping things in/out of main memory, performance goes down very rapidly.
Have a look at the overall memory management of the machine to see how your
app is doing w.r.t. main memory and GPU memory usage, cache misses etc.
I can't recall the name of the tool I have used under Windows, but it was a
Microsoft one.  I'm under Linux right now so can't check.  Perhaps others
will recall its name.  It's a free download.
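Under Linux, for the record, the same check can be done straight from /proc
without any extra tool.  A sketch, assuming a standard Linux /proc
filesystem (the exact field names below are Linux-specific):

```shell
# One-shot look at memory and swap headroom; falling MemAvailable/SwapFree
# while the app runs indicates virtual-memory pressure.
grep -E 'MemTotal|MemAvailable|SwapTotal|SwapFree' /proc/meminfo

# Cumulative swap-in/swap-out page counts; if these rise between two runs
# of this command while the app is up, the OS is actively paging.
grep -E '^pswpin|^pswpout' /proc/vmstat
```

Run the pair once before and once during the glitches and compare.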

The next culprit can be the driver caching so many frames in the OpenGL FIFO
that it consumes too much memory itself, and the behaviour starts becoming
erratic.  Once the OpenGL FIFO fills, the application thread dispatching
data to the FIFO stalls, and on the OSG side you'd see the draw dispatch
times suddenly go up.  Some drivers will try to cache several frames' worth
of data, in theory to help performance, but it can make things really
erratic.  Unfortunately, thanks to the marketing obsession with frame rates
in games benchmarks, drivers all too often get optimized for maximum average
frame rates with v-sync off; there, allowing lots of frames in the FIFO can
help, but if you want an app that hits a solid 60Hz it can work against you.
One way to break this is to add a sync into the swap buffers to prevent the
OSG frame loop putting more than one or two frames' worth of data in the
FIFO at any time.

The svn/trunk version of the OSG now has support for GL fence/sync
objects that you can use for this.  Running osgviewer you can enable it by
adding the --sync option to the command line.

As a general note, I'd suggest using the OSG's on-screen stats to see where
the time is going.  From your description it'll be the driver/OS being
stupid simply because it's trying to be too "clever".

Another thing you can do to investigate the issue is to make sure window
compositors are switched off, as this is another place where the OS/driver
can do stuff that just plain messes things up.

Robert.
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
