I just had a lightbulb go off.

For a long time I've been scratching my head about multi-window performance with the osg examples.  It seemed that whenever I used two (or more) windows, performance suffered for no apparent reason.  For example, rendering the cow (the graphics card equivalent of a drop in the bucket) on a camera configuration that uses two RenderSurfaces caused my frame rate to drop from 60 Hz to 30 Hz (that is, from the full screen refresh rate to half of it).

I've just realized what that reason is.  In a meeting not too long ago with NVIDIA, I was told that their driver imposes a block within SwapBuffers if the application gets two frames ahead.  This was said to be done to "maximize pipelining" within the graphics chip.  I objected to this for various reasons, not the least of which is that I want frame control in the application, not in the graphics driver.

So, what is happening here is that for every frame of an application using two windows, in which we issue SwapBuffers for each window, we are blocked twice waiting for vertical retrace.

Does this ring a bell with anyone else?  I need to do some deeper testing to confirm this, but it is an issue I'd like to take up with NVidia if the ball is in their court here.
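One quick experiment that could confirm or refute this: run the same two-window test with sync-to-vblank disabled in the driver and see whether the frame rate jumps back up.  On the NVIDIA Linux driver this can, as far as I know, be done with the `__GL_SYNC_TO_VBLANK` environment variable; the application name below is just a placeholder.

```shell
# Disable sync-to-vblank for this run (NVIDIA Linux driver).
# If the two-window frame rate returns to the one-window rate,
# the retrace-blocking theory holds.
__GL_SYNC_TO_VBLANK=0 ./your-multi-window-osg-app   # hypothetical binary name
```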

-don
_______________________________________________
osg-users mailing list
[email protected]
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/