Hi,

I'm trying to measure the render time of my application. I define the render 
time as the time between the start of a new frame (the beginning of the frame 
loop) and the point at which all CPU and GPU operations for that frame have 
finished.

However, I want to exclude swapBuffers from the measurement, because I cannot 
turn off VSYNC on my test system and swapBuffers blocks until the VSYNC signal 
arrives, which would invalidate my measurements.

I have already implemented this measurement successfully by attaching a custom 
SwapCallback to my GraphicsContext. Inside the SwapCallback, I first call 
glFinish() to make sure all GPU operations have finished, then take the 
timestamp, and only then call swapBuffers.
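
For reference, here is roughly what that callback looks like (stripped down; 
the class name TimingSwapCallback and the member _lastFinish are just 
placeholders for how I store the timestamp):

#include <osg/GL>               // for glFinish()
#include <osg/GraphicsContext>
#include <osg/Timer>

class TimingSwapCallback : public osg::GraphicsContext::SwapCallback
{
public:
    virtual void swapBuffersImplementation(osg::GraphicsContext* gc)
    {
        glFinish();                                    // block until all GPU work for this frame is done
        _lastFinish = osg::Timer::instance()->tick();  // end-of-render timestamp, taken before the swap
        gc->swapBuffersImplementation();               // perform the actual (possibly VSYNC-blocking) swap
    }

    osg::Timer_t _lastFinish;
};

// attached once per graphics context, e.g.:
// graphicsContext->setSwapCallback(new TimingSwapCallback);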

This works fine as long as the viewer's threading mode is set to 
SingleThreaded. However, I'm using multiple cameras viewing the same scene from 
different angles, so for maximum performance I've set the threading mode to 
ThreadPerCamera. Now I'm getting measurements that no longer make sense (the 
duration is way too short), and I'm not sure what actually causes this or how 
to fix it.

How do I best go about measuring this time?
In pseudocode, it would look something like this:

for(;;)
{
        tp1 = timepoint();            // start of the frame
        // ... OSG renders the scene (multi-threaded) ...
        synchronizeWithOSGThreads();  // <- this is the part I don't know how to do
        glFinish();                   // wait for the GPU to finish the frame
        tp2 = timepoint();
        renderTime = tp2 - tp1;
        swapBuffers();                // excluded from the measurement
}
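
For completeness, this is roughly what my working single-threaded loop looks 
like (a trimmed sketch; TimingSwapCallback is the placeholder class from 
above, and the scene/camera setup is omitted):

#include <osgViewer/Viewer>
#include <osg/Timer>
#include <osg/ref_ptr>

// uses the TimingSwapCallback class shown further up

int main()
{
    osgViewer::Viewer viewer;
    viewer.setThreadingModel(osgViewer::ViewerBase::SingleThreaded);
    // viewer.setSceneData(...);  // scene and camera setup omitted

    viewer.realize();

    // attach the timing callback to every graphics context of the viewer
    osg::ref_ptr<TimingSwapCallback> swapCallback = new TimingSwapCallback;
    osgViewer::Viewer::Contexts contexts;
    viewer.getContexts(contexts);
    for (unsigned int i = 0; i < contexts.size(); ++i)
        contexts[i]->setSwapCallback(swapCallback.get());

    while (!viewer.done())
    {
        osg::Timer_t frameStart = osg::Timer::instance()->tick();
        viewer.frame();   // in SingleThreaded mode the swap callback runs inside frame()
        double renderTime =
            osg::Timer::instance()->delta_s(frameStart, swapCallback->_lastFinish);
        // ... log renderTime ...
    }
    return 0;
}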

Thank you!

Cheers,
Philipp
