Thanks Robert!

I tested two alternate implementations: one with a flat graph where I 
premultiplied the transforms together and used the result in 
AutoTransform::setPosition (so the graph had many AutoTransforms and 1 Geode), 
and another where I kept the original graph but used a separate instance of 
AutoTransform for each point (instead of sharing a single AutoTransform node 
to take advantage of its cached transform).  I was surprised to discover that 
when I zoomed out so that every object would be visible, the second 
implementation was slightly faster, by about 2-3 fps.  According to the 
statistics, the first graph had a shorter cull time (which makes sense to me, 
since there were fewer nodes to traverse), but the second graph more than made 
up for it in the draw traversal.  I can't explain why this would happen.  I 
would think the second graph would have a slower draw time, since there are 
more matrix multiplications to do.  What might cause this?  I am using the 
DrawThreadPerContext threading model on a dual-core CPU.
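
For reference, here is roughly what the two graphs look like.  This is just a 
simplified sketch rather than my actual code; buildFlatGraph, pointMatrices, 
sharedGeode and the ROTATE_TO_SCREEN mode are placeholder names to illustrate 
the structure.

#include <osg/AutoTransform>
#include <osg/Geode>
#include <osg/Group>
#include <osg/Matrixd>
#include <vector>

// Variant 1: flatten the parent transforms into a single world position per
// point, so each AutoTransform sits directly under the root and all of them
// share one Geode.
osg::ref_ptr<osg::Group> buildFlatGraph(const std::vector<osg::Matrixd>& pointMatrices,
                                        osg::Geode* sharedGeode)
{
    osg::ref_ptr<osg::Group> root = new osg::Group;
    for (size_t i = 0; i < pointMatrices.size(); ++i)
    {
        osg::ref_ptr<osg::AutoTransform> at = new osg::AutoTransform;
        at->setAutoRotateMode(osg::AutoTransform::ROTATE_TO_SCREEN);
        at->setPosition(pointMatrices[i].getTrans()); // premultiplied result
        at->addChild(sharedGeode);                    // 1 Geode, many AutoTransforms
        root->addChild(at);
    }
    return root;
}

The second variant keeps the original nested graph and simply creates a new 
osg::AutoTransform for each point instead of sharing a single instance.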

In any case, I am planning to add level-of-detail nodes so these objects are 
not visible when zoomed way out, which means I don't need to improve things 
further just yet.  The performance at my initial zoom level is acceptable for now.
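
Something along these lines is what I have in mind for the LOD step (again 
just a sketch; wrapInLOD is a placeholder name and the 5000-unit cutoff is an 
arbitrary number, not something I have tuned):

#include <osg/LOD>

// Wrap a point's subgraph in an osg::LOD so it is only traversed and drawn
// when the eye is within the given distance of the node's bound.
osg::ref_ptr<osg::LOD> wrapInLOD(osg::Node* pointSubgraph)
{
    osg::ref_ptr<osg::LOD> lod = new osg::LOD;
    lod->addChild(pointSubgraph, 0.0f, 5000.0f); // visible from 0 to 5000 units
    return lod;
}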

... 

Thank you!

Cheers,
Michael

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48869#48869