Hi Thomas,

On Wed, Oct 27, 2010 at 12:12 PM, Thomas Hogarth
<thomas.hoga...@googlemail.com> wrote:
> At the moment I'd say my scene isn't complex enough to really tell,
> everything is in one location and there aren't many geodes. Main reason I'm
> looking into it is the apple docs state that it's a waste of cpu to depth
> sort and I'm just looking for all the extra performance I can find (doing
> computer vision). I won't expect any miracles :).

I wonder if the Apple docs actually mean that it's pointless to depth
sort opaque objects, which is something the OSG doesn't do by default
anyway - it uses state sorting for the opaque bin.
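
To make that concrete, here's a rough, untested sketch of how the bin
sort modes can be inspected or overridden on the osgUtil::RenderBin
prototypes - this is from memory of the API, so do double check the
RenderBin header before relying on it:

#include <osgUtil/RenderBin>

// The "RenderBin" prototype is what opaque geometry ends up in by
// default; it is state sorted, so there is no per-frame depth sort
// there to switch off in the first place.
osgUtil::RenderBin* opaquePrototype =
    osgUtil::RenderBin::getRenderBinPrototype("RenderBin");
if (opaquePrototype)
{
    // SORT_BY_STATE should already be the default; TRAVERSAL_ORDER
    // would skip sorting altogether if you wanted to experiment.
    opaquePrototype->setSortMode(osgUtil::RenderBin::SORT_BY_STATE);
}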

For the transparent bin you still need to render from back to front to
get correct blending, unless the driver somehow knows that transparent
objects need to be rendered back to front.
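
If, after profiling, the depth sort on the transparent bin really did
show up as a cost, you could opt individual subgraphs out of it via
their StateSet - something along these lines (an untested sketch; bin
number 10 is just the conventional transparent bin number), with the
usual caveat that you'll get blending artifacts:

#include <osg/StateSet>
#include <osg/Node>

void configureBins(osg::Node* opaqueSubgraph, osg::Node* transparentSubgraph)
{
    // Opaque geometry stays in the default bin, which is state sorted
    // rather than depth sorted.
    opaqueSubgraph->getOrCreateStateSet()->setRenderingHint(
        osg::StateSet::OPAQUE_BIN);

    // Transparent geometry normally goes into the depth sorted bin:
    transparentSubgraph->getOrCreateStateSet()->setRenderingHint(
        osg::StateSet::TRANSPARENT_BIN);

    // To skip the per-frame depth sort, force a plain state sorted bin
    // instead, accepting whatever blending errors that produces:
    // transparentSubgraph->getOrCreateStateSet()->setRenderBinDetails(10, "RenderBin");
}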

As for looking for an optimization here, I really don't think it's
worth your while - you are attempting premature optimization, deciding
what the bottlenecks are before you've even tested it.  I would
recommend doing actual benchmarks on the target hardware/OS with the
types of datasets you will be using, and then profiling closely to
establish where the bottlenecks actually are.
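
For the benchmarking itself, the on-screen stats are usually the
quickest first step. A minimal desktop-style sketch (the viewer setup
on iOS will differ, but the idea is the same):

#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>
#include <osgDB/ReadFile>

int main(int argc, char** argv)
{
    osg::ArgumentParser arguments(&argc, argv);

    osgViewer::Viewer viewer(arguments);

    // Load a dataset representative of what you'll actually deploy with.
    viewer.setSceneData(osgDB::readNodeFiles(arguments));

    // Press 's' at runtime to cycle through frame rate and cull/draw timings.
    viewer.addEventHandler(new osgViewer::StatsHandler);

    return viewer.run();
}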

Robert.
