Hi Frederic,

> I tried with Texture::setUnRefImageDataAfterApply(false) and it works well. 
> However, as I read about this, texture memory is now duplicated (once in 
> OpenGL and once in OSG). Isn't there a way to do the same thing in OpenGL by 
> sharing the contexts or something like that? As I said, I tried to share a 
> single context in the traits configuration but it didn't work. For now, our 
> application doesn't use too much memory but this could become a problem when 
> we'll be generating visual data from our database!

It's possible to share contexts in the OSG, so I have no clue why it
hasn't worked in your case; there are just too many unknowns. You have
your code and I don't, so you're the only one really in a position to
debug it.
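
For reference, a sketch of how sharing is typically set up via the
Traits (untested, and existingContext here is just a placeholder for
whatever context your app created first):

    #include <osg/GraphicsContext>

    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->x = 0;
    traits->y = 0;
    traits->width = 800;
    traits->height = 600;
    traits->windowDecoration = true;
    traits->doubleBuffer = true;
    // Share GL objects with the previously created context.
    traits->sharedContext = existingContext;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());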

As for the general desirability of sharing GL objects between contexts:
yes, it can reduce memory usage, but it forces you to run the OSG
single threaded, otherwise the two contexts will contend for the same
resources, which deliberately aren't mutex locked for performance
reasons.  There is also only a limited set of cases where
drivers/hardware will actually share OpenGL contexts.
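
If you do go down the sharing route, forcing single threading on the
viewer is just (sketch):

    #include <osgViewer/Viewer>

    osgViewer::Viewer viewer;
    viewer.setThreadingModel(osgViewer::ViewerBase::SingleThreaded);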

> As for the osgUtil::Optimizer, we're not using it anywhere in our code... Is 
> it called by the Viewer class during initialization or something?

The Viewer doesn't run the Optimizer.  Some plugins do run it on their
own data, though.
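
If you ever want to run it yourself on your own scene graph it's just
(sketch, with root standing in for your scene's root node):

    #include <osgUtil/Optimizer>

    osgUtil::Optimizer optimizer;
    optimizer.optimize(root.get());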

> Would there be another way to enable texture sharing for dynamically created 
> rendering contexts while optimizing memory usage?

?? Sounds a bit like a magic wand. OpenGL only allows you to share all
OpenGL objects or none; you don't get to share just some of them.

If you want to tightly manage the OpenGL memory footprint, then the new
texture and buffer object pools are what you'll want to use.
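
A sketch of what this looks like via DisplaySettings (the sizes are in
bytes and purely illustrative; I believe the OSG_TEXTURE_POOL_SIZE and
OSG_BUFFER_OBJECT_POOL_SIZE env vars set the same limits):

    #include <osg/DisplaySettings>

    // Cap GL texture and buffer object memory via the pools.
    osg::DisplaySettings::instance()->setMaxTexturePoolSize(
        64u*1024u*1024u);
    osg::DisplaySettings::instance()->setMaxBufferObjectPoolSize(
        32u*1024u*1024u);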

Robert.