On 9/10/07, Paul Martz <[EMAIL PROTECTED]> wrote:
> Hi Robert -- Assuming you've massaged your database so that the highest-res
> textures only get loaded/displayed at close range, how would you use OSG to
> avoid the (seemingly inevitable) cost of loading the large texture from disk
> and sending it over the bus to the graphics card when it is time to be
> rendered? The naïve approach -- just letting OSG load it -- appears to cause
> a guaranteed frame-rate hit.
>
> I imagine you could break the texture up into smaller chunks and let OSG
> page each one in incrementally as it comes into range.
>
> Or you could preload all textures onto all displays at startup with the
> GLObjectsVisitor, though this could tax system RAM for large datasets.
>
> Could PROXY_TEXTURE help? At least it would tell OpenGL "hey, some big data
> is coming, eventually", but there would still be the disk-access and bus
> transmission cost, so I imagine it's not a solution.
>
> Robert, your thoughts please.
We don't have a texture manager at present, but one could in theory develop one to load-balance memory on the GPU. With GPU memory sizes going up all the time, this is becoming less of a problem.

W.r.t. loading incrementally and compiling OpenGL data: in 2.1.x and SVN there is support for using a background compile context -- this is a pbuffer per graphics window that is shared with the window's graphics context and has its own graphics thread that runs the compiles and object clean-up. One can enable compile contexts via:

  osgviewer mydata.ive --cc

Also see the osg::GraphicsContext::createCompileContext(..) function.

Robert.

_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
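For the programmatic route, a rough sketch of what enabling compile contexts might look like follows. This is an assumption-laden illustration, not Robert's implementation: createCompileContext(..) is the function named above, but its exact signature, and whether you must start the thread yourself via createGraphicsThread(), may differ in your 2.1.x snapshot -- check the GraphicsContext header.

```cpp
// Hypothetical sketch: create a shared compile context per graphics window,
// roughly what "osgviewer mydata.ive --cc" enables. 2.1.x-era API assumed;
// verify signatures against include/osg/GraphicsContext in your checkout.
#include <osgViewer/Viewer>
#include <osgDB/ReadFile>

int main(int, char**)
{
    osgViewer::Viewer viewer;
    viewer.setSceneData(osgDB::readNodeFile("mydata.ive"));
    viewer.realize();  // creates the graphics windows/contexts

    osgViewer::Viewer::Contexts contexts;
    viewer.getContexts(contexts);
    for (osgViewer::Viewer::Contexts::iterator itr = contexts.begin();
         itr != contexts.end(); ++itr)
    {
        // Pbuffer context sharing GL objects with this window; compiles and
        // object clean-up can then run in its own graphics thread.
        osg::GraphicsContext* cc = osg::GraphicsContext::createCompileContext(
            (*itr)->getState()->getContextID());
        if (cc) cc->createGraphicsThread();
    }

    return viewer.run();
}
```

With this in place, newly loaded paged data can be compiled on the compile context's thread rather than stalling the frame loop of the rendering context.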

