Hi John,

I'll add some general comments as others have already provided quite a few
responses.

In OpenGL, when you create a texture object the imagery first sits in the
application's main memory. When you (via osg::Texture2D) pass that image
data to the OpenGL FIFO, the driver then takes the data from the FIFO and
creates an internal representation of it that is suitable for passing
directly to the graphics hardware. This representation may have a
different pixel format depending upon the hardware and the texture
settings you used, and the driver may also create mipmaps for you.
Finally, when the texture is actually needed on the GPU it'll be copied to
local memory on the graphics card. The result of this pipeline is several
copies of your data; it's not a bug, just the way that OpenGL/hardware
manages things.
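
As a rough sketch of the application-side setup that feeds this pipeline
(the file name and texture unit here are just placeholders):

  #include <osg/Geode>
  #include <osg/Texture2D>
  #include <osgDB/ReadFile>

  // First copy: the decoded pixels live in the osg::Image in main memory.
  osg::ref_ptr<osg::Image> image = osgDB::readImageFile("example.png");

  // The texture object; on first apply the image data is handed to the
  // OpenGL FIFO, and the driver then builds its own internal copy from it.
  osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
  texture->setImage(image.get());

  // Attach to a StateSet so it is applied during the draw traversal.
  osg::ref_ptr<osg::Geode> geode = new osg::Geode;
  geode->getOrCreateStateSet()->setTextureAttributeAndModes(
      0, texture.get(), osg::StateAttribute::ON);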

Calling Texture::setUnRefImageDataAfterApply(true) tells the OSG to unref
the image once the data has been passed into the OpenGL FIFO. As long as
no other references to the osg::Image are kept, the image data will then
be deleted, getting rid of one copy of the data, which is why it helps.
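
A sketch of that usage, on the texture from the earlier snippet:

  // Release the osg::Image's pixel data once it has been applied to OpenGL.
  // Only do this if nothing else in the application needs the CPU-side copy.
  texture->setUnRefImageDataAfterApply(true);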

Another aspect to take into account is that files compressed on disk as
.jpeg etc. are all much smaller than they are once loaded into memory.
However, if you use an OpenGL compressed format such as S3TC, the data is
stored in a form that can be passed directly to OpenGL without any
unpacking, so the memory usage stays consistent from disk to OSG memory to
driver memory to GPU memory, with the caveat that generating mipmaps at
runtime will increase the footprint by around 40%.
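
As a sketch of one way to get an S3TC format (the alternative is to
pre-compress the imagery offline, e.g. to .dds, so nothing needs
compressing at runtime):

  // Ask the driver to compress the imagery to S3TC/DXT5 when it is applied;
  // a pre-compressed .dds file avoids even this runtime compression step.
  texture->setInternalFormatMode(osg::Texture::USE_S3TC_DXT5_COMPRESSION);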

Robert.