A specialized tool I'm working on needs to have a 16-bit Alpha channel
for post-render analysis. I render to a screen-sized texture (1920x1080
NPOT) and then read it back to the CPU side and assess the values.
I originally developed it with 8 bits per gun RGBA (32 bits total), where it
works fine but the precision is far too coarse for the analysis. I then switched
my RTT camera's destination texture to use GL_RGBA16:
RTTtex->setInternalFormat( GL_RGBA16 );
RTTtex->setSourceFormat(GL_RGBA);
RTTtex->setSourceType(GL_UNSIGNED_SHORT);
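In case the surrounding setup matters, the whole texture/camera configuration
looks roughly like the consolidated sketch below; the filter choices and the
viewport call are from memory, so treat it as illustrative rather than a
verbatim excerpt:

// Consolidated sketch of the RTT setup (illustrative, not verbatim).
osg::Texture2D* RTTtex = new osg::Texture2D;
RTTtex->setTextureSize( 1920, 1080 );       // screen-sized, NPOT
RTTtex->setInternalFormat( GL_RGBA16 );     // 16 normalized bits per channel
RTTtex->setSourceFormat( GL_RGBA );
RTTtex->setSourceType( GL_UNSIGNED_SHORT );
RTTtex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::NEAREST );
RTTtex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::NEAREST );

osg::Camera* rttCamera = GetPrerenderCamera();   // same accessor used below
rttCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
rttCamera->setViewport( 0, 0, 1920, 1080 );
rttCamera->attach( osg::Camera::COLOR_BUFFER, RTTtex );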
This seems to work lovely, but reading the data back from the texture and
fetching pixels from the resulting image is proving problematic.
I capture with a post-draw callback (m_tex is the RTT texture):
void CameraImageCaptureCallback::operator()( osg::RenderInfo& ri ) const
{
    // Get the texture's image, bind the texture, and read it back.
    osg::Image* image = m_tex->getImage();
    m_tex->apply( *( ri.getState() ) );
    image->readImageFromCurrentTexture( ri.getContextID(), false,
                                        GL_UNSIGNED_SHORT );
}
I _think_ GL_UNSIGNED_SHORT is appropriate here (it was GL_UNSIGNED_BYTE when
I was using regular 8-bit-per-channel rendering).
To run this callback, I do:

osg::ref_ptr< CameraImageCaptureCallback > captureCB;

outputImage = new osg::Image;
outputImage->setPixelFormat( GL_RGBA );
outputImage->setDataType( GL_UNSIGNED_SHORT );   // different when in 8-bit mode

// This is the texture that rendering writes to.
osg::Texture2D* accumTex = GetAccumulationTexture();
accumTex->setImage( outputImage );

captureCB = new CameraImageCaptureCallback( accumTex );
GetPrerenderCamera()->setPostDrawCallback( captureCB.get() );

// Run the viewer one more time to capture to the attached image.
vpEnv.viewer->frame();
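One thing I'm not sure about in the block above: whether I need to explicitly
allocate the image's storage before the capture, or whether
readImageFromCurrentTexture() (re)allocates it for me. If explicit allocation
is expected, I assume it would look something like this, with the dimensions
matching the RTT texture (treat this as a guess on my part):

// Speculative explicit allocation: 1920x1080, 4 channels x 16 bits each.
outputImage->allocateImage( 1920, 1080, 1, GL_RGBA, GL_UNSIGNED_SHORT );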
Later, I examine the image pixel by pixel with:
alpha = lessonOutput->getColor(xLoop, yLoop).a();
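If getColor() simply can't cope with the 16-bit format, I suppose I could walk
the raw data myself; a sketch of what I mean, assuming the captured image
(lessonOutput) really is tightly packed GL_RGBA / GL_UNSIGNED_SHORT:

// Hand-rolled alpha fetch: 4 x GL_UNSIGNED_SHORT per texel, alpha last.
const unsigned short* texel =
    reinterpret_cast< const unsigned short* >( lessonOutput->data( xLoop, yLoop ) );
float alpha16 = texel[ 3 ] / 65535.0f;   // normalized to [0,1] like getColor()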
All of this works dandy in 8-bit mode, but add 8 more bits and it goes to
heck.
This message:
http://forum.openscenegraph.org/viewtopic.php?t=9879
leads me to believe that some parts of OSG may not support GL_RGBA16 properly;
in fact, I don't see it among the tokens handled by
Image::computeNumComponents(), though less-well-known tokens like
GL_RGB16I_EXT are there.
I tried using GL_RGB16I_EXT as my RTT texture format, but OSG fails to set up
the RTT FBO, so I didn't get anywhere with that avenue. I was able to use
GL_RGB16F_ARB, but it seemed to behave very oddly all around, often refusing
to clear the buffer when told to, so I abandoned that too.
Am I missing any steps needed to set up a GL_RGBA16 texture? I got very
confused by all of the formats and internal formats, and by what I needed to
preconfigure in the osg::Image versus what OSG was going to set up for me.
After the readImageFromCurrentTexture call, the Image's pixelFormat is set to
GL_RGBA16, which makes calls like getColor fail because GL_RGBA16 isn't
recognized.
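One workaround I'm considering, though I don't know whether it's legitimate:
re-tagging the image after the readback, on the theory that GL_RGBA16 is
really an internal format and the actual pixel data is plain
GL_RGBA / GL_UNSIGNED_SHORT underneath:

// Speculative workaround: relabel the image so getColor() recognizes it.
// This assumes the pixel data really is laid out as 4 x GL_UNSIGNED_SHORT.
outputImage->setPixelFormat( GL_RGBA );
outputImage->setDataType( GL_UNSIGNED_SHORT );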
Is there a better way to render to a 16-bit int format? Am I just missing
a critical step?
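For what it's worth, the only alternative readback route I can think of is
skipping readImageFromCurrentTexture entirely and attaching the osg::Image
straight to the camera so OSG issues the glReadPixels itself. Would something
like this minimal, untested sketch be expected to work with a GL_RGBA16
color attachment?

// Untested alternative: let OSG read the color buffer back into the image.
osg::ref_ptr< osg::Image > readbackImage = new osg::Image;
readbackImage->allocateImage( 1920, 1080, 1, GL_RGBA, GL_UNSIGNED_SHORT );
GetPrerenderCamera()->attach( osg::Camera::COLOR_BUFFER, readbackImage.get() );
// After the next viewer->frame(), readbackImage should hold the pixels,
// already tagged GL_RGBA / GL_UNSIGNED_SHORT for getColor().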
--
Chris 'Xenon' Hanson, omo sanza lettere. [email protected]
http://www.alphapixel.com/
Training • Consulting • Contracting
3D • Scene Graphs (Open Scene Graph/OSG) • OpenGL 2 • OpenGL 3 • OpenGL 4 •
GLSL • OpenGL ES 1 • OpenGL ES 2 • OpenCL
Digital Imaging • GIS • GPS • Telemetry • Cryptography • Digital Audio •
LIDAR • Kinect • Embedded • Mobile • iPhone/iPad/iOS • Android