Hi All,
Following the osgprerender example, I have configured a CameraNode to
render to a frame buffer object. That works well, and I can display the
captured texture on a rectangle without any problems.
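For context, the setup looks roughly like this (simplified; the function
name, texture size and clear colour are just taken from my own test code):

#include <osg/CameraNode>
#include <osg/Group>
#include <osg/Texture2D>

osg::ref_ptr<osg::Texture2D> createRttCamera(osg::Group* root,
                                             osg::Node*  subgraph,
                                             int texWidth, int texHeight)
{
    // Texture the pre-render camera draws into.
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setTextureSize(texWidth, texHeight);
    texture->setInternalFormat(GL_RGBA);
    texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
    texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);

    // Pre-render camera targeting a frame buffer object, as in osgprerender.
    osg::ref_ptr<osg::CameraNode> camera = new osg::CameraNode;
    camera->setClearColor(osg::Vec4(0.1f, 0.1f, 0.3f, 1.0f));
    camera->setViewport(0, 0, texWidth, texHeight);
    camera->setRenderOrder(osg::CameraNode::PRE_RENDER);
    camera->setRenderTargetImplementation(osg::CameraNode::FRAME_BUFFER_OBJECT);
    camera->attach(osg::CameraNode::COLOR_BUFFER, texture.get());

    camera->addChild(subgraph);
    root->addChild(camera.get());
    return texture;
}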
Now what I would like to do is read back a small part of the frame
buffer object texture into CPU memory. The osgprerender example shows an
implementation where the whole viewport is captured, but I cannot see a
mechanism for specifying a smaller area of the viewport to capture.
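By the whole-viewport capture I mean the osg::Image attachment, which as
far as I can tell always copies the complete colour buffer back to CPU
memory:

#include <osg/Image>

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(texWidth, texHeight, 1, GL_RGBA, GL_UNSIGNED_BYTE);
camera->attach(osg::CameraNode::COLOR_BUFFER, image.get());
// After each frame image->data() holds the full viewport; I can't see
// how to restrict this to a sub-rectangle.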
Does anyone know of a way I can achieve this?
I have tried putting a glReadPixels call in the CameraNode's post-draw
callback, but this does not work. Do I have to set the read buffer with
glReadBuffer, or re-apply the frame buffer object first?
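This is roughly what I tried (the callback name and the hard-coded
rectangle are just placeholders from my test code):

#include <osg/CameraNode>
#include <osg/GL>
#include <vector>

struct ReadSubRegionCallback : public osg::CameraNode::DrawCallback
{
    // Called after the camera has finished drawing into the FBO.
    virtual void operator()(const osg::CameraNode& /*camera*/) const
    {
        const int x = 10, y = 10, width = 64, height = 64; // sub-region to read
        std::vector<unsigned char> pixels(width * height * 4);

        // I had hoped the FBO would still be bound here, so a plain
        // glReadPixels on a sub-rectangle would be enough, but the data
        // I get back is not the rendered image.
        glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
    }
};

// installed with: camera->setPostDrawCallback(new ReadSubRegionCallback);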
Regards,
Andy.