Hi Bruce,
You can subclass osg::Drawable, use a Drawable draw
callback, or use a Camera pre/post draw callback to do your own OpenGL
work. When doing this you'll need to make sure that the OSG knows
about the state you have changed so it can correctly manage state -
this is crucial, as the OSG does lazy state updating to reduce the
number of OpenGL calls, and if you change OpenGL state without telling
it then it'll just get out of sync and won't apply the state when it needs
to. The osg::State object has a number of haveApplied*() methods for
telling it that you've changed state.
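To make that concrete, here's a minimal sketch of the draw-callback route - the callback, the raw GL call, and the texture id member are all hypothetical, but the haveApplied*() calls are the real osg::State methods for keeping the lazy state tracker in sync:

```cpp
#include <osg/Drawable>
#include <osg/State>
#include <osg/RenderInfo>
#include <osg/StateAttribute>
#include <osg/GL>

// Hypothetical draw callback that issues its own OpenGL work and then
// tells osg::State what it changed behind the scene graph's back.
class RawGLDrawCallback : public osg::Drawable::DrawCallback
{
public:
    RawGLDrawCallback(GLuint textureId) : _textureId(textureId) {}

    virtual void drawImplementation(osg::RenderInfo& renderInfo,
                                    const osg::Drawable* drawable) const
    {
        osg::State& state = *renderInfo.getState();

        // Your own OpenGL work, e.g. binding a texture directly:
        glBindTexture(GL_TEXTURE_2D, _textureId);

        // ... texture transfers, custom drawing, etc. ...

        // Let the normal drawable draw as well, if desired.
        drawable->drawImplementation(renderInfo);

        // Tell the state tracker that the texture binding on unit 0
        // changed, so it re-applies it on the next lazy apply rather
        // than assuming its cached state is still current.
        state.haveAppliedTextureAttribute(0, osg::StateAttribute::TEXTURE);
    }

protected:
    GLuint _textureId; // hypothetical: a texture created outside the OSG
};
```

Attach it with drawable->setDrawCallback(new RawGLDrawCallback(id)); there are matching haveAppliedAttribute()/haveAppliedMode() methods for non-texture state.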
Personally I would suggest that the native OSG is actually the
best way to tackle your problem. The OSG has support for textures,
including PBOs and texture subloading, and it also has support for GLSL
shaders, so everything you want to do is very straightforward
using native OSG classes. Yes, it does mean learning about them, but
it'll be a lot easier and less problematic than writing your own
callback and informing the osg::State state tracking
about your state changes.
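A rough sketch of that native route, assuming a geode you already have and an osg::Image filled per-frame by your decoder (the uniform name and shader body are placeholders for your real conversion shader):

```cpp
#include <osg/Image>
#include <osg/Texture2D>
#include <osg/Program>
#include <osg/Shader>
#include <osg/Uniform>
#include <osg/Geode>
#include <osg/PixelBufferObject>

// Set up one video plane as a texture with PBO-backed subloading,
// plus a GLSL program on the geode's state set.
void setupVideoState(osg::Geode* geode)
{
    // Image updated each frame by the decoder; image->dirty() after
    // each update triggers a subload.
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setPixelBufferObject(new osg::PixelBufferObject(image.get()));

    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setImage(image.get());

    // Placeholder fragment shader - yours would combine multiple
    // planes and do the float conversion / colour correction.
    osg::ref_ptr<osg::Program> program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT,
        "uniform sampler2D plane0;\n"
        "void main() { gl_FragColor = texture2D(plane0, gl_TexCoord[0].st); }\n"));

    osg::StateSet* ss = geode->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, texture.get());
    ss->setAttributeAndModes(program.get());
    ss->addUniform(new osg::Uniform("plane0", 0));
}
```

Extra planes are just additional texture units with matching sampler uniforms, and the OSG handles the per-context/per-GPU object management for you.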
Robert.
On Thu, Apr 1, 2010 at 1:48 AM, Bruce Wheaton br...@spearmorgan.com wrote:
I'm moving my backend to OSG, but I'm too chicken to do it all at once.
Where would be the best place to run existing code?
The code is a set of texture transfers (individual planes of videos) and
then some drawing with a shader to convert it to floating point RGB.
The transfer code should:
- run every single frame, before other operations (the resulting textures get used),
- ideally be split into transfer start (PBOs) and transfer completion,
- optionally use a pre-process camera and osg::Uniforms,
- overlap with existing rendering (the tail of the previous draw) - the first phase won't affect textures in use,
- run once per GPU (assuming contexts are shared on each card).
Is there a callback or two that would fit? Is it kosher to do the work in a
draw callback, or is there an earlier point that is in the OpenGL context?
Maybe an example?
Since I know the 'best advice' will probably be to bite the bullet, as a
sanity check, I mention that my code is an ffmpeg video channel that decodes
the video into native planes, then transfers multiple planes and uses a
shader and multiple texturing to convert to floating point and do various
color correction etc. I think fitting that into the existing ImageSequence,
which seems to assume single frames of RGB, would be too time consuming at
the moment.
Regards,
Bruce Wheaton
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org