Hi, thanks to the gurus for all the excellent guidance you provide to us
lesser mortals.

 

I can probably muddle my way through this, but I wanted to ask the opinions
of the osg gurus on my design before getting mired in a less desirable
approach.

 

We have a geometry graph that we want to render to two separate targets:
(1) a bitmap image, for additional automated analysis or for use downstream
in a shader (which will also render to a similarly sized memory image), and
(2) the window context, which shows progress on the display but need not
exactly replicate the underlying hi-resolution memory image, for performance
reasons.  They'll probably have different resolutions, say 1024 x 1024 for
the bitmap and whatever window size the user sets for the display.  I'm
thinking two separate cameras; I've seen the bitmap target set up in the
examples (osgdistortion?), and I've also seen several discussions in other
posts about multiple cameras for multiple "views."  I then need these
cameras under program control, say, iterating a rotation angle in 1-degree
increments and iterating the "distance" (or the corresponding camera
position) over some range and increment, drawing a frame at each step, and
ultimately processing the resulting images with downstream algorithms and
code.  Real-time frame rates are probably not a reasonable expectation for
this analyzer.

 

I want the window view to give low-res feedback about the progress and
correctness of the underlying hi-res memory-image processing.

 

My questions are currently:

1. What's the best way to connect the two cameras so they have the same
position & orientation, only rendering to different targets (e.g., update
the matrix of one from the other in an update traversal, put both under a
transform node in the tree, etc.)?

2. What's the best way to control the camera programmatically (vs. the
default TrackballManipulator), and/or switch between these methods?

 

Is my design sound, or should I pursue some other approach?  If the separate
window context with a different resolution is a problematic design, I can
let go of that, but I think we're committed to the memory-image processing
approach: ultimately we'll need a shader program, and then finally we'll
combine the two memory bitmaps, respecting the transparency of one,
analogous to a specially programmed x-ray view, then extract the transparent
parts and average their color.

 

Thanks,

Bob

 

 

_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
