Hi Daniel,
Thanks for the explanation. I finally understand where you are coming
from, and why things work with FBO and pbuffers but not with the frame buffer.
First up, I'm not inclined to merge the warning output as it stands: it
catches your usage, but would wrongly emit a warning if an end user
wanted to modify and record the results of working on the main screen.
Yes, FBO and pbuffers wouldn't work in that usage case, but then how
would you detect this?
Perhaps one could emit your warning only when the user had actually
requested an FBO or pbuffer but got a frame buffer instead.
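
Roughly along these lines (only a sketch, untested; it assumes whoever
does the check knows both the implementation that was asked for and the
one that was actually used, passed in here as actualImplementation, and
that the enum values are those of osg::CameraNode):

#include <osg/CameraNode>
#include <osg/Notify>

// Sketch only: warn when the user explicitly asked for FBO/pbuffer RTT
// but the stage ended up falling back to the frame buffer.
void warnOnRenderTargetFallback(osg::CameraNode* camera,
                                osg::CameraNode::RenderTargetImplementation actualImplementation)
{
    osg::CameraNode::RenderTargetImplementation requested =
        camera->getRenderTargetImplementation();

    bool requestedOffscreen =
        requested == osg::CameraNode::FRAME_BUFFER_OBJECT ||
        requested == osg::CameraNode::PIXEL_BUFFER_RTT ||
        requested == osg::CameraNode::PIXEL_BUFFER;

    if (requestedOffscreen && actualImplementation == osg::CameraNode::FRAME_BUFFER)
    {
        osg::notify(osg::WARN) << "Requested FBO/pbuffer RTT fell back to FRAME_BUFFER; "
                                  "previous buffer contents are not guaranteed to survive.\n";
    }
}
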
Anyway, there will always be limits to how much the OSG can paper over
the gaps between FBO, pbuffer and frame buffer RTT. If we try to
encode fallbacks for niche usage then the code will just grow more
and more complicated and convoluted, and in the end buggy. So I am
very wary about beginning to go down this route.
Personally I'd be tempted to make the feature you are after only
supported on systems that support FBO or pbuffer. pbuffer is pretty
widely available, and FBO is getting more so as time goes on.
Another thing to bear in mind w.r.t. frame buffer RTT is that if
someone puts a window over the area that you are reading from, the
OpenGL implementation might just give back the contents of the colour
buffer in the form of the overlapping window. There aren't any
workarounds for this; it's always going to be a poor man's solution.
I don't think Producer supports pbuffers under AGL/CGL on OSX yet, but
OSX does support pbuffers, so perhaps this is one avenue to look into -
adding this implementation.
Robert.
On 8/8/06, Daniel Larimer <[EMAIL PROTECTED]> wrote:
All,
Patrick and I have identified a change that we are not sure how to
implement, or whether you want implemented at all. Essentially it boils
down to the difference between pbuffer and FBO rendering versus
FRAME_BUFFER rendering.
When you use FBO or PBUFFER you expect that the current state of the
COLOR, DEPTH, STENCIL, and ALPHA buffers is the same as the last
time you rendered; however, if you are using FRAME_BUFFER mode then
the buffers are no longer in the same condition as you last left
them. This is not a problem if you call glClear() at the start of every
frame, because that puts the buffers in a known state.
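
For concreteness, a typical setup along these lines might look like the
following (a sketch only, using osg::CameraNode method names; not taken
from our code):

#include <osg/CameraNode>
#include <osg/Texture2D>

// Sketch: an RTT camera whose clear mask is 0, so the application expects
// the attached colour buffer to still hold last frame's result.
osg::ref_ptr<osg::CameraNode> makeRttCamera(osg::Texture2D* texture)
{
    osg::ref_ptr<osg::CameraNode> camera = new osg::CameraNode;
    camera->setRenderOrder(osg::CameraNode::PRE_RENDER);

    // With FBO or pbuffer the target is private to this camera, so skipping
    // the clear really does accumulate on top of the previous frame.
    camera->setRenderTargetImplementation(osg::CameraNode::FRAME_BUFFER_OBJECT);
    camera->setClearMask(0);

    camera->setViewport(0, 0, texture->getTextureWidth(), texture->getTextureHeight());
    camera->attach(osg::CameraNode::COLOR_BUFFER, texture);
    return camera;
}
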
The only way to make the behavior appear the same using all three
rendering methods would be to restore the Frame Buffer to the current
state of the texture when the clear mask is 0.
If the user does no updates to the texture then you have two extra
copies... texture to frame buffer and frame buffer to texture.
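
The texture-to-frame-buffer half of that round trip could be as crude as
drawing a screen-aligned textured quad just before the subgraph renders.
A rough, fixed-function GL sketch (colour buffer only - depth would need
a separate mechanism such as a depth texture or glDrawPixels):

#include <osg/GL>

// Sketch: paste the previous frame's texture back into the frame buffer by
// drawing a full-viewport textured quad; the existing FRAME_BUFFER path then
// copies the result back out to the texture after rendering as it already does.
void restoreColourBufferFromTexture(GLuint textureID)
{
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    glPushAttrib(GL_ENABLE_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureID);

    // Full-viewport quad carrying the previous frame's colour contents.
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
    glEnd();

    glPopAttrib();
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}
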
If the user only updates a subset of the texture or is using
accumulation across the entire buffer then this is the EXACT behavior
you NEED.
At the end of the day, the user will probably place a switch above
the camera node to prevent any update from occurring when no change is
made to the texture, which solves the first performance problem.
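
For example (a sketch; rttCamera stands in for the RTT camera node):

#include <osg/Switch>
#include <osg/CameraNode>

// Sketch: keep the RTT camera under a switch so it is only traversed, and
// the texture only re-rendered, on frames where its source data changed.
osg::ref_ptr<osg::Switch> wrapInSwitch(osg::CameraNode* rttCamera)
{
    osg::ref_ptr<osg::Switch> rttSwitch = new osg::Switch;
    rttSwitch->addChild(rttCamera, false);   // start disabled
    return rttSwitch;
}

// When the texture needs updating:    rttSwitch->setChildValue(rttCamera, true);
// Once the update has been rendered:  rttSwitch->setChildValue(rttCamera, false);
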
What are your thoughts? What is the best way to restore the color/depth
buffers?
I have attached a patch that will print out a nice little warning
message if the user attempts to use this combination and shows you
where I think the new code needs to go.
Dan
900a900,904
>
>     if( !(_clearMask & GL_COLOR_BUFFER_BIT) && _camera->getRenderTargetImplementation() == osg::CameraNode::FRAME_BUFFER )
>     {
>         osg::notify(osg::NOTICE)<<"restore old image, we are using FRAME_BUFFER but are not clearing the image so the user expects the old image to be there just like when using FBO or pbuffers. Not currently implemented, sorry\n";
>     }