Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Robert Osfield
Hi Wojtek,

I believe Image::readPixels and Image::allocateImage are valid as they
stand. Changing them because some high-level behaviour goes amiss, due to
it relying upon a parameter that is modified on the fly, suggests to me
that the high-level code is at fault, i.e. the Camera attachments/FBO
setup perhaps needs to be refactored slightly.

Robert.

On Thu, Jun 26, 2008 at 9:33 PM, Wojciech Lewandowski
[EMAIL PROTECTED] wrote:

 Hi Robert,

 I am not sure if this is a bug, but in my opinion Image::allocateImage
 should not change internalTextureFormat, or at least Image::readPixels
 should record and restore internalFormat after calling allocateImage.

I don't follow, are we talking about the format in osg::Image or
the texture?

 Sorry for being imprecise. All the time I was talking about
 osg::Image::_internalTextureFormat member.

 I may submit a fix but I am not sure what the desired course of action is.
 Should I remove the line changing it from Image::allocateImage, or rather
 modify Image::readPixels to preserve it from modification when
 Image::allocateImage is called? See the bits of code that could be
 affected, below.


 osg::Image::allocateImage is called from Image::readPixels, Line 608
 [...]
 void Image::readPixels(int x,int y,int width,int height,
GLenum format,GLenum type)
 {
 allocateImage(width,height,1,format,type);

 glPixelStorei(GL_PACK_ALIGNMENT,_packing);

 glReadPixels(x,y,width,height,format,type,_data);
 }
 [...]

 osg::Image::_internalTextureFormat modified in:
 Image.cpp, Image::allocateImage( int s,int t,int r,  GLenum format,GLenum
 type, int packing), Line 556
 [...]
 if (_data)
 {
 _s = s;
 _t = t;
 _r = r;
 _pixelFormat = format;
 _dataType = type;
 _packing = packing;
 _internalTextureFormat = format;
 }
 [...]


 Image::_internalTextureFormat is used (through getInternalTextureFormat
 accessor method) to select Renderbuffer format:
 FrameBufferObject.cpp,
 FrameBufferAttachment::FrameBufferAttachment(Camera::Attachment
 attachment), Line 365:
 [...]
 osg::Image* image = attachment._image.get();
 if (image)
 {
     if (image->s()>0 && image->t()>0)
     {
         GLenum format = attachment._image->getInternalTextureFormat();
         if (format == 0)
             format = attachment._internalFormat;
         _ximpl = new Pimpl(Pimpl::RENDERBUFFER);
         _ximpl->renderbufferTarget = new osg::RenderBuffer(image->s(),
             image->t(), format);
 [...]


 Cheers,

 Wojtek



 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Wojciech Lewandowski

Robert,

I totally agree that the high-level FBO code could be modified to avoid potential 
glitches with doubling resources.


But on the low-level side of the discussion, I also think that indirect modification of 
internalTextureFormat in readPixels may bring trouble for pieces of code 
relying on Image::getInternalTextureFormat(). I did a quick scan and this 
code includes initialization of FBO Renderbuffers and initialization of 
Textures (obviously).


As far as I understand, Image::setInternalTextureFormat exists to let the 
programmer decide what the internal format of the texture and 
renderbuffer created from this image will be. Right?


So now imagine this case. I want to create a texture with RGBA32F internal 
format from an Image. I create the image and set its internalTextureFormat to 
RGBA32F. To fill the image data I call readPixels from the frame buffer. Then I 
create a Texture from this image and surprisingly end up with an RGBA ubyte 
Texture. So I wanted an RGBA32F texture but got RGBA. I don't think this is OK.
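For illustration, a minimal sketch of that scenario (the sizes, the RTT wiring and the use of Texture2D are assumptions, not code from the example):

// Hypothetical illustration of the problem scenario; not code from the thread.
#include <osg/Image>
#include <osg/Texture2D>

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(512, 512, 1, GL_RGBA, GL_FLOAT);
image->setInternalTextureFormat(GL_RGBA32F_ARB);   // request a float format

// The image is attached to an RTT camera; after the first frame osgViewer
// effectively calls something like:
//     image->readPixels(0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE);
// which calls allocateImage() and silently resets _internalTextureFormat
// back to GL_RGBA.

osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D(image.get());
// A texture or renderbuffer created from the image now picks up GL_RGBA
// instead of the requested GL_RGBA32F_ARB.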


Cheers,
Wojtek


Hi Wojtek,

I believe Image::readPixels and Image::allocateImage are valid as they
stand. Changing them because some high-level behaviour goes amiss, due to
it relying upon a parameter that is modified on the fly, suggests to me
that the high-level code is at fault, i.e. the Camera attachments/FBO
setup perhaps needs to be refactored slightly.

Robert.

On Thu, Jun 26, 2008 at 9:33 PM, Wojciech Lewandowski
[EMAIL PROTECTED] wrote:


Hi Robert,


I am not sure if this is a bug, but in my opinion Image::allocateImage
should not change internalTextureFormat, or at least Image::readPixels
should record and restore internalFormat after calling allocateImage.



I don't follow, are we talking about the format in osg::Image or
the texture?


Sorry for being imprecise. All the time I was talking about
osg::Image::_internalTextureFormat member.

I may submit a fix but I am not sure what the desired course of action is.
Should I remove the line changing it from Image::allocateImage, or rather
modify Image::readPixels to preserve it from modification when
Image::allocateImage is called? See the bits of code that could be
affected, below.


osg::Image::allocateImage is called from Image::readPixels, Line 608
[...]
void Image::readPixels(int x,int y,int width,int height,
   GLenum format,GLenum type)
{
allocateImage(width,height,1,format,type);

glPixelStorei(GL_PACK_ALIGNMENT,_packing);

glReadPixels(x,y,width,height,format,type,_data);
}
[...]

osg::Image::_internalTextureFormat modified in:
Image.cpp, Image::allocateImage( int s,int t,int r,  GLenum format,GLenum
type, int packing), Line 556
[...]
if (_data)
{
_s = s;
_t = t;
_r = r;
_pixelFormat = format;
_dataType = type;
_packing = packing;
_internalTextureFormat = format;
}
[...]


Image::_internalTextureFormat is used (through getInternalTextureFormat
accessor method) to select Renderbuffer format:
FrameBufferObject.cpp,
FrameBufferAttachment::FrameBufferAttachment(Camera::Attachment
attachment), Line 365:
[...]
osg::Image* image = attachment._image.get();
if (image)
{
    if (image->s()>0 && image->t()>0)
    {
        GLenum format = attachment._image->getInternalTextureFormat();
        if (format == 0)
            format = attachment._internalFormat;
        _ximpl = new Pimpl(Pimpl::RENDERBUFFER);
        _ximpl->renderbufferTarget = new osg::RenderBuffer(image->s(),
            image->t(), format);
[...]


Cheers,

Wojtek



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org 


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Robert Osfield
Hi Wojtek,

 So now imagine this case. I want to create a texture with RGBA32F internal
 format from an Image. I create the image and set its internalTextureFormat to
 RGBA32F. To fill the image data I call readPixels from the frame buffer. Then I
 create a Texture from this image and surprisingly end up with an RGBA ubyte
 Texture. So I wanted an RGBA32F texture but got RGBA. I don't think this is OK.

This suggests to me that one needs to pass more info into
Image::readPixels to control the internalTextureFormat, or possibly
configure things so that allocateImage reuses the original setting for
internalTextureFormat if it's a non-default setting.
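A purely hypothetical sketch of that first suggestion; this extra parameter does not exist in osg::Image and is only meant to illustrate the idea:

// Hypothetical extension of the existing signature, not real OSG API:
void Image::readPixels(int x, int y, int width, int height,
                       GLenum format, GLenum type,
                       GLint internalTextureFormat = 0); // 0 = keep current behaviour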

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Wojciech Lewandowski

Hi Robert,


So now imagine this case. I want to create a texture with RGBA32F internal
format from an Image. I create the image and set its internalTextureFormat to
RGBA32F. To fill the image data I call readPixels from the frame buffer. Then I
create a Texture from this image and surprisingly end up with an RGBA ubyte
Texture. So I wanted an RGBA32F texture but got RGBA. I don't think this is 
OK.


This suggests to me that one needs to pass more info into
Image::readPixels to control the internalTextureFormat, or possibly
configure things so that allocateImage reuses the original setting for
internalTextureFormat if it's a non-default setting.


That's exactly what I was asking for. Let's either fix Image::readPixels to 
preserve internalTextureFormat, or Image::allocateImage to not change it.
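For reference, a minimal sketch of the first option, based on the Image::readPixels code quoted earlier in the thread (this is not a submitted patch):

void Image::readPixels(int x, int y, int width, int height,
                       GLenum format, GLenum type)
{
    // remember the format requested via setInternalTextureFormat()
    GLint previousInternalFormat = _internalTextureFormat;

    allocateImage(width, height, 1, format, type);

    // restore it so attaching the image to an FBO/texture keeps the
    // user's choice (e.g. GL_RGBA32F_ARB)
    if (previousInternalFormat != 0)
        _internalTextureFormat = previousInternalFormat;

    glPixelStorei(GL_PACK_ALIGNMENT, _packing);

    glReadPixels(x, y, width, height, format, type, _data);
}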


Cheers,
Wojtek


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Robert Osfield
Hi Wojtek,

On Fri, Jun 27, 2008 at 11:24 AM, Wojciech Lewandowski
[EMAIL PROTECTED] wrote:
 This suggests to me that one needs to pass more info into
 Image::readPixels to control the internalTextureFormat, or possibly
 configure things so that allocateImage reuses the original setting for
 internalTextureFormat if it's a non-default setting.

 That's exactly what I was asking for. Let's either fix Image::readPixels to
 preserve internalTextureFormat, or Image::allocateImage to not change it.

I'm uneasy about changing the existing behaviour as we may well be
fixing one problem for one set of users, but introducing problems for
users that currently rely upon the existing behaviour.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Wojciech Lewandowski

Hi Robert,


I'm uneasy about changing the existing behaviour as we may well be
fixing one problem for one set of users, but introducing problems for
users that currently rely upon the existing behaviour.


As I had my doubts as well, I asked whether allocateImage or readPixels is the 
better place for the modification. Actually, if I were pushed to choose, I would 
rather modify readPixels, because changing internalTextureFormat as a result 
of this call is completely unintuitive. It's even more unexpected if one does 
not even directly call readPixels but only attaches the image to the camera.


My position is that it's better to fix it now than risk that even more users 
will rely on this unexpected side effect. Do as you think is appropriate. 
Since I know about this I am well prepared for all endings ;-)


Cheers,
Wojtek 


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread Wojciech Lewandowski

Thanks Robert,

Attached is Image.cpp modified to not update an already-set 
_internalTextureFormat.


I haven't actually decided upon the merits of merging this yet though,
but it could at least be tested on this specific problem you guys are
seeing.


It does the trick. I put a breakpoint on the line where Renderbuffers are 
created, and now they use exactly the same pixel format as set from 
Image::getInternalTextureFormat().
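For reference, a minimal sketch of the kind of change being tested here, assuming the allocateImage code quoted earlier in the thread (the actual attached Image.cpp may differ):

// Inside Image::allocateImage(int s, int t, int r,
//                             GLenum format, GLenum type, int packing):
if (_data)
{
    _s = s;
    _t = t;
    _r = r;
    _pixelFormat = format;
    _dataType = type;
    _packing = packing;

    // only derive the internal texture format from the pixel format when the
    // user has not already requested one via setInternalTextureFormat()
    if (_internalTextureFormat == 0)
        _internalTextureFormat = format;
}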


Cheers,
Wojtek 


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-27 Thread James Killian


I have started a new thread called "OSG thread profiling results are in!!".
That thread will have those test results.

James Killian
- Original Message - 
From: Cedric Pinson [EMAIL PROTECTED]

To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
Sent: Thursday, June 26, 2008 4:33 PM
Subject: Re: [osg-users] multi threaded with slave camera and fbo


Your mails are black so I need to select the text to read them. Anyway, I am 
interested in all the results you have, if you can post them.


Cedric

James Killian wrote:

 I too find some interesting issues with this thread. :)
 Robert,
 Will this proposal you mention for 2.6 help balance the CPU workload 
 against the GPU I/O bottleneck?
 I've been doing some OSG performance benchmark research on thread 
 synchronization using the Intel Threaded compiler, and so far the results 
 are looking really good except for a 26% over-utilization due to 
 sleeping.  I do want to say awesome job to those responsible for 
 threading, the amount of critical section use looked very good!  All the 
 worker threads also had good profiling results. The ultimate test I want 
 to try today deals with an intentional GPU bottleneck... where I have a 
 quad core that pipes graphics out a PCI graphics card.  If anyone is 
 interested I'll post these test results.  I know now that when using a quad 
 core there is a lack of parallelization (e.g. 25% 85% 15% 15%), but that is 
 a different battle for a different time.
 I do want to get to the bottom of the profiling and determine how well 
 the workload is balanced against the GPU I/O, and see if there is some 
 opportunity for optimization here.


- Original Message -
*From:* Robert Osfield mailto:[EMAIL PROTECTED]
*To:* OpenSceneGraph Users
mailto:osg-users@lists.openscenegraph.org
*Sent:* Thursday, June 26, 2008 7:06 AM
*Subject:* Re: [osg-users] multi threaded with slave camera and fbo

Hi Guys,

I've just skimmed through this thread.  Interesting issues :-)

Wojtek's explanation of the double-buffered SceneViews is spot on, in
this case leading to two FBOs.  Creation of two FBOs for RTT is
something I've been aware of since the inception of
DrawThreadPerContext/CullThreadPerCameraDrawThreadPerContext, but as
yet I haven't had the opportunity to refactor the code to avoid it.

The problem actually stems from the reuse of SceneView to do something
that it was never intended to handle, and in terms of the osgViewer
implementation SceneView was used simply because it was the line of
least resistance, i.e. it worked and could be adapted to help speed up
the implementation of osgViewer, and mostly it's actually worked out
ok, the stop gap has worked out pretty well.

However, it has always been my plan to rewrite osgViewer::Renderer
so that it doesn't use SceneView at all, let alone double buffering
them; instead it's my plan to use a single CullVisitor which
alternately populates a double-buffered RenderStage.   This approach
would be far leaner and less obfuscated, and we should be able to
clean up some of the artefacts as well.   The downside is that by not
using SceneView we'll lose all the stereo rendering support, so instead
we'll need to refactor osgViewer so that stereo rendering is done at
the viewer level, i.e. using slave cameras - this means more coding
work, but the actual end result would be far better both design-wise
as well as in flexibility and performance.

I just need the time to go off and do this work; I might be able to get
it done for 2.6, but spare time certainly isn't something that I'm
blessed with right now.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
mailto:osg-users@lists.openscenegraph.org

http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



--
+33 (0) 6 63 20 03 56  Cedric Pinson mailto:[EMAIL PROTECTED] 
http://www.plopbyte.net



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-26 Thread James Killian

I too find some interesting issues with this thread. :)

Robert,
Will this proposal you mention for 2.6 help balance the CPU workload against 
the GPU I/O bottleneck?

I've been doing some OSG performance benchmark research on thread 
synchronization using the Intel Threaded compiler, and so far the results are 
looking really good except for a 26% over-utilization due to sleeping.  I do 
want to say awesome job to those responsible for threading, the amount of 
critical section use looked very good!  All the worker threads also had good 
profiling results. 

The ultimate test I want to try today deals with an intentional GPU 
bottleneck... where I have a quad core that pipes graphics out a PCI graphics 
card.  If anyone is interested I'll post these test results.  I know now that 
when using a quad core there is a lack of parallelization (e.g. 25% 85% 15% 15%), but 
that is a different battle for a different time.

I do want to get to the bottom of the profiling and determine how well the 
workload is balanced against the GPU I/O, and see if there is some opportunity 
for optimization here.


  - Original Message - 
  From: Robert Osfield 
  To: OpenSceneGraph Users 
  Sent: Thursday, June 26, 2008 7:06 AM
  Subject: Re: [osg-users] multi threaded with slave camera and fbo


  Hi Guys,

  I've just skimmed through this thread.  Interesting issues :-)

  Wojtek's explanation of the double-buffered SceneViews is spot on, in
  this case leading to two FBOs.  Creation of two FBOs for RTT is
  something I've been aware of since the inception of
  DrawThreadPerContext/CullThreadPerCameraDrawThreadPerContext, but as
  yet I haven't had the opportunity to refactor the code to avoid it.

  The problem actually stems from the reuse of SceneView to do something
  that it was never intended to handle, and in terms of the osgViewer
  implementation SceneView was used simply because it was the line of
  least resistance, i.e. it worked and could be adapted to help speed up
  the implementation of osgViewer, and mostly it's actually worked out
  ok, the stop gap has worked out pretty well.

  However, it has always been my plan to rewrite osgViewer::Renderer
  so that it doesn't use SceneView at all, let alone double buffering
  them; instead it's my plan to use a single CullVisitor which
  alternately populates a double-buffered RenderStage.   This approach
  would be far leaner and less obfuscated, and we should be able to
  clean up some of the artefacts as well.   The downside is that by not using
  SceneView we'll lose all the stereo rendering support, so instead
  we'll need to refactor osgViewer so that stereo rendering is done at
  the viewer level, i.e. using slave cameras - this means more coding
  work, but the actual end result would be far better both design-wise
  as well as in flexibility and performance.

  I just need the time to go off and do this work; I might be able to get
  it done for 2.6, but spare time certainly isn't something that I'm
  blessed with right now.

  Robert.
  ___
  osg-users mailing list
  osg-users@lists.openscenegraph.org
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-26 Thread Wojciech Lewandowski

Hi Robert,

I am not sure if you read the piece where I mentioned that 
Image::allocateImage resets Image::internalTextureFormat to the image pixel 
format. Image::allocateImage is called from Image::readPixels, so calling 
readPixels also changes the internal format. This was the main culprit: the 
second FBO for the even SceneView was created with a byte renderbuffer despite the 
fact that the first FBO was initialized to RGBA32F. That happened because, when 
an image is attached to an RTT camera, the FBO Renderbuffer format is selected 
from this image's internalTextureFormat. In our example internalTextureFormat 
was initially set to RGBA32F, so the first FBO Renderbuffer was initialized to 
float, but readPixels caused a reset to RGBA. So in the next frame, when the second FBO 
was created, it was initialized with the standard 32-bit ubyte format.


I am not sure if this is a bug, but in my opinion Image::allocateImage 
should not change internalTextureFormat, or at least Image::readPixels should 
record and restore internalFormat after calling allocateImage.


Cheers,
Wojtek.


Hi Guys,

I've just skimmed through this thread.  Interesting issues :-)

Wojtek's explanation of the double-buffered SceneViews is spot on, in
this case leading to two FBOs.  Creation of two FBOs for RTT is
something I've been aware of since the inception of
DrawThreadPerContext/CullThreadPerCameraDrawThreadPerContext, but as
yet I haven't had the opportunity to refactor the code to avoid it.

The problem actually stems from the reuse of SceneView to do something
that it was never intended to handle, and in terms of the osgViewer
implementation SceneView was used simply because it was the line of
least resistance, i.e. it worked and could be adapted to help speed up
the implementation of osgViewer, and mostly it's actually worked out
ok, the stop gap has worked out pretty well.

However, it has always been my plan to rewrite osgViewer::Renderer
so that it doesn't use SceneView at all, let alone double buffering
them; instead it's my plan to use a single CullVisitor which
alternately populates a double-buffered RenderStage.   This approach
would be far leaner and less obfuscated, and we should be able to
clean up some of the artefacts as well.   The downside is that by not using
SceneView we'll lose all the stereo rendering support, so instead
we'll need to refactor osgViewer so that stereo rendering is done at
the viewer level, i.e. using slave cameras - this means more coding
work, but the actual end result would be far better both design-wise
as well as in flexibility and performance.

I just need the time to go off and do this work; I might be able to get
it done for 2.6, but spare time certainly isn't something that I'm
blessed with right now.

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org 


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-26 Thread Robert Osfield
Hi Wojtek,

On Thu, Jun 26, 2008 at 5:22 PM, Wojciech Lewandowski
[EMAIL PROTECTED] wrote:
 I am not sure if you read the piece where I mentioned that
 Image::allocateImage resets Image::internalTextureFormat to the image pixel
 format. Image::allocateImage is called from Image::readPixels, so calling
 readPixels also changes the internal format. This was the main culprit: the
 second FBO for the even SceneView was created with a byte renderbuffer despite the
 fact that the first FBO was initialized to RGBA32F. That happened because, when
 an image is attached to an RTT camera, the FBO Renderbuffer format is selected
 from this image's internalTextureFormat. In our example internalTextureFormat
 was initially set to RGBA32F, so the first FBO Renderbuffer was initialized to
 float, but readPixels caused a reset to RGBA. So in the next frame, when the second FBO
 was created, it was initialized with the standard 32-bit ubyte format.

I did spot this, but didn't really have anything to add.  Clearly it's
a bug, but without putting time into fully understanding the failure
mechanism I can't give any guidance.  With all the build problems
kicking off this week I'm a bit too thinly stretched to chase this
up.

 I am not sure if this is a bug, but in my opinion Image::allocateImage
 should not change internalTextureFormat, or at least Image::readPixels should
 record and restore internalFormat after calling allocateImage.

I don't follow, are we talking about the format in osg::Image or
the texture?

Robert.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-26 Thread Wojciech Lewandowski


Hi Robert,

 I am not sure if this is a bug, but in my opinion Image::allocateImage
 should not change internalTextureFormat, or at least Image::readPixels
 should record and restore internalFormat after calling allocateImage.

I don't follow, are we talking about the format in osg::Image or
the texture?

Sorry for being imprecise. All the time I was talking about
osg::Image::_internalTextureFormat member.

I may submit a fix but I am not sure what the desired course of action is.
Should I remove the line changing it from Image::allocateImage, or rather
modify Image::readPixels to preserve it from modification when
Image::allocateImage is called? See the bits of code that could be
affected, below.


osg::Image::allocateImage is called from Image::readPixels, Line 608
[...]
void Image::readPixels(int x,int y,int width,int height,
   GLenum format,GLenum type)
{
allocateImage(width,height,1,format,type);

glPixelStorei(GL_PACK_ALIGNMENT,_packing);

glReadPixels(x,y,width,height,format,type,_data);
}
[...]

osg::Image::_internalTextureFormat modified in:
Image.cpp, Image::allocateImage( int s,int t,int r,  GLenum format,GLenum
type, int packing), Line 556
[...]
if (_data)
{
_s = s;
_t = t;
_r = r;
_pixelFormat = format;
_dataType = type;
_packing = packing;
_internalTextureFormat = format;
}
[...]


Image::_internalTextureFormat is used (through getInternalTextureFormat
accessor method) to select Renderbuffer format:
FrameBufferObject.cpp,
FrameBufferAttachment::FrameBufferAttachment(Camera::Attachment
attachment), Line 365:
[...]
osg::Image* image = attachment._image.get();
if (image)
{
    if (image->s()>0 && image->t()>0)
    {
        GLenum format = attachment._image->getInternalTextureFormat();
        if (format == 0)
            format = attachment._internalFormat;
        _ximpl = new Pimpl(Pimpl::RENDERBUFFER);
        _ximpl->renderbufferTarget = new osg::RenderBuffer(image->s(),
            image->t(), format);
[...]


Cheers,

Wojtek



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-25 Thread Wojciech Lewandowski

Hi Cedric,


If someone has some clue or advise to dig it


Sorry, I had no time to look at the repro you created. I looked at your 
earlier modified prerender example though. I got curious and started to debug it. 
I think I found some additional important circumstances and a workaround that may 
help you. But the topic is quite mixed and complex and I may have problems 
explaining it. But here it goes:


Both the DrawThreadPerContext and CullThreadPerCameraDrawThreadPerContext modes
use the osgViewer::Renderer thread with double-buffered SceneViews.
SingleThreaded and CullDrawThreadPerContext use a single SceneView for
rendering. (CullDrawThreadPerContext also uses Renderer but only with one
SceneView; see the osgViewer::Renderer::cull_draw method in comparison to
osgViewer::Renderer::draw and osgViewer::Renderer::cull.)

Double-buffered SceneViews mean that there are two interleaved SceneViews
performing cull and draw operations for subsequent odd/even frames. These
two SceneViews share some resources but may also create some separate
resources. For example, if a texture is attached to an RTT camera, the two 
SceneViews will create two separate FBOs for this camera, but these FBOs 
will share the camera texture. But when you attach an image to the RTT camera, 
each of these FBOs will create a separate render buffer and will read pixels 
to the camera image from that buffer.
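As an aside, a hedged sketch of the two attachment styles described above; the camera, texture and image objects are assumed to be set up elsewhere and are not taken from the example:

// Sketch only; 'camera', 'texture' and 'image' are assumed to exist already.
camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

// Texture attachment: both double-buffered SceneView FBOs render into the
// same shared texture object.
camera->attach(osg::Camera::COLOR_BUFFER, texture.get());

// Image attachment: each SceneView FBO gets its own renderbuffer and the
// result is copied back into the shared image via readPixels().
camera->attach(osg::Camera::COLOR_BUFFER, image.get());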


What seems to be the case in your modified osgprerender is that there is
some problem with refreshing the image in one of these SceneViews. I 
ran your example through gliIntercept and found something really weird. 
The first SceneView FBO creates a Renderbuffer with RGBA_32F format but the second 
SceneView creates a RenderBuffer with RGBA format. So you end up with a 
situation where odd RTT camera frames are rendered into a float framebuffer but 
even frames are rendered into a ubyte framebuffer. Apparently readPixels from 
the float buffer fails somehow and only reading pixels from ubyte works as intended.


I got curious why the first SceneView FBO uses a float buffer but the second uses a 
ubyte buffer. I think the answer is the following: apparently the first frame drawn 
by the prerender RTT camera proceeds before the rendered texture is initialized and 
applied to draw the final scene. When the first FBO is created, its render buffer is 
based on the initial image internal format (RGBA_32F).  The FBO is built and used to 
render the first frame and then its contents are read into the image. Then the main 
camera draws the scene using a texture initialized from the updated image. When this 
texture gets applied for the first time, the image's internal format gets 
changed to the texture format (RGBA), and thus the second FBO is created using this 
different format.


So we end up with odd prerender frames rendered into an RGBA_32F buffer and 
even frames rendered into an RGBA byte buffer. But this does not explain why 
readPixels produces such visually different results. It looks like there might 
be an additional bug in OSG or OpenGL in reading pixels from an RGBA_32F 
framebuffer.


Now time for the conclusion. I don't have much time to dig into this further, and 
see why readPixels fails; maybe I will investigate this some other day.  So I 
don't have a real fix, but you may try a simple workaround: set the initial 
internal image format to RGBA instead of RGBA_32F. I did that and it seemed 
to remove the discrepancy. Alternatively, make sure that the texture built from the 
image has its internal format set to RGBA_32F. But I did not try this 
option.
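A hedged sketch of those two workarounds; the identifiers (image, texture, tex_width, tex_height) are illustrative, and the second option is explicitly untested in this thread:

// Workaround 1: allocate the image in plain RGBA so both SceneView FBOs
// end up with the same ubyte renderbuffer format.
image->allocateImage(tex_width, tex_height, 1, GL_RGBA, GL_UNSIGNED_BYTE);

// Workaround 2 (untested here): pin the texture's internal format so it no
// longer follows the image's internalTextureFormat after readPixels resets it.
texture->setInternalFormatMode(osg::Texture::USE_USER_DEFINED_FORMAT);
texture->setInternalFormat(GL_RGBA32F_ARB);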


Cheers,
Wojtek





J.P. Delport wrote:

Hi,

for our current app we use singlethreaded. FBO is a requirement
because of multiple render targets.

Best would be to fix multithreaded and FBO. For this we will need
small test apps that reliably trigger errors.

Problem is that I think most people are unsure whether they are
abusing OSG (not using the library correctly, not setting dynamic
correctly, not blocking correctly...) or whether it is a bug.

jp

Cedric Pinson wrote:

 What do you use to have a robust solution? Maybe I should just use
 something different than FBO?

Cedric

J.P. Delport wrote:

Hi,

Cedric Pinson wrote:

Hi,

 I would like to know if others have found some strange issues with
 multithreading and rendering a slave camera to an FBO.


Yes, there have been quite a few discussions about multithreaded and
fbo in the recent months, but AFAIK nobody has put a finger on the
exact problem yet.

Attached is a simple modded version of osgprerender that also
displays something strange in multithreaded. I'm not sure if it is
related though.

run with:
./osgprerender --image

The image flashes every second frame for some reason. (Turn sync to
vblank on so it does not flash too fast.)

If, at the bottom of the .cpp, one enables the single threaded
option, the flashing disappears.

I have tried setting properties to dynamic on scene nodes, but it
does not seem to help, or I am missing a critical one.

jp




___
osg-users 

Re: [osg-users] multi threaded with slave camera and fbo

2008-06-25 Thread Cedric Pinson

Well, thank you for helping.
You gave me a lot of information, I will dig into it.

Cedric

Wojciech Lewandowski wrote:

Hi Cedric,


If someone has some clue or advise to dig it


Sorry, I had no time to look at the repro you created. I looked at your 
earlier modified prerender example though. I got curious and started to debug it. 
I think I found some additional important circumstances and a workaround 
that may help you. But the topic is quite mixed and complex and I may have 
problems explaining it. But here it goes:


Both the DrawThreadPerContext and CullThreadPerCameraDrawThreadPerContext modes
use the osgViewer::Renderer thread with double-buffered SceneViews.
SingleThreaded and CullDrawThreadPerContext use a single SceneView for
rendering. (CullDrawThreadPerContext also uses Renderer but only with one
SceneView; see the osgViewer::Renderer::cull_draw method in comparison to
osgViewer::Renderer::draw and osgViewer::Renderer::cull.)

Double-buffered SceneViews mean that there are two interleaved SceneViews
performing cull and draw operations for subsequent odd/even frames. These
two SceneViews share some resources but may also create some separate
resources. For example, if a texture is attached to an RTT camera, the two 
SceneViews will create two separate FBOs for this camera, but these FBOs 
will share the camera texture. But when you attach an image to the RTT camera, 
each of these FBOs will create a separate render buffer and will read pixels 
to the camera image from that buffer.


What seems to be the case in your modified osgprerender is that there is
some problem with refreshing the image in one of these SceneViews. I 
ran your example through gliIntercept and found something really weird. 
The first SceneView FBO creates a Renderbuffer with RGBA_32F format but 
the second SceneView creates a RenderBuffer with RGBA format. So you end 
up with a situation where odd RTT camera frames are rendered into a float 
framebuffer but even frames are rendered into a ubyte framebuffer. 
Apparently readPixels from the float buffer fails somehow and only 
reading pixels from ubyte works as intended.


I got curious why the first SceneView FBO uses a float buffer but the second 
uses a ubyte buffer. I think the answer is the following: apparently the first 
frame drawn by the prerender RTT camera proceeds before the rendered texture 
is initialized and applied to draw the final scene. When the first FBO is 
created, its render buffer is based on the initial image internal format 
(RGBA_32F).  The FBO is built and used to render the first frame and then its 
contents are read into the image. Then the main camera draws the scene using a 
texture initialized from the updated image. When this texture gets applied 
for the first time, the image's internal format gets changed to the texture 
format (RGBA), and thus the second FBO is created using this 
different format.


So we end up with odd prerender frames rendered into an RGBA_32F buffer 
and even frames rendered into an RGBA byte buffer. But this does not 
explain why readPixels produces such visually different results. It looks 
like there might be an additional bug in OSG or OpenGL in reading pixels 
from an RGBA_32F framebuffer.


Now time for the conclusion. I don't have much time to dig into this 
further, and see why readPixels fails; maybe I will investigate this some 
other day.  So I don't have a real fix, but you may try a simple 
workaround: set the initial internal image format to RGBA instead of 
RGBA_32F. I did that and it seemed to remove the discrepancy. 
Alternatively, make sure that the texture built from the image has its 
internal format set to RGBA_32F. But I did not try this option.


Cheers,
Wojtek





J.P. Delport wrote:

Hi,

for our current app we use singlethreaded. FBO is a requirement
because of multiple render targets.

Best would be to fix multithreaded and FBO. For this we will need
small test apps that reliably trigger errors.

Problem is that I think most people are unsure whether they are
abusing OSG (not using the library correctly, not setting dynamic
correctly, not blocking correctly...) or whether it is a bug.

jp

Cedric Pinson wrote:

What do you use to have a robust solution? Maybe I should just use
something different than FBO?

Cedric

J.P. Delport wrote:

Hi,

Cedric Pinson wrote:

Hi,

I would like to know if others have found some strange issues with
multithreading and rendering a slave camera to an FBO.


Yes, there have been quite a few discussions about multithreaded and
fbo in the recent months, but AFAIK nobody has put a finger on the
exact problem yet.

Attached is a simple modded version of osgprerender that also
displays something strange in multithreaded. I'm not sure if it is
related though.

run with:
./osgprerender --image

The image flashes every second frame for some reason. (Turn sync to
vblank on so it does not flash too fast.)

If, at the bottom of the .cpp, one enables the single threaded
option, the flashing disappears.

I have tried setting properties to dynamic on scene nodes, but it
does not seem to help, or I am missing a critical one.

jp


Re: [osg-users] multi threaded with slave camera and fbo

2008-06-25 Thread Wojciech Lewandowski

Hi Cedric,

I just found one more bit of info. The image's internalTextureFormat gets 
reset by Image::allocateImage, called from Image::readPixels, when the RTT camera 
buffer contents are read into the image after the first draw. So this does not 
happen when the texture is applied for the final scene draw.


I am not sure if resetting the internal format from GL_RGBA32F_ARB to GL_RGBA 
should not be considered a bug?


However, it still does not explain what happens with the image during and after 
readPixels gets called when the render buffer is GL_RGBA32F_ARB.


Cheers,
Wojtek



Well, thank you for helping.
You gave me a lot of information, I will dig into it.

Cedric

Wojciech Lewandowski wrote:

Hi Cedric,


If someone has some clue or advise to dig it


Sorry, I had no time to look at the repro you created. I looked at your 
earlier modified prerender example though. I got curious and started to debug it. 
I think I found some additional important circumstances and a workaround that may 
help you. But the topic is quite mixed and complex and I may have problems 
explaining it. But here it goes:


Both the DrawThreadPerContext and CullThreadPerCameraDrawThreadPerContext modes
use the osgViewer::Renderer thread with double-buffered SceneViews.
SingleThreaded and CullDrawThreadPerContext use a single SceneView for
rendering. (CullDrawThreadPerContext also uses Renderer but only with one
SceneView; see the osgViewer::Renderer::cull_draw method in comparison to
osgViewer::Renderer::draw and osgViewer::Renderer::cull.)

Double-buffered SceneViews mean that there are two interleaved SceneViews
performing cull and draw operations for subsequent odd/even frames. These
two SceneViews share some resources but may also create some separate
resources. For example, if a texture is attached to an RTT camera, the two 
SceneViews will create two separate FBOs for this camera, but these FBOs 
will share the camera texture. But when you attach an image to the RTT camera, 
each of these FBOs will create a separate render buffer and will read pixels 
to the camera image from that buffer.


What seems to be the case in your modified osgprerender is that there is
some problem with refreshing the image in one of these SceneViews. I ran your 
example through gliIntercept and found something really weird. The first 
SceneView FBO creates a Renderbuffer with RGBA_32F format but the second 
SceneView creates a RenderBuffer with RGBA format. So you end up with a 
situation where odd RTT camera frames are rendered into a float framebuffer 
but even frames are rendered into a ubyte framebuffer. Apparently readPixels 
from the float buffer fails somehow and only reading pixels from ubyte works 
as intended.


I got curious why the first SceneView FBO uses a float buffer but the second 
uses a ubyte buffer. I think the answer is the following: apparently the first 
frame drawn by the prerender RTT camera proceeds before the rendered texture 
is initialized and applied to draw the final scene. When the first FBO is 
created, its render buffer is based on the initial image internal format 
(RGBA_32F).  The FBO is built and used to render the first frame and then its 
contents are read into the image. Then the main camera draws the scene using a 
texture initialized from the updated image. When this texture gets applied for 
the first time, the image's internal format gets changed to the texture format 
(RGBA), and thus the second FBO is created using this different format.


So we end up with odd prerender frames rendered into an RGBA_32F buffer and 
even frames rendered into an RGBA byte buffer. But this does not explain why 
readPixels produces such visually different results. It looks like there 
might be an additional bug in OSG or OpenGL in reading pixels from an RGBA_32F 
framebuffer.


Now time for the conclusion. I don't have much time to dig into this further, 
and see why readPixels fails; maybe I will investigate this some other day. 
So I don't have a real fix, but you may try a simple workaround: set the 
initial internal image format to RGBA instead of RGBA_32F. I did that and 
it seemed to remove the discrepancy. Alternatively, make sure that the texture 
built from the image has its internal format set to RGBA_32F. But I did not 
try this option.


Cheers,
Wojtek





J.P. Delport wrote:

Hi,

for our current app we use singlethreaded. FBO is a requirement
because of multiple render targets.

Best would be to fix multithreaded and FBO. For this we will need
small test apps that reliably trigger errors.

Problem is that I think most people are unsure whether they are
abusing OSG (not using the library correctly, not setting dynamic
correctly, not blocking correctly...) or whether it is a bug.

jp

Cedric Pinson wrote:

What do you use to have a robust solution? Maybe I should just use
something different than FBO?

Cedric

J.P. Delport wrote:

Hi,

Cedric Pinson wrote:

Hi,

I would like to know if others have found some strange issues with
multithreading and rendering a slave camera to an FBO.


Yes, there have been quite a few discussions about multithreaded and
fbo in the recent months, but AFAIK nobody has put a finger 

Re: [osg-users] multi threaded with slave camera and fbo

2008-06-25 Thread Wojciech Lewandowski

Hi again Cedric,

I think I have the last piece of the puzzle. It looks like readPixels works 
perfectly correctly with a float Renderbuffer. I was blaming it because the scene 
background was properly darkened by the postRender camera callback but the rendered 
Cessna model seemed unaffected by the image darkening process. Image darkening 
was done by simply scaling the colors by 0.5.


It turned out that the Cessna interior was also properly scaled. But when the 
Renderbuffer was float, the render buffer color components were not clamped to 
the 0..1 range (which is obvious for float buffers, but I always forget about 
it ;-). The shaders were substituting colors with vertices and multiplying them 
by 2 (the Sine uniform), so even after scaling by 0.5 we still had color 
components much larger than 1.0. That's why the Cessna interior seemed not 
darkened at all.


Jeez, I learned a lot today ;-) Thanks for the interesting example ;-)

Cheers,

Wojtek


Hi Cedric,

I just found one more bit of info. The image's internalTextureFormat gets 
reset by Image::allocateImage, called from Image::readPixels, when the RTT 
camera buffer contents are read into the image after the first draw. So this 
does not happen when the texture is applied for the final scene draw.


I am not sure if resetting the internal format from GL_RGBA32F_ARB to GL_RGBA 
should not be considered a bug?


However, it still does not explain what happens with the image during and 
after readPixels gets called when the render buffer is GL_RGBA32F_ARB.


Cheers,
Wojtek



Well, thank you for helping.
You gave me a lot of information, I will dig into it.

Cedric

Wojciech Lewandowski wrote:

Hi Cedric,


If someone has some clue or advise to dig it


Sorry, I had no time to look at the repro you created. I looked at your 
earlier modified prerender example though. I got curious and started to debug it. 
I think I found some additional important circumstances and a workaround that may 
help you. But the topic is quite mixed and complex and I may have problems 
explaining it. But here it goes:


Both the DrawThreadPerContext and CullThreadPerCameraDrawThreadPerContext modes
use the osgViewer::Renderer thread with double-buffered SceneViews.
SingleThreaded and CullDrawThreadPerContext use a single SceneView for
rendering. (CullDrawThreadPerContext also uses Renderer but only with one
SceneView; see the osgViewer::Renderer::cull_draw method in comparison to
osgViewer::Renderer::draw and osgViewer::Renderer::cull.)

Double-buffered SceneViews mean that there are two interleaved SceneViews
performing cull and draw operations for subsequent odd/even frames. These
two SceneViews share some resources but may also create some separate
resources. For example, if a texture is attached to an RTT camera, the two 
SceneViews will create two separate FBOs for this camera, but these FBOs 
will share the camera texture. But when you attach an image to the RTT camera, 
each of these FBOs will create a separate render buffer and will read pixels 
to the camera image from that buffer.


What seems to be the case in your modified osgprerender is that there is
some problem with refreshing the image in one of these SceneViews. I ran your 
example through gliIntercept and found something really weird. The first 
SceneView FBO creates a Renderbuffer with RGBA_32F format but the second 
SceneView creates a RenderBuffer with RGBA format. So you end up with a 
situation where odd RTT camera frames are rendered into a float framebuffer 
but even frames are rendered into a ubyte framebuffer. Apparently readPixels 
from the float buffer fails somehow and only reading pixels from ubyte works 
as intended.


I got curious why the first SceneView FBO uses a float buffer but the second 
uses a ubyte buffer. I think the answer is the following: apparently the first 
frame drawn by the prerender RTT camera proceeds before the rendered texture 
is initialized and applied to draw the final scene. When the first FBO is 
created, its render buffer is based on the initial image internal format 
(RGBA_32F). The FBO is built and used to render the first frame and then its 
contents are read into the image. Then the main camera draws the scene using a 
texture initialized from the updated image. When this texture gets applied for 
the first time, the image's internal format gets changed to the texture format 
(RGBA), and thus the second FBO is created using this different format.


So we end up with odd prerender frames rendered into an RGBA_32F buffer and 
even frames rendered into an RGBA byte buffer. But this does not explain 
why readPixels produces such visually different results. It looks like 
there might be an additional bug in OSG or OpenGL in reading pixels from an 
RGBA_32F framebuffer.


Now time for the conclusion. I don't have much time to dig into this further, 
and see why readPixels fails; maybe I will investigate this some other day. 
So I don't have a real fix, but you may try a simple workaround: set the 
initial internal image format to RGBA instead of RGBA_32F. I did that 
and it seemed to remove the discrepancy. Alternatively, make sure that the 
texture built from the image has its internal format set to 

Re: [osg-users] multi threaded with slave camera and fbo

2008-06-25 Thread Cedric Pinson

Cool :)
Then, thinking back on my problem, I think I have an issue with 
synchronization (because when the RTT works it looks like it's not the 
right frame rendered). It seems that the projection matrix I set (in 
ortho) is not yet updated for the current frame I want to grab. In fact, 
for my two RTT cameras, only one seems synchronized with the projection 
matrix.
And because it does not work with cull/draw on a different thread, it 
could make sense. I have to dig into it, but I don't have the time yet (I 
changed the threading model as a workaround).

Thank you for answering this thread, it was interesting to read your answers.

Cedric


Wojciech Lewandowski wrote:

Hi again Cedric,

I think I have the last piece of the puzzle. It looks like readPixels works 
perfectly correctly with a float Renderbuffer. I was blaming it because the 
scene background was properly darkened by the postRender camera callback 
but the rendered Cessna model seemed unaffected by the image darkening process. 
Image darkening was done by simply scaling the colors by 0.5.


It turned out that the Cessna interior was also properly scaled. But when 
the Renderbuffer was float, the render buffer color components were not 
clamped to the 0..1 range (which is obvious for float buffers, but I 
always forget about it ;-). The shaders were substituting colors with 
vertices and multiplying them by 2 (the Sine uniform), so even after 
scaling by 0.5 we still had color components much larger than 
1.0. That's why the Cessna interior seemed not darkened at all.


Jeez, I learned a lot today ;-) Thanks for the interesting example ;-)

Cheers,

Wojtek


Hi Cedric,

I just found one more bit of info. The image's internalTextureFormat 
gets reset by Image::allocateImage, called from Image::readPixels, when 
the RTT camera buffer contents are read into the image after the first draw. So 
this does not happen when the texture is applied for the final scene draw.


I am not sure if resetting the internal format from GL_RGBA32F_ARB to 
GL_RGBA should not be considered a bug?


However, it still does not explain what happens with the image during and 
after readPixels gets called when the render buffer is GL_RGBA32F_ARB.


Cheers,
Wojtek



Well, thank you for helping.
You gave me a lot of information, I will dig into it.

Cedric

Wojciech Lewandowski wrote:

Hi Cedric,


If someone has some clue or advise to dig it


Sorry, I had no time to look at the repro you created. I looked at 
your earlier modified prerender example though. I got curious and started to 
debug it. I think I found some additional important circumstances and a 
workaround that may help you. But the topic is quite mixed and complex 
and I may have problems explaining it. But here it goes:


Both the DrawThreadPerContext and 
CullThreadPerCameraDrawThreadPerContext modes
use the osgViewer::Renderer thread with double-buffered SceneViews.
SingleThreaded and CullDrawThreadPerContext use a single SceneView for
rendering. (CullDrawThreadPerContext also uses Renderer but only 
with one
SceneView; see the osgViewer::Renderer::cull_draw method in comparison to
osgViewer::Renderer::draw and osgViewer::Renderer::cull.)

Double-buffered SceneViews mean that there are two interleaved 
SceneViews
performing cull and draw operations for subsequent odd/even frames. 
These
two SceneViews share some resources but may also create some separate
resources. For example, if a texture is attached to an RTT camera, the two 
SceneViews will create two separate FBOs for this camera, but these FBOs 
will share the camera texture. But when you attach an image to the RTT 
camera, each of these FBOs will create a separate render buffer and will 
read pixels to the camera image from that buffer.


What seems to be the case in your modified osgprerender is that there is 
some problem with refreshing the image in one of these 
SceneViews. I ran your example through gliIntercept and found 
something really weird. The first SceneView FBO creates a Renderbuffer 
with RGBA_32F format but the second SceneView creates a RenderBuffer with 
RGBA format. So you end up with a situation where odd RTT camera 
frames are rendered into a float framebuffer but even frames are 
rendered into a ubyte framebuffer. Apparently readPixels from the float 
buffer fails somehow and only reading pixels from ubyte works as intended.


I got curious why the first SceneView FBO uses a float buffer but the second 
uses a ubyte buffer. I think the answer is the following: apparently 
the first frame drawn by the prerender RTT camera proceeds before the rendered 
texture is initialized and applied to draw the final scene. When the first 
FBO is created, its render buffer is based on the initial image internal 
format (RGBA_32F). The FBO is built and used to render the first frame and 
then its contents are read into the image. Then the main camera draws the 
scene using a texture initialized from the updated image. When this 
texture gets applied for the first time, the image's internal 
format gets changed to the texture format (RGBA), and thus the second FBO is 
created using this different format.


So we end up with odd prerender frames rendered into RGBA_32F 

Re: [osg-users] multi threaded with slave camera and fbo

2008-06-24 Thread Cedric Pinson
What do you use to have a robust solution? Maybe I should just use 
something different than FBO?


Cedric

J.P. Delport wrote:

Hi,

Cedric Pinson wrote:

Hi,

I would like to know if others have found some strange issues with 
multithreading and rendering a slave camera to an FBO. 


Yes, there have been quite a few discussions about multithreaded and 
fbo in the recent months, but AFAIK nobody has put a finger on the 
exact problem yet.


Attached is a simple modded version of osgprerender that also displays 
something strange in multithreaded. I'm not sure if it is related though.


run with:
./osgprerender --image

The image flashes every second frame for some reason. (Turn sync to 
vblank on so it does not flash too fast.)


If, at the bottom of the .cpp, one enables the single threaded option, 
the flashing disappears.


I have tried setting properties to dynamic on scene nodes, but it does 
not seem to help, or I am missing a critical one.


jp



___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  


--
+33 (0) 6 63 20 03 56  Cedric Pinson mailto:[EMAIL PROTECTED] 
http://www.plopbyte.net


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] multi threaded with slave camera and fbo

2008-06-23 Thread Cedric Pinson

Hi,

I would like to know if others have found some strange issues with 
multithreading and rendering a slave camera to an FBO. Here is what I do.
In order to get screenshots of the top view and front view of a scene, I 
added two slave cameras (ABSOLUTE_RF) that share the same GraphicsContext.
I started from the osgprerender camera initialisation with some 
adjustments. The issue is that it works for the first slave camera, but not 
the second (if I call the function twice I get a correct screenshot), 
so it looks like a synchronisation issue or something like that. I know 
it's safe to make modifications to the scene graph during the update 
traversal. So during that traversal I add a post-draw callback to the two 
slave cameras that just saves the result of the rendered image 
(one per camera) to disk, and then the callback is removed.
When I force setThreadingModel(osgViewer::Viewer::SingleThreaded), 
everything works as it should, so I imagine I missed something regarding 
thread synchronisation. I thought doing things in the update traversal 
was good.
I tried different threading models to try to understand: in SingleThreaded 
and CullDrawThreadPerContext it works, but DrawThreadPerContext and 
CullThreadPerCameraDrawThreadPerContext only work for the first slave 
camera. Just for information, I have a dual core with an nvidia system on 
linux.

I use OSG 2.4 and I have the same issue with the latest svn version, 2.5.3.

Any clue?
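As an aside, a hedged summary in code of the threading-model observations above (the enum values are the standard osgViewer ones; the viewer object is assumed):

// Which threading models work for the second slave camera, per the report above:
osgViewer::Viewer viewer;
viewer.setThreadingModel(osgViewer::Viewer::SingleThreaded);              // works
// viewer.setThreadingModel(osgViewer::Viewer::CullDrawThreadPerContext); // works
// viewer.setThreadingModel(osgViewer::Viewer::DrawThreadPerContext);     // only first slave camera
// viewer.setThreadingModel(osgViewer::Viewer::CullThreadPerCameraDrawThreadPerContext); // only first slave camera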

void MyClass::addScreenShootCamera(osgViewer::Viewer* viewer,
                                   osg::NodeCallback* callback,
                                   const osg::Matrix view,
                                   const std::string filename)
{
  int tex_width = 300;
  int tex_height = 200;
  osg::ref_ptr<osg::Camera> camera = new osg::Camera;
  // set up the background color and clear mask.
  camera->setClearColor(osg::Vec4(0.1f,0.1f,0.3f,1.0f));
  camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  // set viewport
  camera->setGraphicsContext(viewer->getCamera()->getGraphicsContext());
  camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
  camera->setViewport(0,0,tex_width,tex_height);
  double ratio = tex_height * 1.0 / tex_width;
  camera->setProjectionMatrixAsOrtho(-1, 1, -ratio, ratio, -1, 1);
  camera->setViewMatrix(view);

  osg::Camera::RenderTargetImplementation renderImplementation =
      osg::Camera::FRAME_BUFFER_OBJECT;

  camera->setRenderTargetImplementation(renderImplementation);
  osg::Image* image = new osg::Image;
  image->allocateImage(tex_width, tex_height, 1, GL_RGBA, GL_UNSIGNED_BYTE);

  image->setFileName(filename);
  // attach the image so it is copied on each frame.
  camera->attach(osg::Camera::COLOR_BUFFER, image);
//   camera->setDataVariance(osg::Object::DYNAMIC);
  viewer->addSlave(camera.get());
  AdaptProjectionForSceneSizeForFrontView* callback_casted =
      dynamic_cast<AdaptProjectionForSceneSizeForFrontView*>(callback);

  callback_casted->useImage(image);
  callback_casted->setCamera(camera.get());
  camera->setUpdateCallback(callback);
}

Any clue?

// boring code
struct AdaptProjectionForSceneSizeForFrontView : public osg::NodeCallback
{
  osg::ref_ptr<osgViewer::Viewer> _viewer;
  bool _needUpdate;
  osg::ref_ptr<osg::Image> _image;
  osg::ref_ptr<osg::Camera> _camera;
  AdaptProjectionForSceneSizeForFrontView(osgViewer::Viewer* viewer) :
      _viewer(viewer), _needUpdate(true) {}

  void useImage(osg::Image* image) { _image = image; }
  void setCamera(osg::Camera* camera) { _camera = camera; }
  void needUpdate(bool state) { _needUpdate = state; }
  virtual void updateProjection(osg::Camera* cam, const osg::BoundingBox sceneSize)
  {
    double ratio = _image->t() * 1.0 / _image->s();
    float width = sceneSize._max[0] - sceneSize._min[0];
    float height = sceneSize._max[2] - sceneSize._min[2];
    if (height > width * ratio)
      width = height / ratio;
    height = width * ratio;

    width *= 0.5;
    height *= 0.5;
    std::cout << "front" << std::endl;
    std::cout << -width + sceneSize.center()[0] << " "
              << width + sceneSize.center()[0] << " "
              << -height + sceneSize.center()[2] << " "
              << height + sceneSize.center()[2] << std::endl;

    cam->setProjectionMatrixAsOrtho(-width + sceneSize.center()[0],
                                    width + sceneSize.center()[0],
                                    -height + sceneSize.center()[2],
                                    height + sceneSize.center()[2],
                                    -1,
                                    1);
  }
  void operator()(osg::Node* node, osg::NodeVisitor* nv) {
    if (_needUpdate && nv->getVisitorType() == osg::NodeVisitor::UPDATE_VISITOR
        && _viewer->getSceneData()) {

      osg::BoundingBox sceneSize;
      osg::ref_ptr<osg::ComputeBoundsVisitor> bb = new osg::ComputeBoundsVisitor;

      _viewer->getSceneData()->accept(*bb);
      sceneSize = bb->getBoundingBox();
      osg::Camera* cam = dynamic_cast<osg::Camera*>(node);
      if (cam) {
        std::cout << "Scene size " << bb->getBoundingBox()._min << " "
                  << bb->getBoundingBox()._max << std::endl;

        updateProjection(cam, sceneSize);
        _camera->setPostDrawCallback(new