Hi Johny,

Could you do as requested and create a small example, i.e. something
that others can compile and run to see what the problem is?

Extracting code from a wider program can sometimes be enough, but to
compile and test your program I'd need to write the extra code around
it to get a compilable program: I'd have to guess what types you are
using and what data you are assigning.  Each guess we'd need to make
adds an extra variable that takes us away from what you are seeing on
screen at your end, so there is less chance that what we see is what
you see.

As I said, the best way is to create an example; there are plenty of
OSG examples that you could modify.
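For what it's worth, a sketch of the FBO route you mention below, in case it helps while you put the example together (untested, assuming a recent OSG 3.x API; the texture, width/height, and the scene root passed in are placeholders you'd replace with your own):

```cpp
// Sketch only: render a subgraph into a texture via an FBO-backed
// pre-render camera, so nothing from it reaches the visible window.
#include <osg/Camera>
#include <osg/Texture2D>

osg::ref_ptr<osg::Camera> createRttCamera(osg::Node* root,
                                          osg::Texture2D* texture,
                                          int width, int height)
{
    osg::ref_ptr<osg::Camera> rtt = new osg::Camera;
    rtt->setClearColor(osg::Vec4(0.0f, 0.0f, 0.0f, 1.0f));
    rtt->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    rtt->setViewport(0, 0, width, height);
    rtt->setRenderOrder(osg::Camera::PRE_RENDER);
    // The key line: render into an FBO rather than the window's framebuffer.
    rtt->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    rtt->attach(osg::Camera::COLOR_BUFFER0, texture);
    rtt->addChild(root); // this subgraph is drawn into the texture
    return rtt;
}
```

With the RTT camera added into the scene graph, its subgraph never appears in the window; the attached texture can then be sampled wherever you need it.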

Robert.

On 18 February 2017 at 12:57, Johny Canes <[email protected]> wrote:
> Okay,
>
>
> Code:
>
> // window / slave
>         camera = new osg::Camera();
>         osg::Viewport* viewport;
>
>         /// {
>         osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
>         traits->x = 300 + 0;
>         traits->y = 100 + 0;
>         traits->width = nr::width;
>         traits->height = nr::height;
>         traits->windowDecoration = true;
>         traits->doubleBuffer = true;
>         traits->sharedContext = 0;
>         traits->samples = 4; // MSAA
>         traits->vsync = false;
>
>         viewport = new osg::Viewport(0, 0, traits->width, traits->height);
>
>         gc = osg::GraphicsContext::createGraphicsContext( traits.get() );
>         gc->getState()->setUseModelViewAndProjectionUniforms( true );
>         gc->getState()->setUseVertexAttributeAliasing( true );
>
>         GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
>
>         camera = viewer.getCamera();
>         camera->setName( "Main" );
>         camera->setGraphicsContext( gc.get() );
>
>         camera->setClearColor(osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f));
>         camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
>
>         camera->setViewport( new osg::Viewport(0, 0, traits->width, traits->height) );
>         //camera->getOrCreateStateSet()->setTextureAttributeAndModes( 0, texture, osg::StateAttribute::ON );
>
>         camera->setDrawBuffer( buffer );
>         camera->setReadBuffer( buffer );
>         camera->setRenderOrder( osg::Camera::RenderOrder::PRE_RENDER );
>         //camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
>
>         camera->attach( osg::Camera::COLOR_BUFFER0, texture );
>         //camera->attach( osg::Camera::COLOR_BUFFER, texture, 0, 0, false, 0, 0 );
>
>         //viewer.addSlave( camera, osg::Matrix(), osg::Matrix() );
>
>         //viewer.setCamera( camera ); // unnecessary / messes up z-ordering ...
>
>         //camera->addChild( root.get() );
>         /// }
>
>
>
>
> This makes a window. I get it that a window is backed by a camera / GC.
>
> So naturally, since I'm using the original camera, my camera, 'Main', will
> render to its window. It would be ideal to turn this off and have this
> camera render only to a hidden buffer (an FBO?). Using a pbuffer / pbuffer-RTT
> seems like overkill, and I'm not sure I understand that approach.
>
> Cheers,
> Johny
>
> ------------------
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=70251#70251
>
> _______________________________________________
> osg-users mailing list
> [email protected]
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org