Re: [osg-users] render quad texture to arbitrary quadrilateral

2020-10-28 Thread OpenSceneGraph Users
Hello Tom!

Looks like there was an issue uploading the screenshot?

Could you post a minimal/complete/reproducible example of the code that you 
use? Maybe someone will be able to spot the issue just by looking at the 
code.

-- Vaillancourt


On Tuesday, 27 October 2020 18:58:06 UTC-4, Tom Pollok wrote:
>
> Hello,
>
> I'm trying to render a texture to a quadrilateral. The quadrilateral is 
> calculated by intersecting the view frustum with a plane, so the image is 
> basically a perspective projection of the original image. 
>
>
> Unfortunately the texture coordinates are not interpolated correctly, so 
> the triangles look wrong.
>
> I know this is a very basic computer graphics problem, but I'm having some 
> trouble solving it. 
> Has anyone solved this problem already for arbitrary quadrilaterals? 
> My guess would be to use fragment shaders, but I'm not experienced with 
> GLSL. Another option is transforming the texture using a homography and 
> then rendering a quad, but that feels like the brute-force solution.
>
> I'd be very thankful if somebody could help me, ideally if somebody knows 
> where this problem has already been solved with OpenSceneGraph.
>
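The artifact described above is the classic affine-vs-projective texture mapping problem: the quad is split into two triangles and (s,t) are interpolated affinely within each, producing a visible kink along the diagonal. One standard fix, which avoids the homography route, is to give each vertex homogeneous texture coordinates (s*q, t*q, 0, q) and sample with a projective lookup (texture2DProj in GLSL). A self-contained sketch of computing the per-vertex q weights from the quad's diagonal intersection (plain C++, no OSG; the function and type names are illustrative):

```cpp
#include <array>
#include <cmath>

struct Vec2 { double x, y; };

// For a convex quad v0..v3 (in winding order), compute per-vertex weights
// q_i so that homogeneous texcoords (u_i*q_i, v_i*q_i, 0, q_i) interpolate
// projectively across both triangles. q_i = (d_i + d_{i+2}) / d_{i+2},
// where d_i is the distance from vertex i to the diagonal intersection.
std::array<double, 4> quadTexWeights(const std::array<Vec2, 4>& v)
{
    // Intersect diagonal v0-v2 with diagonal v1-v3.
    double ax = v[2].x - v[0].x, ay = v[2].y - v[0].y;
    double bx = v[3].x - v[1].x, by = v[3].y - v[1].y;
    double cx = v[1].x - v[0].x, cy = v[1].y - v[0].y;
    double denom = ax * by - ay * bx;          // cross(a, b)
    double t = (cx * by - cy * bx) / denom;    // parameter along v0-v2
    Vec2 p{ v[0].x + t * ax, v[0].y + t * ay };

    std::array<double, 4> d;
    for (int i = 0; i < 4; ++i)
        d[i] = std::hypot(v[i].x - p.x, v[i].y - p.y);

    std::array<double, 4> q;
    for (int i = 0; i < 4; ++i)
        q[i] = (d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4];
    return q;
}
```

In the fragment shader, `gl_FragColor = texture2DProj(tex, gl_TexCoord[0]);` then divides by the interpolated q per fragment, removing the seam. For a square all q come out equal (plain interpolation); for a trapezoid they differ.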

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture and osgQt (osgQOpenGL)

2019-09-20 Thread Wouter Roos
Finally had some time to look at it in more detail, and it was a problem on my 
side; all is working now after setting the default FBO id. I've made a pull 
request. Thanks again for the pointer.
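For readers hitting the same symptom: Qt draws a QOpenGLWidget into its own framebuffer object, so framebuffer 0 is not the widget's render target, and an RTT camera that rebinds "FBO 0" after its pre-render pass draws into the wrong place. The fix referred to above is telling OSG's graphics window about Qt's real default FBO. A hedged sketch (assumes an OSG version providing `osg::GraphicsContext::setDefaultFboId`, as used by osgQOpenGL; `graphicsWindow` stands for whatever embedded graphics window the widget drives):

```cpp
// Inside the QOpenGLWidget subclass, e.g. at the start of paintGL():
// tell OSG which FBO is the "default" one, so render-to-texture cameras
// rebind the widget's framebuffer after pre-render instead of FBO 0.
graphicsWindow->setDefaultFboId(defaultFramebufferObject());
```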

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=76724#76724







Re: [osg-users] Render to Texture and osgQt (osgQOpenGL)

2019-09-16 Thread Wouter Roos
Thank you, that looks helpful. I have quickly tried implementing it, but it does 
not seem to fix the problem I'm having. However, it now reacts to keyboard 
inputs again, so it looks like a step in the right direction. I will try a bit 
more later today.

gwaldron wrote:
> Read this - it might help:
> http://forum.osgearth.org/solved-black-screen-with-drape-mode-in-a-QOpenGLWidget-td7592420.html#a7592421
> 
> Glenn Waldron / osgEarth


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=76685#76685







Re: [osg-users] Render to Texture and osgQt (osgQOpenGL)

2019-09-16 Thread Glenn Waldron
Read this - it might help:

http://forum.osgearth.org/solved-black-screen-with-drape-mode-in-a-QOpenGLWidget-td7592420.html#a7592421


Glenn Waldron / osgEarth


On Mon, Sep 16, 2019 at 2:03 PM Wouter Roos  wrote:

> Hi all,
> I'm really struggling with getting RTT to work under the latest version of
> osgQt and using osgQOpenGL. I am aware of the discussion around adding the
> setDrawBuffer(GL_BACK) and setReadBuffer(GL_BACK) to the camera for double
> buffered contexts, but no matter what settings I set for the window and
> QSurface, the screen remains empty. I am using the original osgPrerender
> example for testing. The same works fine with the previous version (the one
> that was updated to incorporate the camera buffer changes) of osgQt, which
> uses GraphicsWindowQt.
> Given the state of the current master of osgQt and the problems compiling
> and running it to begin with; is anybody using the osgQOpenGL at the
> moment? Is it working with a render to texture camera?
>
> Kind regards,
>
> Wouter
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=76682#76682
>


[osg-users] Render to Texture and osgQt (osgQOpenGL)

2019-09-16 Thread Wouter Roos
Hi all,
I'm really struggling to get RTT to work under the latest version of osgQt 
using osgQOpenGL. I am aware of the discussion around adding 
setDrawBuffer(GL_BACK) and setReadBuffer(GL_BACK) to the camera for 
double-buffered contexts, but no matter what settings I use for the window and 
QSurface, the screen remains empty. I am using the original osgPrerender 
example for testing. The same works fine with the previous version of osgQt 
(the one that was updated to incorporate the camera buffer changes), which uses 
GraphicsWindowQt.
Given the state of the current master of osgQt and the problems compiling and 
running it to begin with: is anybody using osgQOpenGL at the moment? Is it 
working with a render-to-texture camera?

Kind regards,

Wouter

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=76682#76682







Re: [osg-users] Render to texture with GL3.

2017-05-23 Thread Nickolai Medvedev
I found the problem. The HUD camera does not work when I create an OpenGL 3.3 
core context; only the clear color is visible. If I don't create the context, 
everything works.


Code:


const int width(1920), height(1080);
const std::string version("3.3");
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits();
traits->x = 0;
traits->y = 0;
traits->width = width;
traits->height = height;
traits->windowDecoration = true;
traits->doubleBuffer = true;
traits->vsync = false;

traits->glContextVersion = version;
traits->glContextFlags = GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT;
traits->glContextProfileMask = GL_CONTEXT_CORE_PROFILE_BIT;

osg::ref_ptr<osg::GraphicsContext> gc = osg::GraphicsContext::createGraphicsContext( traits.get() );
if( !gc.valid() )
{
    osg::notify( osg::FATAL ) << "Unable to create OpenGL v" << version << " context." << std::endl;
    return 1;
}

gc->realize();
gc->makeCurrent();

osgViewer::Viewer* viewer = new osgViewer::Viewer;

osg::Camera* cam = viewer->getCamera();
cam->setGraphicsContext( gc.get() );
cam->setComputeNearFarMode(osg::CullSettings::DO_NOT_COMPUTE_NEAR_FAR);
cam->setProjectionMatrix(osg::Matrix::perspective(45.0, (double)width/(double)height, 0.1, 10.0) );
cam->setViewport(new osg::Viewport(0, 0, width, height));

// HUD camera created as in the osghud example
osg::ref_ptr<HUDCamera> hud_camera = new HUDCamera; // inherited from osg::Camera
hud_camera->setGraphicsContext(gc.get());
hud_camera->setClearColor(osg::Vec4(0.0,0.0,0.0,1.0)); // anyway, still see the classic blue osg color
hud_camera->addChild(RTTCamera::createScreenQuad(1.0f, 1.0f, 1920.0f, 1080.0f)); // create geometry for TextureRectangle






Any ideas?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70965#70965







Re: [osg-users] Render to texture with GL3.

2017-05-23 Thread Nickolai Medvedev
Hi, Robert.

All right, now I see. Probably something is wrong with my code.
I'm using the new OSG 3.5.6, trying to port a deferred renderer with a light 
system to OpenGL 3.3.

Thank you for your answer.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70964#70964







Re: [osg-users] Render to texture with GL3.

2017-05-23 Thread Robert Osfield
Hi Nickolai,

There are no differences between RTT setup in the OSG for GL2 and GL3,
or any other GL/GLES combinations for that matter.  The osgprerender
or osgprerendercubemap examples are decent places to start to learn
what you need to do.
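The osgprerender setup boils down to a few lines. A minimal sketch using the stock osg::Camera API (identical for GL2 and GL3 at this level; the texture size and variable names are illustrative):

```cpp
#include <osg/Camera>
#include <osg/Texture2D>

// Target texture the camera will render into.
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setTextureSize(1024, 1024);
texture->setInternalFormat(GL_RGBA);

// Pre-render camera that draws its subgraph into the texture via an FBO.
osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;
rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
rttCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
rttCamera->setViewport(0, 0, 1024, 1024);
rttCamera->attach(osg::Camera::COLOR_BUFFER, texture.get());
// rttCamera->addChild(subgraph.get()); // the scene to bake into the texture
```

The texture can then be bound on any geometry in the main scene, as osgprerender does with its quad.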

Robert.

On 23 May 2017 at 11:33, Nickolai Medvedev  wrote:
> Hi, community!
>
> How to correctly create a render to texture in GL3 context?
> What is the difference between RTT GL2 and GL3 in OSG.
>
> Thank you!
>
> Cheers,
> Nickolai
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=70962#70962
>


[osg-users] Render to texture with GL3.

2017-05-23 Thread Nickolai Medvedev
Hi, community!

How do I correctly create a render-to-texture setup in a GL3 context?
What is the difference between RTT in GL2 and GL3 in OSG?

Thank you!

Cheers,
Nickolai

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=70962#70962







Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Christian Buchner
One more thing: rendering to a pbuffer does not automatically give you the
option to access your rendered content as a texture.

The technique to render to texture with pbuffers is called pbuffer-rtt and is
implemented in several OSG samples with the "--pbuffer-rtt" command line
option. This may differ a bit from the camera setup I've outlined above.

Christian

2016-08-18 17:07 GMT+02:00 Christian Buchner :

>
> On Windows, create a graphics context with the pbuffer flag set to true
> and windowDecoration set to false.
>
> osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
> osg::GraphicsContext::Traits;
> traits->x = 0;
> traits->y = 0;
> traits->width = 640;
> traits->height = 480;
> traits->red = 8;
> traits->green = 8;
> traits->blue = 8;
> traits->alpha = 8;
> traits->windowDecoration = false;
> traits->pbuffer = true;
> traits->doubleBuffer = false; // or true as needed
> traits->sharedContext = 0;
>
> m_pbuffer = osg::GraphicsContext::createGraphicsContext(traits.get());
> if (!m_pbuffer.valid())
> {
> osg::notify(osg::NOTICE) << "Pixel buffer has not been created
> successfully. NOTE: update your dependencies folder if you see this error!"
> << std::endl;
> exit(1);
> }
> else
> {
> // Create an osgViewer running on top of a pbuffer graphics
> context
> m_viewer = new osgViewer::Viewer();
>
> // in my case I use a slave camera with ortho projection
> // to render whatever is needed
> m_camera = new osg::Camera;
> m_camera->setGraphicsContext(m_pbuffer.get());
> m_camera->setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR);
> m_camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
> m_camera->setViewMatrix(osg::Matrix());
> m_camera->setProjectionMatrix(osg::Matrix::ortho2D(0, 1.0, 0,
> 1.0));
> m_camera->setViewport(new osg::Viewport(0, 0, 640, 480));
> m_camera->setDrawBuffer(GL_FRONT);
> m_camera->setReadBuffer(GL_FRONT);
> m_viewer->addSlave(m_camera.get(), osg::Matrixd(),
> osg::Matrixd());
> m_viewer->realize();
>
>
> I do not know if the same would work on Linux, as pbuffers on Linux are an
> optional extension that might not be supported.
>
> I get this to render at arbitrary frame rates, entirely decoupled from the
> screen's VBLANK interval.
>
> Christian
>
>
> 2016-08-18 16:47 GMT+02:00 Chris Thomas :
>
>> Hi,
>>
>> OK, I based my initial integration into my app on osgteapot.cpp. As with
>> all the other examples, it os run via
>>
>> viewer.run();
>>
>> And this creates an output window in OSX (and I am assuming any other OS
>> its run on). And thats the issue I have, I need OSG to run "headless", that
>> is to say, producing no visible window in the OS.
>>
>> If OSG is rendering away, to a non visible buffer, I can then expose this
>> to the user via my UI api (see above). Having this visible viewer, is the
>> issue right now. Is there an option to run viewer with no visible
>> display/window, or is there an alternative to viewer() ?
>>
>> Thank you!
>>
>> Cheers,
>> Chris
>>
>> --
>> Read this topic online here:
>> http://forum.openscenegraph.org/viewtopic.php?p=68420#68420
>>


Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Christian Buchner
On Windows, create a graphics context with the pbuffer flag set to true
and windowDecoration set to false.

osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->red = 8;
traits->green = 8;
traits->blue = 8;
traits->alpha = 8;
traits->windowDecoration = false;
traits->pbuffer = true;
traits->doubleBuffer = false; // or true as needed
traits->sharedContext = 0;

m_pbuffer = osg::GraphicsContext::createGraphicsContext(traits.get());
if (!m_pbuffer.valid())
{
osg::notify(osg::NOTICE) << "Pixel buffer has not been created
successfully. NOTE: update your dependencies folder if you see this error!"
<< std::endl;
exit(1);
}
else
{
// Create an osgViewer running on top of a pbuffer graphics
context
m_viewer = new osgViewer::Viewer();

// in my case I use a slave camera with ortho projection
// to render whatever is needed
m_camera = new osg::Camera;
m_camera->setGraphicsContext(m_pbuffer.get());
m_camera->setComputeNearFarMode(osg::Camera::DO_NOT_COMPUTE_NEAR_FAR);
m_camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
m_camera->setViewMatrix(osg::Matrix());
m_camera->setProjectionMatrix(osg::Matrix::ortho2D(0, 1.0, 0,
1.0));
m_camera->setViewport(new osg::Viewport(0, 0, 640, 480));
m_camera->setDrawBuffer(GL_FRONT);
m_camera->setReadBuffer(GL_FRONT);
m_viewer->addSlave(m_camera.get(), osg::Matrixd(),
osg::Matrixd());
m_viewer->realize();
}


I do not know if the same would work on Linux, as pbuffers on Linux are an
optional extension that might not be supported.

I get this to render at arbitrary frame rates, entirely decoupled from the
screen's VBLANK interval.

Christian


2016-08-18 16:47 GMT+02:00 Chris Thomas :

> Hi,
>
> OK, I based my initial integration into my app on osgteapot.cpp. As with
> all the other examples, it is run via
>
> viewer.run();
>
> And this creates an output window on OSX (and, I assume, on any other OS it
> runs on). And that's the issue I have: I need OSG to run "headless", that
> is to say, producing no visible window in the OS.
>
> If OSG is rendering away to a non-visible buffer, I can then expose this
> to the user via my UI API (see above). Having this visible viewer is the
> issue right now. Is there an option to run the viewer with no visible
> display/window, or is there an alternative to viewer()?
>
> Thank you!
>
> Cheers,
> Chris
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=68420#68420
>


Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Chris Thomas
Hi,

OK, I based my initial integration into my app on osgteapot.cpp. As with all 
the other examples, it is run via

viewer.run();

And this creates an output window on OSX (and, I assume, on any other OS it 
runs on). And that's the issue I have: I need OSG to run "headless", that is to 
say, producing no visible window in the OS.

If OSG is rendering away to a non-visible buffer, I can then expose this to 
the user via my UI API (see above). Having this visible viewer is the issue 
right now. Is there an option to run the viewer with no visible display/window, 
or is there an alternative to viewer()?

Thank you!

Cheers,
Chris

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68420#68420







Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Nickolai Medvedev
Hi, 

https://github.com/xarray/osgRecipes

Chapter 6 - it's all you need.

Cheers,
Nickolai

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68418#68418







Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Sebastian Messerschmidt

Hi Chris,

Take a look at the osgprerender example. It shows you how to render to 
a framebuffer object.

The bound texture can then be displayed later on.

Cheers
Sebastian

Hi,

I have an existing app I am developing, which itself is based on OpenGL. It 
uses an API that provides a 3D windowing system, with different media being 
displayed on planes, within this 3D space. All good...

Except, its API does not offer anything near the flexibility, and ease of use 
of OSG. So.. how to use OSG within this app.

All of the examples I have seen so far use a very similar pattern, of the 
ilk:


Code:
osg::ref_ptr<osg::Node> cessna = osgDB::readNodeFile( "cessna.osg" );
viewer.setSceneData( cessna.get() );
return viewer.run();



This is great in that it's very easy to get going, but it's the viewer() that 
is causing issues for me. Ideally the viewer would be able to render to a 
texture rather than to a screen or a window on a screen. I basically need a 
headless 3D process running, where the OSG output goes to a texture.

Are there any examples of how to do this? Once I have a texture, I can easily 
copy its contents to my apps planes.

Thank you!

Cheers,
Chris

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68415#68415







Re: [osg-users] Render to texture ONLY?

2016-08-18 Thread Christian Buchner
Have a look at the osgprerender example. That one renders to texture first,
and then uses the contents of this texture for rendering to the screen.

The osgscreencapture and osgposter examples also have options to render
off-screen to FBO or pbuffers.


2016-08-18 13:19 GMT+02:00 Chris Thomas :

> Hi,
>
> I have an existing app I am developing, which itself is based on OpenGL.
> It uses an API that provides a 3D windowing system, with different media
> being displayed on planes, within this 3D space. All good...
>
> Except, its API does not offer anything near the flexibility, and ease of
> use of OSG. So.. how to use OSG within this app.
>
> All of the examples I have seen so far use a very similar pattern, of the
> ilk:
>
>
> Code:
> osg::ref_ptr<osg::Node> cessna = osgDB::readNodeFile( "cessna.osg" );
> viewer.setSceneData( cessna.get() );
> return viewer.run();
>
>
>
> This is great in that it's very easy to get going, but it's the viewer()
> that is causing issues for me. Ideally the viewer would be able to render
> to a texture rather than to a screen or a window on a screen. I basically
> need a headless 3D process running, where the OSG output goes to a
> texture.
>
> Are there any examples of how to do this? Once I have a texture, I can
> easily copy its contents to my apps planes.
>
> Thank you!
>
> Cheers,
> Chris
>
> --
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=68415#68415
>


[osg-users] Render to texture ONLY?

2016-08-18 Thread Chris Thomas
Hi,

I have an existing app I am developing, which itself is based on OpenGL. It 
uses an API that provides a 3D windowing system, with different media being 
displayed on planes, within this 3D space. All good...

Except, its API does not offer anything near the flexibility, and ease of use 
of OSG. So.. how to use OSG within this app.

All of the examples I have seen so far use a very similar pattern, of the 
ilk:


Code:
osg::ref_ptr<osg::Node> cessna = osgDB::readNodeFile( "cessna.osg" );
viewer.setSceneData( cessna.get() );
return viewer.run();



This is great in that it's very easy to get going, but it's the viewer() that 
is causing issues for me. Ideally the viewer would be able to render to a 
texture rather than to a screen or a window on a screen. I basically need a 
headless 3D process running, where the OSG output goes to a texture.

Are there any examples of how to do this? Once I have a texture, I can easily 
copy its contents to my apps planes.

Thank you!

Cheers,
Chris

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68415#68415







Re: [osg-users] Render to Texture without clamping values

2016-06-14 Thread Philipp Meyer
Hi,

I was able to figure out the issue.
For everyone wondering, I was missing the following line:

textureImage->setInternalTextureFormat(GL_RGBA16F_ARB);

In other words, one needs to set the internal format on the image as well as on 
the texture for everything to work properly. Hope this helps someone in the future!
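Put together, the working setup looks like this (a sketch reusing the names and formats from the earlier post in this thread; not a complete program):

```cpp
// Float RTT only stays unclamped when the texture *and* its backing
// image agree on a float internal format (GL_RGBA16F_ARB here, as in
// the HDR path of the osgprerender example).
osg::ref_ptr<osg::Texture2D> radarTexture = new osg::Texture2D;
radarTexture->setInternalFormat(GL_RGBA16F_ARB);
radarTexture->setSourceFormat(GL_RGBA);
radarTexture->setSourceType(GL_FLOAT);

osg::ref_ptr<osg::Image> textureImage = new osg::Image;
textureImage->allocateImage(16, 16, 1, GL_RGBA, GL_FLOAT);
textureImage->setInternalTextureFormat(GL_RGBA16F_ARB); // the missing line
radarTexture->setImage(textureImage.get());
```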

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67607#67607







Re: [osg-users] Render to Texture without clamping values

2016-06-14 Thread Philipp Meyer
Hi,

I did some more testing, and it turns out that I can set a texel to a color with 
values > 1.0 just fine in the C++ code.
When using image->setColor(osg::Vec4(1,2,3,4),x,y,0) before reading it with 
getColor, I do get results > 1.0.

Does that mean that the shader itself is clamping the values somehow? Or does 
it have to do with the internal texture copy from GPU to host memory?

Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67600#67600







[osg-users] Render to Texture without clamping values

2016-06-13 Thread Philipp Meyer
Hi,

for my current project I need to do some computations in the fragment shader 
and retrieve the values within my application. For that I am using the render 
to texture feature together with a float texture.

I'm having some trouble reading values > 1.0, though. It seems like the values 
are getting clamped to 0..1, even though I followed the osgprerender HDR setup. 
Besides the code below, I have also tried GL_RGBA32F (ARB and non-ARB) for the 
internal texture format, tried double for the image and source type, and tried 
using osg::ClampColor to disable clamping for the RTT camera, all without 
success.

When reading the texture, it returns (0.123,0.5,1,1) for every texel.

Code for texture setup:


Code:
radarTexture = new osg::Texture2D;
radarTexture->setInternalFormat(GL_RGBA16F_ARB);
radarTexture->setSourceFormat(GL_RGBA);
radarTexture->setSourceType(GL_FLOAT);

auto textureImage = osgHelper::make_osgref<osg::Image>();
textureImage->allocateImage(16,16,1,GL_RGBA, GL_FLOAT);
//  textureImage->setImage(128, 128, 1, GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
//  nullptr, osg::Image::AllocationMode::NO_DELETE);
radarTexture->setImage(textureImage);
radarTexture->setMaxAnisotropy(0);
radarTexture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
radarTexture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
radarTexture->setFilter(osg::Texture::FilterParameter::MIN_FILTER,
osg::Texture::FilterMode::NEAREST);
radarTexture->setFilter(osg::Texture::FilterParameter::MAG_FILTER,
osg::Texture::FilterMode::NEAREST);
radarTexture->setDataVariance(osg::Object::DYNAMIC);



RTT Camera setup (some)

Code:

// set the camera to render before the main camera.
this->setRenderOrder(osg::Camera::PRE_RENDER);

// tell the camera to use OpenGL frame buffer object where supported.
this->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

// attach the texture and use it as the color buffer.
this->attach(osg::Camera::COLOR_BUFFER0, dest->getImage());



GLSL Fragment Shader Code (simplified):


Code:
void main()
{
gl_FragColor = vec4(0.123,0.5,3,4);
}





Thank you!

Cheers,
Philipp

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67593#67593







Re: [osg-users] Render to texture race condition

2015-02-23 Thread Robert Osfield
Hi Nicolas,

On 23 February 2015 at 09:35, Nicolas Baillard nicolas.baill...@gmail.com
wrote:

 By sharing of GL objects between contexts I assume you mean the
 sharedContext member of the GraphicsContext::Traits structure correct ?



Yes, this is how one sets up shared contexts.  This means you'll need to
run SingleThreaded or drop the shared contexts.

The OSG doesn't mutex-lock GL objects for a single context, as the costs
would be prohibitive; locking is only required for shared-context usage, so
it's not a penalty worth paying.



 I do set this member for all my contexts. If I don't set it then my
 windows don't display the texture generated by my master camera, they
 display an uninitialized texture instead.


You don't provide any information about the setup of the cameras and
render order, so there isn't any way for others to guess the
cause of this.

I would suggest fixing this problem, or just go with shared contexts and
SingleThreaded.

Robert.


Re: [osg-users] Render to texture race condition

2015-02-23 Thread Nicolas Baillard

robertosfield wrote:
 The OSG doesn't mutex lock GL objects for a single context as the costs would 
 be prohibitive, and it's only required for shared contexts usage so it's not 
 a penalty that is worth paying.

If I didn't use render to texture at all (or if I didn't try to share the 
generated textures between the contexts) then would it be safe to use 
DrawThreadPerContext and context sharing ? Or could it cause other issues ?


robertosfield wrote:
 You don't provide any information about the set up of the Cameras and render 
 order so there isn't any way for others to be able to guess the cause of this.

My master camera (the one rendering to texture) is set to PRE_RENDER. The 
render target implementation is set to FRAME_BUFFER_OBJECT but using other 
implementations doesn't seem to change a thing. It is linked directly to the 
main scene (using View::setSceneData()).

My two slaves cameras that are displaying the generated texture are both set to 
POST_RENDER. They are linked to a single Geode with a single drawable attached 
to it and using the generated texture.

Also, both the Texture2D instance holding the generated texture and the 
drawable have their data variance set to DYNAMIC.

Do you see any obvious mistake in this ?

Anyway, thank you very much.

Regards,
Nicolas

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=62783#62783







Re: [osg-users] Render to texture race condition

2015-02-23 Thread Robert Osfield
On 23 February 2015 at 13:09, Nicolas Baillard nicolas.baill...@gmail.com
wrote:


 robertosfield wrote:
  The OSG doesn't mutex lock GL objects for a single context as the costs
 would be prohibitive, and it's only required for shared contexts usage so
 it's not a penalty that is worth paying.

 If I didn't use render to texture at all (or if I didn't try to share the
 generated textures between the contexts) then would it be safe to use
 DrawThreadPerContext and context sharing ? Or could it cause other issues ?



You can't use DrawThreadPerContext when sharing contexts, except when your
scene graph uses no OpenGL objects whatsoever: no display lists, no
VBOs, no textures, etc.  Basically, if you want shared contexts you have to
run SingleThreaded so the different threads don't contend for the same
resources.
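In osgViewer terms, the safe combination is a one-liner (a sketch; assumes the contexts were created with Traits::sharedContext set, as discussed earlier in the thread):

```cpp
#include <osgViewer/Viewer>

// With shared GraphicsContexts, force single-threaded rendering so no
// two draw threads touch the same shared GL objects at once.
osgViewer::Viewer viewer;
viewer.setThreadingModel(osgViewer::ViewerBase::SingleThreaded);
```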





 robertosfield wrote:
  You don't provide any information about the set up of the Cameras and
 render order so there isn't any way for others to be able to guess the
 cause of this.

 My master camera (the one rendering to texture) is set to PRE_RENDER. The
 render target implementation is set to FRAME_BUFFER_OBJECT but using other
 implementations doesn't seem to change a thing. It is linked directly to
 the main scene (using View::setSceneData()).

 My two slaves cameras that are displaying the generated texture are both
 set to POST_RENDER. They are linked to a single Geode with a single
 drawable attached to it and using the generated texture.

 Also, both the Texture2D instance holding the generated texture and the
 drawable have their data variance set to DYNAMIC.

 Do you see any obvious mistake in this ?


In principle this sounds like it should be OK.  Personally I wouldn't use a
master camera as a render-to-texture camera; a slave camera is probably a
better way to do it.  I don't know enough about what you are doing to
really know where you are going with it all, and I don't have the time
to chase up your implementation.  Please have a look at the OSG
render-to-texture examples to see how they manage things.

Robert.


Re: [osg-users] Render to texture race condition

2015-02-23 Thread Robert Osfield
Hi Nicolas,

The OSG by default will use separate OpenGL objects and associated buffers
for each graphics context.  If you enable sharing of GL objects between
contexts, then you'll need to run the application single-threaded to avoid
these shared GL objects and associated buffers being contended.

If it's neither of these issues, then try the latest version of the OSG.

Robert.

On 23 February 2015 at 08:52, Nicolas Baillard nicolas.baill...@gmail.com
wrote:

 Hello everyone.

 I have a view with a master camera rendering to a texture. Then I have two
 slave cameras that display this texture into two different windows (so two
 different rendering contexts). When I use the DrawThreadPerContext
 threading model I get a crash into Texture::TextureObjectSet::moveToBack().
 My investigation on this crash makes me believe it is caused by a race
 condition on the texture generated by the master camera : one context is
 rendering into it while another is using it.

 Does OSG provide any synchronization mechanism I could use to prevent that
 ?

 Regards,
 Nicolas

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=62761#62761







Re: [osg-users] Render to texture race condition

2015-02-23 Thread Nicolas Baillard
Thank you Robert.

By sharing of GL objects between contexts I assume you mean the sharedContext 
member of the GraphicsContext::Traits structure, correct?

I do set this member for all my contexts. If I don't set it then my windows 
don't display the texture generated by my master camera, they display an 
uninitialized texture instead.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=62763#62763







[osg-users] Render to texture race condition

2015-02-23 Thread Nicolas Baillard
Hello everyone.

I have a view with a master camera rendering to a texture. Then I have two 
slave cameras that display this texture into two different windows (so two 
different rendering contexts). When I use the DrawThreadPerContext threading 
model I get a crash into Texture::TextureObjectSet::moveToBack(). My 
investigation on this crash makes me believe it is caused by a race condition 
on the texture generated by the master camera: one context is rendering into 
it while another is using it.

Does OSG provide any synchronization mechanism I could use to prevent that ?

Regards,
Nicolas

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=62761#62761







Re: [osg-users] Render to texture,processing, make it displayed

2014-07-02 Thread Solkar Graphics

wwwanghao wrote:
 I use the Render to texture method to get a float image, then I need to 
 process the image to make the data is in the range of 0-255


I'm not sure I understand the why, but anyway -
How did you customize the image you render to?

That's likely in the vicinity of 

Code:
pCam->setRenderTargetImplementation(/**/);
/* and */
pCam->attach(/**/);

calls.

 

wwwanghao wrote:
 after that I want to display the image

You need a geometry to display it onto.
You attach (s, t) texture coordinates to that geometry via a texCoordArray.

What comes next depends on how deeply you interact with the GL.
Pure OSG is that you add the texture to the geometry's geode's StateSet.
For custom shaders you include an isampler2D uniform (with your 0-255 image) 
in the GLSL source and set that uniform by using an osg::Uniform.
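As an aside, a minimal fragment shader along those lines might look like this (a sketch, not from the original post; the uniform name is illustrative, and for a plain 0-255 byte texture a regular sampler2D is the usual fit, with isampler2D reserved for true integer textures):

```glsl
#version 150 compatibility

// Illustrative name; bound from C++ with something like
//   stateSet->addUniform(new osg::Uniform("u_image", 0));
uniform sampler2D u_image;

void main()
{
    // gl_TexCoord[0] is filled from the geometry's texCoordArray
    // when the compatibility profile's fixed-function inputs are used.
    gl_FragColor = texture(u_image, gl_TexCoord[0].st);
}
```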

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=60106#60106







Re: [osg-users] Render to texture,processing, make it displayed

2014-06-30 Thread LearningOSG LearningOSG
hello hao,
 the following code frame may help you:

 osg::Image* p_image = /* your image address */;
 int width  = p_image->s();
 int height = p_image->t();
 int totalbytes = width * height * 4;   // assumes 4 bytes per pixel (RGBA)
 // Directly copy the osg::Image buffer into your own memory:
 unsigned char* memptr = /* your allocated memory */;
 memcpy( memptr, p_image->data(), totalbytes );
 // ... process the image data here ...
 memcpy( p_image->data(), memptr, totalbytes );
 p_image->dirty();
 cheers

Learned osg six months


2014-06-27 20:09 GMT+08:00 Hao Wang 1650024...@qq.com:

 Hi,

 I use the Render to texture method to get a float image, then I need to
 process the image to make the data is in the range of 0-255, after that I
 want to display the image. But I don't know how to handle it,is there
 anyone know how do it? Thank you.
 Thank you!

 Cheers,
 Hao

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=59983#59983







[osg-users] Render to texture,processing, make it displayed

2014-06-27 Thread Hao Wang
Hi,

I use the Render to texture method to get a float image, then I need to process 
the image to bring the data into the range of 0-255; after that I want to 
display the image. But I don't know how to handle it; does anyone know how to 
do it? Thank you.
Thank you!

Cheers,
Hao

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=59983#59983







Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Sebastian Messerschmidt

Hi Ethan,



Thanks, that makes sense that it would just be rendering a quad and that the 
original scene geometry would be lost.  However, the GLSL geometry shader only 
accepts primitives of the type point, line, or triangle; is it perhaps 
rendering two triangles to the geometry shader to make up the quad?  How would I 
even go about determining this, since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader to 
calculate the min, max, mean, std dev, and histogram of an RTT texture.  Fellow 
osg forum member Aurelius has advised me that he has working code that does 
this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and
This seems rather complicated for a starter. Also it requires feedback 
buffers which I never got working in OSG, so there might be some more 
obstacles here.

http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second example is 
a paper that describes an algorithm but doesn't have any src.  These are both 
potentially great resources, but I'm struggling to just get a basic 
pass-through geometry shader working to get some sort of starting point.
Geometry shaders are really simple. You simply need to know your input 
primitives.

A very basic geometry shader looks like this:


#version 400
layout(triangles) in; // we receive triangles
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        vec4 vertex = gl_in[i].gl_Position;
        gl_Position = vertex;
        EmitVertex();
    }

    EndPrimitive();
}

But seriously, there are examples in OSG touching on geometry shaders, and 
there are plenty of tutorials about GLSL and shaders.


cheers
Sebastian


As a side note, I am also considering using a compute shader since this would be the more 
natural fit for this type of algorithm, while the geometry shader method is more of a 
hack that goes against the original intention of the geometry shader, but I'd 
be happy using either method; I'm just trying to get some traction on either of them.

-Ethan


SMesserschmidt wrote:

Am 18.11.2013 15:32, schrieb Ethan Fahy:


Hello,

Preface: I realize this question comes about because I've never really learned 
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but 
I mostly have been coasting by at the osg middleware level and have been doing 
OK so far.

If I want to do some simple post-processing I can create a render-to-texture camera and 
render to the framebuffer.  I can attach a texture to the framebuffer and then create 
another screen camera to render that texture to the screen.  I can add a GLSL 
shader program to this texture so that before the texture gets rendered to the screen it 
gets an effect added to it using the shaders.

When I use shaders attached to 3-D model nodes in the scene itself, the meaning of 
the vertex and frag shaders is easy to understand: the vertices of the 3-D 
model are the vertices referenced in the vertex shader.  However, when I render my 
scene to a texture and then do a simple pass-through vertex and frag shader combo, 
what is the meaning of the vertices in this scenario?  I had assumed that once you 
render your scene to a texture, all knowledge of the original scene's geometry and 
vertex locations has been lost; is this true?  If so, then what vertices am I 
dealing with?  It's easy enough to follow along with examples and to use a simple 
pass-through vertex shader, but I'd like to understand this better because I now 
want to insert a geometry shader in between the vertex and frag shaders, and again 
I'm not sure whether to use point, line, or triangle in my geometry shader as the 
primitive type, because I thought that the geometry and primitives of the original 
scene would be lost after rendering to texture.


Usually when doing the post-processing pass you will be rendering to a
fullscreen quad. So the vertices you are dealing with are those of the
quad you are rendering to.
And yes, if you don't take any further actions, rendering to texture will not
preserve the information on your original vertices etc.
The question is what you want to achieve. A geometry shader in between
your post-processing passes will work on the quad's vertices.
Maybe you should elaborate which kind of post processing you want to
achieve, so we can help you here.


What am I missing here?  Any clarification is most welcome.

-Ethan

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57281#57281






Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Ethan Fahy
Thanks Sebastian,

I have in fact looked through every geometry shader tutorial I could find and 
have tried to implement a simple pass-through shader identical to the one you 
posted, but when I add the geometry shader I just get a black screen with no 
OpenGL error messages, and if I remove the geometry shader and keep the vertex 
and frag shaders I get my normal scene.  I'm not sure if maybe I'm missing 
passing through texture coordinates or something of that nature...?

-Ethan


SMesserschmidt wrote:
 Hi Ethan,
 
 
 
  Thanks, that makes sense that it would just be rendering a quad and that 
  the original scene geometry would be lost.  However, the GLSL geometry 
  shader only accepts primitives of the type point, line, or triangle-is it 
  perhaps rendering two triangles to the geometry shader to make up the quad? 
   How would I even go about determining since there's no debugging available?
  
  But back to what I'm trying to do, I'm trying to use a geometry shader to 
  calculate the min, max, mean, std dev, and histogram of an RTT texture.  
  Fellow osg forum member Aurelius has advised me that he has working code 
  that does this using geometry shaders and pointed me to:
  http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
  and
  
 This seems rather complicated for a starter. Also it requires feedback 
 buffers which I never got working in OSG, so there might be some more 
 obstacles here.
 
  http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
  but the first example only has Cg code and not GLSL, and the second example 
  is a paper that describes an algorithm but doesn't have any src.  These 
  are both potentially great resources, but I'm struggling to just get a 
  basic pass-through geometry shader working to get some sort of starting 
  point.
  
 Geometry shaders are really simple. You simply need to know your input 
 primitives.
 A very basic geometry shader looks like this:
 
 
 #version 400
 layout(triangles) in; //we receive triangles
 layout(triangle_strip, max_vertices = 3) out;
 
 void main()
 {
 for (int i = 0; i < 3; ++i)
 {
 vec4 vertex = gl_in[i].gl_Position;
 gl_Position = vertex;
 EmitVertex();
 }
 
 EndPrimitive();
 }
 
 But seriously, there are examples in OSG touching geometry shaders and 
 there are plenty of tutorials about glsl and shaders.
 
 cheers
 Sebastian
 
  
  As a side note, I am also considering using a compute shader since this 
  would be the more natural fit for this type of algorithm while the geometry 
  shader method is more of a hack that goes against the original intention 
  of the geometry shader, but I'd be happy using either method, I'm just 
  trying to get some traction on either of them.
  
  -Ethan
  
  
  SMesserschmidt wrote:
  
   Am 18.11.2013 15:32, schrieb Ethan Fahy:
   
   
Hello,

Preface: I realize this question comes about because I've never really 
learned OpenGL/GLSL from the ground up and am likely missing some 
simple concepts, but I mostly have been coasting by at the osg 
middleware level and have been doing OK so far.

If I want to do some simple post-processing I can create a 
render-to-texture camera and render to the framebuffer.  I can attach a 
texture to the framebuffer and then create another screen camera to 
render that texture to the screen.  I can add a GLSL shader program to 
this texture so that before the texture gets rendered to the screen it 
gets an effect added to it using the shaders.

When I use shaders attached to 3-D model nodes in the scene itself, the 
meaning of the vertex and frag shaders is easy to understand-the 
vertices of the 3-D model are the vertices referenced in the vertex 
shader.  However, When I render my scene to a texture and then do a 
simple pass-through vertex and frag shader combo, what is the meaning 
of the vertices in this scenario?  I had assumed that once you render 
your scene to a texture, all knowledge of the original scene's geometry 
and vertex locations has been lost, is this true?  If so, then what 
vertices am I dealing with?  It's easy enough to follow along with 
examples and to use a simple pass-through vertex shader, but I'd like 
to understand this better because I now want to insert a geometry 
shader in between the vertex and frag shaders and again I'm not sure 
whether to use point, line, or triangle in my geometry shader as the 
primitive type because I thought that the geometry and primitives of 
the original scene would be lost after rendering to texture.


   Usually when doing the post-processing pass you will be rendering to a
   fullscreen quad. So the vertices you are dealing with are those of the
   quad you are rendering to.
   And yes, if you don't take any further actions, rendering to texture will not
   preserve the information on your original vertices 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Sebastian Messerschmidt

Hi Ethan

Thanks Sebastian,

I have in fact looked through every geometry shader tutorial I could find and 
have tried to implement a simple pass-through shader identical to the one you 
posted, but when I add the geometry shader I just get a black screen with no 
OpenGL error messages, and if I remove the geometry shader and keep the vertex 
and frag shaders I get my normal scene.   I'm not sure if maybe I'm missing 
passing through texture coordinates or something of that nature...?


Ok, you of course have to pass all your varyings through the geometry 
shader. That's the burden of using a geometry shader: you will have to 
replicate the default functionality.

For example:
Vertex shader:

out vec4 v_color;
v_color = gl_Color;

in the geometry shader:
in vec4 v_color[];
out vec4 g_color;

g_color = v_color[i]; // i = vertex number

and then finally in your fragment shader you can access:

in vec4 g_color;
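Put together as one complete (and hypothetical; all variable names are illustrative) minimal pass-through chain, the three stages would read:

```glsl
// --- vertex shader ---
#version 150 compatibility
out vec4 v_color;
void main()
{
    v_color     = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// --- geometry shader ---
#version 150 compatibility
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in  vec4 v_color[];
out vec4 g_color;
void main()
{
    for (int i = 0; i < 3; ++i)
    {
        g_color     = v_color[i];        // forward the per-vertex varying
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}

// --- fragment shader ---
#version 150 compatibility
in vec4 g_color;
void main()
{
    gl_FragColor = g_color;
}
```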

cheers
Sebastian


-Ethan


SMesserschmidt wrote:

Hi Ethan,




Thanks, that makes sense that it would just be rendering a quad and that the 
original scene geometry would be lost.  However, the GLSL geometry shader only 
accepts primitives of the type point, line, or triangle-is it perhaps 
rendering two triangles to the geometry shader to make up the quad?  How would I 
even go about determining since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader to 
calculate the min, max, mean, std dev, and histogram of an RTT texture.  Fellow 
osg forum member Aurelius has advised me that he has working code that does 
this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and


This seems rather complicated for a starter. Also it requires feedback
buffers which I never got working in OSG, so there might be some more
obstacles here.


http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second example is 
a paper that describes an algorithm but doesn't have any src.  These are both 
potentially great resources, but I'm struggling to just get a basic 
pass-through geometry shader working to get some sort of starting point.


Geometry shaders are really simple. You simply need to know your input
primitives.
A very basic geometry shader looks like this:


#version 400
layout(triangles) in; //we receive triangles
layout(triangle_strip, max_vertices = 3) out;

void main()
{
for (int i = 0; i < 3; ++i)
{
vec4 vertex = gl_in[i].gl_Position;
gl_Position = vertex;
EmitVertex();
}

EndPrimitive();
}

But seriously, there are examples in OSG touching geometry shaders and
there are plenty of tutorials about glsl and shaders.

cheers
Sebastian


As a side note, I am also considering using a compute shader since this would be the more 
natural fit for this type of algorithm while the geometry shader method is more of a 
hack that goes against the original intention of the geometry shader, but I'd 
be happy using either method, I'm just trying to get some traction on either of them.

-Ethan


SMesserschmidt wrote:


Am 18.11.2013 15:32, schrieb Ethan Fahy:



Hello,

Preface: I realize this question comes about because I've never really learned 
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but 
I mostly have been coasting by at the osg middleware level and have been doing 
OK so far.

If I want to do some simple post-processing I can create a render-to-texture camera and 
render to the framebuffer.  I can attach a texture to the framebuffer and then create 
another screen camera to render that texture to the screen.  I can add a GLSL 
shader program to this texture so that before the texture gets rendered to the screen it 
gets an effect added to it using the shaders.

When I use shaders attached to 3-D model nodes in the scene itself, the meaning of 
the vertex and frag shaders is easy to understand-the vertices of the 3-D 
model are the vertices referenced in the vertex shader.  However, When I render my 
scene to a texture and then do a simple pass-through vertex and frag shader combo, 
what is the meaning of the vertices in this scenario?  I had assumed that once you 
render your scene to a texture, all knowledge of the original scene's geometry and 
vertex locations has been lost, is this true?  If so, then what vertices am I 
dealing with?  It's easy enough to follow along with examples and to use a simple 
pass-through vertex shader, but I'd like to understand this better because I now 
want to insert a geometry shader in between the vertex and frag shaders and again 
I'm not sure whether to use point, line, or triangle in my geometry shader as the 
primitive type because I thought that the geometry and primitives of the original 
scene would be lost after rendering to texture.



Usually when doing the post-processing pass you will be rendering to a
fullscreen quad. So the vertices you are dealing 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Ethan Fahy
Thanks again, it looks like I need to get up to speed with using in and out 
vs attribute and varying since I cut my teeth on older tutorials and apparently 
attribute and varying are officially deprecated and are supported through 
compatibility mode.  I'm not used to needing historical context and having so 
many deprecated variables and different extensions to worry about like with 
GLSL.  GLSL-land is a bit of a wild place to be!

-Ethan


SMesserschmidt wrote:
 Hi Ethan
 
  Thanks Sebastian,
  
  I have in fact looked through every geometry shader tutorial I could find 
  and have tried to implement a simple pass-through shader identical to the 
  one you posted, but when I add the geometry shader I just get a black 
  screen with no OpenGL error messages, and if I remove the geometry shader 
  and keep the vertex and frag shaders I get my normal scene.   I'm not sure 
  if maybe I'm missing passing through texture coordinates or something of 
  that nature...?
  
 
 Ok, you of course have to pass all your varyings through the geometry 
 shader. That's the burden of using a geometry shader: you will have to 
 replicate the default functionality.
 For example:
 Vertex shader:
 
 out vec4 v_color;
 v_color =gl_Color;
 
 in geometry shader:
 in vec4 v_color[];
 out vec4 g_color;
 
 g_color = v_color[i]; // i = vertex number
 
 and then finally in your fragment shader you can access:
 
 in g_color
 
 cheer
 Sebastian
 
  
  -Ethan
  
  
  SMesserschmidt wrote:
  
   Hi Ethan,
   
   
   
   
Thanks, that makes sense that it would just be rendering a quad and 
that the original scene geometry would be lost.  However, the GLSL 
geometry shader only accepts primitives of the type point, line, or 
triangle-is it perhaps rendering two triangles to the geometry shader 
to make up the quad?  How would I even go about determining since 
there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader 
to calculate the min, max, mean, std dev, and histogram of an RTT 
texture.  Fellow osg forum member Aurelius has advised me that he has 
working code that does this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and


   This seems rather complicated for a starter. Also it requires feedback
   buffers which I never got working in OSG, so there might be some more
   obstacles here.
   
   
http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second 
example is a paper that describes an algorithm but doesn't have any 
src.  These are both potentially great resources, but I'm struggling to 
just get a basic pass-through geometry shader working to get some sort 
of starting point.


   Geometry shaders are really simple. You simply need to know your input
   primitives.
   A very basic geometry shader looks like this:
   
   
   #version 400
   layout(triangles) in; //we receive triangles
   layout(triangle_strip, max_vertices = 3) out;
   
   void main()
   {
   for (int i = 0; i < 3; ++i)
   {
   vec4 vertex = gl_in[i].gl_Position;
   gl_Position = vertex;
   EmitVertex();
   }
   
   EndPrimitive();
   }
   
   But seriously, there are examples in OSG touching geometry shaders and
   there are plenty of tutorials about glsl and shaders.
   
   cheers
   Sebastian
   
   
As a side note, I am also considering using a compute shader since this 
  would be the more natural fit for this type of algorithm while 
geometry shader method is more of a hack that goes against the 
original intention of the geometry shader, but I'd be happy using 
either method, I'm just trying to get some traction on either of them.

-Ethan


SMesserschmidt wrote:


 Am 18.11.2013 15:32, schrieb Ethan Fahy:
 
 
 
  Hello,
  
  Preface: I realize this question comes about because I've never 
  really learned OpenGL/GLSL from the ground up and am likely missing 
  some simple concepts, but I mostly have been coasting by at the osg 
  middleware level and have been doing OK so far.
  
  If I want to do some simple post-processing I can create a 
  render-to-texture camera and render to the framebuffer.  I can 
  attach a texture to the framebuffer and then create another 
  screen camera to render that texture to the screen.  I can add a 
  GLSL shader program to this texture so that before the texture gets 
  rendered to the screen it gets an effect added to it using the 
  shaders.
  
  When I use shaders attached to 3-D model nodes in the scene itself, 
  the meaning of the vertex and frag shaders is easy to 
  understand-the vertices of the 3-D model are the vertices 
  referenced in the vertex shader.  However, When I render my scene 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Ethan Fahy
Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed through 
the official GLSL 1.5 specification document.  I then grepped the osg src and 
examples directories to see if I could find any #version 150 shaders (I could 
not).  Are there any reference/example implementations of GLSL 1.5 shaders with 
OSG?  I ask because it looks like GLSL 1.5 requires you to be more explicit 
with declaring inputs and outputs, including needing to pass things like the 
ModelView matrix etc. into the vertex shader as uniforms.  Is this easily done?  
Examples?  Do I have this assumption wrong?  Thanks again, I promise I'm 
working hard on my end and not just fishing for easy answers 

SMesserschmidt wrote:
 Hi Ethan,
 
  Thanks again, it looks like I need to get up to speed with using in and 
  out vs attribute and varying since I cut my teeth on older tutorials and 
  apparently attribute and varying are officially deprecated and are 
  supported through compatibility mode.  I'm not used to needing historical 
  context and having so many deprecated variables and different extensions to 
  worry about like with GLSL.  GLSL-land is a bit of a wild place to be!
  
 Don't worry. You can also use varying safely. Just remember to use 
 something like #version 150 compatibility at the beginning of your shaders.
 However, it is better to get used to the new in/out qualifiers, as they make 
 it much clearer what comes into and what goes out of each shader stage.
 cheers
 Sebastian
 
  
  -Ethan
  
  
  SMesserschmidt wrote:
  
   Hi Ethan
   
   
Thanks Sebastian,

I have in fact looked through every geometry shader tutorial I could 
find and have tried to implement a simple pass-through shader identical 
to the one you posted, but when I add the geometry shader I just get a 
black screen with no OpenGL error messages, and if I remove the 
geometry shader and keep the vertex and frag shaders I get my normal 
scene.   I'm not sure if maybe I'm missing passing through texture 
coordinates or something of that nature...?


   Ok, you of course have to pass all your varyings through the geometry
   shader. That's the burden of using a geometry shader: you will have to
   replicate the default functionality.
   For example:
   Vertex shader:
   
   out vec4 v_color;
   v_color =gl_Color;
   
   in geometry shader:
   in vec4 v_color[];
   out vec4 g_color;
   
   g_color = v_color[i]; // i = vertex number
   
   and then finally in your fragment shader you can access:
   
   in g_color
   
   cheer
   Sebastian
   
   
-Ethan


SMesserschmidt wrote:


 Hi Ethan,
 
 
 
 
 
  Thanks, that makes sense that it would just be rendering a quad and 
  that the original scene geometry would be lost.  However, the GLSL 
  geometry shader only accepts primitives of the type point, line, or 
  triangle-is it perhaps rendering two triangles to the geometry 
  shader to make up the quad?  How would I even go about determining 
  since there's no debugging available?
  
  But back to what I'm trying to do, I'm trying to use a geometry 
  shader to calculate the min, max, mean, std dev, and histogram of 
  an RTT texture.  Fellow osg forum member Aurelius has advised me 
  that he has working code that does this using geometry shaders and 
  pointed me to:
  http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
  and
  
  
  
 This seems rather complicated for a starter. Also it requires feedback
 buffers which I never got working in OSG, so there might be some more
 obstacles here.
 
 
 
  http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
  but the first example only has Cg code and not GLSL, and the second 
  example is a paper that describes an algorithm but doesn't have 
  any src.  These are both potentially great resources, but I'm 
  struggling to just get a basic pass-through geometry shader working 
  to get some sort of starting point.
  
  
  
 Geometry shaders are really simple. You simply need to know your input
 primitives.
 A very basic geometry shader looks like this:
 
 
 #version 400
 layout(triangles) in; //we receive triangles
 layout(triangle_strip, max_vertices = 3) out;
 
 void main()
 {
 for (int i = 0; i < 3; ++i)
 {
 vec4 vertex = gl_in[i].gl_Position;
 gl_Position = vertex;
 EmitVertex();
 }
 
 EndPrimitive();
 }
 
 But seriously, there are examples in OSG touching geometry shaders and
 there are plenty of tutorials about glsl and shaders.
 
 cheers
 Sebastian
 
 
 
  As a side note, I am also considering using a compute shader since 
  this would be the more natural fit for this type of algorithm while 
  the geometry shader 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Sebastian Messerschmidt

Hi Ethan,

Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed through 
the official GLSL 1.5 specification document.  I then grepped the osg src and 
examples directories to see if I could find any #version 150 shaders (I could 
not).  Are there any reference/example implementations of GLSL 1.5 shaders with 
OSG?  I ask because it looks like GLSL 1.5 requires you to be more explicit 
with declaring inputs and outputs, including needing to pass things like the 
modelviewmatrix etc into the vertex shader as uniforms.  Is this easily done?  
Examples?  Do I have this assumption wrong?  Thanks again, I promise I'm 
working hard on my end and not just fishing for easy answers

Simply go with #version 150 compatibility.
It allows you to use the built-in uniforms and varyings, so 
gl_ModelViewMatrix and gl_Vertex etc. are still valid tokens.
This is less painful than doing it the hard-core way. If you really 
want to go that way you have to set up vertex attribute aliasing in OSG, 
but then you will really have to do a lot of things using visitors and 
so on. So for starting, simply go with the compatibility profile and use 
in/out only on your varyings.
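A sketch of what that looks like in practice (minimal and hypothetical, not from the thread): a compatibility-profile vertex shader can mix the new in/out qualifiers with the legacy built-ins, so nothing extra has to be passed in as uniforms:

```glsl
#version 150 compatibility

// New-style varying out to the next stage...
out vec4 v_color;

void main()
{
    // ...while the legacy built-ins are still valid tokens
    // under the compatibility profile.
    v_color     = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```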


cheers
Sebastian


SMesserschmidt wrote:

Hi Ethan,


Thanks again, it looks like I need to get up to speed with using in and out 
vs attribute and varying since I cut my teeth on older tutorials and apparently attribute and 
varying are officially deprecated and are supported through compatibility mode.  I'm not used to 
needing historical context and having so many deprecated variables and different extensions to 
worry about like with GLSL.  GLSL-land is a bit of a wild place to be!


Don't worry. You can also use varying safely. Just remember to use
something like #version 150 compatibility at the beginning of your shaders.
However, it is better to get used to the new in/out qualifiers, as they make
it much clearer what comes into and what goes out of each shader stage.
cheers
Sebastian


-Ethan


SMesserschmidt wrote:


Hi Ethan



Thanks Sebastian,

I have in fact looked through every geometry shader tutorial I could find and 
have tried to implement a simple pass-through shader identical to the one you 
posted, but when I add the geometry shader I just get a black screen with no 
OpenGL error messages, and if I remove the geometry shader and keep the vertex 
and frag shaders I get my normal scene.   I'm not sure if maybe I'm missing 
passing through texture coordinates or something of that nature...?



Ok, you of course have to pass all your varyings through the geometry
shader. That's the burden of using a geometry shader: you will have to
replicate the default functionality.
For example:
Vertex shader:

out vec4 v_color;
v_color =gl_Color;

in geometry shader:
in vec4 v_color[];
out vec4 g_color;

g_color = v_color[i]; // i = index of the vertex within the input primitive

and then finally in your fragment shader you can access:

in vec4 g_color;

cheers
Sebastian
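
Assembled into complete stages, the fragments above might look like the following sketch (triangle input assumed, matching the later example in this thread; stage boundaries are marked with comments since each stage is a separate shader object):

```glsl
// ---- vertex shader ----
#version 150 compatibility
out vec4 v_color;
void main()
{
    v_color     = gl_Color;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// ---- geometry shader (pass-through) ----
#version 150 compatibility
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
in  vec4 v_color[];
out vec4 g_color;
void main()
{
    for (int i = 0; i < 3; ++i)
    {
        g_color     = v_color[i];            // forward the varying
        gl_Position = gl_in[i].gl_Position;  // forward the position
        EmitVertex();
    }
    EndPrimitive();
}

// ---- fragment shader ----
#version 150 compatibility
in vec4 g_color;
void main()
{
    gl_FragColor = g_color;
}
```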



-Ethan


SMesserschmidt wrote:



Hi Ethan,






Thanks, that makes sense that it would just be rendering a quad and that the 
original scene geometry would be lost.  However, the GLSL geometry shader only 
accepts primitives of the type point, line, or triangle - is it perhaps 
rendering two triangles to the geometry shader to make up the quad?  How would I 
even go about determining that, since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader to 
calculate the min, max, mean, std dev, and histogram of an RTT texture.  Fellow 
osg forum member Aurelius has advised me that he has working code that does 
this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and




This seems rather complicated for a starter. Also it requires feedback
buffers which I never got working in OSG, so there might be some more
obstacles here.




http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second example is 
a paper that describers an algorithm but doesn't have any src.  These are both 
potentially great resources, but I'm struggling to just get a basic 
pass-through geometry shader working to get some sort of starting point.




Geometry shaders are really simple. You simply need to know your input
primitives.
A very basic geometry shader looks like this:


#version 400
layout(triangles) in; // we receive triangles
layout(triangle_strip, max_vertices = 3) out;

void main()
{
    for (int i = 0; i < 3; ++i)
    {
        vec4 vertex = gl_in[i].gl_Position;
        gl_Position = vertex;
        EmitVertex();
    }

    EndPrimitive();
}

But seriously, there are examples in OSG touching geometry shaders and
there are plenty of tutorials about glsl and shaders.

cheers
Sebastian




As a side note, I am also considering using a compute shader since this would be the more 
natural fit for this type of algorithm while the geometry shader method is more of a 
hack that goes against 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Ethan Fahy
If I use #version 150 compatibility, do I still have to explicitly do the in 
out specifications, such as declaring out gl_FragColor in the frag shader?  


SMesserschmidt wrote:
 Hi Ethan,
 
  Hello Sebastian,
  I read up on the differences between GLSL 1.2 and 1.5 and then skimmed 
  through the official GLSL 1.5 specification document.  I then grepped the 
  osg src and examples directories to see if I could find any #version 150 
  shaders (I could not).  Are there any reference/example implementations of 
  GLSL 1.5 shaders with OSG?  I ask because it looks like GLSL 1.5 requires 
  you to be more explicit with declaring inputs and outputs, including 
  needing to pass things like the modelviewmatrix etc into the vertex shader 
  as uniforms.  Is this easily done?  Examples?  Do I have this assumption 
  wrong?  Thanks again, I promise I'm working hard on my end and not just 
  fishing for easy answers
  
 Simply go with #version 150 compatibility.
 It allows you to use the built-in uniforms and varyings, so 
 gl_ModelViewMatrix and gl_Vertex etc. are still valid tokens.
 This is less painful than doing it the hard-core way. If you really 
 want to go that way you have to set up vertex attribute aliasing in OSG, 
 but then you will really have to do a lot of things using visitors and 
 so on. So for starting, simply go with the compatibility profile and use 
 in/out only on your varyings.
 
 cheers
 Sebastian
 
  
  SMesserschmidt wrote:
  
   Hi Ethan,
   
   
Thanks again, it looks like I need to get up to speed with using in 
and out vs attribute and varying since I cut my teeth on older 
tutorials and apparently attribute and varying are officially 
deprecated and are supported through compatibility mode.  I'm not used 
to needing historical context and having so many deprecated variables 
and different extensions to worry about like with GLSL.  GLSL-land is a 
bit of a wild place to be!


   Don't worry. You can also use varying safely. Just remember to use
   something like #version 150 compatibility at the beginning of your 
   shaders.
   However, it is better to get used to the new in, out as they are making
   it much clearer what is input and what goes out your shader stage.
   cheers
   Sebastian
   
   
-Ethan


SMesserschmidt wrote:


 Hi Ethan
 
 
 
  Thanks Sebastian,
  
   I have in fact looked through every geometry shader tutorial I 
   could find and have tried to implement a simple pass-through shader 
  identical to the one you posted, but when I add the geometry shader 
  I just get a black screen with no OpenGL error messages, and if I 
  remove the geometry shader and keep the vertex and frag shaders I 
  get my normal scene.   I'm not sure if maybe I'm missing passing 
  through texture coordinates or something of that nature...?
  
  
  
 Ok, you of course have to pass all your varyings through the geometry
 shader. That's the burden of using a geometry shader: you will have to
 replicate the default functionality.
 For example:
 Vertex shader:
 
 out vec4 v_color;
 v_color =gl_Color;
 
 in geometry shader:
 in vec4 v_color[];
 out vec4 g_color;
 
 g_color = v_color[i]; // i = index of the vertex within the input primitive
 
 and then finally in your fragment shader you can access:
 
 in vec4 g_color;
 
 cheers
 Sebastian
 
 
 
  -Ethan
  
  
  SMesserschmidt wrote:
  
  
  
   Hi Ethan,
   
   
   
   
   
   
Thanks, that makes sense that it would just be rendering a quad 
and that the original scene geometry would be lost.  However, 
the GLSL geometry shader only accepts primitives of the type 
point, line, or triangle-is it perhaps rendering two triangles 
to the geometry shader to make up the quad?  How would I even 
go about determining since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry 
shader to calculate the min, max, mean, std dev, and histogram 
of an RTT texture.  Fellow osg forum member Aurelius has 
advised me that he has working code that does this using 
geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and




   This seems rather complicated for a starter. Also it requires 
   feedback
   buffers which I never got working in OSG, so there might be some 
   more
   obstacles here.
   
   
   
   
http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the 
second example is a paper that describes an algorithm but 
doesn't have any src.  These 

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Ethan Fahy
Also, is there any good reason to use #version 150 compatibility vs using 
#version 120 and using the extension required to use geometry shaders other 
than using #version 150 compatibility is more forward looking syntactically?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57326#57326





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Sebastian Messerschmidt

Sorry Ethan,

Personally I try to take the profile which doesn't require the 
extension. Simply go ahead and try.
Concerning your other question: Please check the web for answers. The 
OpenGL/GLSL Specification is freely available and there might be some 
tutorials for this.

For reference I use:
http://www.opengl.org/sdk/docs/manglsl/xhtml/

http://www.opengl.org/wiki/Main_Page


Also, is there any good reason to use #version 150 compatibility vs using 
#version 120 and using the extension required to use geometry shaders other 
than using #version 150 compatibility is more forward looking syntactically?

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57326#57326







Re: [osg-users] Render-To-Texture GLSL program question

2013-11-19 Thread Sebastian Messerschmidt

If I use #version 150 compatibility, do I still have to explicitly do the in 
out specifications, such as declaring out gl_FragColor in the frag shader?
No, as I already said: You can use the old syntax and mix it with the 
new one.
You absolutely don't have to use the layout, in, out things in this 
profile.
Why don't you go ahead and write some simple shader program and see when 
it fails? This way you will learn much more than asking about every 
little potential problem ;-)


cheers
Sebastian




SMesserschmidt wrote:

Hi Ethan,


Hello Sebastian,
I read up on the differences between GLSL 1.2 and 1.5 and then skimmed through 
the official GLSL 1.5 specification document.  I then grepped the osg src and 
examples directories to see if I could find any #version 150 shaders (I could 
not).  Are there any reference/example implementations of GLSL 1.5 shaders with 
OSG?  I ask because it looks like GLSL 1.5 requires you to be more explicit 
with declaring inputs and outputs, including needing to pass things like the 
modelviewmatrix etc into the vertex shader as uniforms.  Is this easily done?  
Examples?  Do I have this assumption wrong?  Thanks again, I promise I'm 
working hard on my end and not just fishing for easy answers


Simply go with #version 150 compatibility.
It allows you to use the built-in uniforms and varyings, so
gl_ModelViewMatrix and gl_Vertex etc. are still valid tokens.
This is less painful than doing it the hard-core way. If you really
want to go that way you have to set up vertex attribute aliasing in OSG,
but then you will really have to do a lot of things using visitors and
so on. So for starting, simply go with the compatibility profile and use
in/out only on your varyings.

cheers
Sebastian


SMesserschmidt wrote:


Hi Ethan,



Thanks again, it looks like I need to get up to speed with using in and out 
vs attribute and varying since I cut my teeth on older tutorials and apparently attribute and 
varying are officially deprecated and are supported through compatibility mode.  I'm not used to 
needing historical context and having so many deprecated variables and different extensions to 
worry about like with GLSL.  GLSL-land is a bit of a wild place to be!



Don't worry. You can also use varying safely. Just remember to use
something like #version 150 compatibility at the beginning of your shaders.
However, it is better to get used to the new in/out, as they make
it much clearer what goes into and what comes out of your shader stage.
cheers
Sebastian



-Ethan


SMesserschmidt wrote:



Hi Ethan




Thanks Sebastian,

I have in fact looked through every geometry shader tutorial I could find and 
have tried to implement a simple pass-through shader identical to the one you 
posted, but when I add the geometry shader I just get a black screen with no 
OpenGL error messages, and if I remove the geometry shader and keep the vertex 
and frag shaders I get my normal scene.   I'm not sure if maybe I'm missing 
passing through texture coordinates or something of that nature...?




Ok, you of course have to pass all your varyings through the geometry
shader. That's the burden of using a geometry shader: you will have to
replicate the default functionality.
For example:
Vertex shader:

out vec4 v_color;
v_color =gl_Color;

in geometry shader:
in vec4 v_color[];
out vec4 g_color;

g_color = v_color[i]; // i = index of the vertex within the input primitive

and then finally in your fragment shader you can access:

in vec4 g_color;

cheers
Sebastian




-Ethan


SMesserschmidt wrote:




Hi Ethan,







Thanks, that makes sense that it would just be rendering a quad and that the 
original scene geometry would be lost.  However, the GLSL geometry shader only 
accepts primitives of the type point, line, or triangle-is it perhaps 
rendering two triangles to the geometry shader to make up the quad?  How would I 
even go about determining since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader to 
calculate the min, max, mean, std dev, and histogram of an RTT texture.  Fellow 
osg forum member Aurelius has advised me that he has working code that does 
this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and





This seems rather complicated for a starter. Also it requires feedback
buffers which I never got working in OSG, so there might be some more
obstacles here.





http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second example is 
a paper that describes an algorithm but doesn't have any src.  These are both 
potentially great resources, but I'm struggling to just get a basic 
pass-through geometry shader working to get some sort of starting point.





Geometry shaders are really simple. You simply need to know your input
primitives.
A very basic geometry shader looks like this:


#version 400
layout(triangles) in; //we 

[osg-users] Render-To-Texture GLSL program question

2013-11-18 Thread Ethan Fahy
Hello,

Preface: I realize this question comes about because I've never really learned 
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but 
I mostly have been coasting by at the osg middleware level and have been doing 
OK so far.

If I want to do some simple post-processing I can create a render-to-texture 
camera and render to the framebuffer.  I can attach a texture to the 
framebuffer and then create another screen camera to render that texture to 
the screen.  I can add a GLSL shader program to this texture so that before the 
texture gets rendered to the screen it gets an effect added to it using the 
shaders.  

When I use shaders attached to 3-D model nodes in the scene itself, the meaning 
of the vertex and frag shaders is easy to understand-the vertices of the 3-D 
model are the vertices referenced in the vertex shader.  However, When I render 
my scene to a texture and then do a simple pass-through vertex and frag shader 
combo, what is the meaning of the vertices in this scenario?  I had assumed 
that once you render your scene to a texture, all knowledge of the original 
scene's geometry and vertex locations has been lost, is this true?  If so, then 
what vertices am I dealing with?  It's easy enough to follow along with 
examples and to use a simple pass-through vertex shader, but I'd like to 
understand this better because I now want to insert a geometry shader in 
between the vertex and frag shaders and again I'm not sure whether to use 
point, line, or triangle in my geometry shader as the primitive type because I 
thought that the geometry and primitives of the original scene would 
be lost after rendering to texture.  

What am I missing here?  Any clarification is most welcome.

-Ethan

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57281#57281







Re: [osg-users] Render-To-Texture GLSL program question

2013-11-18 Thread Sebastian Messerschmidt

On 18.11.2013 15:32, Ethan Fahy wrote:

Hello,

Preface: I realize this question comes about because I've never really learned 
OpenGL/GLSL from the ground up and am likely missing some simple concepts, but 
I mostly have been coasting by at the osg middleware level and have been doing 
OK so far.

If I want to do some simple post-processing I can create a render-to-texture camera and 
render to the framebuffer.  I can attach a texture to the framebuffer and then create 
another screen camera to render that texture to the screen.  I can add a GLSL 
shader program to this texture so that before the texture gets rendered to the screen it 
gets an effect added to it using the shaders.

When I use shaders attached to 3-D model nodes in the scene itself, the meaning of 
the vertex and frag shaders is easy to understand-the vertices of the 3-D 
model are the vertices referenced in the vertex shader.  However, When I render my 
scene to a texture and then do a simple pass-through vertex and frag shader combo, 
what is the meaning of the vertices in this scenario?  I had assumed that once you 
render your scene to a texture, all knowledge of the original scene's geometry and 
vertex locations has been lost, is this true?  If so, then what vertices am I 
dealing with?  It's easy enough to follow along with examples and to use a simple 
pass-through vertex shader, but I'd like to understand this better because I now 
want to insert a geometry shader in between the vertex and frag shaders and again 
I'm not sure whether to use point, line, or triangle in my geometry shader as the 
primitive type because I thought that the geometry and primitives of the original 
scene would be lost after rendering to texture.


Usually when doing the post-processing pass you will be rendering to a 
fullscreen quad. So the vertices you are dealing with are those of the 
quad you are rendering to.
And yes, if you don't take any further actions, rendering to texture will not 
preserve the information on your original vertices etc.
The question is what you want to achieve. A geometry shader within 
your post-processing pass will work on the quad's vertices.
Maybe you should elaborate which kind of post-processing you want to 
achieve, so we can help you here.
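
For the fullscreen-quad case described here, the pass-through pair is typically just the following sketch (the sampler name u_sceneTex is illustrative, not from the thread):

```glsl
// ---- vertex shader: forward the quad's corners and texture coordinates ----
#version 150 compatibility
out vec2 v_texCoord;
void main()
{
    v_texCoord  = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// ---- fragment shader: sample the RTT texture; any post-processing
// ---- effect goes here, per fragment ----
#version 150 compatibility
uniform sampler2D u_sceneTex;  // illustrative name for the RTT texture
in vec2 v_texCoord;
void main()
{
    gl_FragColor = texture2D(u_sceneTex, v_texCoord);
}
```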


What am I missing here?  Any clarification is most welcome.

-Ethan

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57281#57281







Re: [osg-users] Render-To-Texture GLSL program question

2013-11-18 Thread Ethan Fahy
Thanks, that makes sense that it would just be rendering a quad and that the 
original scene geometry would be lost.  However, the GLSL geometry shader only 
accepts primitives of the type point, line, or triangle - is it perhaps 
rendering two triangles to the geometry shader to make up the quad?  How would 
I even go about determining that, since there's no debugging available?

But back to what I'm trying to do, I'm trying to use a geometry shader to 
calculate the min, max, mean, std dev, and histogram of an RTT texture.  Fellow 
osg forum member Aurelius has advised me that he has working code that does 
this using geometry shaders and pointed me to:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
and
http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
but the first example only has Cg code and not GLSL, and the second example is 
a paper that describes an algorithm but doesn't have any src.  These are both 
potentially great resources, but I'm struggling to just get a basic 
pass-through geometry shader working to get some sort of starting point.

As a side note, I am also considering using a compute shader since this would 
be the more natural fit for this type of algorithm while the geometry shader 
method is more of a hack that goes against the original intention of the 
geometry shader, but I'd be happy using either method, I'm just trying to get 
some traction on either of them.
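
The referenced geometry-shader histogram technique (GPU Gems 3, ch. 41) is usually a scatter pass: render one point per texel, reposition each point to its histogram bin in the vertex stage, and accumulate counts with additive blending into a small render target. A hedged sketch of the repositioning stage, not code from the thread - NUM_BINS, the uniform names, and the luminance weighting are all assumptions:

```glsl
// ---- vertex shader: one input point per texel; move it to its bin ----
#version 150 compatibility
#define NUM_BINS 256.0               // assumption, not from the thread
uniform sampler2D u_inputTex;        // the RTT texture to analyze
uniform vec2      u_texSize;         // texture dimensions in texels

void main()
{
    // gl_Vertex.xy is assumed to carry integer texel coordinates
    vec2  uv  = (gl_Vertex.xy + 0.5) / u_texSize;
    float lum = dot(texture2D(u_inputTex, uv).rgb,
                    vec3(0.299, 0.587, 0.114));    // assumed weighting
    float bin = floor(lum * (NUM_BINS - 1.0));
    // Place the point in the matching column of a NUM_BINS x 1 target;
    // additive blending (glBlendFunc(GL_ONE, GL_ONE)) accumulates counts.
    gl_Position = vec4(2.0 * (bin + 0.5) / NUM_BINS - 1.0, 0.0, 0.0, 1.0);
}
```

A trivial fragment shader then writes a constant 1.0 per point, so the blended framebuffer value in each column is the bin count.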

-Ethan


SMesserschmidt wrote:
 On 18.11.2013 15:32, Ethan Fahy wrote:
 
  Hello,
  
  Preface: I realize this question comes about because I've never really 
  learned OpenGL/GLSL from the ground up and am likely missing some simple 
  concepts, but I mostly have been coasting by at the osg middleware level 
  and have been doing OK so far.
  
  If I want to do some simple post-processing I can create a 
  render-to-texture camera and render to the framebuffer.  I can attach a 
  texture to the framebuffer and then create another screen camera to 
  render that texture to the screen.  I can add a GLSL shader program to this 
  texture so that before the texture gets rendered to the screen it gets an 
  effect added to it using the shaders.
  
  When I use shaders attached to 3-D model nodes in the scene itself, the 
  meaning of the vertex and frag shaders is easy to understand-the vertices 
  of the 3-D model are the vertices referenced in the vertex shader.  
  However, When I render my scene to a texture and then do a simple 
  pass-through vertex and frag shader combo, what is the meaning of the 
  vertices in this scenario?  I had assumed that once you render your scene 
  to a texture, all knowledge of the original scene's geometry and vertex 
  locations has been lost, is this true?  If so, then what vertices am I 
  dealing with?  It's easy enough to follow along with examples and to use a 
  simple pass-through vertex shader, but I'd like to understand this better 
  because I now want to insert a geometry shader in between the vertex and 
  frag shaders and again I'm not sure whether to use point, line, or triangle 
  in my geometry shader as the primitive type because I thought that the 
  geometry and primitives of the original scene would be lost after 
  rendering to texture.
  
 
 Usually when doing the post-processing pass you will be rendering to a 
 fullscreen quad. So the vertices you are dealing with are those of the 
 quad you are rendering to.
 And yes, if you don't take any further actions, rendering to texture will not 
 preserve the information on your original vertices etc.
 The question is what you want to achieve. A geometry shader within 
 your post-processing pass will work on the quad's vertices.
 Maybe you should elaborate which kind of post-processing you want to 
 achieve, so we can help you here.
 
  
  What am I missing here?  Any clarification is most welcome.
  
  -Ethan
  
  --
  Read this topic online here:
  http://forum.openscenegraph.org/viewtopic.php?p=57281#57281
  
  
  
  
  
  ___
  osg-users mailing list
  
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  
 
 
 ___
 osg-users mailing list
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=57285#57285







Re: [osg-users] Render-To-Texture GLSL program question

2013-11-18 Thread WillScott
Hi,
 Sorry for bothering you. I seem to have encountered the same kind of 
problems, which have confused me for a long time. I hope that we can 
communicate further on the GLSL subject.
 Up to now, I still don't know what kind of vertex the vertex shader 
deals with, exactly. Is the vertex in the vertex shader only about primitives 
such as triangles, lines, points? Or about the whole surface that is assembled 
from primitives?

 
 From: ethanf...@gmail.com
 Date: Mon, 18 Nov 2013 16:50:53 +0100
 To: osg-users@lists.openscenegraph.org
 Subject: Re: [osg-users] Render-To-Texture GLSL program question
 
 Thanks, that makes sense that it would just be rendering a quad and that the 
 original scene geometry would be lost.  However, the GLSL geometry shader 
 only accepts primitives of the type point, line, or triangle-is it perhaps 
 rendering two triangles to the geometry shader to make up the quad?  How 
 would I even go about determining since there's no debugging available?
 
 But back to what I'm trying to do, I'm trying to use a geometry shader to 
 calculate the min, max, mean, std dev, and histogram of an RTT texture.  
 Fellow osg forum member Aurelius has advised me that he has working code that 
 does this using geometry shaders and pointed me to:
 http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
 and
 http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
 but the first example only has Cg code and not GLSL, and the second example 
 is a paper that describers an algorithm but doesn't have any src.  These are 
 both potentially great resources, but I'm struggling to just get a basic 
 pass-through geometry shader working to get some sort of starting point.
 
 As a side note, I am also considering using a compute shader since this would 
 be the more natual fit for this type of algorithm while the geometry shader 
 method is more of a hack that goes against the original intention of the 
 geometry shader, but I'd be happy using either method, I'm just trying to get 
 some traction on either of them.
 
 -Ethan
 
 
 SMesserschmidt wrote:
  On 18.11.2013 15:32, Ethan Fahy wrote:
  
   Hello,
   
   Preface: I realize this question comes about because I've never really 
   learned OpenGL/GLSL from the ground up and am likely missing some simple 
   concepts, but I mostly have been coasting by at the osg middleware level 
   and have been doing OK so far.
   
   If I want to do some simple post-processing I can create a 
   render-to-texture camera and render to the framebuffer.  I can attach a 
   texture to the framebuffer and then create another screen camera to 
   render that texture to the screen.  I can add a GLSL shader program to 
   this texture so that before the texture gets rendered to the screen it 
   gets an effect added to it using the shaders.
   
   When I use shaders attached to 3-D model nodes in the scene itself, the 
   meaning of the vertex and frag shaders is easy to understand-the 
   vertices of the 3-D model are the vertices referenced in the vertex 
   shader.  However, When I render my scene to a texture and then do a 
   simple pass-through vertex and frag shader combo, what is the meaning of 
   the vertices in this scenario?  I had assumed that once you render your 
   scene to a texture, all knowledge of the original scene's geometry and 
   vertex locations has been lost, is this true?  If so, then what vertices 
   am I dealing with?  It's easy enough to follow along with examples and to 
   use a simple pass-through vertex shader, but I'd like to understand this 
   better because I now want to insert a geometry shader in between the 
   vertex and frag shaders and again I'm not sure whether to use point, 
   line, or triangle in my geometry shader as the primitive type because I 
    thought that the geometry and primitives of the original scene would be 
    lost after rendering to texture.
   
  
   Usually when doing the post-processing pass you will be rendering to a 
   fullscreen quad. So the vertices you are dealing with are those of the 
   quad you are rendering to.
   And yes, if you don't take any further actions, rendering to texture will not 
   preserve the information on your original vertices etc.
   The question is what you want to achieve. A geometry shader within 
   your post-processing pass will work on the quad's vertices.
   Maybe you should elaborate which kind of post-processing you want to 
   achieve, so we can help you here.
  
   
   What am I missing here?  Any clarification is most welcome.
   
   -Ethan
   
   --
   Read this topic online here:
   http://forum.openscenegraph.org/viewtopic.php?p=57281#57281
   
   
   
   
   
   ___
   osg-users mailing list
   
   http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
   
  
  
  ___
  osg-users mailing list

Re: [osg-users] Render-To-Texture GLSL program question

2013-11-18 Thread Sebastian Messerschmidt

Hi Will, Scott?

Hi,
 Sorry for bothering you. I seem to have encountered the same kind 
of problems, which have confused me for a long time. I hope that we can 
communicate further on the GLSL subject.
 Up to now, I still don't know what kind of vertex the vertex 
shader deals with, exactly. Is the vertex in the vertex 
shader only about primitives such as triangles, lines, points? 
Or about the whole surface that is assembled from primitives?
No, the vertex shader is all about vertices (i.e. one at a time). It has 
no information about the primitive. That is handled in the primitive 
assembly stage, which is followed by the geometry shader. In the geometry 
shader you get what you sent down the pipeline.
E.g. all filled primitives will end up as triangles (there is 
no such thing as quads/polygons at this stage of the pipeline), and 
line-like primitives and points will end up as lines and points in your 
geometry shader. So the geometry shader can iterate over the input primitive 
and use potential adjacency.



This is a relatively good starting point:
http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/geometry-shader/

cheers Sebastian
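
To make the one-vertex-at-a-time point concrete: a vertex shader body runs once per vertex and sees only that vertex's attributes, as in this sketch:

```glsl
#version 150 compatibility
// Runs once per vertex; gl_Vertex and gl_Normal refer only to the
// current vertex -- there is no access here to the other vertices of
// the primitive (that only becomes possible in the geometry shader).
out vec3 v_normal;
void main()
{
    v_normal    = gl_NormalMatrix * gl_Normal;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```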



 From: ethanf...@gmail.com
 Date: Mon, 18 Nov 2013 16:50:53 +0100
 To: osg-users@lists.openscenegraph.org
 Subject: Re: [osg-users] Render-To-Texture GLSL program question

 Thanks, that makes sense that it would just be rendering a quad and 
that the original scene geometry would be lost. However, the GLSL 
geometry shader only accepts primitives of the type point, line, or 
triangle-is it perhaps rendering two triangles to the geometry shader 
to make up the quad? How would I even go about determining since 
there's no debugging available?


 But back to what I'm trying to do, I'm trying to use a geometry 
shader to calculate the min, max, mean, std dev, and histogram of an 
RTT texture. Fellow osg forum member Aurelius has advised me that he 
has working code that does this using geometry shaders and pointed me to:

 http://http.developer.nvidia.com/GPUGems3/gpugems3_ch41.html
 and
 
http://developer.amd.com/wordpress/media/2012/10/GPUHistogramGeneration_I3D07.pdf
 but the first example only has Cg code and not GLSL, and the second 
example is a paper that describers an algorithm but doesn't have any 
src. These are both potentially great resources, but I'm struggling to 
just get a basic pass-through geometry shader working to get some sort 
of starting point.


 As a side note, I am also considering using a compute shader since 
this would be the more natual fit for this type of algorithm while the 
geometry shader method is more of a hack that goes against the 
original intention of the geometry shader, but I'd be happy using 
either method, I'm just trying to get some traction on either of them.


 -Ethan


 SMesserschmidt wrote:
  On 18.11.2013 15:32, Ethan Fahy wrote:
 
   Hello,
  
   Preface: I realize this question comes about because I've never 
really learned OpenGL/GLSL from the ground up and am likely missing 
some simple concepts, but I mostly have been coasting by at the osg 
middleware level and have been doing OK so far.

  
   If I want to do some simple post-processing I can create a 
render-to-texture camera and render to the framebuffer. I can attach a 
texture to the framebuffer and then create another screen camera to 
render that texture to the screen. I can add a GLSL shader program to 
this texture so that before the texture gets rendered to the screen it 
gets an effect added to it using the shaders.

  
   When I use shaders attached to 3-D model nodes in the scene 
itself, the meaning of the vertex and frag shaders is easy to 
understand: the vertices of the 3-D model are the vertices referenced 
in the vertex shader. However, when I render my scene to a texture and 
then do a simple pass-through vertex and frag shader combo, what is 
the meaning of the vertices in this scenario? I had assumed that once 
you render your scene to a texture, all knowledge of the original 
scene's geometry and vertex locations has been lost; is this true? If 
so, then what vertices am I dealing with? It's easy enough to follow 
along with examples and to use a simple pass-through vertex shader, 
but I'd like to understand this better, because I now want to insert a 
geometry shader in between the vertex and frag shaders, and again I'm 
not sure whether to use point, line, or triangle in my geometry shader 
as the primitive type, because I thought that the geometry and 
primitives of the original scene would be lost after rendering to texture.
  
 
  Usually when doing the post-processing pass you will be rendering 
to a fullscreen quad. So the vertices you are dealing with are those 
of the quad you are rendering to.
  And yes, if you don't take any further action, rendering to texture 
will not preserve the information about your original vertices etc.
  The question is what you want to achieve. A geometry shader in between
  your
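Since the post-processing pass rasterizes a fullscreen quad (typically submitted as two triangles), a pass-through geometry shader for it takes triangles in and emits them unchanged. A minimal sketch (GLSL 1.50; the varying names texCoordV/texCoordG are illustrative, not from the thread, and the vertex shader is assumed to write texCoordV):

```glsl
#version 150

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec2 texCoordV[];   // per-vertex input from the vertex shader
out vec2 texCoordG;    // passed through to the fragment shader

void main()
{
    // Emit each incoming vertex unchanged.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        texCoordG = texCoordV[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

With this in place, the fragment shader samples the RTT texture via texCoordG exactly as it would with the plain vertex/fragment pair.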

Re: [osg-users] Render to texture and write image to file.

2012-10-13 Thread Peterakos
Ok, found it... I just had to write the image in a post-draw camera callback -
nothing more.
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-25 Thread Christian Rumpf
Hey again. As I posted before, I managed to render a scene to texture and 
projected it onto a Geometry. But it is just an image of a scene, and the RTT 
camera can't move around my object (cessna.osg).

Is there a way to move the camera freely around, like you would do with the 
viewer's camera?

lg Christian

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48474#48474







Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-25 Thread Robert Osfield
Hi Christian,

Have a look at the osgdistortion example.

Robert.

On 25 June 2012 11:20, Christian Rumpf ru...@student.tugraz.at wrote:


Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-25 Thread Christian Rumpf

robertosfield wrote:
 Hi Christian,
 
 thank you
 
 Have a look at the osgdistortion example.
 
 Robert.
 
 On 25 June 2012 11:20, Christian Rumpf  wrote:
 


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48476#48476







Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-11 Thread Christian Rumpf
This really sounds helpful, Robert, but for some reason the examples you 
recommended aren't understandable to me. They all talk about reflecting, 
rendering into a flag, and so on. Isn't there an example which just renders a 
single node, a single box or something else into a texture and projects it into 
the viewer? Just a single object, no camera effects, no artefacts, no 
movements, nothing but a single object.

My problem is that all these examples (and I found a lot on the internet) 
explain how useful this RTT technique is, but aren't understandable after all. 
This sounds like I'm not good at graphics programming, but sometimes small 
steps are necessary. I can't take this big stuff as a start.

lg Christian

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48171#48171







Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-11 Thread Robert Osfield
Hi Christian,

I'm afraid I don't have time to walk you through the baby steps of this
type of problem.  The OSG has lots of examples, there are several
books, and lots of resources you can call upon to learn about the OSG.

Robert.

On 11 June 2012 16:26, Christian Rumpf ru...@student.tugraz.at wrote:


Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-11 Thread Christian Rumpf
No need anymore, Robert. I finally found a homepage which explains everything 
about render to texture:

http://beefdev.blogspot.de/2012/01/render-to-texture-in-openscenegraph.html

It really helped me, and I could finally load my shader files with the texture 
as a uniform sampler2D. Nevertheless, thank you very much, Robert.

lg Christian

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48176#48176







[osg-users] Render to texture, display pre-rendered scene

2012-06-07 Thread Christian Rumpf
Hey!

I read nearly everything about render to texture techniques and what you can do 
with them, but not how to SIMPLY display the result. My intention is to 
pre-render the scene into a texture and send this texture into my fragment 
shader (GLSL) to simulate blur effects.

But before I can do this shading stuff I need to display this pre-rendered 
scene, and I don't know how to do it. Here is my code so you can see what I 
programmed:


Code:
#include <osg/Node>
#include <osg/Texture2D>
#include <osg/Shader>
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>

#include <iostream>

osg::Camera* createRTTCamera(int width, int height, osg::Texture2D* texture)
{
    texture->setTextureSize(width, height);
    texture->setInternalFormat(GL_RGB);
    texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

    osg::ref_ptr<osg::Camera> rttCamera = new osg::Camera;

    rttCamera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    rttCamera->setClearColor(osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f));
    rttCamera->setViewport(0, 0, width, height);
    rttCamera->setRenderOrder(osg::Camera::PRE_RENDER);
    rttCamera->setReferenceFrame(osg::Transform::RELATIVE_RF);

    rttCamera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    rttCamera->attach(osg::Camera::COLOR_BUFFER, texture);

    return rttCamera.release();
}

int main(int argc, char** argv)
{
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("cessna.osg");

    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    osg::ref_ptr<osg::Camera> camera = createRTTCamera(1600, 900, 
texture.get());
    camera->addChild(model.get());

    osg::StateSet* ss = model->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, texture.get(), 
osg::StateAttribute::ON);

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild(camera.get());

    osgViewer::Viewer viewer;
    viewer.setSceneData(root.get());
    return viewer.run();
}




By running this code I get nothing; my plane only appears if I add this line 
before the viewer:


Code:
root->addChild(model.get());




Can someone help me out?

lg Christian

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=48113#48113







Re: [osg-users] Render to texture, display pre-rendered scene

2012-06-07 Thread Robert Osfield
Hi Christian,

You simply need to create a geometry and assign the RTT Texture to via
a StateSet and then render this as part of the main scene graph.  Have
a look at the osgprerender or osgdistortion examples.

Robert.
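Robert's suggestion might look like the following sketch (illustrative only, not tested; it assumes the `texture` and `camera` variables from Christian's code above, and uses the `osg::createTexturedQuadGeometry` convenience function from osg/Geometry):

```
// Display the RTT result: build a quad in the main scene and bind the
// RTT texture to it via a StateSet.
osg::ref_ptr<osg::Geometry> quad = osg::createTexturedQuadGeometry(
    osg::Vec3(-1.0f, 0.0f, -1.0f),  // bottom-left corner
    osg::Vec3( 2.0f, 0.0f,  0.0f),  // width vector
    osg::Vec3( 0.0f, 0.0f,  2.0f)); // height vector

osg::ref_ptr<osg::Geode> screenQuad = new osg::Geode;
screenQuad->addDrawable(quad.get());
screenQuad->getOrCreateStateSet()->setTextureAttributeAndModes(
    0, texture.get(), osg::StateAttribute::ON);

osg::ref_ptr<osg::Group> root = new osg::Group;
root->addChild(camera.get());      // PRE_RENDER camera rendering the model
root->addChild(screenQuad.get());  // quad in the main scene showing the result
```

The key change from the original code is that the texture is applied to a separate geometry outside the RTT camera, rather than to the model that the RTT camera itself renders.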

On 7 June 2012 20:52, Christian Rumpf ru...@student.tugraz.at wrote:


[osg-users] Render to texture with shaders

2012-05-24 Thread Joel Graff
Hi,

I've run into a problem which I suspect is just a gap in my understanding of 
osg and RTT.  Nevertheless, I'm a bit stumped.

My goal is to do a small RTT test where I take a source texture, render it to a 
quad using an RTT camera and then apply the output to another quad.  In other 
words, I want to use RTT and a shader texture to produce the same result as I 
would get if I simply added a texture to a quad geometry's state set and 
rendered it.

I've modified the osgmultiplerendertargets example, with no luck.  The primary 
changes are that I'm using only one output texture, I'm making a texture2D call 
in the frag shader, and that I've used a Texture2D object for my output instead 
of a TextureRect (with normalized coordinates, of course).

I know my texture shader works fine, and it's pretty obvious the error is in 
the RTT side of the graph.

Any thoughts or pointers?

Thanks,

Joel

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=47840#47840







[osg-users] Render into texture from 2 Cameras

2011-08-05 Thread Martin Haffner
Hi,

I made an application with a camera that rendered into a texture and everything 
worked.
Now I want to render into the texture from 2 cameras: I want cam1 to render 
into the left side, and cam2 to render into the right side of the texture.
Basically this is my code:


Code:

cam1 = new osg::CameraNode;
cam1->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
cam1->setClearColor(clearColor);
cam1->setViewport(0, 0, m_rttWidth / 2, m_rttHeight);
cam1->setRenderOrder(osg::CameraNode::PRE_RENDER);
cam1->setRenderTargetImplementation( osg::CameraNode::FRAME_BUFFER_OBJECT );
cam1->setProjectionMatrixAsPerspective(60.0f, 
static_cast<float>(m_rttWidth / 2) / m_rttHeight, 0.01f, farPlane);
cam1->setComputeNearFarMode( osg::Camera::DO_NOT_COMPUTE_NEAR_FAR );
cam1->attach(osg::Camera::COLOR_BUFFER, m_rttScene.get(), 0, 0, false, 4, 0);
cam1->addChild( sceneRootNode );
m_CamerasNode->addChild( cam1.get() );

// Cam2
// The code for cam2 is exactly the same, except this line:
cam2->setViewport( m_rttWidth / 2, 0, m_rttWidth / 2, 100 );




The left half of the texture is correctly rendered, but the right side of the 
texture (which should be rendered by cam2) is black!

Anyone has an idea what could be wrong? Maybe something with the clear flags? 
Or culling?

Thank you!

Cheers,
Martin

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=41886#41886







Re: [osg-users] Render into texture from 2 Cameras

2011-08-05 Thread Sergey Polischuk
Hi, Martin

You should either enable clearing only on the camera that renders the first 
half, or use a scissor test (osg::Scissor and GL_SCISSOR_TEST) set to the 
region of each camera's viewport, so that one camera does not clear the other 
camera's render result.

Cheers,
Sergey.
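Sergey's two options can be sketched as follows (illustrative only, using Martin's cam1/cam2 and size variables from the original post):

```
// Option 1: let cam1 clear the whole FBO attachment once, and make sure
// cam2 does not clear again (which would wipe out cam1's half).
cam1->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
cam2->setClearMask(0);  // no clear: keep cam1's result

// Option 2: restrict each camera to its own region with a scissor test.
// (Whether the scissor also limits the camera's clear can depend on the
// OSG version and render stage, so verify this against your setup.)
cam1->getOrCreateStateSet()->setAttributeAndModes(
    new osg::Scissor(0, 0, m_rttWidth / 2, m_rttHeight),
    osg::StateAttribute::ON);
cam2->getOrCreateStateSet()->setAttributeAndModes(
    new osg::Scissor(m_rttWidth / 2, 0, m_rttWidth / 2, m_rttHeight),
    osg::StateAttribute::ON);
```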

05.08.2011, 18:51, Martin Haffner str...@gmx.net:


Re: [osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-06 Thread Glen Swanger
Thanks Robert for taking the time to reply.  Also, I want to say thanks to you 
and other main developers of OSG, because I have found OSG quite easy to work 
with so far.  I haven’t had to post to the forum thus far because of the 
learning environment that the forum content has provided, along with all the 
various code examples and tutorials available.  I hope as my knowledge builds 
that I can provide additional useful forum and example content.  Back to the 
post’s thread.

Scaleform does convert Flash swf files to OpenGL calls.  One of the other 
important reasons that we chose to use Scaleform was that it has a C++ code 
interface to Action Script (Flash programming language) so we can talk directly 
to the Flash content to update state information and we have significant 
amounts of Flash content that we can reuse in our project.  Here is a bit more 
background on Scaleform for reference.  Scaleform is a product that was 
recently taken over by Autodesk.  It has been used in many games for rendering 
interactive flash screens into a 3D world, but more often for developing the 
game setup/information HUD.  It is a third-party vendor product so there is a 
cost involved for our project, but it has the capabilities we need.  The product 
has SDK builds for OpenGL, Direct3D, PS2, and several other game platforms so 
we are able to integrate Scaleform into OSG with the OpenGL build.  Scaleform 
also manages thread synchronization between the render thread and 
 the thread that updates the Flash content state through its own snapshot 
buffer, so we can easily hook in our network interface to the simulation 
program that provides system state content updates.

I would appreciate it if you would review my previous description on the use of 
the FBO camera and provide any suggestions you might have.

Thanks!
Glen

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=40117#40117







Re: [osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-06 Thread Robert Osfield
Hi Glen,

Thanks for the explanation about Scaleform.  Given that it's making
OpenGL calls for you, you'll need to be mindful of the effect of OSG
state on Scaleform and vice versa.  The issues of integrating 3rd
party OpenGL code with the OSG have been discussed a
number of times on osg-users, so I recommend looking into these
discussions.

I would also recommend getting the integration working in just the
main window to start with; you can use an osg::Camera in the scene as a
render-to-texture camera, or a Camera that renders directly
as part of the main view, so you can toggle the use of an RTT camera later.
Once the simple Camera usage works fine, enable RTT by assigning
a Texture as a colour attachment and setting the Camera::RenderOrder to
PRE_RENDER.  Use of FBOs shouldn't be something you need to worry
about too much - you just enable the Camera to use it if you want, as
per the osgprerender example.

For rendering multiple cameras on different frames you can simply have
a Switch above these Cameras in the scene graph and toggle them off as
you need them.  Alternatively you can use a NodeMask on the Cameras
to switch them off.  Finally, a custom CullCallback attached to the
parent of the Cameras would enable you to decide whether to visit its
children (the Cameras) or not.  Switching off an RTT Camera only
switches off the rendering traversal for that camera; any texture that
it renders to will still be valid for any geometry that is rendered
with it in the main scene.  When toggling cameras on/off you'll need
to be careful to make sure that an RTT Camera renders to a texture before
the first time it's needed in the scene graph - this is an obvious
requirement, but will need a little planning to make sure it all works
coherently.
Robert.
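The NodeMask toggle Robert mentions might look like this (a sketch; `rttCamera` is a hypothetical osg::Camera already set up for render-to-texture):

```
// Switch an RTT camera off: the rendering traversal is skipped, but the
// texture it last rendered to stays valid for geometry that samples it.
rttCamera->setNodeMask(0x0);

// ...later, re-enable it for frames where the texture must be refreshed:
rttCamera->setNodeMask(0xffffffff);
```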

On Mon, Jun 6, 2011 at 3:56 PM, Glen Swanger glen.swan...@jhuapl.edu wrote:



Re: [osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-06 Thread Glen Swanger
Robert,
Just what I was looking for...Thanks!

I do have a prototype working using an RTT camera which updates a texture on an 
object in the scene.  On your suggestion about minding the state, it did take 
me a while to work through the interaction between OSG and Scaleform on the 
state, since Scaleform has its own Hardware Abstraction Layer implementation, 
but I will review the osg-users discussions you recommend to make sure I 
haven't missed anything.  I hadn't thought about placing multiple cameras under 
a Switch for selecting the correct camera each frame - great advice - and 
thanks also for the advice on using a custom CullCallback.

I will let you know how it all works out, with a summary of my final solution.

Thanks again!
Glen

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=40132#40132







Re: [osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-06 Thread Wang Rui
Hi Glen,

Have a glimpse of the osgXI project on sourceforge.net. It has an
osgFlash abstraction layer and two different implementations using
Scaleform and gameswf, written by one of my cooperators. It is not
written in a uniform format at present, so you may have to ignore many
Chinese comments in the source code. :-)

Cheers,

Wang Rui


2011/6/7 Glen Swanger glen.swan...@jhuapl.edu:



[osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-04 Thread Glen Swanger
Hi,
I have some questions concerning the best approach to update textures using a 
pre-render FBO camera.  First I want to provide a little background before my 
questions.
  
For the project I'm working on I've been integrating a 3rd party vendor product 
(Scaleform) into OpenSceneGraph to pre-render Flash swf movies to texture, 
which are then mapped to objects in the main scene.  I have this successfully 
working in a prototype along with in-world mouse events being sent to the swf 
movies to interact with the Flash content.  The Flash movies I need to render 
update at a reasonably slow rate (10 fps) relative to the main scene render 
frame rate, so I'm able to render just one Flash movie per render loop cycle to 
minimize the impact on performance.  I may be cycling through up to six 
different Flash movies at the same time and the textures that are the targets 
will probably include the following sizes (256x256, 512x512, 1024x1024 and 
possibly 2048x2048 (rarely)).  Also, I've derived a class from osg::Drawable 
which encapsulates the Flash movie rendering using Scaleform in the 
drawImplementation.  An instance of this class is attached to an instance of
  an osg::Geode which is then added as a child to the pre-render camera as the 
render scene.

To my questions:

Is it possible to use one pre-render camera which uses an fbo target 
implementation, then during a render loop cycle perform the following steps: 
(1) remove the last Flash movie geode child from the camera, (2) detach the 
last target texture, (3) set the viewport size based on the next target texture 
size, (4) attach the next target texture and finally (5) add the next Flash 
movie geode as a child to the camera in order to set up the camera for 
rendering the next movie?   
Or, would it be better (or necessary) to have separate pre-render cameras for 
each texture size, then attach the next target texture and Flash movie geode to 
the appropriate camera for rendering?  
Finally, where would be the best place to insert the updates to the camera(s) 
prior to rendering the actual movie during each cycle, Update callback, PreDraw 
callback, ...?

Thanks in advance for your help!

Cheers,
Glen Swanger

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=40055#40055







Re: [osg-users] Render to Texture Question for Integrating Vendor's Flash Renderer

2011-06-04 Thread Robert Osfield
Hi Glen,

The solution you are explaining sounds far too complicated for what is
actually needed.  I can't see why a pre-render FBO camera would be
required, unless Scaleform is using OpenGL.

The normal way to implement video textures with the OSG is to subclass
from osg::ImageStream, as the ffmpeg and quicktime plugins do.  Each of
these plugins creates its own thread to manage the video reading and
writing to the image.  On the rendering side the OSG simply
downloads to the texture when the image is dirtied - there isn't any
need for complicated interaction between the threads.

Robert.

On Fri, Jun 3, 2011 at 5:15 PM, Glen Swanger glen.swan...@jhuapl.edu wrote:

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-03-01 Thread Tang Yu
hi, Sergey,
Thank you for your help. I used Texture2D, and I don't use mipmaps.
I need to render the camera frame at the full size of the viewer.
Do you think using mipmaps would improve my program's efficiency?

TANG


hybr wrote:
 Hi, Tang
 
 First thing that comes to mind - check if you disabled resizing of non power 
 of two textures on texture with image from camera, as well as mipmap 
 generation.
 
 Cheers, Sergey.
 
 28.02.2011, 13:30, Tang Yu :
 
  Hi,
  
  I also met the same question about rendering to texture on iphone. I tried 
  to render the video frame, captured from iphone's camera continually, as 2D 
  texture of the viewer's background, but the speed is very slowly.
  How can i fix it?
  
  Thank you for your any help!
  
  Cheers,
  Tang
  
  --
  Read this topic online here:
  http://forum.openscenegraph.org/viewtopic.php?p=37168#37168
  
  ___
  osg-users mailing list
  
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  
 ___
 osg-users mailing list
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
  --
 Post generated by Mail2Forum


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=37194#37194





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-03-01 Thread Sergey Polischuk
Hi, Tang

The camera frame texture is likely not power-of-two sized. OSG by default 
resizes textures to power-of-two dimensions, which can take a lot of time each 
frame in your case. You can disable resizing by calling 
setResizeNonPowerOfTwoHint(false) on your texture. Also use linear filtering:
setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR)
setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR)
to disable mipmap generation. If you currently use filters with mipmaps, OSG 
will re-generate mipmaps on every texture change, which will reduce fps.
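
As an aside on why the default resize is so expensive: a typical camera frame (e.g. 640x480) has no power-of-two dimension, so it gets rescaled on the CPU every frame. A standalone illustration (not OSG's exact resize algorithm) of rounding a dimension up to a power of two:

```cpp
#include <cassert>

// Illustration only (not OSG's exact logic): round a texture dimension up
// to the next power of two. A 640x480 camera frame would be rescaled to
// 1024x512 every frame unless setResizeNonPowerOfTwoHint(false) is set.
unsigned int nextPowerOfTwo(unsigned int n)
{
    unsigned int p = 1;
    while (p < n) p <<= 1;
    return p;
}
```

For example, nextPowerOfTwo(640) yields 1024 and nextPowerOfTwo(480) yields 512, so every frame would pay for a full 640x480 to 1024x512 rescale before upload.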

Cheers, Sergey.

01.03.2011, 12:37, Tang Yu tangy...@yahoo.com.cn:
 hi, Sergey,
 Thank you for your help. I used Texture2D, not use mipmap.
 I need to render the camera frame to the whole size of viewer.
 Do you think it will improve my program's efficiency to use mipmap?

 TANG

 hybr wrote:

  Hi, Tang

  First thing that comes to mind - check if you disabled resizing of non 
 power of two textures on texture with image from camera, as well as mipmap 
 generation.

  Cheers, Sergey.

  28.02.2011, 13:30, Tang Yu :
  Hi,

  I also met the same question about rendering to texture on iphone. I tried 
 to render the video frame, captured from iphone's camera continually, as 2D 
 texture of the viewer's background, but the speed is very slowly.
  How can i fix it?

  Thank you for your any help!

  Cheers,
  Tang

  --
  Read this topic online here:
  http://forum.openscenegraph.org/viewtopic.php?p=37168#37168

  ___
  osg-users mailing list

  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  ___
  osg-users mailing list

  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

   --
  Post generated by Mail2Forum

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=37194#37194

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-03-01 Thread Stephan Huber
Hi Tang,

just some tips:

* disable the autoresizing of NPOT-images via
texture->setResizeNonPowerOfTwoHint(false);

* set the internal format of the image to GL_RGBA and the format of the
image to GL_BGRA, so the conversion is done by hardware/driver.

image->setInternalTextureFormat(GL_RGBA);
image->allocateImage(w, h, 1, GL_BGRA, GL_UNSIGNED_BYTE);
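
Putting Stephan's and Sergey's tips together in one place - a sketch, assuming `w` and `h` hold the camera frame dimensions and that the frame data is copied into the image each frame:

```cpp
// Sketch combining the tips above (w/h are assumed frame dimensions):
osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
texture->setResizeNonPowerOfTwoHint(false);                             // no per-frame NPOT rescale
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR); // linear filters:
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR); // no mipmap regeneration

osg::ref_ptr<osg::Image> image = new osg::Image;
image->setInternalTextureFormat(GL_RGBA);
image->allocateImage(w, h, 1, GL_BGRA, GL_UNSIGNED_BYTE); // BGRA: driver does the conversion
texture->setImage(image.get());

// For each new camera frame: copy the pixels into image->data() and call
// image->dirty() so OSG re-uploads the texture.
```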

HTH,
Stephan


On 01.03.11 08:08, Tang Yu wrote:
 Hi, dear sth,
 
 30-45fps?!! It is so attractive!!
 I used Texture2D to render my camera frame at the full size of the viewer, and 
 I set the image buffer to the texture directly. But it still ran very slowly 
 on my iphone4.
 I attached my code files and hope for your advice.
 
 PS. my osg libraries are compiled from git's iphone project.
 
 Best regard,
 Tang
 
 
 sth wrote:
 Hi,

 On 28.02.11 11:30, Tang Yu wrote:

 I also met the same question about rendering to texture on iphone. I tried 
 to render the video frame, captured from iphone's camera continually, as 2D 
 texture of the viewer's background, but the speed is very slowly. 
 How can i fix it?


 Without seeing any code, we can't help you, as the question remains too
 fuzzy. I can display a live video feed of the iphone-camera (3gs) as a
 osg-texture with about 30-45fps.

 cheers,
 Stephan
 ___
 osg-users mailing list

 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

  --
 Post generated by Mail2Forum
 
 
 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=37184#37184
 
 
 
 
 
 
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-03-01 Thread Tang Yu
Hi, Stephan and Sergey

Thank you for your helps!!!
Finally i got a good fps on my iphone4 according to your advices. It is so 
great!!! Now I can go on working on my mobile AR project.
Appreciate you very much again!

Cheers,
Tang

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=37235#37235





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-02-28 Thread Tang Yu
Hi,

I also met the same problem with rendering to texture on iphone. I tried to 
render the video frames, captured continually from the iphone's camera, as a 2D 
texture for the viewer's background, but it is very slow. 
How can I fix it?

Thank you for any help!

Cheers,
Tang

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=37168#37168





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-02-28 Thread Stephan Huber
Hi,

On 28.02.11 11:30, Tang Yu wrote:
 I also met the same question about rendering to texture on iphone. I tried to 
 render the video frame, captured from iphone's camera continually, as 2D 
 texture of the viewer's background, but the speed is very slowly. 
 How can i fix it?

Without seeing any code, we can't help you, as the question remains too
fuzzy. I can display a live video feed of the iphone-camera (3GS) as an
osg-texture at about 30-45fps.

cheers,
Stephan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture for IPhone

2011-02-28 Thread Sergey Polischuk
Hi, Tang

First thing that comes to mind - check if you disabled resizing of non power of 
two textures on texture with image from camera, as well as mipmap generation.

Cheers, Sergey.

28.02.2011, 13:30, Tang Yu tangy...@yahoo.com.cn:
 Hi,

 I also met the same question about rendering to texture on iphone. I tried to 
 render the video frame, captured from iphone's camera continually, as 2D 
 texture of the viewer's background, but the speed is very slowly.
 How can i fix it?

 Thank you for your any help!

 Cheers,
 Tang

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=37168#37168

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture

2011-02-18 Thread Dietmar Funck
Hi Phummipat,
I think it is not intended that a statement within the first viewer.frame() causes 
an early return from viewer.frame(). However, most people call viewer.frame() in a loop 
and wouldn't notice it at all.

Best regards.
Dietmar Funck


pumdo575 wrote:
 Hi Dietmar Funck
 
 Thank you very much for your reply. Do you mean, the first viewer.frame() is 
 used for initialization that why nothing is rendered in the first frame ?. 
 
 Best regards,
 Phummipat
 


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=36839#36839





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture

2011-02-18 Thread Dietmar Funck
Hi Sergey,
your proposal works very well.

Thank you very much,
Dietmar Funck


hybr wrote:
 Hi,  Dietmar Funck.
 
 In order to get another texture attached you can use something like
 
 
 _cam->setCullCallback( new fboAttachmentCullCB( this ) );
 
 void fboAttachmentCullCB::operator()(osg::Node* node, osg::NodeVisitor* nv)
 {
     osg::Camera* fboCam = dynamic_cast<osg::Camera*>( node );
     osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(nv);
 
     if ( fboCam && cv )
     {
         cv->getCurrentRenderBin()->getStage()->setFrameBufferObject(NULL); // reset 
 frame buffer object - see RenderStage::runCameraSetUp for details, the fbo 
 has to be created again
         cv->getCurrentRenderBin()->getStage()->setCameraRequiresSetUp( true ); // we 
 have to ensure that runCameraSetUp will be entered!
     }
     traverse(node,nv);
 }
 
 Cheers,
 Sergey.
 


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=36841#36841





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Render To Texture

2011-02-16 Thread Pumipat Doungklang
Hi everyone,

 I have some questions about render to texture. The code I give below
works, but I have some questions.

1. Why do I have to call viewer.frame() two times for it to work? If I call
viewer.frame() just one time it doesn't work: the written image shows just a
blank screen.
2. How do I run the viewer behind the scenes? I don't want the viewer window
to pop up on screen.

If anyone has a way to improve this code, please comment on my
code.

int main (int argc, char**argv)
{
    osg::ref_ptr<osg::Group> model =
        dynamic_cast<osg::Group*>(osgDB::readNodeFile("cow.osg"));
    osg::ref_ptr<osg::Geode> geodeModel =
        dynamic_cast<osg::Geode*>(model->getChild(0));
    osg::ref_ptr<osg::Drawable> drawable =
        dynamic_cast<osg::Drawable*>(geodeModel->getDrawable(0));
    osg::ref_ptr<osg::StateSet> stateset =
        dynamic_cast<osg::StateSet*>(drawable->getStateSet());
    osg::Texture2D* oldTexture = dynamic_cast<osg::Texture2D*>(
        stateset->getTextureAttribute(0, osg::StateAttribute::TEXTURE));

    int tex_width = 512, tex_height = 512;
    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setTextureSize( tex_width, tex_height );
    //osg::ref_ptr<osg::Image> imageTex = osgDB::readImageFile("Images/skymap.jpg");
    osg::ref_ptr<osg::Image> imageTex = oldTexture->getImage();
    texture->setImage(imageTex.get());

    osg::ref_ptr<osg::Geode> geode = new osg::Geode;
    osg::ref_ptr<osg::Geometry> geom = osg::createTexturedQuadGeometry(
        osg::Vec3(-1.0f, -1.0f, 0.0f), osg::Vec3( 2.0f, 0.f, 0.f ),
        osg::Vec3( 0.f, 2.0f, 0.f ));
    geode->addDrawable(geom.get());
    osg::ref_ptr<osg::StateSet> ss = geode->getOrCreateStateSet();
    ss->setTextureAttributeAndModes( 0, texture.get(), osg::StateAttribute::ON );
    ss->setMode(GL_CULL_FACE, osg::StateAttribute::ON);
    ss->setMode(GL_LIGHTING, osg::StateAttribute::OFF);

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setViewport(0, 0, tex_width, tex_height);
    camera->setClearColor(osg::Vec4(1.0f, 1.0f, 1.0f, 1.0f));
    camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

    camera->setRenderOrder(osg::Camera::POST_RENDER);
    camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );

    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->allocateImage(tex_width, tex_height, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    camera->attach(osg::Camera::COLOR_BUFFER, image.get());
    camera->setReferenceFrame(osg::Camera::ABSOLUTE_RF);
    camera->addChild(geode.get());

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild(camera.get());

    osgViewer::Viewer viewer;
    viewer.setUpViewInWindow(400, 150, 512, 512);
    viewer.setSceneData(root.get());

    viewer.setCameraManipulator(new osgGA::TrackballManipulator);

    viewer.frame();
    viewer.frame();
    osgDB::writeImageFile(*image.get(), "test.bmp");

    osg::ref_ptr<osg::Texture2D> newTexture = new osg::Texture2D;
    newTexture->setTextureSize( tex_width, tex_height );
    newTexture->setImage(image.get());

    osg::ref_ptr<osg::Geode> newGeode = new osg::Geode;
    osg::ref_ptr<osg::Geometry> newGeom = osg::createTexturedQuadGeometry(
        osg::Vec3(-1.0f, -1.0f, 0.0f), osg::Vec3( 2.0f, 0.f, 0.f ),
        osg::Vec3( 0.f, 2.0f, 0.f ));
    newGeode->addDrawable(newGeom.get());
    osg::ref_ptr<osg::StateSet> newss = newGeode->getOrCreateStateSet();
    newss->setTextureAttributeAndModes( 0, newTexture.get(),
        osg::StateAttribute::ON );
    newss->setMode(GL_CULL_FACE, osg::StateAttribute::ON);
    newss->setMode(GL_LIGHTING, osg::StateAttribute::OFF);

    viewer.setSceneData(newGeode.get());
    return viewer.run();
}


Best regards,
//Phummipat
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture

2011-02-16 Thread Dietmar Funck
Hi,
I noticed the problem with the first call of viewer.frame() too. This happens 
because during the viewer initialization - which is triggered by the first call of 
viewer.frame() - a call to glGetString (SceneView::init() -> 
osg::isGLExtensionSupported(_renderInfo.getState()->getContextID(), ...); ) 
causes an early return to the caller of viewer.frame().
Actually nothing is rendered in the first frame.

The following code works for me to get an offscreen viewer. I don't know if you 
can leave sth. out from the traits.

osg::ref_ptr<osg::GraphicsContext::Traits> traits = new 
osg::GraphicsContext::Traits();
traits->x = viewport->x(); // viewport of camera
traits->y = viewport->y();
traits->width = viewport->width();
traits->height = viewport->height();
traits->windowDecoration = false;
traits->doubleBuffer = false;
traits->sharedContext = NULL;
traits->pbuffer = true;

osg::GraphicsContext *graphicsContext = 
osg::GraphicsContext::createGraphicsContext(traits.get());

if(!graphicsContext) {
    osg::notify(osg::NOTICE) << "Failed to create pbuffer, falling back to 
normal graphics window." << std::endl;

    traits->pbuffer = false;
    graphicsContext = 
osg::GraphicsContext::createGraphicsContext(traits.get());
}

viewer->getCamera()->setGraphicsContext(graphicsContext);


I found another problem:
I would like to change the textures used for render to texture. However the 
framebufferobject used by osg is only initialized once and cannot be updated - 
at least with an approach like in the osgmultiplerendertargets example. 
Camera::detach() has no effect.

Best Regards
Dietmar Funck

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=36728#36728





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture

2011-02-16 Thread Pumipat Doungklang
Hi Dietmar Funck

Thank you very much for your reply. Do you mean the first viewer.frame() is
used for initialization, and that is why nothing is rendered in the first frame?

Best regards,
Phummipat

On Wed, Feb 16, 2011 at 2:48 PM, Dietmar Funck 
dietmar.fu...@student.hpi.uni-potsdam.de wrote:

 Hi,
 I noticed the problem with first call of viewer.frame()  too. This is
 happens. because meanwhile the viewer initialization - which is called by
 first call of viewer.frame() - a call to glGetString (SceneView::init() -
  osg::isGLExtensionSupported(_renderInfo.getState()-getContextID(),); )
 triggers exiting to the caller of viewer.frame().
 Actually nothing is rendered in the first frame.

 The following code works for me to get an offscreen viewer. I don't know if
 you can leave sth. out from the traits.

  osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
  osg::GraphicsContext::Traits();
 traits->x = viewport->x(); // viewport of camera
 traits->y = viewport->y();
 traits->width = viewport->width();
 traits->height = viewport->height();
 traits->windowDecoration = false;
 traits->doubleBuffer = false;
 traits->sharedContext = NULL;
 traits->pbuffer = true;

 osg::GraphicsContext *graphicsContext =
  osg::GraphicsContext::createGraphicsContext(traits.get());

 if(!graphicsContext) {
     osg::notify(osg::NOTICE) << "Failed to create pbuffer, falling back
  to normal graphics window." << std::endl;

     traits->pbuffer = false;
     graphicsContext =
  osg::GraphicsContext::createGraphicsContext(traits.get());
 }

 viewer->getCamera()->setGraphicsContext(graphicsContext);


 I found another problem:
 I would like to change the textures used for render to texture. However the
 framebufferobject used by osg is only initialized once and cannot be updated
 - at least with an approach like in the osgmultiplerendertargets example.
 Camera::detach() has no effect.

 Best Regards
 Dietmar Funck

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=36728#36728





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture

2011-02-16 Thread Sergey Polischuk
Hi,  Dietmar Funck.

In order to get another texture attached you can use something like


 _cam->setCullCallback( new fboAttachmentCullCB( this ) );

 void fboAttachmentCullCB::operator()(osg::Node* node, osg::NodeVisitor* nv)
 {
    osg::Camera* fboCam = dynamic_cast<osg::Camera*>( node );
    osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(nv);

    if ( fboCam && cv )
    {
        cv->getCurrentRenderBin()->getStage()->setFrameBufferObject(NULL); // 
reset frame buffer object - see RenderStage::runCameraSetUp for details, the 
fbo has to be created again
        cv->getCurrentRenderBin()->getStage()->setCameraRequiresSetUp( true ); 
// we have to ensure that runCameraSetUp will be entered!
    }
    traverse(node,nv);
 }
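
A hypothetical usage sketch (function and variable names are mine, not from Sergey's post): swap the attached texture and let the callback above force the FBO to be rebuilt during the next cull.

```cpp
// Hypothetical helper, assuming the camera has the cull callback above
// installed: swap the RTT color target; on the next cull the callback
// clears the cached FBO and re-runs the camera setup, so the new texture
// takes effect.
void swapColorTarget(osg::Camera* cam, osg::Texture2D* newTex)
{
    cam->detach(osg::Camera::COLOR_BUFFER);
    cam->attach(osg::Camera::COLOR_BUFFER, newTex);
}
```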

Cheers,
Sergey.

16.02.2011, 16:48, Dietmar Funck dietmar.fu...@student.hpi.uni-potsdam.de:
 Hi,
 I noticed the problem with first call of viewer.frame()  too. This is 
 happens. because meanwhile the viewer initialization - which is called by 
 first call of viewer.frame() - a call to glGetString (SceneView::init() -  
 osg::isGLExtensionSupported(_renderInfo.getState()-getContextID(),); ) 
 triggers exiting to the caller of viewer.frame().
 Actually nothing is rendered in the first frame.

 The following code works for me to get an offscreen viewer. I don't know if 
 you can leave sth. out from the traits.

  osg::ref_ptr<osg::GraphicsContext::Traits> traits = new 
  osg::GraphicsContext::Traits();
  traits->x = viewport->x(); // viewport of camera
  traits->y = viewport->y();
  traits->width = viewport->width();
  traits->height = viewport->height();
  traits->windowDecoration = false;
  traits->doubleBuffer = false;
  traits->sharedContext = NULL;
  traits->pbuffer = true;

  osg::GraphicsContext *graphicsContext = 
  osg::GraphicsContext::createGraphicsContext(traits.get());

  if(!graphicsContext) {
      osg::notify(osg::NOTICE) << "Failed to create pbuffer, falling back 
  to normal graphics window." << std::endl;

      traits->pbuffer = false;
      graphicsContext = 
  osg::GraphicsContext::createGraphicsContext(traits.get());
  }

  viewer->getCamera()->setGraphicsContext(graphicsContext);

 I found another problem:
 I would like to change the textures used for render to texture. However the 
 framebufferobject used by osg is only initialized once and cannot be updated 
 - at least with an approach like in the osgmultiplerendertargets example. 
 Camera::detach() has no effect.

 Best Regards
 Dietmar Funck

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=36728#36728

 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Render To Texture is very slow

2011-02-02 Thread Martin Großer
Hello,

I would like use render to texture in every render step. My texture resolution 
is 2048 x 2048 and it is very slow. There are tipps and tricks to speed up the 
render to texture?
With 2048 x 2048 I get around 15 FPS and with 1024 x 1024 I get 45 FPS.

Thanks

Martin 
-- 
Neu: GMX De-Mail - Einfach wie E-Mail, sicher wie ein Brief!  
Jetzt De-Mail-Adresse reservieren: http://portal.gmx.net/de/go/demail
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture is very slow

2011-02-02 Thread David Callu
Hi Martin


What is your Hardware/Software configuration?
Which osg::Camera::RenderTargetImplementation did you use in your code ?

Try the osgprerendercubemap example to test performance of your hardware.

HTH
David Callu


2011/2/2 Martin Großer grosser.mar...@gmx.de

 Hello,

 I would like use render to texture in every render step. My texture
 resolution is 2048 x 2048 and it is very slow. There are tipps and tricks to
 speed up the render to texture?
 With 2048 x 2048 I get around 15 FPS and with 1024 x 1024 I get 45 FPS.

 Thanks

 Martin
 --
 Neu: GMX De-Mail - Einfach wie E-Mail, sicher wie ein Brief!
 Jetzt De-Mail-Adresse reservieren: http://portal.gmx.net/de/go/demail
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture is very slow

2011-02-02 Thread Martin Großer
Hello David,

So I use the FRAME_BUFFER_OBJECT and I have a NVIDIA GTX 470 grafics card.

I tried the osgprerendercubemap, but I cannot print out the frame rate. 
Additionally I tried the osgprerender example and I get a frame rate of around 
3500 FPS.

Here is my implementation:

osg::ref_ptr<osg::Image> img = osgDB::readImageFile("image.tga");

osg::ref_ptr<osg::Group> rtt = new osg::Group;
root->addChild(rtt);


osg::ref_ptr<osg::Camera> camera = new osg::Camera;

camera->addChild( scene );

camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

camera->setViewport(0, 0, 2048, 2048);

camera->setRenderOrder(osg::Camera::PRE_RENDER, 0);

camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera->attach(osg::Camera::COLOR_BUFFER, img, 0, 0);

rtt->addChild(camera.get());

Is the image format (internal format) a problem?

Thanks

Martin


 Original Message 
 Date: Wed, 2 Feb 2011 13:56:09 +0100
 From: David Callu led...@gmail.com
 To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
 Subject: Re: [osg-users] Render To Texture is very slow

 Hi Martin
 
 
 What is your Hardware/Software configuration?
 Which osg::Camera::RenderTargetImplementation did you use in your code ?
 
 Try the osgprerendercubemap example to test performance of your hardware.
 
 HTH
 David Callu
 
 
 2011/2/2 Martin Großer grosser.mar...@gmx.de
 
  Hello,
 
  I would like use render to texture in every render step. My texture
  resolution is 2048 x 2048 and it is very slow. There are tipps and
 tricks to
  speed up the render to texture?
  With 2048 x 2048 I get around 15 FPS and with 1024 x 1024 I get 45 FPS.
 
  Thanks
 
  Martin
  --
  Neu: GMX De-Mail - Einfach wie E-Mail, sicher wie ein Brief!
  Jetzt De-Mail-Adresse reservieren: http://portal.gmx.net/de/go/demail
  ___
  osg-users mailing list
  osg-users@lists.openscenegraph.org
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 

-- 
NEU: FreePhone - kostenlos mobil telefonieren und surfen!   
Jetzt informieren: http://www.gmx.net/de/go/freephone
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture is very slow

2011-02-02 Thread Peter Hrenka

Hi Martin,

On 02.02.2011 14:35, Martin Großer wrote:

Hello David,

So I use the FRAME_BUFFER_OBJECT and I have a NVIDIA GTX 470 grafics card.

I tried the osgprerendercubemap, but I cannot print out the frame rate. 
Additionally I tried the osgprerender example and I get a frame rate of around 
3500 FPS.

Here my Implementation:

osg::ref_ptr<osg::Image> img = osgDB::readImageFile("image.tga");

osg::ref_ptr<osg::Group> rtt = new osg::Group;
root->addChild(rtt);


osg::ref_ptr<osg::Camera> camera = new osg::Camera;

camera->addChild( scene );

camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

camera->setViewport(0, 0, 2048, 2048);

camera->setRenderOrder(osg::Camera::PRE_RENDER, 0);

camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
camera->attach(osg::Camera::COLOR_BUFFER, img, 0, 0);


Your performance problem is in the previous line:
by attaching an image you instruct OSG to read
the whole image back to CPU memory (every frame!).

If this is what you really want, it probably is as fast
as it will get.

If you want to use the rendered image on the GPU,
then you should attach an osg::Texture instead.

See osgprerender example, with useImage=false.
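
For reference, a sketch of the texture-attachment variant (reusing the "camera" from the snippet above; the filter calls are my additions, not from Peter's reply):

```cpp
// Keep the rendered result on the GPU: attach a texture, not an image.
osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize(2048, 2048);
tex->setInternalFormat(GL_RGBA);
tex->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
tex->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);

camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
camera->attach(osg::Camera::COLOR_BUFFER, tex.get()); // no CPU read-back

// Use the result by binding tex in the main scene's StateSet:
// stateset->setTextureAttributeAndModes(0, tex.get(), osg::StateAttribute::ON);
```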



rtt-addChild(camera.get());

Is the image format (internal format) a problem?

Thanks

Martin


Cheers,

Peter
--
Vorstand/Board of Management:
Dr. Bernd Finkbeiner, Dr. Roland Niemeier, 
Dr. Arno Steitz, Dr. Ingrid Zech

Vorsitzender des Aufsichtsrats/
Chairman of the Supervisory Board:
Michel Lepert
Sitz/Registered Office: Tuebingen
Registergericht/Registration Court: Stuttgart
Registernummer/Commercial Register No.: HRB 382196 


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render To Texture is very slow

2011-02-02 Thread Martin Großer
Hello Peter,

that was the problem! Now I have around 1500 FPS.
I didn't know there was a difference between attaching an image and attaching a texture.

Thank you very much!

Cheers

Martin


 Original Message 
 Date: Wed, 02 Feb 2011 14:58:46 +0100
 From: Peter Hrenka p.hre...@science-computing.de
 To: OpenSceneGraph Users osg-users@lists.openscenegraph.org
 Subject: Re: [osg-users] Render To Texture is very slow

 Hi Martin,
 
 On 02.02.2011 14:35, Martin Großer wrote:
  Hello David,
 
  So I use the FRAME_BUFFER_OBJECT and I have a NVIDIA GTX 470 grafics
 card.
 
  I tried the osgprerendercubemap, but I cannot print out the frame rate.
 Additionally I tried the osgprerender example and I get a frame rate of
 around 3500 FPS.
 
  Here my Implementation:
 
  osg::ref_ptr<osg::Image> img = osgDB::readImageFile("image.tga");
 
  osg::ref_ptr<osg::Group> rtt = new osg::Group;
  root->addChild(rtt);
 
 
  osg::ref_ptr<osg::Camera> camera = new osg::Camera;
 
  camera->addChild( scene );
 
  camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
 
  camera->setViewport(0, 0, 2048, 2048);
 
  camera->setRenderOrder(osg::Camera::PRE_RENDER, 0);
 
  camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT
 );
  camera->attach(osg::Camera::COLOR_BUFFER, img, 0, 0);
 
 Your performance Problem is in the previous line:
 By attaching an image you instruct OSG to fetch
 the whole image back to CPU memory (for each frame!).
 
 If this is what you really want, it probably is as fast
 as it will get.
 
 If you want to use the rendered image on the GPU,
 then you should attach an osg::Texture instead.
 
 See osgprerender example, with useImage=false.
 
 
  rtt-addChild(camera.get());
 
  Is the image format (internal format) a problem?
 
  Thanks
 
  Martin
 
 Cheers,
 
 Peter
 -- 
 
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

-- 
GMX DSL Doppel-Flat ab 19,99 Euro/mtl.! Jetzt mit 
gratis Handy-Flat! http://portal.gmx.net/de/go/dsl
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture 3D

2010-12-23 Thread Julien Valentin
Thanks a lot ! It works :D

Frederic Bouvier wrote:
 I used 
 camera->setImplicitBufferAttachmentMask( 
 osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT, 
 osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT );
 
 to avoid having a depth buffer attached.
 
 HTH
 Regards,
 -Fred
 
 
 - Julien Valentin a écrit :
 
 
  Thank for your answer:
  I've manage to make it work in pure GL3 without osg and see that your
  tweak in osg is the right thing to do.
  However it always doesnt work..
  here are the different GL call for fbo creation for 2 case:
  
  -working case (only one slice)
   cam->attach( osg::Camera::COLOR_BUFFER0, tex, 0, 0 )
  
  Code:
  glGenRenderbuffersEXT(1, 048DEA40)
  |  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
  |  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
  GL_DEPTH_COMPONENT24_SGIX, 64, 64)
  |  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
  GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 1)
  //render to only one slice
  |  glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
  GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_3D, 3, 0, 0)
  |  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
  |  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 1)
  
  
  
  -the FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS case with
  cam->attach(
  osg::Camera::COLOR_BUFFER0,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER)
  
  Code:
  
  |  glGenRenderbuffersEXT(1, 0484EA40)
  |  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
  |  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
  GL_DEPTH_COMPONENT24_SGIX, 64, 64)
  |  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
  GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 1)
  //render to all slice
  |  glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT,
  GL_COLOR_ATTACHMENT0_EXT, 3, 0)
  |  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
  
  
  
  Everything seems right, but the error occurs on glCheckFramebufferStatusEXT.
  
  
  Further, my working version (pure GL3) doesn't initialize the RenderBuffer
  with GL_DEPTH_COMPONENT24_SGIX, but with GL_RGBA.
  
  Could that be the problem? Strange, because it doesn't bother the
  one-slice version...
  
  Could it be because I use CORE 330 shaders?
  
  Please help
  
  
  Frederic Bouvier wrote:
  
   Hi Julien,
   
   It's me that submitted this change at
   http://www.mail-archive.com//msg05568.html
   
   It's hard to tell what's going wrong without the full code of your
   
  camera setup.
  
   In http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
   error 0x8da8 refers to FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB and
   
  the
  
   document has possible cause for this error.
   
   Regards,
   -Fred
   
   
   - Julien Valentin wrote:
   
   
   
Hi,
I'm trying to make efficient fluid simulation with osg
I've just found this page :
http://www.mail-archive.com//msg05568.html

It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.

I developed a geometry shader that changes the gl_Layer value per primitive.
It works, so i changed my texture attachment to the camera's FBO as follows:

Code:

for(int i=0;i<_depth;i++){
//one cam per slice
_cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
}

to

for(int i=0;i<1;i++){
//one overall camera
_cameras[0]->attach(
osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
}

But this change make a crash:


Code:

RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8
   ___
   osg-users mailing list
   
   
   
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  
   
   --
   Post generated by Mail2Forum
   
  
  
  --
  Read this topic online here:
  http://forum.openscenegraph.org/viewtopic.php?p=35027#35027
  
  
  
  
  
  ___
  osg-users mailing list
  
  http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
  
 ___
 osg-users mailing list
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
  --
 Post generated by Mail2Forum
 :D  :D  :D  :D  :D  :D  :D  :D  :D  :D

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=35132#35132





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture 3D

2010-12-21 Thread Julien Valentin
Thanks for your answer:
I've managed to make it work in pure GL3 without osg, and I see that your tweak in 
osg is the right thing to do.
However, it still doesn't work.
Here are the different GL calls for FBO creation in the 2 cases: 

-working case (only one slice)
cam->attach( osg::Camera::COLOR_BUFFER0,tex,0,0)

Code:
 glGenRenderbuffersEXT(1, 048DEA40)
|  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
|  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24_SGIX, 64, 
64)
|  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, 
GL_RENDERBUFFER_EXT, 1)
//render to only one slice
|  glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 
GL_TEXTURE_3D, 3, 0, 0)
|  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
|  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 1)



-the FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS case with
cam->attach( 
osg::Camera::COLOR_BUFFER0,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER)

Code:

|  glGenRenderbuffersEXT(1, 0484EA40)
|  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
|  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24_SGIX, 64, 
64)
|  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, 
GL_RENDERBUFFER_EXT, 1)
//render to all slice
|  glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, 3, 0)
|  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)



Everything seems right, but the error occurs on glCheckFramebufferStatusEXT.


Further, my working version (pure GL3) doesn't initialize the RenderBuffer with 
GL_DEPTH_COMPONENT24_SGIX, but with GL_RGBA. 

Could that be the problem? Strange, because it doesn't bother the one-slice 
version...

Could it be because I use CORE 330 shaders?

Please help


Frederic Bouvier wrote:
 Hi Julien,
 
 It's me that submitted this change at
 http://www.mail-archive.com//msg05568.html
 
 It's hard to tell what's going wrong without the full code of your camera 
 setup.
 In http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
 error 0x8da8 refers to FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB and the 
 document lists possible causes for this error.
 
 Regards,
 -Fred
 
 
 - Julien Valentin wrote:
 
 
  Hi,
  I'm trying to make efficient fluid simulation with osg
  I've just found this page :
  http://www.mail-archive.com//msg05568.html
  
  It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.
  
  I developed a geometry shader that changes the gl_Layer value per primitive.
  It works, so i changed my texture attachment to the camera's FBO
  as follows:
  
  Code:
  
  for(int i=0;i<_depth;i++){
  //one cam per slice
  _cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
  }
  
  to
  
  for(int i=0;i<1;i++){
  //one overall camera
  _cameras[0]->attach(
  osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
  }
  
  But this change makes it crash:
  
  
  Code:
  
  RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8
  
 ___
 osg-users mailing list
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
  --
 Post generated by Mail2Forum


--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=35027#35027





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture 3D

2010-12-21 Thread Frederic Bouvier
I used 
camera->setImplicitBufferAttachmentMask( 
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT, 
osg::Camera::IMPLICIT_COLOR_BUFFER_ATTACHMENT );

to avoid having a depth buffer attached.

HTH
Regards,
-Fred


- Julien Valentin wrote:

 Thanks for your answer:
 I've managed to make it work in pure GL3 without osg, and I see that your
 tweak in osg is the right thing to do.
 However, it still doesn't work.
 Here are the different GL calls for FBO creation in the 2 cases:
 
 -working case (only one slice)
 cam->attach( osg::Camera::COLOR_BUFFER0,tex,0,0)
 
 Code:
  glGenRenderbuffersEXT(1, 048DEA40)
 |  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
 |  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
 GL_DEPTH_COMPONENT24_SGIX, 64, 64)
 |  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
 GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 1)
 //render to only one slice
 |  glFramebufferTexture3DEXT(GL_FRAMEBUFFER_EXT,
 GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_3D, 3, 0, 0)
 |  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
 |  glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 1)
 
 
 
 -the FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS case with
 cam->attach(
 osg::Camera::COLOR_BUFFER0,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER)
 
 Code:
 
 |  glGenRenderbuffersEXT(1, 0484EA40)
 |  glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, 1)
 |  glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT,
 GL_DEPTH_COMPONENT24_SGIX, 64, 64)
 |  glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT,
 GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 1)
 //render to all slice
 |  glFramebufferTextureEXT(GL_FRAMEBUFFER_EXT,
 GL_COLOR_ATTACHMENT0_EXT, 3, 0)
 |  glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
 
 
 
 Everything seems right, but the error occurs on glCheckFramebufferStatusEXT.
 
 
 Further, my working version (pure GL3) doesn't initialize the RenderBuffer
 with GL_DEPTH_COMPONENT24_SGIX, but with GL_RGBA.
 
 Could that be the problem? Strange, because it doesn't bother the
 one-slice version...
 
 Could it be because I use CORE 330 shaders?
 
 Please help
 
 
 Frederic Bouvier wrote:
  Hi Julien,
 
  It's me that submitted this change at
  http://www.mail-archive.com//msg05568.html
 
  It's hard to tell what's going wrong without the full code of your
 camera setup.
  In http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
  error 0x8da8 refers to FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB and
 the
  document lists possible causes for this error.
 
  Regards,
  -Fred
 
 
  - Julien Valentin wrote:
 
 
   Hi,
   I'm trying to make efficient fluid simulation with osg
   I've just found this page :
   http://www.mail-archive.com//msg05568.html
  
   It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.
  
   I developed a geometry shader that changes the gl_Layer value per primitive.
   It works, so i changed my texture attachment to the camera's FBO
   as follows:
  
   Code:
  
   for(int i=0;i<_depth;i++){
   //one cam per slice
   _cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
   }
  
   to
  
   for(int i=0;i<1;i++){
   //one overall camera
   _cameras[0]->attach(
   osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
   }
  
   But this change make a crash:
  
  
   Code:
  
   RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8
  
  ___
  osg-users mailing list
 
 
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
 
   --
  Post generated by Mail2Forum
 
 
 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=35027#35027
 
 
 
 
 
 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture 3D

2010-12-20 Thread Frederic Bouvier
Hi Julien,

It's me that submitted this change at
http://www.mail-archive.com/osg-submissions@lists.openscenegraph.org/msg05568.html

It's hard to tell what's going wrong without the full code of your camera setup.
In http://www.opengl.org/registry/specs/ARB/geometry_shader4.txt
error 0x8da8 refers to FRAMEBUFFER_INCOMPLETE_LAYER_TARGETS_ARB and the 
document lists possible causes for this error.

Regards,
-Fred


- Julien Valentin wrote:

 Hi,
 I'm trying to make efficient fluid simulation with osg
 I've just found this page :
 http://www.mail-archive.com//msg05568.html
 
 It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.
 
 I developed a geometry shader that changes the gl_Layer value per primitive.
 It works, so i changed my texture attachment to the camera's FBO
 as follows:
 
 Code:
 
 for(int i=0;i<_depth;i++){
 //one cam per slice
 _cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
 }
 
 to
 
 for(int i=0;i<1;i++){
 //one overall camera
 _cameras[0]->attach(
 osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
 }
 
 But this change make a crash:
 
 
 Code:
 
 RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to texture 3D

2010-12-19 Thread Robert Osfield
Hi Julien,

I haven't personally tested this feature yet, but having merged the
submissions I know that the FACE_CONTROLLED_BY_GEOMETRY_SHADER control
is only available on recent hardware and drivers so check whether this
feature is available on your hardware.
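For anyone trying this, a layer-routing geometry shader of the kind discussed in this thread typically looks something like the following GLSL sketch (illustrative only; the vSlice varying, which the vertex shader would pass up as a per-vertex int, is an assumption and not taken from Julien's code):

```glsl
#version 330 core
// Route each incoming triangle to one slice of a layered render target.
// Writing gl_Layer in the geometry shader selects the FBO layer (i.e. the
// 3D-texture slice) the primitive is rasterized into.
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in int vSlice[]; // slice index forwarded from the vertex shader (assumed name)

void main()
{
    gl_Layer = vSlice[0];           // whole primitive goes to this slice
    for (int i = 0; i < 3; ++i)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

Note that writing gl_Layer only works when every populated FBO attachment is layered, which ties in with the 0x8da8 error discussed above.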

Robert.

On Sat, Dec 18, 2010 at 7:19 PM, Julien Valentin
julienvalenti...@gmail.com wrote:
 Hi,
 I'm trying to make efficient fluid simulation with osg
 I've just found this page :
 http://www.mail-archive.com//msg05568.html

 It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.

 I developed a geometry shader that changes the gl_Layer value per primitive.
 It works, so i changed my texture attachment to the camera's FBO as 
 follows:

 Code:

 for(int i=0;i<_depth;i++){
 //one cam per slice
 _cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
 }
 
 to
 
 for(int i=0;i<1;i++){
 //one overall camera
 _cameras[0]->attach( 
 osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
 }





 But this change makes it crash:


 Code:

 RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8




 Any idea?

 Thank you!

 Cheers,
 Julien[/code]

 --
 Read this topic online here:
 http://forum.openscenegraph.org/viewtopic.php?p=34962#34962





 ___
 osg-users mailing list
 osg-users@lists.openscenegraph.org
 http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] Render to texture 3D

2010-12-18 Thread Julien Valentin
Hi,
I'm trying to make efficient fluid simulation with osg
I've just found this page :
http://www.mail-archive.com//msg05568.html

It looks pretty great, as my 1-camera-per-slice code is very CPU time consuming.

I developed a geometry shader that changes the gl_Layer value per primitive.
It works, so i changed my texture attachment to the camera's FBO as follows:

Code:

for(int i=0;i<_depth;i++){
//one cam per slice
_cameras[i]->attach( osg::Camera::COLOR_BUFFER,tex,0,i,false);
}

to

for(int i=0;i<1;i++){
//one overall camera
_cameras[0]->attach( 
osg::Camera::COLOR_BUFFER,tex,0,osg::Camera::FACE_CONTROLLED_BY_GEOMETRY_SHADER);
}





But this change makes it crash:


Code:

RenderStage::runCameraSetUp(), FBO setup failed, FBO status= 0x8da8




Any idea?

Thank you!

Cheers,
Julien[/code]

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34962#34962





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-15 Thread Sajjadul Islam
Hi Delport,

I am getting several passes with the chain effect within the dynamic scene. 
But the scene freezes after the third pass. And the scene remains frozen (i mean 
the animation); i can only see the blurred scene.

What should i look into to debug this?

Thanks for all the useful hint you have put forward so far.

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34881#34881





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread Sajjadul Islam
Hi Delport,

Thanks for the support. At least i have a little bit of improvement. I can go 
up to the second pass, which makes the blurred scene blurrier.

1. Initial scene - No blur effect.
2. First key press - Shader activated, scene blurred, and the scene is visualized.
3. Second key press - the scene is even more blurred and the scene freezes.
4. Fourth key press - The scene goes back to step 2.

Scenario should be as follows:

With every key press event the scene should be more blurry. I think i am 
getting close.

Something somewhere is missing. Any hint to look into?

Thanks again!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34829#34829





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread Sajjadul Islam
Hi Delport,

I am attaching the code i put inside the keyboard handler. I believe it will 
give you more insight.

'
class BlurPassHandler : public osgGA::GUIEventHandler
{

public:
BlurPassHandler(int key,BlurPass *bp,osg::Node *node):
_bp(bp),
_node(node),
_key(key)
{

}

bool handle(const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter&)
{
if (ea.getHandled()) return false;

switch(ea.getEventType())
{
case(osgGA::GUIEventAdapter::KEYUP):
{
osg::notify(osg::NOTICE)<<"event handler"<<std::endl;

if(ea.getKey() == _key)
{
//find the node with the name in the parameter
FindNamedNode fnn("HUD");

_node->accept(fnn);

if(!_bp->shaderActive())
_bp->activateShader();
else
{
 osg::notify(osg::NOTICE)<<"About to flip"<<std::endl;
 _bp->flip();

 //the hud camera texture
 //has to be updated, either
 //after updating the shader
 //or after doing the flip
}

if(fnn.getNode() != NULL)
{
osg::notify(osg::NOTICE)<<"HUD found"<<std::endl;

//assign to local node variable
_node = fnn.getNode();

osg::Geode *_geode = dynamic_cast<osg::Geode*>(_node);

if(_geode)
{
   osg::notify(osg::NOTICE)<<"geode found"<<std::endl;

   osg::Geometry *_geometry = 
dynamic_cast<osg::Geometry*>(_geode->getDrawable(0));

   if(_geometry)
   {
   osg::notify(osg::NOTICE)<<"geometry found"<<std::endl;
   }

   osg::StateSet *_stateset = _geometry->getOrCreateStateSet();

   _stateset->setTextureAttributeAndModes(0, 
_bp->getOutputTexture().get(),osg::StateAttribute::ON);
}
}
else
{
osg::notify(osg::NOTICE)<<"HUD not found"<<std::endl;
}


return true;
}

break;
}
default:
break;
}

return false;
}


BlurPass *_bp;
osg::Node *_node;
int _key;

};

* 

Thank you!

Cheers,
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34831#34831





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread J.P. Delport

Hi,

you cannot make loops in the graph, so the more passes you need, the 
more passes you will have to insert into the scene graph.


jp

On 14/12/10 13:53, Sajjadul Islam wrote:

Hi Delport,

Thanks for the support. At least i have a little bit of improvement. I can go 
up to the second pass, which makes the blurred scene blurrier.

1. Initial scene - No blur effect.
2. First key press - Shader activated and scene blurred and scene is visualized.
3. Second key press - the scene is even more blurred and the scene freezes.
4. Fourth key press - The scene goes back to step 2.

Scenario should be as follows:

With every key press event the scene should be more blurry. I think i am 
getting close.

Something somewhere is missing. Any hint to look into?

Thanks again!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34829#34829





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


--
This message is subject to the CSIR's copyright terms and conditions, e-mail legal notice, and implemented Open Document Format (ODF) standard. 
The full disclaimer details can be found at http://www.csir.co.za/disclaimer.html.


This message has been scanned for viruses and dangerous content by MailScanner, 
and is believed to be clean.  MailScanner thanks Transtec Computers for their support.


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread Sajjadul Islam
Hi Delport ,

Is something like that happening in osggameoflife without adding the pass to the 
scenegraph?

Please correct me if i'm wrong.

They have multiple passes and they just flip-flop the two output textures

... 

Thank you!

Cheers,
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34837#34837





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread J.P. Delport

Hi,

in the case of osggameoflife, the state of a single image is updated 
during every call to viewer.frame(). In other words the processing 
starts with a single state and this is updated - the input is not 
dynamic. The flip flop is only used because one cannot read from and 
write to the same texture in a single pass.


It can work the same for blurring - at every call to frame an input can 
be blurred further, but this assumes that the scene is not changing.


If you want a variable number of blur passes for every frame of a 
dynamic scene, you will have to do multiple passes per frame.


jp

On 14/12/10 16:41, Sajjadul Islam wrote:

Hi Delport ,

Is something like that happening in osggameoflife without adding the pass to the 
scenegraph?

Please correct me if i'm wrong.

They have multiple passes and they just flip-flop the two output textures

...

Thank you!

Cheers,
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34837#34837





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread Sajjadul Islam
Hi Delport,

I have added one more pass branch to the graph and i can see a new behavior. 
The code snippet for it is as follows:


***'

//the first pass in the scene, with the key press the following do the blur on 
the initial scene
   _ProcessPass[0] = new 
ProcessPass(_OutTextureBlur[0].get(),_OutTextureBlur[1].get(),
   _TextureWidth,_TextureHeight);

//takes the input of the first pass and blur it even more, and the screen 
freezes   
   _ProcessPass[1] = new 
ProcessPass(_OutTextureBlur[1].get(),_OutTextureBlur[0].get(),
   _TextureWidth,_TextureHeight);


//SOMETHING INTERESTING HAPPENS HERE!
//The following pass make the last scene less blur instead of more blur than 
the last frozen scene, but the scene is not frozen anymore, i can rotate around 
the model, zoom in and zoom out
   _ProcessPass[2] = new 
ProcessPass(_OutTextureBlur[0].get(),_OutTextureBlur[1].get(),
   _TextureWidth,_TextureHeight);

   _BranchSwitch[0]->addChild(_ProcessPass[0]->getRoot().get());
   _BranchSwitch[1]->addChild(_ProcessPass[1]->getRoot().get());
   _BranchSwitch[2]->addChild(_ProcessPass[2]->getRoot().get());

'***

From the above description it seems that the last pass is reversing the blur 
effect.
In ProcessPass[1] the scene is getting more blurry. It seems that that pass is 
doing the job right, but ProcessPass[2] is not.


Any idea?


Thank you!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34871#34871





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-14 Thread J.P. Delport

Hi,

sorry, I can't quite follow the code, but do something like this:

input_texture -> ProcessPass[0] -> Out[0]
Out[0] -> Process[1] -> Out[1]
Out[1] -> Process[2] -> Out[2]

Just make a chain...

Depending on how many passes you have enabled, view one of the Out[] 
textures.


jp

On 15/12/10 02:40, Sajjadul Islam wrote:

Hi Delport,

I have added one more pass branch to  the graph and i can see a new behavior. 
The code snippet for it as follows:


***'

//the first pass in the scene, with the key press the following do the blur on 
the initial scene
_ProcessPass[0] = new 
ProcessPass(_OutTextureBlur[0].get(),_OutTextureBlur[1].get(),
_TextureWidth,_TextureHeight);

//takes the input of the first pass and blur it even more, and the screen 
freezes
_ProcessPass[1] = new 
ProcessPass(_OutTextureBlur[1].get(),_OutTextureBlur[0].get(),
_TextureWidth,_TextureHeight);


//SOMETHING INTERESTING HAPPENS HERE!
//The following pass make the last scene less blur instead of more blur than 
the last frozen scene, but the scene is not frozen anymore, i can rotate around 
the model, zoom in and zoom out
_ProcessPass[2] = new 
ProcessPass(_OutTextureBlur[0].get(),_OutTextureBlur[1].get(),
_TextureWidth,_TextureHeight);

_BranchSwitch[0]->addChild(_ProcessPass[0]->getRoot().get());
_BranchSwitch[1]->addChild(_ProcessPass[1]->getRoot().get());
_BranchSwitch[2]->addChild(_ProcessPass[2]->getRoot().get());

'***


From the above description it seems that the last pass is reversing the blur 
effect.

In ProcessPass[1] the scene is getting more blurry. It seems that that pass is 
doing the job right, but ProcessPass[2] is not.


Any idea?


Thank you!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34871#34871





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-13 Thread Sajjadul Islam
Hi Robert  Delport,

I have the setup with 4 cameras:

1st camera - the slave camera that inherits the master camera's relative frame 
and renders the scene to 2 textures with color attachments. One of the textures 
remains as it is, and the other texture is left for further operation.

2. 2nd camera - the camera is hung under the 1st switch, which does some 
operation on one of the textures specified above. The result of the 
operation is rendered to another texture using the camera.

3. 3rd camera - the camera is hung under the 2nd switch, which does some further 
operation on the last result from step 2. The result is rendered to the 
texture which was the input texture in step 2.

4. 4th camera - the HUD camera to visualize the output of either of the 
above operations in step 2 / step 3.


Since steps 2 and 3 are Switches, i enable only one at a time. The 
concepts are pretty simple, as you can imagine. They have been picked from 
osgdistortion, osgprerender, osgstereomatch and osggameoflife.

During the first pass, i enable the first switch with the keypress event 
and can visualize the scene.

With the next key press event, i flip the textures for a further pass over the 
result of the last operation, but the screen freezes without showing the 
result of the operation in the HUD camera. 
 
I need some hints on what might have gone wrong. If you need more elaboration, 
please ask.

Thank you!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34754#34754





___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-13 Thread J.P. Delport

Hi,

On 13/12/10 10:15, Sajjadul Islam wrote:

Hi Robert  Delport,

I have the setup with 4 cameras:

1st camera - the slave camera that inherit the master camera's
relative frame and render the scene to 2 textures with color
attachments. One of the texture remains as it is and other texture is
left for further operation.

2. 2nd camera - the camera is hung under the 1st switch which does
some operation over the one of texture specified above.   The result
of the operation is rendered to another texture using the camera.

3. 3rd camera - the camera is hung under the 2nd switch that does
some further operation over the last result done in step 2. The
result is rendered to  the texture which was the input texture in
step 2.


Why the input texture of 2? Are your cameras pre-render? What order?



4. 4th Camera - is the HUD camera to visualize the output of the
either of the above operation in step 2 / step 3.


I assume you switch the texture the HUD is viewing? How do you do it?

Maybe you can save your scene as .osg file and inspect it to see if 
something is missing.


jp




Since step 2 and 3 are Switches i make them only one enable at a
time. The concepts are pretty simple as you can imagine. They have
been picked from osgdistortion, osgprerender,osgstereomatch and
osggameoflife.

During the first pass , i make the first switch enabled with the
keypress event and can visualize the scene.

With the next key press event, i flip the texture for further pass
over the result of  the last operation, But the screen freezes
without showing the result of the operation in the HUD camera.

I need some hint on what might have gone wrong. If need more
elaboration please ask.

Thank you!

Regards Sajjadul

-- Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34754#34754





___ osg-users mailing
list osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org




___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Render to Texture

2010-12-13 Thread Sajjadul Islam
Hi Delport,

Reasons for rendering the scene to two textures: 

With a key press event i may want to show, using the HUD, the texture that has 
not gone through any operation. If i do any post processing on the texture, the 
initial texture value is lost. I want to preserve it. This is why i render the 
same scene to two textures.

One texture contains the main scene, and the other texture also contains the 
main scene but will go through further operation.   

All the rendering is in PRE_ORDER except the HUD camera, which uses 
NESTED_RENDER.

When the HUD camera is viewing any texture it is accessing the texture as 
follows:

***'


..

//the scene is passed to the following class
bp = new BlurPass(subgraph,clearColour);

distortionNode->addChild(bp->getRoot().get());


...

//the following function is accessing the texture by checking first the active 
//switch node
stateset->setTextureAttributeAndModes(0, 
bp->getOutputTexture().get(),osg::StateAttribute::ON);
...

osg::ref_ptr<osg::Texture2D> BlurPass::getOutputTexture() const
{
   int out_tex = _ActiveBranch;

   return _ProcessPass[out_tex]->getOutputTexture();
}



//inside the BlurPass class I call the following function, which enables one switch
//while disabling the other - in other words a flip-flop

void BlurPass::activateBranch()
{
    //get the currently active branch
    int onb = _ActiveBranch;

    osg::notify(osg::NOTICE) << "on bit " << onb << std::endl;

    //get the currently inactive branch
    int offb = (onb == 1) ? 0 : 1;

    osg::notify(osg::NOTICE) << "off bit " << offb << std::endl;

    //turn the active switch on
    _BranchSwitch[onb]->setAllChildrenOn();

    //turn the inactive switch off
    _BranchSwitch[offb]->setAllChildrenOff();
}


void BlurPass::flip()
{
    _ActiveBranch = (_ActiveBranch == 1) ? 0 : 1;

    osg::notify(osg::NOTICE) << "active branch " << _ActiveBranch << std::endl;

    activateBranch();
}
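
The flip-flop above boils down to toggling an index between 0 and 1 and keeping exactly one of the two branches enabled. A minimal standalone sketch of that ping-pong bookkeeping (plain C++, no OSG types; the `PingPong` name and members are hypothetical stand-ins, not part of the original code):

```cpp
#include <cassert>

// Minimal model of the two-branch ping-pong selection used by BlurPass:
// one branch is "on" (currently shown), the other is "off".
struct PingPong {
    int active = 0;                  // index of the branch currently shown
    bool on[2] = {true, false};      // enabled state of each branch

    // mirrors BlurPass::activateBranch(): enable active, disable the other
    void activateBranch() {
        on[active] = true;
        on[1 - active] = false;
    }

    // mirrors BlurPass::flip(): swap roles, then re-apply the switch state
    void flip() {
        active = 1 - active;
        activateBranch();
    }
};
```

Each `flip()` swaps which branch is read from, which is why the two `ProcessPass` objects are constructed with their input/output textures reversed.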


BlurPass::BlurPass(osg::Node *scene, const osg::Vec4& clearColor)
:_SubGraph(scene),
_ClearColor(clearColor)
{
    //pre-define the texture width and height,
    //the very same size we preserve all over the scene
    _TextureWidth = 1280;
    _TextureHeight = 1280;

    //initially the shader is inactive
    shaderFlag = false;

    _RootGroup = new osg::Group;

    //initialize the texture where we
    //shall render the initial scene
    createInputTexture();

    //initialize the 2 output textures
    //which will be flipped with a keypress
    //for multipass blurring
    createOutputTextures();

    //camera renders the scene to the 0-indexed texture
    //that will be going through the blur phase in the process pass
    setupCamera();

    //create two switches to do the flip-flop
    _BranchSwitch[0] = new osg::Switch;
    _BranchSwitch[1] = new osg::Switch;


    //add the camera and the two switches

    //camera that renders the scene to the texture
    _RootGroup->addChild(_Camera.get());

    _RootGroup->addChild(_BranchSwitch[0].get());

    _RootGroup->addChild(_BranchSwitch[1].get());

    //initialize the active switch
    _ActiveBranch = 0;


    //activate the switch based on the
    //current active branch flag
    activateBranch();

    //we have both input and output textures initialized and
    //_OutTextureBlur[0] gets the scene rendering that will be
    //going through the blur operation
    _ProcessPass[0] = new ProcessPass(_OutTextureBlur[0].get(), _OutTextureBlur[1].get(),
                                      _TextureWidth, _TextureHeight);

    _ProcessPass[1] = new ProcessPass(_OutTextureBlur[1].get(), _OutTextureBlur[0].get(),
                                      _TextureWidth, _TextureHeight);

    _BranchSwitch[0]->addChild(_ProcessPass[0]->getRoot().get());
    _BranchSwitch[1]->addChild(_ProcessPass[1]->getRoot().get());
}

//setup the render-to-texture camera
//the following function renders the initial scene to 2 textures
void BlurPass::setupCamera()
{
    _Camera = new osg::Camera;

    _Camera->setClearColor(_ClearColor);
    _Camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);


    //just inherit the main camera's view
    _Camera->setReferenceFrame(osg::Transform::RELATIVE_RF);
    _Camera->setProjectionMatrix(osg::Matrixd::identity());
    _Camera->setViewMatrix(osg::Matrixd::identity());


    //set the viewport according to the texture width and texture height
    _Camera->setViewport(0, 0, _TextureWidth, _TextureHeight);

    //render to texture before the main camera
    _Camera->setRenderOrder(osg::Camera::PRE_RENDER);


    //use the frame buffer object where supported
    _Camera->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);

    //the camera with the relative frame renders to the input texture - index 0
    //this texture will be input for the process pass
    _Camera->attach(osg::Camera::COLOR_BUFFER0, _InputTexture.get());



Re: [osg-users] Render to Texture

2010-12-13 Thread J.P. Delport

Hi,

On 13/12/10 12:26, Sajjadul Islam wrote:

Hi Delport,

Reasons for rendering the scene to two textrues

With key press event i may want to show the texture using the HUD
that have not gone through any operation. If i do any post processing
on the texture the initial texture value is lost. I want to preserve
it. This is why i render the same scene to two texture.

One Texture that contains the main scene and other texture also
contains the main scene but will go through further operation.


yes, I understand this, it's OK.



All the rendering are in PRE_ORDER except the HUD camera. HUD camera
is using the NESTED_RENDER.


OK, I just wanted to make sure you are not creating a loop and expecting 
OSG to figure out the render order based on how you connect input and 
output textures. OSG just traverses your cameras in the order you attach 
them to the scene (in the simplest case).




[quoted code snipped - same as in the previous message]

Re: [osg-users] Render to Texture

2010-12-13 Thread Sajjadul Islam
Hi Delport,

I am sorry, but I did not get much from your last reply asking how I am changing the texture when I switch the branches. Do I have to specify the texture explicitly? Even so, I believe it is done as follows:

stateset->setTextureAttributeAndModes(0,
    bp->getOutputTexture().get(), osg::StateAttribute::ON);


bp->getOutputTexture() retrieves the correct texture based on the value of _ActiveBranch. With a key press event I have a function that flips the switch nodes, so that one pass's output becomes the other pass's input.


If I have to call this function again, where should the call go? It seems that the state is dynamic - do I have to implement a state change callback?

Thank you!

Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34786#34786







Re: [osg-users] Render to Texture

2010-12-13 Thread Sajjadul Islam
Hi Delport,

I have created a new class inheriting from osg::Drawable::UpdateCallback. The
class structure is as follows:

*'
class BlurCallback : public osg::Drawable::UpdateCallback
{

public:
    BlurCallback(BlurPass *bp)
    :_bp(bp),
    _blurImage(false)
    {

    }


    virtual void update(osg::NodeVisitor *nv, osg::Drawable *drawable)
    {
        osg::Geometry *geo = dynamic_cast<osg::Geometry*>(drawable);

        osg::StateSet *state = geo->getOrCreateStateSet();

        state->setTextureAttributeAndModes(0,
            _bp->getOutputTexture().get(), osg::StateAttribute::ON);
    }


    BlurPass *_bp;

    mutable bool _blurImage;
};

**'

.
...


polyGeom->setUpdateCallback(bpCallback);




The program crashes when it reaches viewer.run(). If I comment out the above
code, the application runs. I think I need to implement some kind of callback
to do what you suggested in the last post. Is that right?




Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34801#34801







Re: [osg-users] Render to Texture

2010-12-13 Thread J.P. Delport

Hi,

On 14/12/10 02:46, Sajjadul Islam wrote:

Hi Delport,

I have created a new class inherating the osg::Drawable::UpdateCallback. The 
class structure is as follows:

[quoted code snipped - same as in the previous message]
polyGeom->setUpdateCallback(bpCallback);




The program crashes when it reaches viewer.run(). If I comment out the above
code, the application runs. I think I need to implement some kind of callback
to do what you suggested in the last post. Is that right?


You can for a start just do it when you modify the switch using the 
keyboard. The callback will be called every frame, so I don't think that 
is what you want.
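
JP's suggestion - rebind the texture once, at the moment the switch is flipped, rather than every frame from an update callback - can be sketched without OSG as follows. All types and names here (`FakeBlurPass`, `FakeStateSet`, `onBlurKeyPressed`) are hypothetical stand-ins for illustration only; in real code this logic would live in an osgGA event handler that reacts to the key press, calls bp->flip(), and then re-applies the stateset texture attribute.

```cpp
#include <cassert>
#include <string>

// Stand-in for BlurPass: two "output textures" selected by the active branch.
struct FakeBlurPass {
    int activeBranch = 0;
    std::string textures[2] = {"texA", "texB"};

    void flip() { activeBranch = 1 - activeBranch; }
    const std::string& getOutputTexture() const { return textures[activeBranch]; }
};

// Stand-in for the HUD geometry's StateSet: records which texture is bound.
struct FakeStateSet {
    std::string boundTexture;  // models setTextureAttributeAndModes(0, tex, ON)
};

// What the keyboard handler would do on the chosen key: flip first,
// then rebind exactly once - not every frame.
void onBlurKeyPressed(FakeBlurPass& bp, FakeStateSet& ss) {
    bp.flip();
    ss.boundTexture = bp.getOutputTexture();
}
```

The point of doing the rebind here, rather than in a Drawable update callback, is that the state only changes when the user flips the branch, so there is nothing to re-apply on the other frames.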


jp






Regards
Sajjadul

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=34801#34801









