Hi Ying,

The osgdistortion example uses a Camera set up to do Render To
Texture (RTT) of the main scene graph, rendering to the texture as if
it were the original frame buffer.  This texture is then used when
rendering with a second Camera that renders to the normal frame
buffer with an orthographic projection.  The scene that this second
Camera renders is simply a screen-aligned quad osg::Geometry that
takes up the whole window; the vertices on this osg::Geometry
are set up as a grid to cover the screen with a regular pattern of
quads.  The vertices on this geometry also have texture coordinates
associated with them, and these texture coordinates tell the GPU what
part of the texture to use when rendering.  If the texture coords were
set up in a regular pattern like the vertices then you wouldn't see any
distortion; in this example, however, they are shifted so that the
desired distortion is applied.

Robert.

On 6 June 2014 17:00, ying song <[email protected]> wrote:
> Hi everyone,
>
> I'm now going through the osgdistortion example. Based on my understanding, 
> this example renders the scene to a texture instead of the screen. 
> By distorting the texture, the distorted model can be built and shown.
>
> However, I'm a little confused about the rendering process to distort the 3D 
> model.
>
> Part of the code is shown below:
>
>   for(i=0;i<noSteps;++i)
>         {
>             osg::Vec3 cursor = bottom+dy*(float)i;
>             osg::Vec2 texcoord = bottom_texcoord+dy_texcoord*(float)i;
>             for(j=0;j<noSteps;++j)
>             {
>                 vertices->push_back(cursor);
>                 texcoords->push_back(osg::Vec2((sin(texcoord.x()*osg::PI-osg::PI*0.5)+1.0f)*0.5f,
>                                                (sin(texcoord.y()*osg::PI-osg::PI*0.5)+1.0f)*0.5f));
>                 colors->push_back(osg::Vec4(1.0f,1.0f,1.0f,1.0f));
>
>                 cursor += dx;
>                 texcoord += dx_texcoord;
>             }
>         }
>
>   polyGeom->setVertexArray(vertices);
>
>   polyGeom->setColorArray(colors);
>
>   polyGeom->setTexCoordArray(0,texcoords);
>
> I'm not sure how the rendering process works. Could anyone tell me whether 
> the original points' coordinates are stored in 'vertices', and the distorted 
> points' coordinates in 'texcoords'?
>
> Another question is why the texture is defined as 2D. Could I define a 
> 3D texture, then render a distorted model on the 3D texture and display it 
> on the screen?
>
> Thank you so much for helping me on this!
>
> Cheers,
> Ying
>
> ------------------
> Read this topic online here:
> http://forum.openscenegraph.org/viewtopic.php?p=59668#59668
>
> _______________________________________________
> osg-users mailing list
> [email protected]
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org