Hi Xiaoshuxing,

I'm now using the release version 1.2, and I'm using it with the QT
windowing system.

Now, the only way I can get something about the camera is through the
sceneview object in my QOSGWidget.

In CVS there are two QT examples, osgsimpleviewerQT3 and
osgsimpleviewerQT4, which might be of interest to you; they should
simplify the task of integrating the OSG with QT.

How can I get the eye position and the near/far clip planes?

This all depends upon when you want to query it: at the application
level, during the cull traversal, during the draw traversal, or....
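
For instance, if you query at the application level you can pull the
eye position and near/far out of the view and projection matrices held
by your SceneView.  A minimal sketch, assuming a standard perspective
projection; the function and variable names below are just placeholders,
not part of the OSG API:

#include <osg/Matrixd>
#include <osg/Vec3d>
#include <osgUtil/SceneView>

void queryCamera(osgUtil::SceneView* sceneView)
{
    // Eye position in world coordinates: the translation part of the
    // inverse view matrix.
    osg::Matrixd view = sceneView->getViewMatrix();
    osg::Vec3d eye = osg::Matrixd::inverse(view).getTrans();

    // Near/far planes: recover them from the projection matrix
    // (only meaningful for a perspective projection).
    double fovy, aspectRatio, zNear, zFar;
    sceneView->getProjectionMatrix().getPerspective(fovy, aspectRatio, zNear, zFar);
}

One thing to bear in mind is that by default the OSG recomputes the
near/far planes automatically during the cull traversal, so the values
read at the application level may not exactly match what ends up being
used for that frame.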

Because I want to pass them to my GLSL fragment shader. Or is there any way
to get those variables directly in the GLSL shaders? I've searched the GLSL
orange book, but throughout the book it assumes the eye position to be
(0,0,0).

GLSL supports several built-in uniforms, including
gl_ModelViewProjectionMatrix and gl_ModelViewMatrix.  The OSG also
adds osg_ViewMatrix and osg_ViewMatrixInverse, which are set per frame.

Using these you should be able to do what you want entirely with GLSL
without any need for extra external code.
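
As a sketch of how this might look, here is a fragment shader (wrapped
in the C++ needed to attach it) that recovers the world-space eye
position from osg_ViewMatrixInverse, and the near/far planes from
gl_ProjectionMatrix (another standard built-in).  The near/far
reconstruction only holds for a standard perspective projection, and
the output colour and the attachShader helper are just placeholders:

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>

static const char* fragmentSource =
    "uniform mat4 osg_ViewMatrixInverse;\n"
    "void main()\n"
    "{\n"
    "    // Eye position in world coordinates: the translation column of\n"
    "    // the inverse view matrix (in eye space the eye is simply (0,0,0)).\n"
    "    vec3 eyeWorld = osg_ViewMatrixInverse[3].xyz;\n"
    "\n"
    "    // Near/far planes recovered from the projection matrix,\n"
    "    // valid for a standard perspective projection.\n"
    "    float zNear = gl_ProjectionMatrix[3][2] / (gl_ProjectionMatrix[2][2] - 1.0);\n"
    "    float zFar  = gl_ProjectionMatrix[3][2] / (gl_ProjectionMatrix[2][2] + 1.0);\n"
    "\n"
    "    // Placeholder output; in a real shader eyeWorld, zNear and zFar\n"
    "    // would feed into your lighting/fog/depth calculations.\n"
    "    gl_FragColor = vec4(fract(eyeWorld), 1.0);\n"
    "}\n";

void attachShader(osg::StateSet* stateset)
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragmentSource));
    stateset->setAttributeAndModes(program, osg::StateAttribute::ON);
}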

If version 1.3 will clarify this camera thing, that'll really be great, and
I'll definitely dig into that, but for now, I can't wait a month for that.

For your GLSL work nothing will change with the osgViewer library, as
you should already have what you need from the built-in uniforms, and
the OSG's osg_ built-in uniforms will be maintained too.

What the osgViewer library will resolve is that there will be just one
general purpose osg::Camera, which can be used in the scene graph for
RTT and HUD/multi-pass purposes, or at the viewer/application level.
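
To illustrate the scene graph side of that, this is roughly what an RTT
camera in the scene graph looks like today with osg::CameraNode (a
sketch only - the texture size and render order are arbitrary choices,
and the createRTTCamera/rttSubgraph/targetTexture names are just
placeholders):

#include <osg/CameraNode>
#include <osg/Texture2D>

osg::Node* createRTTCamera(osg::Node* rttSubgraph, osg::Texture2D* targetTexture)
{
    osg::CameraNode* camera = new osg::CameraNode;

    // Render before the main camera, into an FBO attached to the texture.
    camera->setRenderOrder(osg::CameraNode::PRE_RENDER);
    camera->setRenderTargetImplementation(osg::CameraNode::FRAME_BUFFER_OBJECT);
    camera->attach(osg::CameraNode::COLOR_BUFFER, targetTexture);

    // The camera carries its own view/projection/viewport, independent
    // of whatever the viewer level camera is doing.
    camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    camera->setViewport(0, 0, 1024, 1024);
    camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    camera->addChild(rttSubgraph);
    return camera;
}

The returned node then just sits in your scene graph alongside the
geometry that uses the texture.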

Using a camera both in the scene graph and at the application level
does go against Don's model of how cameras should be used - i.e. only
external to the scene graph.  I did once subscribe to this point of
view, but in my attempts to improve RTT support in the OSG I
eventually came to the realization that all the cull callbacks I was
having to use to set up projection matrices, view matrices and buffers
were in essence a camera.  Once I realised this it was obvious that
you can't keep to the dogma that cameras shouldn't be part of the
scene graph - you simply can't do modern RTT work with scene graphs
without cameras being an integral part of the scene graph.  And so
about a year and a half ago osg::CameraNode was born.

Don's article was a response to the birth of osg::CameraNode.  Don is
a good writer so it sounds pretty compelling, but it doesn't address
the paradox that the cull callbacks you need to do modern RTT work
actually place the camera in the scene graph.  It's this paradox which
breaks the camera analogy, and trying to stick with it just causes
confusion.

Alas, osg::CameraNode existing in parallel with Producer::Camera and
Producer::CameraGroup/OsgCameraGroup has itself brought confusion.
So while osg::CameraNode was a boon for RTT and other scene graph work
(such as serialization, decoupling rendering code from applications,
and providing RTT fallbacks), it hasn't made understanding the various
relationships any easier.

My hope for osgViewer is that we'll be able to untangle some of the
concepts and unify the API more.  One of the recent changes towards
this unification was renaming osg::CameraNode to osg::Camera.  More
info on osgViewer can be found on the wiki:

  http://www.openscenegraph.org/osgwiki/pmwiki.php/Tasks/OsgViewer

I am currently busy with other project work, but once this is complete
I'll return to working on osgViewer and will open up the debates about
viewers then.  This should commence in a couple of weeks' time.

Robert.
