Hi All,

I'm currently working on a project that needs keystone correction for
using projectors off-axis in a portable stereo visualization setup.
As part of this work I've written a simple example, osgkeystone, that
I've used to prototype the UI, maths and viewer setup needed to adjust
and run keystone correction.  I'm now working on the design of how
best to integrate this functionality into osgViewer, along with
support for configuring it via a configuration file and an event
handler that allows users to interactively adjust the keystone
correction.  As part of this work I'll be providing stereo support via
osgViewer's slave Camera support, rather than using the stereo support
in SceneView as we've done for the last decade.

With this work it's clear that keystone correction is just one type of
distortion correction that is desirable.  osgViewer already has some
distortion correction support for spherical displays, but it's only
provided via the View::setUpViewAs*() methods, which simply set up
slave Cameras to do the distortion correction rather than anything
more formal.  The new keystone distortion correction will also be
implemented with slave Cameras, so in principle it works in the same
way; the only differences are the mesh used to do the distortion
correction and the RTT Camera setup used to create the required
textures.  Given this commonality I do wonder if it might be possible
to find a way to formalize this a bit further.

Conceptually, distortion correction is a per-display/projector
operation, which means that one would run the distortion correction on
all the cameras that reside on the window you want to correct.  This
makes me wonder if one would set up the basic Cameras that the
application wants on a window, then specify that you want distortion
correction run on that window (or a portion of the window if we have a
TwinView or similar setup).  I took this approach when adding support
for depth partitioning - one sets up the view, then runs a method to
convert it to multiple cameras.  I'm wary of trying to be too clever,
though, in simplifying things for end users, as the extra complexity
might just drag the whole design down.

Another issue I'm thinking about is how to handle mouse interaction in
the distorted space, as mouse coordinates in window coordinates no
longer map directly to the 3D scene but have to go via a lookup on the
distortion mesh first, to convert the mouse coordinates from window
coordinates into the RTT viewport coordinates.  This issue exists
right now with the spherical display support, so we could potentially
solve that problem as well.  I say solve - I haven't yet worked out
how to configure this type of mouse coordinate correction; the maths
is easier than the design+coding in this case.

For most users this topic is unlikely to affect you - you'll be able
to just use osgViewer as before without any changes - but for those
thinking about these issues, a discussion and integration of some of
these features may well be welcome.  I'm striking up this thread to
give the community the opportunity to share thoughts/prior
experience/problems, so we can make sure the final implementation
works well not just for keystone correction but also more widely.

Cheers,
Robert.
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
