Hi All,
With the work on integrating keystone distortion correction support into osgViewer, I have been thinking about the general Render To Texture (RTT) approach that it uses, and I think there are direct parallels to other effects, not just distortion correction. Other areas that are potentially related:

Video resizing is something that could easily be supported, as one could dynamically change the viewport of the RTT Camera(s) to control the fill load of rendering the 3D scene, then use a TexMat to rescale the texture coordinates appropriately. This could be very useful for scenes where the fill load is very high, such as when doing volume rendering - typical usage would be to lower the resolution when moving the eye point or other controls to improve interactivity, and when the user stops changing the view a higher resolution image is rendered.

Post-processing effects (field of view etc.) are performed by an RTT pass followed by a second pass that renders the final image to the window - exactly the same way it is done with RTT based distortion correction. I expect it should be straightforward to do both the distortion correction and any additional processing together in the same post-processing pass.

There are also parallels with some of the needs of remote rendering setups, where you render via an OpenGL graphics context to an image that is then copied onto a non-OpenGL window, with the non-OpenGL window providing mouse/keyboard input and rendering of the image to screen. While it doesn't make sense on a workstation, such an approach may be useful for remotely logging into a system and having the graphics running on the remote server, but with the image data streamed to the client and rendered locally by the client. Such a solution could avoid some of the potential performance and OpenGL feature pitfalls of remote rendering.

So...
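As an aside, the viewport-resizing idea can be sketched independently of OSG. The snippet below is only the arithmetic - all names (`chooseViewport`, `computeTexScale`) are hypothetical helpers, not osgViewer API; in a real setup the chosen viewport would be applied via osg::Camera::setViewport() and the scale via an osg::TexMat on the final quad's StateSet:

```cpp
// Hypothetical sketch of the reduced-viewport + TexMat arithmetic.
struct ViewportSize { int width; int height; };
struct TexScale { double sx; double sy; };

// Halve the rendered resolution while the user is interacting; restore
// full resolution once the view stops changing.
ViewportSize chooseViewport(bool interacting, int fullWidth, int fullHeight)
{
    if (interacting) return { fullWidth / 2, fullHeight / 2 };
    return { fullWidth, fullHeight };
}

// Scale factors so that [0,1] texture coordinates on the final quad sample
// only the sub-region of the texture the RTT camera actually rendered into.
TexScale computeTexScale(const ViewportSize& vp, int texWidth, int texHeight)
{
    return { double(vp.width) / double(texWidth),
             double(vp.height) / double(texHeight) };
}
```

The texture itself stays at full size; only the rendered sub-region and the texture matrix change, so no texture reallocation is needed per frame.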
if you have an interest in any of these areas, let me know what your needs are and how you'd like to control it, so I can weave this into my current work on osgViewer.

Cheers,
Robert.

On 28 March 2013 17:32, Robert Osfield <[email protected]> wrote:
> Hi All,
>
> I'm currently working on a project that needs keystone correction for
> using projectors off axis when doing a portable stereo visualization
> setup. As part of this work I've written a simple example,
> osgkeystone, that I've used to prototype the UI, maths and viewer
> setup to adjust and run keystone correction, and now I'm working on
> the design of how best to integrate this functionality into
> osgViewer, along with support for configuring via a configuration
> file and an event handler to allow users to interactively adjust the
> keystone correction. As part of this work I'll be providing stereo
> support via osgViewer's support for slave Cameras rather than using
> the stereo support in SceneView as we've done for a decade.
>
> With this work it's clear that keystone correction is just one type
> of distortion correction that is desirable. osgViewer already has
> some distortion correction support for spherical displays, but it's
> just provided via View::setUpViewAs*() methods that set up slave
> Cameras to do the distortion correction, rather than something more
> formal. The new keystone distortion correction will also be
> implemented with slave Cameras, so in principle it works the same
> way; the only differences are the mesh used to do the distortion
> correction and the RTT Camera setup used to set up the required
> textures. Given this commonality I do wonder if it might be possible
> to formalize this a bit further.
>
> Conceptually, distortion correction is a per display/projector
> operation, which means that one would run the distortion correction
> on all the cameras that reside on the window that you want to correct.
> This makes me wonder if one would set up the basic Cameras that the
> application wants on a window, then specify that you want distortion
> correction run on that window (or portion of a window if we have a
> twinview or similar setup). I took this approach when adding support
> for depth partitioning - one sets up the view, then runs a method to
> convert it to multiple cameras. I'm wary of trying to be too clever,
> though, in trying to simplify things for end users, as extra
> complexity might just drag the whole design down.
>
> Another issue I'm thinking about is how to handle mouse interaction
> in the distorted space, as mouse coordinates in window coordinates no
> longer map directly to the 3D scene, but have to go via a lookup on
> the distortion mesh first to convert them from window coordinates
> into the RTT viewport coordinates. This issue exists right now with
> the spherical display support, so we could potentially solve this
> problem as well. I say solve - I haven't yet worked out how to
> configure this type of mouse coordinate correction; the maths is
> easier than the design+coding in this case.
>
> For most users this topic is unlikely to affect you - you'll be able
> to just use osgViewer as before without any changes - but for those
> thinking about these issues it may well be welcome to see a
> discussion and integration of some of these features. I'm striking up
> this thread to give the community the opportunity to share thoughts,
> prior experience and problems, so we can make sure the final
> implementation works well not just for keystone correction but also
> more widely.
>
> Cheers,
> Robert.
>
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

