Hi Björn and all,

I'd like to test an application on the Oculus with some post-process effect.

On the Oculus side, the only recommendation I've found (
https://developer.oculus.com/documentation/intro-vr/latest/concepts/bp_intro/
)
is to apply the post-process effect to each eye independently (taking its
z-depth into account).

On the osg integration side, I see these points to be covered:
1. set up a scene with post-processing cameras that is compatible with the
two-slave-camera setup done by the OculusViewer
2. the post effect should probably affect only the color buffer copied to
the Oculus, while the depth buffer (used for time warp) should remain the
one written by the main render camera, i.e. the actual 3D scene depth values.

I'm not sure which scheme would best achieve that.
In particular, it would be good to keep the current slave-camera setup as-is
with respect to projection and view matrices, but move the buffer
management to the last camera of the post-processing chain.

What do you think?
I'd like to help in coding/testing a solution.
Ricky
_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
