Hi Andy,

On 6/8/06, Andy Lundell <[EMAIL PROTECTED]> wrote:
> The math is basically simple geometry.  Essentially the angle the eye should
> be looking at can be calculated from the interocular distance and the
> convergence * (zFar-zNear).  I just used that angle to make a rotation
> matrix and half the interocular distance to make a translation matrix, and
> applied those to the projection matrix.

This sounds a bit odd: if any rotation is required, it should be to
account for the physical orientation of each display relative to the
mean orientation of the displays.  It's not something that should
change w.r.t. the scene, it's purely the physical calibration of the
displays.

The projection matrices should be identical, so they won't need
modification; it's just a matter of setting up the view matrices to
account for the eye positions and the configuration of the displays.  The
scene's contribution to the view-matrix offsets should just be a scale
between how big the viewer is in the virtual world - a town
walkthrough would be 1:1, while a molecular system would scale the
offsets right down to fit the model.

> This gave me results much closer to what I was expecting. Especially in a
> head-mounted display.

Can't help but feel it's a fudge factor that a proper stereo setup
wouldn't need.

> It's possible I was just making things hard for myself, but I just couldn't
> get OSG to give me reliable stereo matrixes once I started adjusting values
> like the interocular distance, or the convergence distance.

Unfortunately I don't have access to an HMD at my end to tune things against.

Robert.
_______________________________________________
osg-users mailing list
[email protected]
http://openscenegraph.net/mailman/listinfo/osg-users
http://www.openscenegraph.org/
