On 22. May 2009, at 5:08, Won J Jeon wrote:

> I understand that a stereo configuration renders twice with  
> different projection/view matrices.
>
> Actually, my question was how to compute different projection/view  
> matrices for left and right eyes.
>
> A user's OpenGL routine has only one projection/view matrix for a  
> single view, and sometimes the projection/view matrices are  
> mixed with model transform matrices, which means that we cannot  
> correctly infer the viewing parameters. It is even worse if we use  
> shaders.
>
> So what is equalizer's technique to correctly infer viewing  
> parameters?
>

The parameters are computed from:

- The frustum (wall, projection) description in the config file
- The observer parameters (eye separation, head matrix) given by the  
application
- The current eye pass
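
For illustration, a wall-style frustum description in the config file looks roughly like this (the corner values below are examples for a 64x40 cm screen 0.75 m in front of the origin; see the Equalizer documentation for the exact syntax):

```
wall
{
    bottom_left  [ -.32 -.20 -.75 ]
    bottom_right [  .32 -.20 -.75 ]
    top_left     [ -.32  .20 -.75 ]
}
```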

The basic algorithm is:

- Compute eye positions in world space
- For each frustum
   o transform eye position to wall space
   o compute frustum corners from eye pos and wall size
   o compute head transform from frustum matrix and eye position
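
The steps above can be sketched in C++ for the simplified case of an axis-aligned wall in a constant-z plane, so that "wall space" equals world space and no transform is needed. All names are illustrative, not the actual Equalizer classes; a real implementation also handles arbitrarily oriented walls and full head orientation:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector; Equalizer itself uses a full vector math library.
struct Vec3 { float x, y, z; };

// Wall described by three corners, as in the config file.
struct Wall { Vec3 bottomLeft, bottomRight, topLeft; };

// Off-axis frustum parameters, glFrustum-style, at the near plane.
struct Frustum { float left, right, bottom, top, nearPlane; };

// Step 1: eye position in world space from the head position and the
// eye separation. eyeSign is -0.5 for the left eye, +0.5 for the right.
Vec3 eyePosition( const Vec3& head, float eyeSeparation, float eyeSign )
{
    return { head.x + eyeSign * eyeSeparation, head.y, head.z };
}

// Steps 2-3: for a wall lying in the plane z = wall.bottomLeft.z,
// compute the off-axis frustum corners, scaled to the near plane.
// (The head transform is then just a translation by -eye applied to
// the view, since the wall is axis-aligned here.)
Frustum computeFrustum( const Wall& wall, const Vec3& eye, float nearPlane )
{
    const float distance = eye.z - wall.bottomLeft.z; // eye-to-wall distance
    const float scale    = nearPlane / distance;      // project onto near plane
    Frustum f;
    f.left      = ( wall.bottomLeft.x  - eye.x ) * scale;
    f.right     = ( wall.bottomRight.x - eye.x ) * scale;
    f.bottom    = ( wall.bottomLeft.y  - eye.y ) * scale;
    f.top       = ( wall.topLeft.y     - eye.y ) * scale;
    f.nearPlane = nearPlane;
    return f;
}
```

Because the frustum corners depend on the eye position, the left and right eye automatically get slightly different, asymmetric frusta, which is exactly the stereo effect.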

>  Or do users have to provide viewing parameters explicitly
>
> using some sort of API?
>

Normal applications do not need to provide anything; immersive  
applications have to provide the head matrix (see above).
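
For an immersive setup, the head matrix is simply the tracked head pose, updated every frame by the application (the exact Equalizer API call is omitted here; see the observer documentation). A minimal sketch of building such a matrix from a tracked head position, using the OpenGL column-major convention (names hypothetical):

```cpp
#include <array>

// A 4x4 column-major matrix as a flat array, OpenGL convention.
using Mat4 = std::array< float, 16 >;

// Build a head matrix from a tracked head position: identity rotation,
// translation to the head. A real tracker would also supply the head
// orientation, which would fill the upper-left 3x3 block.
Mat4 headMatrix( float x, float y, float z )
{
    return Mat4{ 1, 0, 0, 0,
                 0, 1, 0, 0,
                 0, 0, 1, 0,
                 x, y, z, 1 };
}
```

The eye positions are then derived from this matrix and the eye separation, as described above.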


HTH,

Stefan.


_______________________________________________
eq-dev mailing list
[email protected]
http://www.equalizergraphics.com/cgi-bin/mailman/listinfo/eq-dev
http://www.equalizergraphics.com
