I understand that a stereo config renders twice with different projection/view matrices.
Actually, my question was how to compute the different projection/view matrices for the left and right eyes. A user's OpenGL routine has only one projection/view matrix for a single view, and sometimes the projection/view matrices are mixed with model transformation matrices, which means the viewing parameters cannot be correctly inferred. It is even worse if shaders are used. So what is Equalizer's technique for correctly inferring the viewing parameters? Or do users have to provide the viewing parameters explicitly through some sort of API?

Thanks in advance.

On Wed, May 20, 2009 at 2:20 AM, Stefan Eilemann <[email protected]> wrote:
> Hi Won,
>
> On 19. May 2009, at 19:55, Won J Jeon wrote:
>
> > I have a question about how stereo works in Equalizer.
> >
> > Users write a normal (one-view) rendering program in OpenGL, then
> > Equalizer can automatically render two views, whose viewpoints are
> > slightly left and right of the original viewpoint.
> > So, my question is how Equalizer infers the original viewpoint
> > from the OpenGL code?
>
> Users provide their OpenGL rendering routine to Equalizer. For stereo
> rendering, Equalizer calls this method twice, providing different
> projection and view matrices. A high-level overview is on the website
> [1], which is strangely the first hit when you search for stereo on
> the website.
>
> Cheers,
>
> Stefan.
>
> [1] http://www.equalizergraphics.com/documents/design/immersive.html
>
> _______________________________________________
> eq-dev mailing list
> [email protected]
> http://www.equalizergraphics.com/cgi-bin/mailman/listinfo/eq-dev
> http://www.equalizergraphics.com

