Hi,


I have a question about how stereo works in Equalizer.



Users write a normal (single-view) OpenGL rendering program, and Equalizer can
then automatically render two views whose viewpoints are slightly to the left
and right of the original viewpoint.

So my question is: how does Equalizer infer the original viewpoint from the
OpenGL code?

Sometimes it is quite easy to figure out the viewpoint from the OpenGL calls,
but in many cases it is not.

If we could modify drivers, we could take NVIDIA's approach:

http://developer.download.nvidia.com/presentations/2009/GDC/GDC09-3DVision-The_In_and_Out.pdf

But I guess Equalizer does not take this approach, right? So how did you
achieve this?



Sorry for not going through the code to figure it out by myself. :)
_______________________________________________
eq-dev mailing list
[email protected]
http://www.equalizergraphics.com/cgi-bin/mailman/listinfo/eq-dev
http://www.equalizergraphics.com